Viewing: Blog Posts Tagged with: morality, Most Recent at Top
Results 26 - 41 of 41
26. The problem with moral knowledge

Traveling through Scotland, one is struck by the number of memorials devoted to those who lost their lives in World War I. Nearly every town seems to have at least one memorial listing the names of local boys and men killed in the Great War (St. Andrews, where I am spending the year, has more than one).

Scotland endured a disproportionate number of casualties in comparison with most other Allied nations. Scotland’s military history and the Scots’ reputation as particularly effective fighters contributed both to a proportionally greater number of Scottish recruits and to a tendency for Allied commanders to give Scottish units the most dangerous combat assignments.

Many who served in World War I undoubtedly suffered from what some contemporary psychologists and psychiatrists have labeled ‘moral injury’, a psychological affliction that occurs when one acts in a way that runs contrary to one’s most deeply-held moral convictions. Journalist David Wood characterizes moral injury as ‘the pain that results from damage to a person’s moral foundation’ and declares that it is ‘the signature wound of [the current] generation of veterans.’

By definition, one cannot suffer from moral injury unless one has deeply-held moral convictions. At the same time that some psychologists have been studying moral injury and how best to treat those afflicted by it, other psychologists have been uncovering the cognitive mechanisms that are responsible for our moral convictions. Among the central findings of that research are that our emotions often influence our moral judgments in significant ways and that such judgments are often produced by quick, automatic, behind-the-scenes cognition to which we lack conscious access.

Thus, it is a familiar phenomenon of human moral life that we find ourselves simply feeling strongly that something is right or wrong without having consciously reasoned our way to a moral conclusion. The hidden nature of much of our moral cognition probably helps to explain the doubt on the part of some philosophers that there really is such a thing as moral knowledge at all.

Scottish National War Memorial, Edinburgh Castle. Photo by Nilfanion, CC BY-SA 3.0 via Wikimedia Commons.

In 1977, philosopher John Mackie famously pointed out that defenders of the reality of objective moral values were at a loss when it comes to explaining how human beings might acquire knowledge of such values. He declared that believers in objective values would be forced in the end to appeal to ‘a special sort of intuition’— an appeal that he bluntly characterized as ‘lame’. It turns out that ‘intuition’ is indeed a good label for the way many of our moral judgments are formed. In this way, it might appear that contemporary psychology vindicates Mackie’s skepticism and casts doubt on the existence of human moral knowledge.

Not so fast. In addition to discovering that non-conscious cognition has an important role to play in generating our moral beliefs, psychologists have discovered that such cognition also has an important role to play in generating a great many of our beliefs outside of the moral realm.

According to psychologist Daniel Kahneman, quick, automatic, non-conscious processing (which he has labeled ‘System 1’ processing) is both ubiquitous and an important source of knowledge of all kinds:

‘We marvel at the story of the firefighter who has a sudden urge to escape a burning house just before it collapses, because the firefighter knows the danger intuitively, ‘without knowing how he knows.’ However, we also do not know how we immediately know that a person we see as we enter a room is our friend Peter. … [T]he mystery of knowing without knowing … is the norm of mental life.’

This should provide some consolation for friends of moral knowledge. If the processes that produce our moral convictions are of roughly the same sort that enable us to recognize a friend’s face, detect anger in the first word of a telephone call (another of Kahneman’s examples), or distinguish grammatical and ungrammatical sentences, then maybe we shouldn’t be so suspicious of our moral convictions after all.

In all of these cases, we are often at a loss to explain how we know, yet it is clear enough that we know. Perhaps the same is true of moral knowledge.

Still, there is more work to be done here, by both psychologists and philosophers. Ironically, some propose a worry that runs in the opposite direction of Mackie’s: that uncovering the details of how the human moral sense works might provide support for skepticism about at least some of our moral convictions.

Psychologist and philosopher Joshua Greene puts the worry this way:

‘I view science as offering a ‘behind the scenes’ look at human morality. Just as a well-researched biography can, depending on what it reveals, boost or deflate one’s esteem for its subject, the scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it. … Understanding where our moral instincts come from and how they work can … lead us to doubt that our moral convictions stem from perceptions of moral truth rather than projections of moral attitudes.’

The challenge advanced by Greene and others should motivate philosophers who believe in moral knowledge to pay attention to findings in empirical moral psychology. The good news is that hope for the reality of moral knowledge remains.

And if there is moral knowledge, there can be increased moral wisdom and progress, which in turn makes room for hope that someday we can solve the problem of war-related moral injury not by finding an effective way of treating it but rather by finding a way of avoiding the tragedy of war altogether. Reflection on ‘the war to end war’ may yet enable it to live up to its name.

The post The problem with moral knowledge appeared first on OUPblog.

27. 10 reasons why it is good to be good

The first question of moral philosophy, going back to Plato, is “how ought I to live my life?”. Perhaps the second, following close on the heels of the first, can be taken to be “ought I to live morally or not?”, assuming that one can “get away with” being immoral. Another, more familiar way of phrasing this second question is “why be moral?”, where this is elliptical for something like, “will it be good for me and my life to be moral, or would I be better off being immoral, as long as I can get away with it?”.

If we bring together the ancient Greek conception of happiness with a modern conception of self-respect, it turns out that it is bad to be a bad person and, in fact, good to be a good person. Here are some reasons why:

(1)   Because being bad is bad. Some have thought that being bad or immoral can be good for a person, especially when we can “get away with it”, but there are some good reasons for thinking this is false. The most important reason is that being bad or immoral is self-disrespecting and it is hard to imagine being happy without self-respect. Here’s one quick argument:

Being moral (or good) is necessary for having self-respect.
Self-respect is necessary for happiness.
____________________________________________
Therefore, being good is necessary for happiness.

Of course, a full defense of this syllogism would require more than can be given in a blog post, but hopefully, it isn’t too hard to see the ways in which lying, cheating, and stealing – or being immoral in general – is incompatible with having genuine self-respect. (Of course, cheaters may think they have self-respect, but do you really think Lance Armstrong was a man of self-respect, whatever he may have thought of himself?)

(2)   Because it is the only way to have a chance at having self-respect. We can only have self-respect if we respect who we actually are; we can’t if we respect only some false image of ourselves. So, self-respect requires self-knowledge. And only people who can make just and fair self-assessments can have self-knowledge. And only just and fair people, good, moral people, can make just and fair self-assessments. (This is a very compacted version of a long argument.)

(3)   Because being good lets you see what is truly of value in the world. Part of what being good requires is that good people know what is good in the world and what is not. Bad people have bad values, good people have good values. Having good values means valuing what deserves to be valued and not valuing what does not deserve to be valued.

(4)   Because a recent study of West Point cadets reveals that cadets with mixed motivations – some selfish, instrumental, and career-oriented, others “intrinsic” and responsive to the value of the job itself – do not perform as well as cadets whose motivations are not mixed and are purely intrinsic. (See “The Secret of Effective Motivation”.)

Plato and Aristotle, from the Palazzo Pontifici, Vatican. Public domain via Wikimedia Commons.

(5)   Because being good means taking good care of yourself. It doesn’t mean that you are the most important thing in the world, or that nothing is more important than you. But, in normal circumstances, it does give you permission to take better care of yourself and your loved ones than complete strangers.

(6)   Because being good means that while you can be passionate, you can choose what you are passionate about; it means that you don’t let your emotions, desires, wants, and needs “get the better of you” and “make” you do things that you later regret. It gives you true grit.

(7)   Because being good means that you will be courageous and brave, in the face of danger and pain and social rejection. It gives you the ability to speak truth to power and “fight the good fight”. It helps you assess risk, spot traps, and seize opportunities. It helps you be successful.

(8)   Because being good means that you will be wise as you can be when you are old and grey. Deep wisdom may not be open to everyone, since some simply might not have the intellectual wherewithal for it. (Think of someone with severe cognitive disabilities.) But we can all, of course, be as wise as it is possible for us to be. This won’t happen, however, by accident. Wise people have to be able to perspicuously see into the “heart of the matter”, and this won’t happen unless we care about the right things. And we won’t care about the right things unless we have good values, so being good will help make us be as wise as we can be.

(9)   Because being good means that we are lovers of the good and, if we are lucky, it means that we will be loved by those who are themselves good. And being lovers of the good means that we become good at loving what is good, to the best of our ability. So, being good makes us become good lovers. And it is good to be a good lover, isn’t it? And good lovers who value what is good are more likely to be loved in return by people who also love the good. What could be better than being loved well by a good person who is your beloved?

(10)   Because of 1-9 above, only good people can live truly happy lives. Only good people live the Good Life.

Headline image credit: Diogenes and Plato by Mattia Preti 1649. Capitoline Museums. Public domain via Wikimedia Commons

The post 10 reasons why it is good to be good appeared first on OUPblog.

28. Moral pluralism and the dismay of Amy Kane

There’s a scene in the movie High Noon that seems to me to capture an essential feature of our moral lives. Actually, it’s not the entire scene. It’s one moment really, two shots — a facial expression and a movement of the head of Grace Kelly.

The part she’s playing is that of Amy Kane, the wife of Marshal Will Kane (Gary Cooper). Amy Kane is a Quaker, and as such is opposed to violence of any kind. Indeed, she tells Kane she will marry him only if he resigns as marshal of Hadleyville and vows to put down his guns forever. He agrees. But shortly after the wedding Kane learns that four villains have plans to terrorize the town, and he comes to think it is he who must try to stop them. He picks up his guns in preparation to meet the villains, and in so doing breaks his vow to Amy.

Unrelenting in her pacifism, Amy decides to leave Will. She boards the noon train out of town. Then she hears gunfire, and, just as the train is about to depart, she disembarks and rushes back. Meanwhile, Kane is battling the villains. He manages to kill two of them, but the remaining two have him cornered. It looks like the end for Kane. Then one of them falls.

Amy has picked up a gun and shot him in the back.

We briefly glimpse Amy’s face immediately after she has pulled the trigger. She is distraught, stricken. When the camera angle changes to a view from behind, we see her head fall with great sadness under the weight of what she’s done.

What’s going on with Amy at that moment? It’s possible, I suppose, that she believes she shouldn’t have shot the villain, that she let her emotions run away with her, that she thinks she did the wrong thing. But I doubt that’s it. More likely is that when Amy heard the gunshots she decided that the right thing for her to do was return to town and help her husband in his desperate fight. But why then is Amy dismayed? If she performed the action she thought was right, shouldn’t she feel only moral contentment with what she has done?

Studio publicity still of Grace Kelly for the film Rear Window (1954). Public domain via Wikimedia Commons.

Grace Kelly could have played it differently. She could have whooped with delight at having offed the bad guy, perhaps dropping some “hasta la vista”-like catchphrase along the way. Or she could have set her ample square jaw in grim determination and gone after the remaining villain, signaling to us her decision to discard namby-pamby pacifism for the robust alternative of visceral western justice. But Amy Kane’s actual reaction is psychologically more plausible — and morally more interesting. While she believes she’s done what she had to do, she’s still dismayed. Why?

What Amy’s reaction shows, I believe, is that morality is pluralist, not monist.

Monistic moral theories tell us that there is one and only one ultimate moral end. If monism is true, in every situation it will always be at least theoretically possible to justify the right course of action by showing that everything of fundamental moral importance supports it. Jeremy Bentham is an example of a moral monist: he held that pleasure is the single ultimate end. Another example is Immanuel Kant, who held that the single basis for all of morality is the Categorical Imperative. According to monists, successful moral justification will always end at a single point (even if they disagree among themselves about what that point is).

Pluralist moral theories, in contrast, hold that there is a multitude of basic moral principles that can come into conflict with each other. David Hume and W.D. Ross were both moral pluralists. They believed that various kinds of moral conflict can arise — justice can conflict with beneficence, keeping a promise can conflict with obeying the law, courage can conflict with prudence — and that there are no underlying rules that explain how such conflicts are to be resolved.

If Hume and Ross are right and pluralism is true, even after you have given the best justification for a course of action that it is possible to give, you may sometimes have to acknowledge that to follow that course will be to act in conflict with something of fundamental moral importance. Your best justification may fail to make all of the moral ends meet.

With that understanding of monism and pluralism on board, let’s now return to Grace Kelly as Amy Kane. Let’s return to the moment her righteous killing of the bad guy causes her to bow her head in moral remorse.

If we assume monism, Amy’s response will seem paradoxical, weird, in some way inappropriate. If there is one and only one ultimate end, then to think that a course of action is right will be to think that everything of fundamental importance supports it. And it would be paradoxical or weird — inappropriate in some way — for someone to regret doing something in line with everything of fundamental moral importance. If the moral justification of an action ends at a single point, then what could the point be of feeling remorse for doing it?

But Amy’s reaction is perfectly explicable if we take her to have a plurality of potentially-conflicting basic moral commitments. Moral pluralists will explain that Amy has decided that in this situation saving Kane from the villains has a fundamental moral importance that overrides the prohibition on killing, even while she continues to believe that there is something fundamentally morally terrible about killing. For pluralists, there is nothing strange about feeling remorse toward acting against something one takes to be of fundamental moral importance.

Indeed, feeling remorse in such a situation is just what we should expect. This is why we take Amy’s response to be apt, not paradoxical or weird. We think that she, like most of us, holds a plurality of fundamental moral commitments, one of which she rightly acted on even though it meant having to violate another.

The upshot is this. If you think Grace Kelly played the scene right — and if you think High Noon captures something about our moral lives that “hasta la vista”-type movies do not — then you ought to believe in moral pluralism.

Headline image: General Store Sepia Toned Wild West Town. © BCFC via iStock

The post Moral pluralism and the dismay of Amy Kane appeared first on OUPblog.

29. Philippines pork barrel scam and contending ideologies of accountability

By Garry Rodan


When Benigno Aquino III was elected Philippine President in 2010, combating entrenched corruption was uppermost on his projected reform agenda. Hitherto, it has been unclear what the full extent and nature of his administration’s reform ambitions might be. The issue has now been forced by the ramifications of whistleblowers’ exposure of an alleged US$224 million scam involving discretionary funds of Congress representatives. Fallout has already put some prominent Senators in the hot seat, but will deeper and more systemic reforms follow?

A crucial but often overlooked factor shaping prospects for reform in the Philippines, and elsewhere, is contestation over the meaning and purposes of accountability. Accountability means different things to different people. Even authoritarian rulers increasingly lay claim to it. Therefore, whether it is liberal, moral or democratic ideology that exerts greatest reform influence matters greatly.

Liberal accountability champions legal, constitutional, and contractual institutions to restrain the ability of state agencies to violate the political authority of the individual. Moral accountability ideologues emphasize how official practices must be guided by a moral code, invoking religious, monarchical, ethnic, nationalist, and other externally constituted political authority. Democratic accountability ideologies are premised on the notion that official action at all levels should be subject to sanction, either directly or indirectly, in a manner promoting popular sovereignty.

Anti-corruption movements usually involve coalitions incorporating all three ideologies. However, governments tend to be least responsive to democratic ideologies because their reforms are directed at fundamental power relations. The evolving controversy in the Philippines is likely to again bear this out.

What whistleblowers exposed in July 2013 was an alleged scam masterminded by business figure Janet Lim Napoles. Money was siphoned from the Priority Development Assistance Fund (PDAF), or ‘pork barrel’ as it is popularly known, providing members of Congress with substantial discretionary project funding.

This funding has been integral to political patronage and corruption in the Philippines, precisely why ruling elites have hitherto resolutely defended PDAF despite many scandals and controversies linked to it.

However, public reaction to this scam was on a massive scale. Social and mass media probing and campaigning combined with the ‘Million People March’, a series of protests beginning in August 2013 in Manila’s Rizal Park. After initially defending PDAF despite his anti-corruption platform, Aquino announced PDAF’s abolition. Subsequently, the Supreme Court reversed three earlier rulings to unanimously declare the PDAF unconstitutional for violating the separation of powers principle.

Then, on 1 April 2014, the Office of the Ombudsman (OMB) announced it found probable cause to indict three opposition senators – including the powerful Juan Ponce Enrile, who served as Justice Secretary and Defense Minister under Marcos and Senate President from 2008 until June 2013 – for plunder and multiple counts of graft for kickbacks or commissions channeled through bogus non-governmental organizations (NGOs).

These are the Philippines’ first senatorial indictments for plunder, conviction for which can lead to life imprisonment. Napoles and various state officials and employees of NGOs face similar charges. Aquino’s rhetoric about instituting clean and accountable governance is translating into action. But which ideologies are exerting greatest influence and what are the implications?

Moral ideology influences were evident under Aquino even before the abolition of PDAF, through new appointments intended to enhance the integrity of key institutions. Conchita Morales, selected by the President in mid-2011 as the new Ombudsman, was strongly endorsed by Catholic Church leaders. Aquino also appointed Heidi Mendoza as a commissioner of the Commission on Audit. Mendoza had played a vital whistleblower role leading to the resignation of the previous Ombudsman, Merceditas Gutierrez, and was depicted by the Church as a moral role model for Christians.

However, there have been many episodes in the past where authorities have selectively pruned ‘bad apples,’ but with a focus on those from competing political or economic orchards. Will Aquino this time go beyond appeals to moral ideology and intra-elite combat to progress liberal institutional reform?

The accused senators ask why they have been singled out from the 40 people named as criminally liable following the whistleblowers’ claims, alleging political persecution. Yet if continuing investigations lead to charges against people closer to the administration, that would indicate otherwise. In a clear alignment with liberal ideology, Communications Secretary Herminio Coloma recently raised expectations of such a change: ‘We are a government of laws, not of men. Let rule of law take its course.’

The jury is still out, too, on just how substantive the institutional change to the PDAF will prove. The President’s own pork barrel lump-sum appropriations in the national budget are unaltered, despite public calls for them to go as well. Indeed, some argue the President is now an even more powerful pork dispenser through the de facto concentration of the PDAF in his hands.

PDAF’s abolition is also in a transitional phase with the 2014 budget taking account of existing PDAF commitments. The P25-billion PDAF was directed to the major public funding implementing agencies incorporating these commitments on a line item basis. There is a risk, though, that a precedent has been set for legislators’ pet projects to be negotiated with departmental heads in private rather than scrutinized in the legislature.

Certainly the coalition for change is building. Alongside popular forces, internationally competitive globalized elements of the Philippines bourgeoisie are a growing support base for liberal accountability ideology. Yet longstanding inaction on corruption reflects entrenched power structures inside and outside Congress antithetical to the routine and institutionalized promotion of liberal and, especially, democratic accountability.

Thus, while the instigation of official action on the pork barrel scam following the whistleblowers’ actions is testimony to the power of public mobilizations and campaigns, there are serious obstacles to more effective accountability institutionalization promoting popular sovereignty.

Acute concentrations of wealth and social power in the Philippines not only affect relationships between public officials and some elites, they also fundamentally constrain political competition. Oligarchs enjoy massive electoral resource advantages including the capacity for vote buying and other questionable campaign strategies. Outright intimidation, including extrajudicial killings of some of the most concerted opponents of elite rule and vested interests, remains widespread.

Therefore, parallel with popular anti-pork demands is yet another push for Congress to pass an enabling law to finally give effect to the provision in the 1987 Constitution banning political dynasties. Political dynasties and corruption have been mutually reinforcing. Congressional dominance by wealthy elites and political clans shapes the laws overseen by officials, the appointment of those officials and, in turn, the culture and practices of public institutions.

When Congress resumes sessions in May, it will have before it the first Anti-Dynasty Bill to have passed the committee level. Public mood has made it more difficult for the rich and powerful in Congress to be as dismissive as previously of such reform attempts. The prospects of the current Bill passing are nevertheless dim but the struggle for democratic accountability will continue.

Garry Rodan is Professor of Politics & International Studies at the Asia Research Centre, Murdoch University, Australia and the co-author (with Caroline Hughes) of The Politics of Accountability in Southeast Asia: The Dominance of Moral Ideologies.

Image credit: Dollars in envelope. By OlgaLIS, via iStockphoto.

The post Philippines pork barrel scam and contending ideologies of accountability appeared first on OUPblog.

30. Morality, science, and Belgium’s child euthanasia law

By Tony Hope


Science and morality are often seen as poles apart. Doesn’t science deal with facts, and morality with, well, opinions? Isn’t science about empirical evidence, and morality about philosophy? In my view this is wrong. Science and morality are neighbours. Both are rational enterprises. Both require a combination of conceptual analysis and empirical evidence. Many, perhaps most, moral disagreements hinge on disagreements over evidence and facts, rather than disagreements over moral principle.

Consider the recent child euthanasia law in Belgium that allows a child to be killed – as a mercy killing – if: (a) the child has a serious and incurable condition with death expected to occur within a brief period; (b) the child is experiencing constant and unbearable suffering; (c) the child requests the euthanasia and has the capacity of discernment – the capacity to understand what he or she is requesting; and, (d) the parents agree to the child’s request for euthanasia. The law excludes children with psychiatric disorders. No one other than the child can make the request.

Is this law immoral? Thought experiments can be useful in testing moral principles. These are like the carefully controlled experiments that have been so useful in science. A lorry driver is trapped in the cab. The lorry is on fire. The driver is on the verge of being burned to death. His life cannot be saved. You are standing by. You have a gun and are an excellent shot and know where to shoot to kill instantaneously. The bullet will be able to penetrate the cab window. The driver begs you to shoot him to avoid a horribly painful death.

Would it be right to carry out the mercy killing? Setting aside legal considerations, I believe that it would be. It seems wrong to allow the driver to suffer horribly for the sake of preserving a moral ideal against killing.

Thought experiments are often criticised for being unrealistic. But this can be a strength. The point of the experiment is to test a principle, and the ways in which it is unrealistic can help identify the factual aspects that are morally relevant. If you and I agree that it would be right to kill the lorry driver then any disagreement over the Belgian law cannot be because of a fundamental disagreement over mercy killing. It is likely to be a disagreement over empirical facts or about how facts integrate with moral principles.

There is a lot of discussion of the Belgian law on the internet. Most of it against. What are the arguments?

Some allow rhetoric to ride roughshod over reason. Take this, for example: “I’m sure the Belgian parliament would agree that minors should not have access to alcohol, should not have access to pornography, should not have access to tobacco, but yet minors for some reason they feel should have access to three grams of phenobarbitone in their veins – it just doesn’t make sense.”

But alcohol, pornography and tobacco are all considered to be against the best interests of children. There is, however, a very significant reason for the ‘three grams of phenobarbitone’: it prevents unnecessary suffering for a dying child. There may be good arguments against euthanasia but using unexamined and poor analogies is just sloppy thinking.

I have more sympathy for personal experience. A mother of two terminally ill daughters wrote in the Catholic Herald: “Through all of their suffering and pain the girls continued to love life and to make the most of it…. I would have done anything out of love for them, but I would never have considered euthanasia.”

But this moving anecdote is no argument against the Belgian law. Indeed, under that law the mother’s refusal of euthanasia would be decisive. It is one thing for a parent to say, ‘I do not believe that euthanasia is in my child’s best interests’; it is quite another to say that any parent who thinks euthanasia is in their child’s best interests must be wrong.

To understand a moral position it is useful to state the moral principles and the empirical assumptions on which it is based. So I will state mine.

Moral Principles

  1. A mercy killing can be in a person’s best interests.
  2. A person’s competent wishes should have very great weight in what is done to her.
  3. Parents’ views as to what is right for their children should normally be given significant moral weight.
  4. Mercy killing, in the situation where a person is suffering and faces a short life anyway, and where the person is requesting it, can be the right thing to do.

Empirical assumptions

  1. There are some situations in which children with a terminal illness suffer so much that it is in their interests to be dead.
  2. There are some situations in which the child’s suffering cannot be sufficiently alleviated short of keeping the child permanently unconscious.
  3. A law can be formulated with sufficient safeguards to prevent euthanasia from being carried out in situations when it is not justified.


This last empirical claim is the most difficult to assess. Opponents of child euthanasia may believe such safeguards are not possible: that it is better not to risk sliding down the slippery slope. But the ‘slippery slope argument’ is morally problematic: it is an argument against doing the right thing on some occasions (carrying out a mercy killing when that is right) because of the danger of doing the wrong thing on other occasions (carrying out a killing when that is wrong). I prefer to focus on safeguards against slipping. But empirical evidence could lead me to change my views on child euthanasia. My guess is that for many people who are against the new Belgian law, it is the fear of the slippery slope that is ultimately crucial. Much moral disagreement, when carefully considered, comes down to disagreement over facts. Scientific evidence is a key component of moral argument.

Tony Hope is Emeritus Professor of Medical Ethics at the University of Oxford and the author of Medical Ethics: A Very Short Introduction.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday, subscribe to Very Short Introductions articles on the OUPblog via email or RSS, and like Very Short Introductions on Facebook.

Image credit: Legality of Euthanasia throughout the world By Jrockley. Public domain via Wikimedia Commons

The post Morality, science, and Belgium’s child euthanasia law appeared first on OUPblog.

31. Gloomy terrors or the most intense pleasure?

By Philip Schofield


In 1814, just two hundred years ago, the radical philosopher Jeremy Bentham (1748–1832) began to write on the subject of religion and sex, and thereby produced the first systematic defence of sexual liberty in the history of modern European thought. Bentham’s manuscripts have now been published for the first time in authoritative form. He pointed out that ‘regular’ sexual activity consisted in intercourse between one male and one female, within the confines of marriage, for the procreation of children. He identified the source of the view that only ‘regular’ or ‘natural’ sexual activity was morally acceptable in the Mosaic Law and in the teachings of the self-styled Apostle Paul. ‘Irregular’ sexual activity, on the other hand, had many variations: intercourse between one man and one woman, when neither of them were married, or when one of them was married, or when both of them were married, but not to each other; between two women; between two men; between one man and one woman but using parts of the body that did not lead to procreation; between a human being and an animal of another species; between a human being and an inanimate object; and between a living human and a dead one. In addition, there was the ‘solitary mode of sexual gratification’, and innumerable modes that involved more than two people. Bentham’s point was that, given that sexual gratification was for most people the most intense and the purest of all pleasures and that pleasure was a good thing (the only good thing in his view), and assuming that the activity was consensual, a massive amount of human happiness was being suppressed by preventing people, whether from the sanction of the law, religion, or public opinion, from engaging in such ‘irregular’ activities as suited their taste.

Bentham was writing at a time when homosexuals, those guilty of ‘the crime against nature’, were subject to the death penalty in England, and were in fact being executed at about the rate of two per year, and were vilified and ridiculed in the press and in literature. If an activity did not cause harm, Bentham had argued as early as the 1770s and 1780s, then it should not be subject to legal punishment, and had called for the decriminalization of homosexuality. By the mid-1810s he was prepared to link the problem not only with law, but with religion. The destruction of Sodom and Gomorrah was taken by ‘religionists’, as Bentham called religious believers, to prove that God had issued a universal condemnation of homosexuality. Bentham pointed out that what the Bible story condemned was gang rape. Paul’s injunctions against homosexuality were also taken to be authoritative by the Church. Bentham pointed out that not only did Jesus never condemn homosexuality, but that the Gospels presented evidence that Jesus engaged in sexual activity, and that he had his male lovers — the disciple whom he loved, traditionally said to be John, and the boy, probably a male prostitute, who remained with Jesus in the Garden of Gethsemane after all the disciples had fled (for a more detailed account see ‘Not Paul, but Jesus’).

Bentham was writing after Malthus had in 1798 put forward his argument that population growth would always tend to outstrip food supply, resulting in starvation and death until an equilibrium was restored, whereupon the process would recommence. Bentham had been convinced by Malthus, but Malthus’s solution to the problem, that people should abstain from sex, was not acceptable to him. He pointed out that one advantage of non-procreative sex was that it would not add to the increase of population. Bentham also took up the theme of infanticide. He had considerable sympathy for unmarried mothers who, because of social attitudes, were ostracized and had little choice but to become prostitutes, with the inevitable descent into drink, disease, and premature death. It would be far better, argued Bentham, to destroy the child, rather than the woman. Moreover, it was kinder to kill an infant at birth than allow it to live a life of pain and suffering.

Bentham looked to ancient Greece and Rome, where certain forms of homosexual activity were not only permitted but regarded as normal, as more appropriate models for sexual morality than that which existed in modern Christian Europe. Bentham attacked the notion, still propagated by religious apologists, that homosexuality was ‘unnatural’. All that ‘unnatural’ meant, argued Bentham, was ‘not common’. The fact that something was not common was not a ground for condemning it. Neither was the fact that something was not to your taste. It was a form of tyranny to say that, because you did not like to do a particular thing, you were going to punish another person for doing it. Because you thought something was ‘disgusting’ did not mean that everyone else thought it was disgusting. You might not want to have sex with a sow, but the father of her piglets thought differently.

These writings were, for Bentham, a critical part of a much broader attack on religion and the ‘gloomy terrors’ inspired by the religious mentality. By putting forward the case for sexual liberty, he was undermining religion in one of the areas where, in his view, it was most pernicious. Bentham did not dare publish this material. He believed that his reputation would have been ruined had he done so. He died in 1832. He would have been saddened that it still retains massive relevance in today’s world.

Philip Schofield is Professor of the History of Legal and Political Thought in the Faculty of Laws, University College London, Director of the Bentham Project, and General Editor of the new authoritative edition of The Collected Works of Jeremy Bentham. The latest volume in the edition, Of Sexual Irregularities, and other writings on Sexual Morality, was published on 30 January 2014. The research that led to the preparation of the volume was funded by the Leverhulme Trust. The Bentham Project is responsible for Transcribe Bentham, the prize-winning scholarly crowdsourcing initiative, where volunteers transcribe previously unread Bentham manuscripts.

Image credit: Jeremy Bentham, aged about 80. Frontispiece to Jeremy Bentham, Principles of Legislation, edited by John Neal, Boston: Wells and Lilly, 1830. Public domain

The post Gloomy terrors or the most intense pleasure? appeared first on OUPblog.

32. Unfit for the future: The urgent need for moral enhancement

By Julian Savulescu and Ingmar Persson


First published in Philosophy Now Issue 91, July/Aug 2012.

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?

With great power comes great responsibility. However, evolutionary pressures have not developed for us a psychology that enables us to cope with the moral problems our new power creates. Our political and economic systems only exacerbate this. Industrialisation and mechanisation have enabled us to exploit natural resources so efficiently that we have over-stressed two-thirds of the most important eco-systems.

A basic fact about the human condition is that it is easier for us to harm each other than to benefit each other. It is easier for us to kill than it is for us to save a life; easier to injure than to cure. Scientific developments have enhanced our capacity to benefit, but they have enhanced our ability to harm still further. As a result, our power to harm is overwhelming. We are capable of forever putting an end to all higher life on this planet. Our success in learning to manipulate the world around us has left us facing two major threats: climate change – along with the attendant problems caused by increasingly scarce natural resources – and war, using immensely powerful weapons. What is to be done to counter these threats?

Our Natural Moral Psychology
Our sense of morality developed around the imbalance between our capacities to harm and to benefit on the small scale, in groups the size of a small village or a nomadic tribe – no bigger than a hundred and fifty or so people. To take the most basic example, we naturally feel bad when we cause harm to others within our social groups. And commonsense morality links responsibility directly to causation: the more we feel we caused an outcome, the more we feel responsible for it. So causing a harm feels worse than neglecting to create a benefit. The set of rights that we have developed from this basic rule includes rights not to be harmed, but not rights to receive benefits. And we typically extend these rights only to our small group of family and close acquaintances. When we lived in small groups, these rights were sufficient to prevent us harming one another. But in the age of the global society and of weapons with global reach, they cannot protect us well enough.

There are three other aspects of our evolved psychology which have similarly emerged from the imbalance between the ease of harming and the difficulty of benefiting, and which likewise have been protective in the past, but leave us open now to unprecedented risk:

  1. Our vulnerability to harm has left us loss-averse, preferring to protect against losses than to seek benefits of a similar level.
  2. We naturally focus on the immediate future, and on our immediate circle of friends. We discount the distant future in making judgements, and can only empathise with a few individuals, based on their proximity or similarity to us rather than, say, on the basis of their situations. So our ability to cooperate, applying our notions of fairness and justice, is limited to a small circle of family and friends. Strangers, or out-group members, in contrast, are generally mistrusted, their tragedies downplayed, and their offences magnified.
  3. We feel responsible if we have individually caused a bad outcome, but less responsible if we are part of a large group causing the same outcome and our own actions can’t be singled out.


Case Study: Climate Change and the Tragedy of the Commons
There is a well-known cooperation or coordination problem called ‘the tragedy of the commons’. In its original terms, it asks whether a group of village herdsmen sharing common pasture can trust each other to the extent that it will be rational for each of them to reduce the grazing of their own cattle when necessary to prevent over-grazing. One herdsman alone cannot achieve the necessary saving if the others continue to over-exploit the resource. If they simply use up the resource he has saved, he has lost his own chance to graze but has gained no long term security, so it is not rational for him to self-sacrifice. It is rational for an individual to reduce his own herd’s grazing only if he can trust a sufficient number of other herdsmen to do the same. Consequently, if the herdsmen do not trust each other, most of them will fail to reduce their grazing, with the result that they will all starve.
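
The herdsmen’s predicament can be made concrete with a toy payoff model. The sketch below is illustrative only: the numbers, constants, and function names are invented here, not drawn from the article. The one point it is meant to show is structural: each extra animal benefits its owner more than it costs him, because the damage it does is shared by the whole village.

    # A minimal, hypothetical model of the commons (illustrative numbers only).
    # Each grazing animal earns its owner a private benefit, while the damage
    # done by *all* animals is shared equally among all herders.

    PRIVATE_BENEFIT = 1.0   # assumed value of one animal to its owner
    SHARED_COST = 1.5       # assumed damage per animal, borne by the whole village
    HERDERS = 100           # assumed number of herders sharing the pasture

    def my_payoff(my_animals: int, others_animals: int) -> float:
        """Payoff to one herder, given everyone's grazing decisions."""
        total = my_animals + others_animals
        return my_animals * PRIVATE_BENEFIT - total * SHARED_COST / HERDERS

    def total_welfare(total_animals: int) -> float:
        """Combined payoff of all herders: private benefits minus the full cost."""
        return total_animals * (PRIVATE_BENEFIT - SHARED_COST)

    # Whatever the others do, one more animal raises my own payoff, because the
    # extra benefit (1.0) exceeds my share of the extra cost (1.5 / 100):
    for others in (0, 500, 1000):
        print(others, my_payoff(10, others), my_payoff(11, others))

    # Yet each animal lowers combined welfare by 0.5, so if every herder follows
    # the individually rational policy, the group as a whole ends up worse off.
    print(total_welfare(1000), total_welfare(1100))

Unilateral restraint fares no better in this toy model: a herder who grazes fewer animals gives up the full private benefit while recovering only a hundredth of the saved damage, which is why trust in the others’ restraint is doing all the work in the paragraph above.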

The tragedy of the commons can serve as a simplified small-scale model of our current environmental problems, which are caused by billions of polluters, each of whom contributes some individually-undetectable amount of carbon dioxide to the atmosphere. Unfortunately, in such a model, the larger the number of participants the more inevitable the tragedy, since the larger the group, the less concern and trust the participants have for one another. Also, it is harder to detect free-riders in a larger group, and humans are prone to free ride, benefiting from the sacrifice of others while refusing to sacrifice themselves. Moreover, individual damage is likely to become imperceptible, preventing effective shaming mechanisms and reducing individual guilt.

Anthropogenic climate change and environmental destruction have additional complicating factors. Although there is a large body of scientific work showing that the human emission of greenhouse gases contributes to global climate change, it is still possible to entertain doubts about the exact scale of the effects we are causing – for example, whether our actions will make the global temperature increase by 2°C or whether it will go higher, even to 4°C – and how harmful such a climate change will be.

In addition, our bias towards the near future leaves us less able to adequately appreciate the graver effects of our actions, as they will occur in the more remote future. The damage we’re responsible for today will probably not begin to bite until the end of the present century. We will not benefit from even drastic action now, and nor will our children. Similarly, although the affluent countries are responsible for the greatest emissions, it is in general destitute countries in the South that will suffer most from their harmful effects (although Australia and the south-west of the United States will also have their fair share of droughts). Our limited and parochial altruism is not strong enough to provide a reason for us to give up our consumerist life-styles for the sake of our distant descendants, or our distant contemporaries in far-away places.

Given the psychological obstacles preventing us from voluntarily dealing with climate change, effective changes would need to be enforced by legislation. However, politicians in democracies are unlikely to propose such legislation. Effective measures will need to be tough, and so are unlikely to win a political leader a second term in office. Can voters be persuaded to sacrifice their own comfort and convenience to protect the interests of people who are not even born yet, or to protect species of animals they have never even heard of? Will democracy ever be able to free itself from powerful industrial interests? Democracy is likely to fail. Developed countries have the technology and wealth to deal with climate change, but we do not have the political will.

If we keep believing that responsibility is directly linked to causation, that we are more responsible for the results of our actions than the results of our omissions, and that if we share responsibility for an outcome with others our individual responsibility is lowered or removed, then we will not be able to solve modern problems like climate change, where each person’s actions contribute imperceptibly but inevitably. If we reject these beliefs, we will see that we in the rich, developed countries are more responsible for the misery occurring in destitute, developing countries than we are spontaneously inclined to think. But will our attitudes change?

Moral Bioenhancement
Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species. We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities. We have provided ourselves with the tools to end worthwhile life on Earth forever. Nuclear war, with the weapons already in existence today could achieve this alone. If we must possess such a formidable power, it should be entrusted only to those who are both morally enlightened and adequately informed.

Objection 1: Too Little, Too Late?
We already have the weapons, and we are already on the path to disastrous climate change, so perhaps there is not enough time for this enhancement to take place. Moral educators have existed within societies across the world for thousands of years – Buddha, Confucius and Socrates, to name only three – yet we still lack the basic ethical skills we need to ensure our own survival is not jeopardised. As for moral bioenhancement, it remains a field in its infancy.

We do not dispute this. The relevant research is in its inception, and there is no guarantee that it will deliver in time, or at all. Our claim is merely that the requisite moral enhancement is theoretically possible – in other words, that we are not biologically or genetically doomed to cause our own destruction – and that we should do what we can to achieve it.

Objection 2: The Bootstrapping Problem
We face an uncomfortable dilemma as we seek out and implement such enhancements: they will have to be developed and selected by the very people who are in need of them, and as with all science, moral bioenhancement technologies will be open to abuse, misuse or even a simple lack of funding or resources.

The risks of misapplying any powerful technology are serious. Good moral reasoning was often overruled in small communities with simple technology, but now failure of morality to guide us could have cataclysmic consequences. A turning point was reached at the middle of the last century with the invention of the atomic bomb. For the first time, continued technological progress was no longer clearly to the overall advantage of humanity. That is not to say we should therefore halt all scientific endeavour. It is possible for humankind to improve morally to the extent that we can use our new and overwhelming powers of action for the better. The very progress of science and technology increases this possibility by promising to supply new instruments of moral enhancement, which could be applied alongside traditional moral education.

Objection 3: Liberal Democracy – a Panacea?
In recent years we have put a lot of faith in the power of democracy. Some have even argued that democracy will bring an ‘end’ to history, in the sense that it will end social and political development by reaching its summit. Surely democratic decision-making, drawing on the best available scientific evidence, will enable government action to avoid the looming threats to our future, without any need for moral enhancement?

In fact, as things stand today, it seems more likely that democracy will bring history to an end in a different sense: through a failure to mitigate human-induced climate change and environmental degradation. This prospect is bad enough, but increasing scarcity of natural resources brings an increased risk of wars, which, with our weapons of mass destruction, makes complete destruction only too plausible.

Sometimes an appeal is made to the so-called ‘jury theorem’ to support the prospect of democracy reaching the right decisions: even if voters are on average only slightly more likely to get a choice right than wrong – suppose they are right 51% of the time – then, where there is a sufficiently large number of voters, a majority of the voters is almost certain to make the right choice.

However, if the evolutionary biases we have already mentioned – our parochial altruism and bias towards the near future – influence our attitudes to climatic and environmental policies, then there is good reason to believe that voters are more likely to get it wrong than right. The jury theorem then means it’s almost certain that a majority will opt for the wrong policies! Nor should we take it for granted that the right climatic and environmental policy will always appear in manifestoes. Powerful business interests and mass media control might block effective environmental policy in a market economy.
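
The arithmetic behind both claims is easy to check. The short sketch below is my own illustration, not the authors’: it computes the probability that a strict majority of n independent voters picks the right option when each voter is individually right with probability p, using the standard binomial calculation behind the Condorcet jury theorem (the function names are invented for this example).

    from math import exp, lgamma, log

    def log_binom(n: int, k: int) -> float:
        """Natural log of the binomial coefficient C(n, k), via log-gamma."""
        return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

    def majority_correct(n_voters: int, p_right: float) -> float:
        """Probability that a strict majority of n independent voters chooses
        the right option, when each voter is right with probability p_right."""
        majority = n_voters // 2 + 1
        return sum(
            exp(log_binom(n_voters, k)
                + k * log(p_right)
                + (n_voters - k) * log(1 - p_right))
            for k in range(majority, n_voters + 1)
        )

    # With voters slightly better than chance (p = 0.51), the majority becomes
    # almost certain to be right as the electorate grows; with voters slightly
    # worse than chance (p = 0.49), it becomes almost certain to be wrong --
    # the reversal described in the paragraph above.
    for n in (101, 10001, 100001):
        print(n, majority_correct(n, 0.51), majority_correct(n, 0.49))

Working in log space simply keeps the binomial coefficients from overflowing for large electorates; the calculation itself is the ordinary binomial tail sum.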

Conclusion
Modern technology provides us with many means to cause our downfall, and our natural moral psychology does not provide us with the means to prevent it. The moral enhancement of humankind is necessary if there is to be a way out of this predicament. If we are to avoid catastrophe brought about by the misguided employment of our power, we need to be morally motivated to a higher degree (as well as adequately informed about the relevant facts). A stronger focus on moral education could go some way towards achieving this, but, as already remarked, this method has had only modest success during the last couple of millennia. Our growing knowledge of biology, especially genetics and neurobiology, could deliver additional means of moral enhancement, such as drugs, genetic modifications, or devices to augment moral education.

The development and application of such techniques is risky – it is after all humans in their current morally-inept state who must apply them – but we think that our present situation is so desperate that this course of action must be investigated.

We have radically transformed our social and natural environments by technology, while our moral dispositions have remained virtually unchanged. We must now consider applying technology to our own nature, supporting our efforts to cope with the external environment that we have created.

Biomedical means of moral enhancement may turn out to be no more effective than traditional means of moral education or social reform, but they should not be rejected out of hand. Advances are already being made in this area. However, it is too early to predict how, or even whether, any moral bioenhancement scheme will be achieved. Our ambition is not to offer a definitive and detailed solution to climate change or other mega-problems. Perhaps there is no realistic solution. Our ambition at this point is simply to put moral enhancement in general, and moral bioenhancement in particular, on the table. Last century we spent vast amounts of resources increasing our ability to cause great harm. It would be sad if, in this century, we were to reject opportunities to increase our capacity to create benefits, or at least to prevent such harm.

© Prof. Julian Savulescu and Prof. Ingmar Persson 2012

Julian Savulescu is a Professor of Philosophy at Oxford University and Ingmar Persson is a Professor of Philosophy at the University of Gothenburg. This article is drawn from their book Unfit for the Future: The Urgent Need for Moral Enhancement (Oxford University Press, 2012).

33. The justification of punishment

By Victor Tadros

When an offender commits a crime, most of us think that the state is justified in punishing him or her, and perhaps even required to do so. But punishment causes offenders a great deal of harm, it costs a lot of money, and it harms not only offenders but also their family and friends. What could possibly justify doing these things?

34. Conscience today

By Paul Strohm


Among ethical concepts, conscience is a remarkable survivor.  During the 2000 years of its existence it has had ups and downs, but has never gone away.  Originating as Roman conscientia, it was adopted by the Catholic Church, redefined and competitively claimed by Luther and the Protestants during the Reformation, adapted to secular philosophy during the Enlightenment, and is still actively abroad in the world today.  Yet the last few decades have been cloudy ones for conscience, a unique time of trial.

The problem for conscience has always been its precarious authorization.   It is both a uniquely personal impulse and a matter of institutional consensus, a strongly felt personal view and a shared norm upon which all reasonable or ethical people are expected to agree.   As a result of its mixed mandate, conscience performs in differing and even contradictory ways.   It lends support to the dissenting individual or exponent of unpopular or even aberrant claims.  But it is also summoned in support of the norm, and broadly accepted ethical standards.

Each of these authorizations—the personal and the institutional—has its pitfalls. The fervent individual, summoned by burning personal conviction about the rightness of his or her cause, lies open to suspicions of solipsism or arrogance. But, on the other hand, institutionally- or state-sponsored conscience, or conscience speaking for settled public opinion, risks complacency or ethically stunted orthodoxy. One recalls the predicament of Huckleberry Finn, who suffers what he identifies as conscience pangs for his decision to assist Jim to escape from enslavement, when this bourgeois or ‘churchified’ conscience is obviously a false friend and an enemy to his superior ethical intuitions.

Despite such issues, conscience remains a force for much good in the world. Its most crucial function, and perhaps the one most in need of support, is its encouragement to the private individual struggling with institutional tyrannies—most dramatically, with various forms of state tyranny. We have witnessed the incarceration and continued surveillance of China’s Ai Weiwei. Ai has recently been called ‘China’s conscience’, but his more urgent need might be less public and more personal: the need to enjoy his own conscience undisturbed by governmental or other external intervention. Remarkable individuals like Ai have proven willing to endure sacrifice for conscientious belief–and sacrifice they have. Recently Lasantha Wickramatunge, a courageous Sri Lankan journalist, gave his life to expose corruption. He wrote a farewell dispatch, amounting to his own obituary, which concluded: ‘There is a calling that is yet above high office, fame, lucre and security. It is the call of conscience.’ Salman Taseer, governor of the Punjab province in Pakistan, declared in a 1 January 2011 television interview that ‘If I do not stand by my conscience, then who will?’—three days before his assassination.

Less dramatically, but still tellingly, one may consider some of the smaller cases of conscience that people confront daily. Explaining his break with his political party to support a faltering gay marriage bill, Fred W. Thiele Jr, a New York state Assemblyman, explained, ‘There’s that little voice inside of you that tells you when you’ve done something right, and when you’ve done something wrong... That little voice kept gnawing away at me.’

35. From Gilgamesh to Wall Street

In Economics of Good and Evil, Tomas Sedlacek asks: does it pay to be good? In order to answer this question, he looks at the way societies have reconciled their moral values with economic forces. He explores economic ideas in world literature, from concepts of productivity and employment in Gilgamesh to consumerism in Fight Club. In the videos below Sedlacek talks about why he wrote the book, before going on to explain ‘the story of Joseph and bastard-Keynesianism’.

[Embedded video: Tomas Sedlacek on why he wrote the book]

[Embedded video: Tomas Sedlacek on ‘the story of Joseph and bastard-Keynesianism’]

Tomas Sedlacek works for the National Economic Council in the Czech Republic and is a former economic advisor to President Václav Havel. The Yale Economic Review called him one of the ‘5 Hot Minds in Economics’. He will be talking about his book at the RSA in London on Thursday 16 June.

36. Was Iraq a just war?

By David Fisher

There has been much recent debate about whether the 2003 Iraq War was legal, with both Tony Blair and his Attorney General summoned before the Chilcot enquiry to give evidence on this. But a more fundamental question is whether the war was moral.

On this question the Chilcot enquiry has been silent, perhaps reflecting a more general scepticism in society about whether moral questions can have objective answers. But there is a way of thinking going back to Aquinas, Aristotle and beyond that insists that there are rationally based ways to answer moral questions.

A key contribution to this is furnished by the just war tradition. This sets a number of tests which have to be met if a war is to be just. It has to be undertaken for a just cause, with right intention, with competent authority, and as a last resort; the harm judged likely to result must not outweigh the good to be achieved, taking into account the probability of success; in its conduct the principles of proportion and non-combatant immunity have to be met; and the war must end in a just peace.

This may appear over-prescriptive: erecting so many hurdles that war would become impossible. But the just war tradition recognises that wars can be just and may sometimes be necessary. What the tradition insists on are two fundamental requirements, as simple as they are rationally compelling: is there a just cause and will the harm likely to be caused by military action outweigh the good to be achieved by that cause? In other words, is war likely to bring about more good than harm?

So how does the Iraq War fare against these criteria?

Different reasons were adduced at different times for the war. But the declared ground common to both the US and UK Governments was to rid Iraq of its weapons of mass destruction, so enforcing UN Security Council Resolutions.

We now know that Iraq did not have weapons of mass destruction. But even that startling disclosure by the Iraq Survey Group would not necessarily invalidate the coalition’s disarmament objective as just cause if there had been strong grounds for believing that Saddam had such weapons.

The problem is that the evidence for such weapons was ‘sporadic and patchy’, in the words of the official Butler report. The Governments’ claim that they were acting on behalf of the UN was also weakened by the lack of substantial international support for military operations, evidenced by the reluctance of the Security Council explicitly to endorse such action through a second resolution. This, in turn, reflected concern that military action was not being undertaken as a last resort: that Saddam should have been given more time to convince the inspectors he had abandoned WMD. Doubt over whether each of these just war conditions was met did not amount to a knock-down argument against war. But the doubts, taken together, reinforced one another and so strengthened the concern that there was not a sufficient just cause.

It is, moreover, the single most serious charge against those who planned the Iraq War that they massively under-estimated the harm that would be likely to be caused by military action. Coalition leaders could not reasonably be expected to have forecast the precise casualty levels that would follow military action. But the coalition leaders can be criticised for failing to give sufficient consideration to what would be the effects of regime change and for not formulating robust plans promptly to re-establish civil governance in its wake and ensure a peaceful transition to democracy. They thus acted with a degree of recklessness. Just as they had undertaken worst case assessments of Saddam’s WMD capability, so they had undertaken best case assessments of what would happen after the regime had been changed.

The Iraq War was, like most wars, fought from a mixture of

37. Is free will required for moral accountability?

By Joshua Knobe


Imagine that tomorrow’s newspaper comes with a surprising headline: ‘Scientists Discover that Human Behavior is Entirely Determined.’ Reading through the article, you learn more about precisely what this determinism entails. It turns out that everything you do – every behavior, thought and decision – is completely caused by prior events, which are in turn caused by earlier events… and so forth, stretching back in a long chain all the way to the beginning of the universe.

A discovery like this one would naturally bring up a difficult philosophical question. If your actions are completely determined, can you ever be morally responsible for anything you do? This question has been a perennial source of debate in philosophy, with some philosophers saying yes, others saying no, and millennia of discussion that leave us no closer to a resolution.

As a recent New York Times article explains, experimental philosophers have been seeking to locate the source of this conundrum in the nature of the human mind. The key suggestion is that the sense of puzzlement we feel in response to this issue arises from a conflict between two different psychological processes. Our capacity for abstract, theoretical reasoning tells us: ‘Well, if you think about it rationally, no one can be responsible for an act that is completely determined.’ But our capacity for immediate emotional responses gives us just the opposite answer: ‘Wait! No matter how determined people might be, they just have to be responsible for the terrible things they do…’

To put this hypothesis to the test, the philosopher Shaun Nichols and I conducted a simple experiment. All participants were asked to imagine a completely deterministic universe (‘Universe A’). Then different participants were given different questions that encouraged different modes of thought. Some were given a question that encouraged more abstract theoretical reasoning:

In Universe A, is it possible for a person to be fully morally responsible for their actions?

Meanwhile, other participants were given a question that encouraged a more emotional response:

In Universe A, a man named Bill has become attracted to his secretary, and he decides that the only way to be with her is to kill his wife and three children. He knows that it is impossible to escape from his house in the event of a fire. Before he leaves on a business trip, he sets up a device in his basement that burns down the house and kills his family.

Is Bill fully morally responsible for killing his wife and children?

The results showed a striking difference between the two conditions. Participants in the abstract reasoning condition overwhelmingly answered that no one could ever be morally responsible for anything in Universe A. But participants in the more emotional condition had a very different reaction. Even though Bill was described as living in Universe A, they said that he was fully morally responsible for what he had done. (Clearly, this involves a kind of contradiction: it can’t be that no one in Universe A is morally responsible for anything but, at the same time, this one man in Universe A actually is morally responsible for killing his family.)

Of course, it would be foolish to suggest that experiments like this one can somehow solve the problem of free will all by themselves. Still, it does appear that a close look at the empirical data can afford us a certain kind of insight. The results help us to get at the roots of our sense that there is a puzzle here and, thereby, to open up new avenues of inquiry that might not otherwise have been possible.

38. What might be a constructive vision for the US?

By Ervin Staub


In difficult times like today, people need a vision or ideology that gives them hope for the future. Unfortunately, groups often adopt destructive visions, which identify other groups as enemies who supposedly stand in the way of creating a better future. A constructive, shared vision, which joins groups, reduces the chance of hostility and violence in a society.

A serious failure of the Obama administration has been not to offer, and help people embrace, such a vision. Policies by themselves, such as health care and limited regulation of the financial system, even if beneficial, don’t necessarily do this. A constructive vision or ideology must combine an inspiring vision of social arrangements, of the relations between individuals and groups and the nature of society, and actions that aim to fulfill the vision. A community that includes all groups, recreating a moral America, and rebuilding connections to the rest of the world could be elements of such a vision.

In difficult times, people need security, connection to each other, a feeling of effectiveness, and an understanding of the world and their place in it. Being part of a community can help fulfill these needs. The work programs of the Roosevelt administration during the Great Depression provided people with a livelihood. But they also gave them dignity and told them that they were part of the national community.

Community means accepting and embracing differences among us. Especially important among the influences that lead groups of people to turn against each other is the drawing of a line between us and them, and seeing the other in a negative light. The words and actions of Nelson Mandela and Abraham Lincoln propagated acceptance of the other even after extreme violence. The U.S. is a hugely varied country, and for every one of us there can be many of “them.” But others’ differentness can enrich us. People travel to distant places just to catch a glimpse of other people and their lives. As much research shows, real contact and deep engagement, working for shared goals across races, religions, classes, and political beliefs, help to overcome prejudice and help us to see our shared humanity. Engaging with each other’s differentness here at home can connect us to each other–and increase our satisfaction in life.

Creating a vision—and reality—of community also requires addressing the huge financial inequality in America. Research shows that during periods of greater income inequality, people are less satisfied. This is true of liberals; perhaps surprisingly, and to a lesser degree, it is also true of conservatives. Inequality presumably reduces people’s feeling of community. The financial crisis provided an opportunity to begin to address inequality, to use laws, policies, and public opinion to limit compensation in financial institutions and corporations. Roosevelt had to fight for his programs. This time there has not been enough “political will,” that is, commitment and courage, to do this.

Good connection to the rest of the world also increases our experience of community—and our security. For many decades, the United States was greatly respected and admired. Now, as I travel around the world in the course of my work on preventing violence between groups and promoting reconciliation, most of the people I talk to are highly critical of us. But my sense is that many yearn to again trust and respect us.

In his Cairo speech, as President Obama reached out to the Muslim world, he offered an image of connection between countries and peoples. But words alone are not enough, and there has been little follow up. He also continued with policies of the Bush administration, such as extraordinary rendition, handing over suspected terrorists to other countries for interrogation using torture. We Americans believe we are a moral people; both for

39. 6 Myths about Teens & Christian Faith in America

You may have read the recent CNN article, “More teens becoming ‘fake’ Christians,” which extensively cited the research of Kenda Creasy Dean and her book Almost Christian: What the Faith of Our Teenagers is Telling the American Church. In the original article below, Dean expands on these ideas, clarifies others, and explains just how American teens are practicing their Christian faith.

By Kenda Creasy Dean


Have you heard this one? Mom is angling to get 16-year-old Tony to come to church on Sunday, and Tony will have none of it. “Don’t you get it?” he yells, pushing his chair away from the table. “I hate church! I am not like you! The church is full of hypocrites!” Dramatic exit, stage right.

This story sounds true – but it isn’t. Today’s parents and teenagers rarely fight about religion, according to the 2005 National Study of Youth and Religion – the largest study of teenage faith to date. Interviews with more than 3300 teenagers and their parents showed that American teenagers mirror their parents’ religious faith to an astonishing degree. Teenagers and parents seem to be on good terms about religion because 1) they believe pretty much the same things; and 2) religion doesn’t matter enough to them to fight about it.

Three out of four American teenagers between the ages of 13 and 17 call themselves Christians, yet most adhere to a default religious setting that does not truly reflect any of the world’s great religions. Instead, say NSYR researchers, American teenagers’ de facto religious creed is “Moralistic Therapeutic Deism,” a view that religion is a “very nice thing” that makes us feel good but leaves God in the background.

How did that happen? Short answer: This is what parents and churches are teaching them.

Moralistic Therapeutic Deism – the view that religion is supposed to make us feel good about ourselves and turn us into nicer people – appears in American teenagers of all religious persuasions. On the surface, that sounds like a good thing; at the very least, perhaps it is a corrective to abuses conducted in the name of religion.

Yet MTD is also a self-serving approach to religious faith. Moralistic therapeutic deist youth view God as a divine butler, invisible unless called upon, whose primary purpose is to make them feel good and to sanction things that they want to do anyway. Researchers were mum on MTD’s effects on other religious traditions (the number of non-Christian religious teenagers in the sample was small enough that researchers were cautious about their claims), but they were unsparing when it came to American churches. In Soul Searching: The Religious and Spiritual Lives of American Teenagers, lead researcher Christian Smith claims that Moralistic Therapeutic Deism is now the “dominant religion in the United States, having supplanted Christianity in American churches.”

I helped interview teenagers for the NSYR, an exercise that convinced me more than ever that parents, congregations, and pastors are operating on some pretty shaky assumptions about Christian faith and teenagers. Other religious leaders may comment on the implications of this study for their own faith traditions, but let me

40. Morality, Irony, and Fiction

I shouldn't use such a vast and portentous title for a post that is essentially just saying, "Go read this," but I will anyway, because I think Wyatt Mason's latest post at Harper's hints toward some ideas that are worth considering:

The animating idea of such a book, whether for children or adults, is morally objectionable. To account with the death of 6,000,000 innocents, the author invents a fictional “innocent” whose ironic fate is meant to offer a poignant window onto actual mass murder. Why morally objectionable? It is not that I object to fictionalization of the factual. Rather, I object to the notion that the fake death of a fake German child–through a series of contrivances that guarantee his irony-drenched death–is put forward as a representative means for readers to empathize anew with real children and real adults who really died. How else, such a narrative strategy suggests, could one empathize with the gruesome abstraction 6,000,000 innocents but by the creation of an ironical “innocent”?

Here we see the limits of irony as a narrative strategy.

I've held various views about fiction and morality over the years, sometimes rejecting any relationship between the two terms, sometimes even rejecting the idea of morality itself as a too-convenient catch-all to just mean "stuff I don't like". Over the past few years, though, I've inched closer and closer to seeing that fiction writers need to have some sort of (for lack of a term I'm more comfortable with) moral awareness. I still hate the sound of those two words together, I still remain deeply skeptical of any use of the word "morality", and yet I haven't come up with something better to describe my discomfort and sometimes flat-out anger at the ways many writers create fiction about, for instance, atrocities. Child abuse and sexual abuse are other subjects I more often than not find exploitative in fiction -- the ways writers write about them frequently make me think they are taking shortcuts to emotion, and using such things as relatively easy ways to make their readers feel things. In most cases, fiction (in the broad sense, including movies) that doesn't complicate its own desire to make an audience feel things is fiction that I am, generally speaking, annoyed by. (I was once going to write about this tendency in Amanda Eyre Ward's Forgive Me and Christian Jungersen's The Exception, but I had such vehement disagreement with the moral equations of the novels' narratives that I was incapable of writing about either book: I gaped at their awfulness and could only emit sputters and gasps.)

And yet at the same time, the subject matter that causes writers and artists to create imaginative structures that feel, yes, morally objectionable to me is also the subject matter I most want writers to tackle -- the atrocities, the horrors, the ghastly things that we humans commit against each other, the stuff that often makes me cynical and even misanthropic, the evidence that exhausts my better nature, the material of our worst tendencies. Perhaps that passion, that desire is what makes my disappointment so strong and often leads me, when trying to critique such things, to be inarticulate.

In any case, I hope Wyatt Mason continues to write about this subject.

41. Random House 'Morality Clause' Only in UK ...

I just found a post that sheds some light on the Random House morality clause issue which I mentioned recently. According to GalleyCat, the purported morality clause is present only in UK Random House contracts. An agent questioned about the issue observed that "there's a lot of strange language that goes into UK contracts that has little bearing on the American market."

So US Random House authors...have a good time.
