Blog Posts Tagged with: philosophy
Results 1 - 25 of 315
1. Efficient causation: Our debt to Aristotle and Hume

Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.

Aristotle, by Raphael Sanzio. Public domain via Wikimedia Commons.

From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.

However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).

Portrait of David Hume, by Allan Ramsay (1766). Public domain via Wikimedia Commons.

Admittedly, Lewis's particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche's occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.

And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.

Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.

Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr

The post Efficient causation: Our debt to Aristotle and Hume appeared first on OUPblog.

2. Chicken by Chicken: Accepting Who We Are.

Hi, folks, this week is another response  blog. I heard a song called Constellations by Brendan James and it resonated with me. This is a long ramble, a thought journey, inspired by that song, and I hope that you find something to take with you.

I feel like I don't really understand the world, and it makes me cry. I feel so out of step with the seasons and times. I can't stand reading the news, or even checking out my Facebook half the time. There are too many wars. Nation against nation. Neighbor against neighbor. Here inside me, I hunger to see people come together, to take a deep breath and just figure out where to go from here. I hope bridges are built, coalitions are made, and every voice is heard. I dream that we would all listen and find better ways. I don't want to join the madding crowd that wants to heckle the stupid, drop bombs, and dehumanize others, all in the name of a better world.

I see the Universe at night and how it is able to spin out wondrous things and at the same time wreak great destruction. I feel the transience of life and yet eternity hums in my heart. Everyone I know is trying to get through the day without dwelling on the darkness. Some take the "be positive about everything" route. Some take the "find a cause" route. I swing between the route of despair and the route of hope, that I might be the voice that breaks through the noise and says something helpful.

I have had unshakable confidence throughout my life that if I got a chance on a stage I would move the hearts of those shivering on the edges. I have believed that I would grow like a wild weed, but now see so clearly that my life is just a breath and is gone. A Monarch butterfly was caught in between the window and the screen in my house. Some hapless caterpillar crawled between the window and screen and formed a chrysalis. The butterfly emerged and now would die if I did not figure out a gentle way to remove the screen and let it go on its way to the graveyards of Mexico for the Day of the Dead. When I figured out a way to set the butterfly free, it occurred to me that all of my life might be just for that. Perhaps those beautiful wings have more purpose than I will ever have.

This brings me to the heart of this thought journey. I have hungered for purpose. I have believed all my life that a day was coming that the gifts within me would become visible, like the span over us -- Orion, the Pleiades, the evening star, the moon, and the swath of the Milky Way. I have believed my gifts would come clear like those lights in the heavens. But here I am making less than minimum wage and imploding under the stress of another miss in terms of my intended goal.

In the end we are not in control of our story, and hence I must embrace the days given us. I find embracing the smallness of who I am is difficult. Megalomania is expected in rock stars, but not here in Suburbia. I have to laugh at myself a little and laugh at my little dramas. There is certainly a ridiculousness to me.

Ah, you are just an onion flower in the yard. Most folks will pass by the onion flower but, hey, go ahead and bloom. Touch ten hearts, fifty hearts. A copper star for you. Not the silver, not the gold. That's all, dear. Work it out.

Thank you for dropping by and remember every little thing shines.  See you next week.

This week is a page from my Halloween project: CHICKENS TAKE OVER HALLOWEEN. 


Here is a quote for your pocket.
The end of law is not to abolish or restrain, but to preserve and enlarge freedom. For in all the states of created beings capable of law, where there is no law, there is no freedom. (John Locke)

3. Reading Marcus Aurelius’s Meditations with a modern perspective

Marcus Aurelius’s Meditations is a remarkable phenomenon, a philosophical diary written by a Roman emperor, probably in 168-80 AD, and intended simply for his own use. It offers exceptional insights into the private thoughts of someone who had a very weighty public role, and may well have been composed when he was leading a military campaign in Germany. What features might strike us today as being especially valuable, bearing in mind our contemporary concerns?

At a time when the question of public trust in politicians is constantly being raised, Marcus emerges, in this completely personal document, as a model of integrity. Not only does he define for himself his political ideal (“a monarchy that values above all things the freedom of the subject”) and spell out what this ideal means in his reflections on the character and lifestyle of his adoptive father and predecessor as emperor, Antoninus Pius, but he also reminds himself repeatedly of the triviality of celebrity, wealth and status, describing with contempt the lavish purple imperial robe he wore as stained with “blood from a shellfish”. Of course, Marcus was not a democratic politician and, with hindsight, we can find things to criticize in his acts as emperor — though he was certainly among the most reasonable and responsible of Roman emperors. But I think we would be glad if we knew that our own prime ministers or presidents approached their role, in their most private hours, with an equal degree of thoughtfulness and breadth of vision.

Another striking feature of the Meditations, and one that may well resonate with modern experience, is the way that Marcus aims to combine a local and universal perspective. In line with the Stoic philosophy that underpins his diary, Marcus often recalls that the men and women he encounters each day are fellow-members of the brotherhood of humanity and fellow-citizens in the universe. He uses this fact to remind himself that working for his brothers is an essential part of his role as an emperor and a human being. This reminder helps him to counteract the responses of irritation and resentment that, he admits, the behavior of other people might otherwise arouse in him. At a time when we too are trying to bridge and negotiate local and global perspectives, Marcus’s thoughts may be worth reflecting on. Certainly, this seems to me a more balanced response than ignoring the friend or partner at your side in the café while engrossed in phone conversations with others across the world.

Bust of Marcus Aurelius, Musée Saint-Raymond. By Pierre-Selim. CC-BY-SA-3.0 via Wikimedia Commons.

More broadly, Marcus, again in line with Stoic thinking, underlines that the ethics of human behavior need to take account of the wider fact that human beings form an integral part of the natural universe and are subject to its laws. Of course, we may not share his confidence that the universe is shaped by order, structure and providential care — though I think it is worth thinking seriously about just how much of that view we have to reject. But the looming environmental crisis, along with the world-wide rise in obesity and the alarming healthcare consequences, represent for us a powerful reminder that we need to rethink the ethics of our relationship to the natural world and re-examine our understanding of what is natural in human life. Marcus’s readiness to see himself, and humanity, as inseparable parts of a larger whole, and to subordinate himself to that whole, may serve as a striking example to us, even if the way we pursue that thought is likely to be different from that of Stoicism.

Another striking theme in the Meditations is the looming presence of death, our own and those of others we are close to. This might seem very alien to the modern secular Western world, where death is often either ignored or treated as something too terrible to mention. But the fact that Marcus's attitude is so different from our own may be precisely what makes it worth considering. He not only underlines the inevitability of death and the fact that death is a wholly natural process, and for that reason something we should accept. He couples this with the claim that knowledge of the certainty of death does not undermine the value of doing all that we can while alive to lead a good human life and to develop in ourselves the virtues essential for this life. Although such ideas have often formed part of religious responses to death (which have lost their hold over many people today), Marcus puts them in a form that modern non-religious people can accept. This is another reason, I think, why Marcus's philosophical diary can speak to us today in a language we can make sense of.

Featured image: Marcus Aurelius’s original statue in Rome, by Zanner. Public domain via Wikimedia Commons.

The post Reading Marcus Aurelius’s Meditations with a modern perspective appeared first on OUPblog.

4. When tragedy strikes, should theists expect to know why?

My uncle used to believe in God. But that was before he served in Iraq. Now he’s an atheist. How could a God of perfect power and perfect love allow the innocent to suffer and the wicked to flourish?

Philosophers call this the problem of evil. It’s the problem of trying to reconcile two things that at first glance seem incompatible: God and evil. If the world were really governed by a being like God, shouldn’t we expect the world to be a whole lot better off than it is? But given the amount, kind, and distribution of evil things on earth, many philosophers conclude that there is no God. Tragedy, it seems, can make atheism reasonable.

Theists—people who believe in God—may share this sentiment in some ways, but in the end they think that the existence of God and the existence of evil are compatible. But how could this be? Well, many theists attempt to offer what philosophers call a theodicy – an explanation for why God would allow evils of the sort we find.

Perhaps good can't exist without evil. But would that make God's existence dependent on another? Perhaps evil is the necessary byproduct of human free will. But would that explain evils like Ebola and tsunamis? Perhaps evil is a necessary ingredient to make humans stronger and more virtuous. But would that justify a loving human father in inflicting similar evil on his children? Other theists reject the attempt to explain the existence of evils in our world and yet deny that the existence of unexplained evil is a problem for rational belief in God.

The central idea is simple: just as a human child cannot decipher all of the seemingly pointless things that her parent does for her benefit, so, too, we cannot decipher all of the seemingly pointless evils in our world. Maybe they really are pointless, but maybe they aren’t — the catch is that things would look the same to us either way. And if they would look the same either way, then the existence of these evils cannot be evidence for atheism over theism.

The Good and Evil Angels by William Blake. Public domain via Wikimedia Commons.

Philosophers call such theists ‘skeptical’ theists since they believe that God exists but are skeptical of our abilities to decipher whether the evils in our world are justified just by considering them.

The debate over the viability of skeptical theism involves many issues in philosophy including skepticism and ethics. With regard to the former, how far does the skepticism go? Should theists also withhold judgment about whether a particular book counts as divine revelation or whether the apparent design in the world is actual design? With regard to the latter, if we should be skeptical of our abilities to determine whether an evil we encounter is justified, does that give us a moral reason to allow it to happen?

It seems that skeptical theism might induce a kind of moral paralysis as we move through the world unable to see which evils further God's plans and which do not.

Skeptical theists have marshalled replies to these concerns. Whether the replies are successful is up for debate. In either case, the renewed interest in the problem of evil has resurrected one of the most prevalent responses to evil in the history of theism — the response of Job when he rejects the explanations of his calamity offered by his friends and yet maintains his belief in God despite his ignorance about the evils he faces.

Headline image credit: Job’s evil dreams. Watercolor illustration by William Blake. Public domain via Wikimedia Commons.

The post When tragedy strikes, should theists expect to know why? appeared first on OUPblog.

5. Plato and contemporary bioethics

Since its advent in the early 1970s, bioethics has exploded, with practitioners’ thinking expressed not only in still-expanding scholarly venues but also in the gamut of popular media. Not surprisingly, bioethicists’ disputes are often linked with technological advances of relatively recent vintage, including organ transplantation and artificial-reproductive measures like preimplantation genetic diagnosis and prenatal genetic testing. It’s therefore tempting to figure that the only pertinent reflective sources are recent as well, extending back — glancingly at most — to Immanuel Kant’s groundbreaking 18th-century reflections on autonomy. Surely Plato, who perforce could not have tackled such issues, has nothing at all to contribute to current debates.

This view is false — and dangerously so — because it deprives us of avenues and impetuses of reflection that are distinctive and could help us negotiate present quandaries. First, key topics in contemporary bioethics are richly addressed in Greek thought both within Plato’s corpus and through his critical engagement with Hippocratic medicine. This is so regarding the nature of the doctor-patient tie, medical professionalism, and medicine’s societal embedment, whose construction ineluctably concerns us all individually and as citizens irrespective of profession.

Second, the most pressing bioethical topics — whatever their identity — ultimately grip us not on technological grounds but instead for their bearing on human flourishing (in Greek, eudaimonia). Surprisingly, this foundational plane is often not singled out in bioethical discussions, which regularly tend toward circumscription. The fundamental grip obtains either way, but its neglect as a conscious focus harms our prospects for existing in a way that is most thoughtful, accountable, and holistic. Again a look at Plato can help, for his handling of all salient topics shows fruitfully expansive contextualization.

AMA Code of Medical Ethics. Public domain via Wikimedia Commons

Regarding the doctor-patient tie, attempts to circumvent Scylla and Charybdis — extremes of paternalism and autonomy, both oppositional modes — are garnering significant bioethical attention. Dismayingly given the stakes, prominent attempts to reconceive the tie fail because they veer into paternalism, allegedly supplanted by autonomy’s growing preeminence in recent decades. If tweaking and reconfiguration of existing templates are insufficient, what sources not yet plumbed might offer fresh reference points for bioethical conversation?

Prima facie, invoking Plato, staunch proponent of top-down autocracy in the Republic, looks misguided. In fact, however, the trajectory of his thought — Republic to Laws via the Statesman — provides a rare look at how this profound ancient philosopher came at once to recognize core human fallibility and to stare firmly at its implications without capitulating to pessimism about human aptitudes generally. Captivated no longer by the extravagant gifts of a few — philosophers of Kallipolis, the Republic’s ideal city — Plato comes to appreciate for the first time the intellectual and ethical aptitudes of ordinary citizens and nonphilosophical professionals.

Human motivation occupies Plato in the Laws, his final dialogue. His unprecedented handling of it there and philosophical trajectory on the topic warrant our consideration. While the Republic shows Plato’s unvarnished confidence in philosophers to rule — indeed, even one would suffice (502b, 540d) — the Laws insists that human nature as such entails that no one could govern without succumbing to arrogance and injustice (713c). Even one with “adequate” theoretical understanding could not properly restrain himself should he come to be in charge: far from reliably promoting communal welfare as his paramount concern, he would be distracted by and cater to his own yearnings (875b). “Adequate” understanding is what we have at best, but only “genuine” apprehension — that of philosophers in the Republic, seen in the Laws as purely wishful — would assure incorruptibility.

The Laws’ collaborative model of the optimal doctor-patient tie in Magnesia, that dialogue’s ideal city, is one striking outcome of Plato’s recognition that even the best among us are fallible in both insight and character. Shared human aptitudes enable reciprocal exchanges of logoi (rational accounts), with patients’ contributing as equal, even superior, partners concerning eudaimonia. This doctor-patient tie is firmly rooted in society at large, which means for Plato that there is close and unveiled continuity between medicine and human existence generally in values’ application. From a contemporary standpoint, the Laws suggests a fresh approach — one that Plato himself arrived at only by pressing past the Republic’s attachment to philosophers’ profound intellectual and values-edge, whose bioethical counterpart is a persistent investment in the same regarding physicians.

If values-spheres aren’t discrete, it’s unsurprising that medicine’s quest to demarcate medical from non-medical values, which extends back to the American Medical Association’s original Code of Medical Ethics (1847), has been combined with an inability to make it stick. In addition, a tension between the medical profession’s healing mission and associated virtues, on the one side, and other goods, particularly remuneration, on the other, is present already in that code. This conflict is now more overt, with rampancy foreseeable in financial incentives’ express provision to intensify or reduce care and to improve doctors’ behavior without concern for whether relevant qualities (e.g., self-restraint, courage) belong to practitioners themselves.

Though medicine’s greater pecuniary occupation is far from an isolated event, the human import of it is great. Remuneration’s increasing use to shape doctors’ behavior is harmful not just because it sends the flawed message that health and remuneration are commensurable but for what it reveals more generally about our priorities. Plato’s nuanced account of goods (agatha), which does not orbit tangible items but covers whatever may be spoken of as good, may be helpful here, particularly its addressing of where and why goods are — or aren’t — cross-categorically translatable.

Furthermore, if Plato is right that certain appetites, including that for financial gain, are by nature insatiable — as weakly susceptible to real fulfillment as the odds of filling a sieve or leaky jar are dim (Gorgias 493a-494a) — then even as we hope to make doctors more virtuous via pecuniary incentives, we may actually be promoting vice. Engagement with Plato supports our retreat from calibrated remuneration and greater devotion to sources of inspiration that occupy the same plane of good as the features of doctors we want to promote. If the goods at issue aren’t commensurable, then the core reward for right conduct and attitudes by doctors shouldn’t be monetary but something more in keeping with the tier of good reflected thereby, such as appreciative expressions visible to the community (a Platonic example is seats of honor at athletic games, Laws 881b). Of course, this directional shift shouldn’t be sprung on doctors and medical students in a vacuum. Instead, human values-education (paideia) must be devotedly and thoughtfully instilled in educational curricula from primary school on up. From this vantage point, Plato’s vision of paideia as a lifelong endeavor is worth a fresh look.

As Plato rightly reminds us, professional and other endeavors transpire and gain their traction from their socio-political milieu: we belong first to human communities, with professions’ meaning and broader purposes rooted in that milieu. The guiding values and priorities of this human setting must be transparent and vigorously discussed by professionals and non-professionals alike, whose ability to weigh in is, as the Laws suggests, far more substantive than intra-professional standpoints usually acknowledge. This same line of thought, combined with Plato’s account of universal human fallibility, bears on the matter of medicine’s continued self-policing.

Linda Emanuel claims that “professional associations — whether national, state or county, specialty, licensing, or accrediting — are the natural parties to articulate tangible standards for professional accountability. Almost by definition, there are no other entities that have such ability and extensive responsibility to be the guardians of health care values — for the medical profession and for society” (53-54). Further, accountability “procedures” may include “a moral disposition, with only an internal conscience for monitoring accountability” (54). On grounds above all of our fallibility, which is strongly operative both with and absent malice, the Laws foregrounds reciprocal oversight of all, including high officials, not just from within but across professional and sociopolitical roles; crucially, no one venue is the arbiter in all cases. Whatever the number of intra-medical umbrellas that house the profession’s oversight, transparency operates within circumscribed bounds at most, and medicine remains the source of the very standards to which practitioners — and “good” patients — will be held. Moreover, endorsing moral self-oversight here without undergirding pedagogical and aspirational structures is less likely to be effective than to hold constant or even amplify countervailing motivations.

As can be only briefly suggested here, not only the themes but also their intertwining makes further bioethical consideration of Plato vastly promising. I’m not proposing our endorsement of Plato’s account as such. Rather, some positions themselves, alongside the rich expansiveness and trajectory of his explorations, are two of Plato’s greatest legacies to us — both of which, however, have been largely untapped to date. Not only does reflection on Plato stand to enrich current bioethical debates regarding the doctor-patient tie, medical professionalism, and medicine’s societal embedment, it offers a fresh orientation to pressing debates on other bioethical topics, prominent among them high-stakes discord over the technologically-spurred project of radical human “enhancement.”

Headline image credit: Doctor Office 1. Photo by Subconsci Productions. CC BY-SA 2.0 via Flickr

The post Plato and contemporary bioethics appeared first on OUPblog.

6. The Meaning of Human Existence

Through a brilliant melding of science and philosophy, "The father of sociobiology" boldly tackles humanity's biggest questions — namely, what is our role on earth, and how can we continue to evolve as a species? Wilson's writing style is accessible as always, and his passion and empathy continue to push us toward greater levels of [...]

7. Should Britain intervene militarily to stop Islamic State?

Britain and the United States have been suffering from intervention fatigue. The reason is obvious: our interventions in Iraq and Afghanistan have proven far more costly and their results far more mixed and uncertain than we had hoped.

This fatigue manifested itself almost exactly a year ago, when Britain's Parliament refused to let the Government offer military support to the U.S. and France in threatening punitive strikes against Syria's Assad regime for its use of chemical weapons. Since then, however, developments in Syria have shown that our choosing not to intervene doesn't necessarily make the world a safer place. Nor does it mean that distant strife stays away from our shores.

There is reason to suppose that the West’s failure to intervene early in support of the 2011 rebellion against the repressive Assad regime left a vacuum for the jihadists to fill—jihadists whose ranks now include several hundred British citizens.

A Syrian woman sits in front of her home as Free Syrian Army fighters stand guard during a break in fighting in a neighborhood of Damascus, Syria. April 1, 2012. Photo by Freedom House, CC BY 2.0 via Flickr.

There’s also some reason to suppose that the West’s failure to support Georgia militarily against Russia in 2008, and to punish the Assad regime for its use of chemical weapons, has encouraged President Putin to risk at least covert military aggression in Ukraine. I’m not saying that the West should have supported Georgia and punished Assad. I’m merely pointing out that inaction has consequences, too, sometimes bad ones.

Now, however, despite our best efforts to keep out of direct involvement in Syria, we are being drawn in again. The rapid expansion of ‘Islamic State’, involving numerous mass atrocities, has put back on our national desk the question of whether we should intervene militarily to help stop them.

What guidance does the tradition of just war thinking give us in deliberating about military intervention? The first thing to say is that there are different streams in the tradition of just war thinking. In the stream that flows from Michael Walzer, the paradigm of a just war is national self-defence. More coherently, I think, the Christian stream, in which I swim, holds that the paradigm of a just war is the rescue of the innocent from grave injustice. This rescue can take either defensive or aggressive forms. The stipulation that the injustice must be ‘grave’ implies that some kinds of injustice should be borne rather than ended by war. This is because war is a destructive and hazardous business, and so shouldn't be ventured except for very strong reasons.

What qualifies as ‘grave’ injustice, then? In the 16th and 17th centuries just war theorists like Vitoria and Grotius proposed as candidates such inhumane social practices as cannibalism or human sacrifice. International law currently stipulates ‘genocide’. The doctrine of the Responsibility to Protect (‘R2P’) would broaden the law to encompass mass atrocity. Let's suppose that mass atrocity characteristic of a ruling body is just cause for military intervention. Some nevertheless argue, in the light of Iraq and Afghanistan, that intervention is not an appropriate response, because it just doesn't work. Against that conclusion, I call two witnesses, both of whom have served as soldiers, diplomats, and politicians, and have had direct experience of responsibility for nation-building: Paddy Ashdown and Rory Stewart.

A Royal Air Force Merlin helicopter delivers supplies to an element of the Queens Royal Lancers during a patrol in Maysan Province, Iraq in 2007. Photo: Cpl Ian Forsyth RLC/MOD, via Wikimedia Commons

Ashdown, the international High Representative for Bosnia and Herzegovina from 2002-6, argues that “[h]igh profile failures like Iraq should not … blind us to the fact that, overall, the success stories outnumber the failures by a wide margin”.

Rory Stewart was the Coalition Provisional Authority’s deputy governor of two provinces of southern Iraq from 2003-4. He approached the task of building a more stable, prosperous Iraq with optimism, but experience brought him disillusion. Nevertheless, Stewart writes that “it is possible to walk the tightrope between the horrors of over-intervention and non-intervention; that there is still a possibility of avoiding the horrors not only of Iraq but also of Rwanda; and that there is a way of approaching intervention that can be good for us and good for the country concerned”.

Notwithstanding that, one lesson from our interventions in Iraq and Afghanistan—and indeed from British imperial history—is that successful interventions in foreign places, which go beyond the immediate fending off of indiscriminate slaughter on a massive scale to attempting some kind of political reconstruction, cannot be done quickly or on the cheap.

Here's where national interest comes in. National interest isn't necessarily immoral. A national government has a moral duty to look after the well-being of its own people and to advance its genuine interests. What's more, some kind of national interest must be involved if military intervention is to attract popular support, without which intervention is hard, eventually impossible, to sustain. One such interest can be moral integrity. Nations usually care about more than just being safe and fat. Usually they want to believe that they are doing the right thing, and they will tolerate the costs of war—up to a point—in a just cause that looks set to succeed. I have yet to meet a Briton who is not proud of what British troops achieved in Sierra Leone in the year 2000, even though Britain had no material stake in the outcome of that country's civil war.

However, the nation’s interest in its own moral integrity alone will probably not underwrite military intervention that incurs very heavy costs. So other interests—such as national security—are needed to stiffen popular support for a major intervention. It is not unreasonable for a national people to ask why they should bear the burdens of military intervention, especially in remote parts of the world.

It is not unreasonable for them to ask why their sons and daughters should be put in harm’s way. And the answer to those reasonable questions will have to present itself in terms of the nation’s own interests. This brings us back to Syria and Islamic State. Repressive though the Assad regime was and is, and nasty though the civil war is, it probably wasn’t sufficiently in Britain’s national interest to become deeply involved militarily in 2011. The expansion of Islamic State, however, engages our interest in national security more directly, partly because as part of the West we are its declared enemy and partly because some of our own citizens are fighting for it and might bring their jihad back onto our own streets.

We do have a stronger interest, therefore, in taking the risks and bearing the costs of military intervention to stop and to disable Islamic State, and of subsequent political intervention to help create sustainable polities in Syria and Iraq.

The post Should Britain intervene militarily to stop Islamic State? appeared first on OUPblog.

8. The problem with moral knowledge

Traveling through Scotland, one is struck by the number of memorials devoted to those who lost their lives in World War I. Nearly every town seems to have at least one memorial listing the names of local boys and men killed in the Great War (St. Andrews, where I am spending the year, has more than one).

Scotland endured a disproportionate number of casualties in comparison with most other Allied nations: Scotland's military history and the Scots' reputation as particularly effective fighters contributed both to a proportionally greater number of Scottish recruits and to a tendency for Allied commanders to give Scottish units the most dangerous combat assignments.

Many who served in World War I undoubtedly suffered from what some contemporary psychologists and psychiatrists have labeled ‘moral injury’, a psychological affliction that occurs when one acts in a way that runs contrary to one’s most deeply-held moral convictions. Journalist David Wood characterizes moral injury as ‘the pain that results from damage to a person’s moral foundation’ and declares that it is ‘the signature wound of [the current] generation of veterans.’

By definition, one cannot suffer from moral injury unless one has deeply-held moral convictions. At the same time that some psychologists have been studying moral injury and how best to treat those afflicted by it, other psychologists have been uncovering the cognitive mechanisms that are responsible for our moral convictions. Among the central findings of that research are that our emotions often influence our moral judgments in significant ways and that such judgments are often produced by quick, automatic, behind-the-scenes cognition to which we lack conscious access.

Thus, it is a familiar phenomenon of human moral life that we find ourselves simply feeling strongly that something is right or wrong without having consciously reasoned our way to a moral conclusion. The hidden nature of much of our moral cognition probably helps to explain the doubt on the part of some philosophers that there really is such a thing as moral knowledge at all.

Scottish National War Memorial, Edinburgh Castle. Photo by Nilfanion, CC BY-SA 3.0 via Wikimedia Commons.

In 1977, philosopher John Mackie famously pointed out that defenders of the reality of objective moral values were at a loss when it comes to explaining how human beings might acquire knowledge of such values. He declared that believers in objective values would be forced in the end to appeal to ‘a special sort of intuition’— an appeal that he bluntly characterized as ‘lame’. It turns out that ‘intuition’ is indeed a good label for the way many of our moral judgments are formed. In this way, it might appear that contemporary psychology vindicates Mackie’s skepticism and casts doubt on the existence of human moral knowledge.

Not so fast. In addition to discovering that non-conscious cognition has an important role to play in generating our moral beliefs, psychologists have discovered that such cognition also has an important role to play in generating a great many of our beliefs outside of the moral realm.

According to psychologist Daniel Kahneman, quick, automatic, non-conscious processing (which he has labeled ‘System 1’ processing) is both ubiquitous and an important source of knowledge of all kinds:

‘We marvel at the story of the firefighter who has a sudden urge to escape a burning house just before it collapses, because the firefighter knows the danger intuitively, ‘without knowing how he knows.’ However, we also do not know how we immediately know that a person we see as we enter a room is our friend Peter. … [T]he mystery of knowing without knowing … is the norm of mental life.’

This should provide some consolation for friends of moral knowledge. If the processes that produce our moral convictions are of roughly the same sort that enable us to recognize a friend’s face, detect anger in the first word of a telephone call (another of Kahneman’s examples), or distinguish grammatical and ungrammatical sentences, then maybe we shouldn’t be so suspicious of our moral convictions after all.

In all of these cases, we are often at a loss to explain how we know, yet it is clear enough that we know. Perhaps the same is true of moral knowledge.

Still, there is more work to be done here, by both psychologists and philosophers. Ironically, some propose a worry that runs in the opposite direction of Mackie’s: that uncovering the details of how the human moral sense works might provide support for skepticism about at least some of our moral convictions.

Psychologist and philosopher Joshua Greene puts the worry this way:

‘I view science as offering a ‘behind the scenes’ look at human morality. Just as a well-researched biography can, depending on what it reveals, boost or deflate one’s esteem for its subject, the scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it. … Understanding where our moral instincts come from and how they work can … lead us to doubt that our moral convictions stem from perceptions of moral truth rather than projections of moral attitudes.’

The challenge advanced by Greene and others should motivate philosophers who believe in moral knowledge to pay attention to findings in empirical moral psychology. The good news is that hope for the reality of moral knowledge remains.

And if there is moral knowledge, there can be increased moral wisdom and progress, which in turn makes room for hope that someday we can solve the problem of war-related moral injury not by finding an effective way of treating it but rather by finding a way of avoiding the tragedy of war altogether. Reflection on ‘the war to end war’ may yet enable it to live up to its name.

The post The problem with moral knowledge appeared first on OUPblog.

9. What commuters know about knowing

If your morning commute involves crowded public transportation, you definitely want to find yourself standing next to someone who is saying something like, “I know he's stabbed people, but has he ever killed one?” It's of course best to enjoy moments like this in the wild, but I am not above patrolling Overheard in London for its little gems (“Shall I give you a ring when my penguins are available?”), or, on an especially desperate day, going all the way back to the London-Lund Corpus of Spoken English, a treasury of oddly informative conversations (many secretly recorded) from the 1960s and 1970s. Speaker 1: “When I worked on the railways these many years ago, I was working the claims department, at Pretona Station Warmington as office boy for a short time, and one noticed that the tremendous number of claims against the railway companies were people whose fingers had been caught in doors as the porters had slammed them.” Speaker 2: “Really. Oh my goodness.” (Speaker 1 then reports that the railway found it cheaper to pay claims for lost fingers than to install safety trim on the doors.)

Photo by CGPGrey and Alex Tenenbaum. Image supplied with permission by Jennifer Nagel.

If you ever need a good cover story for your eavesdropping, you are welcome to use mine: as an epistemologist, I study the line that divides knowing from merely thinking that something is the case, a line we are constantly marking in everyday conversation. There it was, in the first quotation: “I know he’s stabbed people.” How, exactly was this known, one wonders, and why was knowledge of this fact reported? There’s no shortage of data: knowledge, as it turns out, is reported heavily. In spoken English (as measured most authoritatively, by the 450-million-word Corpus of Contemporary American English), ‘know’ and ‘think’ figure as the sixth and seventh most commonly used verbs, muscling out what might seem to be more obvious contenders like ‘get’ and ‘make’. Spoken English is deeply invested in knowing, easily outshining other genres on this score. In academic writing, for example, ‘know’ and ‘think’ are only the 17th and 22nd-most popular verbs, well behind the scholar’s pallid friends ‘should’ and ‘could’. To be fair, some of the conversational traffic in ‘know’ is coming from fixed phrases, like — you know — invitations to conversational partners to make some inference, or — I know — indications that you are accepting what conversational partners are saying. But even after we strip out those formulaic uses, the database’s randomly sampled conversations remain thickly larded with genuine references to knowing and thinking. Meanwhile, similar results are found in the 100-million-word British National Corpus; this is not just an American thing.

Kanye West performing at Lollapalooza on April 3, 2011 in Santiago, Chile. Photo by rodrigoferrari. CC-BY-SA-2.0 via Wikimedia Commons

It’s perhaps a basic human thing: conversations naturally slide towards the social. When we are not using language to do something artificial (like academic writing), we relate topics to ourselves. Field research in English pubs, cafeterias, and trains convinced British psychologist Robin Dunbar that most of our casual conversation time is taken up with ‘social topics’: personal relationships, personal experiences, and social plans. Anthropologist John Haviland apparently found similar patterns among the Zinacantan people in the remote highlands of Mexico. We talk about what people think, like, and want, constantly linking conversational topics back to human perspectives and feelings.

There’s an extreme philosophical theory about this tendency, advanced in Ancient Greece by Protagoras, and in our day by the best-known living American philosopher, Kanye West. Protagoras’s ideas reach us only in fragments transmitted through the reports of others, so I’ll give you Kanye’s formulation, transmitted through Twitter: “Feelings are the only facts”. Against the notion that the realm of the subjective is unreal, this theory maintains that reality can never be anything other than subjective. Here (as elsewhere) Kanye goes too far. The mental state verbs we use to link conversational topics back to humanity fall into two families, with interestingly different levels of subjectivity, divided along a line which has to do with the status of claims as fact. The first family is labeled factive, and includes such expressions as realizes, notices, is aware that, and sees that; the mother of all factive verbs is knows (and according to Oxford philosopher Timothy Williamson, knowledge is what unites the whole factive family). Non-factives make up the second family, whose members include thinks, suspects, believes and is sure. Factive verbs, rather predictably, properly attach themselves only to facts: you can know that Jack has stabbed someone only if he really has. Non-factive verbs are less informative: Jane might think that Edwin is following her even if he isn’t. In saying that Jane suspects Edwin has been stabbing people, I leave it an open question whether her suspicions are right: I report her feelings while remaining neutral on the relevant facts. Even when they mark strong degrees of subjective conviction — “Edwin is sure that Jane likes him” — non-factive expressions do not, unfortunately for Edwin in this case, necessarily attach themselves to facts. Feelings and facts can come apart.

Factives like ‘know’, meanwhile, allow us to report facts and feelings together at a single stroke. If I say that Lucy knows that the train is delayed, I’m simultaneously sharing news about the train and about Lucy’s attitude. Sometimes we use factives to reveal our attitudes to facts already known to the audience (“I know what you did last summer”), but most conversational uses of factives are bringing fresh facts into the picture. That last finding is from the work of linguist Jennifer Spenader, whose analysis of the dialogue about railway claims pulled me into the London-Lund Corpus in the first place (my goodness, so many fresh facts with those factives). Spenader and I both struggle with some deep theoretical problems about the line between knowing and thinking, but it nevertheless remains a line whose basic significance can be felt instinctively and without special training, even in casual conversation. No, wait, we have more than a feeling for this. We know something about it.

The post What commuters know about knowing appeared first on OUPblog.

10. Coffee tasting with Aristotle

Imagine a possible world where you are having coffee with … Aristotle! You begin exchanging views on how you like the coffee; you examine its qualities – it is bitter, hot, aromatic, etc. It tastes to you this way or this other way. But how do you make these perceptual judgments? It might seem obvious to say that it is via the senses we are endowed with. Which senses though? How many senses are involved in coffee tasting? And how many senses do we have in all?

The question of how many senses we have is far from being of interest to philosophers only; perhaps surprisingly, it appears to be at the forefront of our thinking – so much so that it was even made the topic of an episode of the BBC comedy program QI. Yet, it is a question that is very difficult to answer. Neurologists, computer scientists and philosophers alike are divided on what the right answer might be. 5? 7? 22? Uncertainty prevails.

Even if the number of the senses is a question for future research to settle, it is in fact as old as rational thought. Aristotle raised it, argued about it, and even illuminated the problem, setting the stage for future generations to investigate it. Aristotle’s views are almost invariably the point of departure of current discussions, and get mentioned in what one might think unlikely places, such as the Harvard Medical School blog, the John Hopkins University Press blog, and QI. “Why did they teach me they are five?” says Alan Davies on the QI panel. “Because Aristotle said it,” replies Stephen Fry in an eye blink. (Probably) the senses are in fact more than the five Aristotle identified, but his views remain very much a point of departure in our thinking about this topic.

Aristotle thought the senses are five because there are five types of perceptible properties in the world to be experienced. This criterion for individuating the senses has had a very longstanding influence, in many domains including for example the visual arts.

Yet, something as ‘mundane’ as coffee tasting generates one of the most challenging philosophical questions, and not only for Aristotle. As you are enjoying your cup of coffee, you appreciate its flavor with your senses of taste and smell: this is one experience and not two, even if two senses are involved. So how do senses do this? For Aristotle, no sense can by itself enable the perceiver to receive input of more than one modality, precisely because uni-modal sensitivity is what according to Aristotle identifies uniquely each sense. On the other hand, it would be of no use to the perceiving subject to have two different types of perceptual input delivered by two different senses simultaneously, but as two distinct perceptual contents. If this were the case, the difficulty would remain unsolved. In which way would the subject make a perceptual judgment (e.g. about the flavor of the coffee), given that not one of the senses could operate outside its own special perceptual domain, but perceptual judgment presupposes discriminating, comparing, binding, etc. different types of perceptual input? One might think that perceptual judgments are made at the conceptual rather than perceptual level. Aristotle (and Plato) however would reject this explanation because they seek an account of animal perception that generalizes to all species and is not only applicable to human beings. In sum, for Aristotle to deliver a unified multimodal perceptual content the senses need to somehow cooperate and gain access in some way to each other’s special domain. But how do they do this?

Linard, Les cinq sens. Public domain via Wikimedia Commons.

A sixth sense? Is that the solution? Is this what Aristotle means when talking about the ‘common’ sense? There cannot be room for a sixth sense in Aristotle’s theory of perception, for as we have seen each sense is individuated by the special type of perceptible quality it is sensitive to, and of these types there are only five in the world. There is no sixth type of perceptible quality that the common sense would be sensitive to. (And even if there were a sixth sense so individuated, this would not solve the problem of delivering multimodal content to the perceiver, because the sixth sense would be sensitive only to its own special type of perceptibles). The way forward is then to investigate how modally different perceptual contents, each delivered by one sense, can be somehow unified, in such a way that my perceptual experience of coffee may be bitter and hot at once. But how can bitter and hot be unified?

Modeling (metaphysically) how the senses cooperate to deliver to the perceiving subject unified but complex perceptual content is another breakthrough Aristotle made in his theory of perception. But it is much less known than his criterion for the senses' individuation. In fact, Aristotle is often thought to have given an ad hoc and unsatisfactory solution to the problem of multimodal binding (of which tasting the coffee's flavor is an instance), by postulating that there is a ‘common’ sense that somehow enables the subject to perform all the perceptual functions that the five senses singly cannot do. It is timely to take a departure from this received view, which does not do justice to Aristotle's insights. Investigating Aristotle's thoughts on complex perceptual content (often scattered among his various works, which adds to the interpretative challenge) reveals a much richer theory of perception than it is by and large thought he has.

If the number of the senses is a difficult question to address, how the senses combine their contents is an even harder one. Aristotle’s answer to it deserves at least as much attention as his views on the number of the senses currently receive in scholarly as well as ‘popular’ culture.

Headline image credit: Coffee. CC0 Public Domain via Pixabay

The post Coffee tasting with Aristotle appeared first on OUPblog.

11. The construction of the Cartesian System as a rival to the Scholastic Summa

René Descartes wrote his third book, Principles of Philosophy, as something of a rival to scholastic textbooks. He prided himself ‘that those who have not yet learned the philosophy of the schools will learn it more easily from this book than from their teachers, because by the same means they will learn to scorn it, and even the most mediocre teachers will be capable of teaching my philosophy by means of this book alone’ (Descartes to Marin Mersenne, December 1640).

Still, what Descartes produced was inadequate for the task. The topics of scholastic textbooks ranged much more broadly than those of Descartes’ Principles; they usually had four-part arrangements mirroring the structure of the collegiate curriculum, divided as they typically were into logic, ethics, physics, and metaphysics.

But Descartes produced at best only what could be called a general metaphysics and a partial physics.

Knowing what a scholastic course in physics would look like, Descartes understood that he needed to write at least two further parts to his Principles of Philosophy: a fifth part on living things, i.e., animals and plants, and a sixth part on man. And he did not issue what would be called a particular metaphysics.

Portrait of René Descartes by Frans Hals. Public domain via Wikimedia Commons.

Descartes, of course, saw himself as presenting Cartesian metaphysics as well as physics, both the roots and trunk of his tree of philosophy.

But from the point of view of school texts, the metaphysical elements of physics (general metaphysics) that Descartes discussed—such as the principles of bodies: matter, form, and privation; causation; motion: generation and corruption, growth and diminution; place, void, infinity, and time—were usually taught at the beginning of the course on physics.

The scholastic course on metaphysics—particular metaphysics—dealt with other topics, not discussed directly in the Principles, such as: being, existence, and essence; unity, quantity, and individuation; truth and falsity; good and evil.

Such courses usually ended up with questions about knowledge of God, names or attributes of God, God’s will and power, and God’s goodness.

Thus the Principles of Philosophy by itself was not sufficient as a text for the standard course in metaphysics. And Descartes also did not produce texts in ethics or logic for his followers to use or to teach from.

These must have been perceived as glaring deficiencies in the Cartesian program and in the aspiration to replace Aristotelian philosophy in the schools.

So the Cartesians rushed in to fill the voids. One could mention their attempts to complete the physics—Louis de la Forge’s additions to the Treatise on Man, for example—or to produce more conventional-looking metaphysics—such as Johann Clauberg’s later editions of his Ontosophia or Baruch Spinoza’s Metaphysical Thoughts.

Cartesians in the 17th century began to supplement the Principles and to produce the kinds of texts not normally associated with their intellectual movement, that is, treatises on ethics and logic, the most prominent of the latter being the Port-Royal Logic (Paris, 1662).


The attempt to publish a Cartesian textbook that would mirror what was taught in the schools culminated in the famous multi-volume works of Pierre-Sylvain Régis and of Antoine Le Grand.

The Franciscan friar Le Grand initially published a popular version of Descartes’ philosophy in the form of a scholastic textbook, expanding it in the 1670s and 1680s; the work, Institution of Philosophy, was then translated into English together with other texts of Le Grand and published as An Entire Body of Philosophy according to the Principles of the famous Renate Descartes (London, 1694).

On the Continent, Régis issued his General System According to the Principles of Descartes at about the same time (Amsterdam, 1691), having had difficulties receiving permission to publish. Ultimately, Régis’ oddly unsystematic (and very often un-Cartesian) System set the standard for Cartesian textbooks.

By the end of the 17th century, the Cartesians, having lost many battles, ultimately won the war against the Scholastics. The changes in the contents of textbooks from the scholastic Summa at the beginning of the 17th century to the Cartesian System at the end demonstrate the full range of the attempted Cartesian revolution, whose scope was not limited to physics (narrowly conceived) and its epistemology, but included logic, ethics, physics (more broadly conceived), and metaphysics.

Headline image credit: Dispute of Queen Cristina Vasa and René Descartes, by Nils Forsberg (1842-1934) after Pierre-Louis Dumesnil the Younger (1698-1781). Public domain via Wikimedia Commons.

The post The construction of the Cartesian System as a rival to the Scholastic Summa appeared first on OUPblog.

12. The vision of Confucius

To understand China, it is essential to understand Confucianism. There are many teachings in the Confucian tradition, but before we can truly understand them, it is important to look at the vision Confucius himself had. In this excerpt from Confucianism: A Very Short Introduction, Daniel K. Gardner discusses the future imagined by the teacher behind the ideas.

Confucius imagined a future where social harmony and sage rulership would once again prevail. It was a vision of the future that looked heavily to the past. Convinced that a golden age had been fully realized in China’s known history, Confucius thought it necessary to turn to that history, to the political institutions, the social relations, the ideals of personal cultivation that he believed prevailed in the early Zhou period, in order to piece together a vision to serve for all times. Here a comparison with Plato, who lived a few decades after the death of Confucius, is instructive. Like Confucius, Plato was eager to improve on contemporary political and social life. But unlike Confucius, he did not believe that the past offered up a normative model for the present. In constructing his ideal society in the Republic, Plato resorted much less to reconstruction of the past than to philosophical reflection and intellectual dialogue with others.

This is not to say, of course, that Confucius did not engage in philosophical reflection and dialogue with others. But it was the past, and learning from it, that especially consumed him. This learning took the form of studying received texts, especially the Book of Odes and the Book of History. He explains to his disciples:

“The Odes can be a source of inspiration and a basis for evaluation; they can help you to get on with others and to give proper expression to grievances. In the home, they teach you about how to serve your father, and in public life they teach you about how to serve your lord”.

The frequent references to verses from the Odes and to stories and legends from the History indicate Confucius’s deep admiration for these texts in particular and the values, the ritual practices, the legends, and the institutions recorded in them.

But books were not the sole source of Confucius’s knowledge about the past. The oral tradition was a source of instructive ancient lore for him as well. Myths and stories about the legendary sage kings Yao, Shun, and Yu; about Kings Wen and Wu and the Duke of Zhou, who founded the Zhou and inaugurated an age of extraordinary social and political harmony; and about famous or infamous rulers and officials like Bo Yi, Duke Huan of Qi, Guan Zhong, and Liuxia Hui—all mentioned by Confucius in the Analects—would have supplemented what he learned from texts and served to provide a fuller picture of the past.

“Ma Lin – Emperor Yao” by Ma Lin – National Palace Museum, Taipei. Licensed under Public domain via Wikimedia Commons.

Still another source of knowledge for Confucius, interestingly, was the behavior of his contemporaries. In observing them, he would select out for praise those manners and practices that struck him as consistent with the cultural norms of the early Zhou and for condemnation those that in his view were contributing to the Zhou decline. The Analects shows him railing against clever speech, glibness, ingratiating appearances, affectation of respect, servility to authority, courage unaccompanied by a sense of right, and single-minded pursuit of worldly success—behavior he found prevalent among contemporaries and that he identified with the moral deterioration of the Zhou. To reverse such deterioration, people had to learn again to be genuinely respectful in dealing with others, slow in speech and quick in action, trustworthy and true to their word, openly but gently critical of friends, families, and rulers who strayed from the proper path, free of resentment when poor, free of arrogance when rich, and faithful to the sacred three-year mourning period for parents, which to Confucius’s great chagrin, had fallen into disuse. In sum, they had to relearn the ritual behavior that had created the harmonious society of the early Zhou.

That Confucius’s characterization of the period as a golden age may have been an idealization is irrelevant. Continuity with a “golden age” lent his vision greater authority and legitimacy, and such continuity validated the rites and practices he advocated. This desire for historical authority and legitimacy—during a period of disruption and chaos—may help to explain Confucius’s eagerness to present himself as a mere transmitter, a lover of the ancients. Indeed, the Master’s insistence on mere transmission notwithstanding, there can be little doubt that from his study and reconstruction of the early Zhou period he forged an innovative—and enduring—sociopolitical vision. Still, in his presentation of himself as reliant on the past, nothing but a transmitter of what had been, Confucius established what would become something of a cultural template in China. Grand innovation that broke entirely with the past was not much prized in the pre-modern Chinese tradition. A Jackson Pollock who consciously and proudly rejected artistic precedent, for example, would not be acclaimed the creative genius in China that he was in the West. Great writers, great thinkers, and great artists were considered great precisely because they had mastered the tradition—the best ideas and techniques of the past. They learned to be great by linking themselves to past greats and by fully absorbing their styles and techniques. Of course, mere imitation was hardly sufficient; imitation could never be slavish. One had to add something creative, something entirely of one’s own, to mastery of the past.

Thus when you go into a museum gallery to view pre-modern Chinese landscapes, one hanging next to another, they appear at first blush to be quite similar. On closer inspection, however, you find that this artist developed a new sort of brush stroke, and that one a new use of ink-wash, and this one a new style of depicting trees and their vegetation. Now that your eye is becoming trained, more sensitive, it sees the subtle differences in the landscape paintings, with their range of masterful technique and expression. But even as it sees the differences, it recognizes that the paintings evolved out of a common landscape tradition, in which artists built consciously on the achievements of past masters.

Featured image credit: “Altar of Confucius (7360546688)” by Francisco Anzola – Altar of Confucius. Licensed under CC BY 2.0 via Wikimedia Commons.

The post The vision of Confucius appeared first on OUPblog.

13. Questioning the question: religion and rationality

We all know that asking questions is important. Asking the right questions is at the heart of most intellectual activity. Questions must be encouraged. We all know this.

But are there any questions which may not be asked? Questions which should not be asked? Although many a young undergraduate might initially say “No! Never! All questions must be encouraged!”, I think most thoughtful people will realise there is a little more to it than that. There are, for example, statements which present themselves in all the innocent garb of questions, but which smuggle in nasty and false assertions, such as the phrase “why are blond people intellectually inferior to dark people?” There are questions which mould the questioner, such as “will I feel better if I arrange for this other person to be silenced?”

Questions can serve horrible purposes: they can focus the mind down a channel of horror, such as, “what is the quickest way to bulldoze this village?” Even more extreme examples could be given; they make it clear that not all statements that appear to be questions are primarily questions at all, and not all questions are innocent.

Once you start to think it through, it becomes clear that every question you can ask, just like every other type of utterance you can make, is not a simple self-contained thing, but a connector to all sorts of related assumptions and projects, some of them far from morally neutral. This makes it not just possible, but sometimes important and a matter of honour and duty, not just to refuse to answer, but to raise an objection to the question itself. More precisely, one objects to the assumptions that lie behind the question, and which have rendered the question objectionable.

“Tell me, my daughters … which of you shall we say doth love us most?”, King Lear, W. Shakespeare. (Cordelia Disinherited by John Rogers Herbert. Public Domain via Wikimedia Commons.)

“Have you stopped beating your children?”

“Tell me, my daughters … which of you shall we say doth love us most?”

“How do you reconcile your rationality with your religious faith?”

In all three cases the question renders any honest person speechless.

But in the first case, if the question is pressed, and I am hauled up before the judge in a court of law, then I will protest, at length and forcefully, that I never did beat my children in the first place. And in the second case, if the question is pressed, then a loving daughter may choose to handle what comes of her silence, and show her love by her behaviour. And if the third question is pressed, then I might explain, as patiently as I could, that the attitude of the questioner is as deeply distorted here as it is in the other two cases, and I will add that my faith was never divorced from my rationality in the first place, and that being required to explain this is like being required to explain that you are honest.

Now we have arrived at the point of this blog, which is not, I will come clean, the general issue of questioning the question, but the specific issue of public discourse in the area of religion. But the two are closely related, because I am interested in focussing attention on where the issue of questioning the question really lies.

The issue is not, “are there questions which are objectionable?” (I think we already settled that), nor is it, “let’s have some intellectual amusement unpicking what is objectionable about this or that ill-posed question which we find it easy to tell is ill-posed.” No, the heart of this issue is, what about the fact that there may be questions which are in fact ignorant and domineering in themselves as questions — like “have you stopped beating your children?” — but which we don’t recognise as such, because of the unquestioned assumptions of our culture and the intellectual habits it promotes.

The third example above is the one which invites the reader to explore this. Is that question objectionable or not?

I will give two reactions: first a subjective one, then the beginnings of an objective one. Subjectively, the question, and others like it such as, “how do you reconcile science and religion?” make me feel every bit as queasy as the “beating your children” one. The hollow feeling of having been pigeonholed before you can open your mouth, of being in the presence of someone whose mental landscape does not even allow the garden where you live, the feeling of being treated like dirt, it is all there.

Now, objectively, are these feelings of mine a sign of trouble in me, or a sign of trouble in our wider culture? I invite reflections. Here I will offer three.

First, my reaction is strong because rationality is a deeply ingrained part of my very identity; it is every bit as important to me as it is to anyone else, so that to face a presumption of guilt in this area is to face a great injustice. Secondly, though, religion is a broad phenomenon, having bad (terrible, horrendous) parts and good (wonderful, beautiful) parts, so the question might be a muddled attempt to ask, “what type of religion is going on in you?” It still remains a suspicious question, like “are you honest?” but in view of the nastiness of bad religion, perhaps we have to live with it, and allow that people will need to ask, to get some reassurance.

Having said that, (and thirdly) we can only make a reply if there is enough oxygen in the room–that is, if the questioner does not come over like an inquisitor who has already made up his mind. The question needs to be, in effect, “I realise that we are both rational; would you unpack for me the way that rationality pans out for you?” We need the questioner at least to be open to the idea that willingness to recognize God in personal terms can be a thoroughly rational thing to do, in a similar sense that recognizing other humans as consciously willing agents is a thoroughly rational thing to do. In both cases, it requires a willingness that is in tune with reason, not unreason, but which is larger than reason, as a chord is larger than a single note.

Headline image: King Lear: Cordelia’s Farewell by Edwin Austin Abbey, 1898. Public domain via WikiArt

The post Questioning the question: religion and rationality appeared first on OUPblog.

14. 10 reasons why it is good to be good

The first question of moral philosophy, going back to Plato, is “how ought I to live my life?”. Perhaps the second, following close on the heels of the first, can be taken to be “ought I to live morally or not?”, assuming that one can “get away with” being immoral. Another, more familiar way of phrasing this second question is “why be moral?”, where this is elliptical for something like, “will it be good for me and my life to be moral, or would I be better off being immoral, as long as I can get away with it?”.

Bringing together the ancient Greek conception of happiness with a modern conception of self-respect, we find that it is in fact bad to be a bad person and good to be a good person. Here are some reasons why:

(1)   Because being bad is bad. Some have thought that being bad or immoral can be good for a person, especially when we can “get away with it”, but there are some good reasons for thinking this is false. The most important reason is that being bad or immoral is self-disrespecting and it is hard to imagine being happy without self-respect. Here’s one quick argument:

Being moral (or good) is necessary for having self-respect.
Self-respect is necessary for happiness.
____________________________________________
Therefore, being good is necessary for happiness.

Of course, a full defense of this syllogism would require more than can be given in a blog post, but hopefully it isn’t too hard to see the ways in which lying, cheating, and stealing – or being immoral in general – are incompatible with having genuine self-respect. (Of course, cheaters may think they have self-respect, but do you really think Lance Armstrong was a man of self-respect, whatever he may have thought of himself?)

(2)   Because it is the only way to have a chance at having self-respect. We can only have self-respect if we respect who we actually are; we can’t if we only respect some false image of ourselves. So, self-respect requires self-knowledge. And only people who can make just and fair self-assessments can have self-knowledge. And only just and fair people – good, moral people – can make just and fair self-assessments. (This is a very compacted version of a long argument.)

(3)   Because being good lets you see what is truly of value in the world. Part of what being good requires is that good people know what is good in the world and what is not. Bad people have bad values, good people have good values. Having good values means valuing what deserves to be valued and not valuing what does not deserve to be valued.

(4)   Because a recent study of West Point cadets reveals that cadets with mixed motivations – some selfish, instrumental, and career-oriented, while others are “intrinsic” and responsive to the value of the job itself – do not perform as well as cadets whose motivations are not mixed and are purely intrinsic. (See “The Secret of Effective Motivation”)

Plato and Aristotle, from the Palazzo Pontifici, Vatican. Public domain via Wikimedia Commons.

(5)   Because being good means taking good care of yourself. It doesn’t mean that you are the most important thing in the world, or that nothing is more important than you. But, in normal circumstances, it does give you permission to take better care of yourself and your loved ones than complete strangers.

(6)   Because being good means that while you can be passionate, you can choose what you are passionate about; it means that you don’t let your emotions, desires, wants, and needs “get the better of you” and “make” you do things that you later regret. It gives you true grit.

(7)   Because being good means that you will be courageous and brave, in the face of danger and pain and social rejection. It gives you the ability to speak truth to power and “fight the good fight”. It helps you assess risk, spot traps, and seize opportunities. It helps you be successful.

(8)   Because being good means that you will be as wise as you can be when you are old and grey. Deep wisdom may not be open to everyone, since some simply might not have the intellectual wherewithal for it. (Think of someone with severe cognitive disabilities.) But we can all, of course, be as wise as it is possible for us to be. This won’t happen, however, by accident. Wise people have to be able to perspicuously see into the “heart of the matter”, and this won’t happen unless we care about the right things. And we won’t care about the right things unless we have good values, so being good will help make us be as wise as we can be.

(9)   Because being good means that we are lovers of the good and, if we are lucky, it means that we will be loved by those who are themselves good. And being lovers of the good means that we become good at loving what is good, to the best of our ability. So, being good makes us become good lovers. And it is good to be a good lover, isn’t it? And good lovers who value what is good are more likely to be loved in return by people who also love the good. What could be better than being loved well by a good person who is your beloved?

(10)   Because of 1-9 above, only good people can live truly happy lives. Only good people live the Good Life.

Headline image credit: Diogenes and Plato by Mattia Preti 1649. Capitoline Museums. Public domain via Wikimedia Commons

The post 10 reasons why it is good to be good appeared first on OUPblog.

15. Nick Bostrom on artificial intelligence

From mechanical turks to science fiction novels, our mobile phones to The Terminator, we’ve long been fascinated by machine intelligence and its potential — both good and bad. We spoke to philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, about a number of pressing questions surrounding artificial intelligence and its potential impact on society.

Are we living with artificial intelligence today?

Mostly we have only specialized AIs – AIs that can play chess, or rank search engine results, or transcribe speech, or do logistics and inventory management, for example. Many of these systems achieve super-human performance on narrowly defined tasks, but they lack general intelligence.

There are also experimental systems that have fully general intelligence and learning ability, but they are so extremely slow and inefficient that they are useless for any practical purpose.

AI researchers sometimes complain that as soon as something actually works, it ceases to be called ‘AI’. Some of the techniques used in routine software and robotics applications were once exciting frontiers in artificial intelligence research.

What risk would the rise of a superintelligence pose?

It would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future.

Would a superintelligent artificial intelligence be evil?

Hopefully it will not be! But it turns out that most final goals an artificial agent might have would result in the destruction of humanity and almost everything we value, if the agent were capable enough to fully achieve those goals. It’s not that most of these goals are evil in themselves, but that they would entail sub-goals that are incompatible with human survival.

For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.

Most possible final goals, it seems, would have similar implications to this example. So a big part of the challenge ahead is to identify a final goal that would truly be beneficial for humanity, and then to figure out a way to build the first superintelligence so that it has such an exceptional final goal. How to do this is not yet known (though we do now know that several superficially plausible approaches would not work, which is at least a little bit of progress).

How long have we got before a machine becomes superintelligent?

Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50% probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that – it could happen much sooner, or much later. Instead of thinking in terms of some particular year, we need to be thinking in terms of probability distributed across a wide range of possible arrival dates.

So would this be like Terminator?

There is what I call a “good-story bias” that limits what kind of scenarios can be explored in novels and movies: only ones that are entertaining. This set may not overlap much with the group of scenarios that are probable.

For example, in a story, there usually have to be humanlike protagonists, a few of which play a pivotal role, facing a series of increasingly difficult challenges, and the whole thing has to take enough time to allow interesting plot complications to unfold. Maybe there is a small team of humans, each with different skills, which has to overcome some interpersonal difficulties in order to collaborate to defeat an apparently invincible machine which nevertheless turns out to have one fatal flaw (probably related to some sort of emotional hang-up).

One kind of scenario that one would not see on the big screen is one in which nothing unusual happens until all of a sudden we are all dead and then the Earth is turned into a big computer that performs some esoteric computation for the next billion years. But something like that is far more likely than a platoon of square-jawed men fighting off a robot army with machine guns.

Futuristic man. © Vladislav Ociacia via iStock.

If machines became more powerful than humans, couldn’t we just end it by pulling the plug? Removing the batteries?

It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?

A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn’t want that to happen, take precautions to guard against that eventuality. By contrast to the plans that are made by AIs in Hollywood movies – which plans are actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn’t they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of our necks?

So should we stop building robots?

The concern that I focus on in the book has nothing in particular to do with robotics. It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess. Where there is a superintelligent will, there can most likely be found a way. For instance, a superintelligence that initially lacks means to directly affect the physical world may be able to manipulate humans to do its bidding or to give it access to the means to develop its own technological infrastructure.

One might then ask whether we should stop building AIs? That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that eventually may contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world.

To what extent have we already yielded control over our fate to technology?

The human species has never been in control of its destiny. Different groups of humans have been going about their business, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global coordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.

Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.

Featured image credit: Humanrobo. Photo by The Global Panorama, CC BY 2.0 via Flickr

The post Nick Bostrom on artificial intelligence appeared first on OUPblog.

16. Why study paradoxes?

Why should you study paradoxes? The easiest way to answer this question is with a story:

In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is amongst other things an accomplished magician, a distinguished mathematical logician, and perhaps the most well-known popularizer of ‘Knight and Knave’ (K&K) puzzles.

K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.

Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?

(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path not indicated by the islander).

Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:

There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.

Pythagoras paradox, by Jan Arkesteijn (own work). Public domain via Wikimedia Commons.

The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:

“At least one of us in the club is a Knave.”

and is sung by the first person in the line. The second lyric of the song is:

“At least two of us in the club are Knaves.”

and is sung by the second person in the line. The third person (if there is one) sings:

“At least three of us in the club are Knaves.”

And so on down the line, until everyone has sung a verse.

One day you walk by the club, and hear the song being sung. How many people are in the club?

Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”

I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:

At least one of sentences S1 – Sn is false.

At least two of sentences S1 – Sn are false.

————————————————

At least n of sentences S1 – Sn are false.
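
For readers who would rather experiment than work the answer out by hand, the consistency condition behind the song can be checked mechanically. The following short Python sketch is not part of the original puzzle – the function names and the range of club sizes it checks are illustrative choices of mine. It enumerates every possible assignment of Knights and Knaves to n occupants and keeps only those in which occupant i's lyric ("at least i of us are Knaves") is true exactly when occupant i is a Knight.

from itertools import product

def is_prime(n):
    # Trial division is plenty for the small club sizes checked below.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def consistent_assignments(n):
    # Occupant i (counting from 1) sings "At least i of us in the club are Knaves."
    # An assignment of tribes is consistent when every Knight's lyric is true
    # and every Knave's lyric is false.
    for tribes in product(["Knight", "Knave"], repeat=n):
        knaves = tribes.count("Knave")
        if all((knaves >= i + 1) == (tribe == "Knight") for i, tribe in enumerate(tribes)):
            yield tribes

# For each club size up to 20, report whether the size is prime and how many
# consistent assignments exist, i.e. whether the song can be sung without paradox.
for n in range(1, 21):
    solutions = list(consistent_assignments(n))
    label = "prime" if is_prime(n) else "not prime"
    print(n, label, len(solutions), "consistent assignment(s)")

Running the script makes the hint above vivid: it shows which club sizes allow the song to be sung consistently, and that among the prime sizes only Smullyan's answer escapes paradox.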

The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.

Frances MacDonald – A Paradox 1905, by Frances MacDonald McNair. Public domain via Wikimedia Commons.

Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).

Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.

The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.

Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.

The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!

The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, a lot of work still remains to be done, and a lot of complexity and beauty remains to be discovered.

The post Why study paradoxes? appeared first on OUPblog.

17. The unfinished fable of the sparrows

Owls and robots. Nature and computers. It might seem like these two things don’t belong in the same place, but The Unfinished Fable of the Sparrows (in an extract from Nick Bostrom’s Superintelligence) sheds light on a particular problem: what if we used our highly capable brains to build machines that surpassed our general intelligence?

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.

“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”

“Yes!” said another. “And we could use it to look after our elderly and our young.”

“It could give us advice and keep an eye out for the neighborhood cat,” added a third.

Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”

The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

Headline image credit: Chestnut Sparrow by Lip Kee. CC BY 2.0 via Flickr.

The post The unfinished fable of the sparrows appeared first on OUPblog.

18. Moral pluralism and the dismay of Amy Kane

There’s a scene in the movie High Noon that seems to me to capture an essential feature of our moral lives. Actually, it’s not the entire scene. It’s one moment really, two shots — a facial expression and a movement of the head of Grace Kelly.

The part she’s playing is that of Amy Kane, the wife of Marshal Will Kane (Gary Cooper). Amy Kane is a Quaker, and as such is opposed to violence of any kind. Indeed, she tells Kane she will marry him only if he resigns as marshal of Hadleyville and vows to put down his guns forever. He agrees. But shortly after the wedding Kane learns that four villains have plans to terrorize the town, and he comes to think it is he who must try to stop them. He picks up his guns in preparation to meet the villains, and in so doing breaks his vow to Amy.

Unrelenting in her pacifism, Amy decides to leave Will. She boards the noon train out of town. Then she hears gunfire, and, just as the train is about to depart, she disembarks and rushes back. Meanwhile, Kane is battling the villains. He manages to kill two of them, but the remaining two have him cornered. It looks like the end for Kane. Then one of them falls.

Amy has picked up a gun and shot him in the back.

We briefly glimpse Amy’s face immediately after she has pulled the trigger. She is distraught, stricken. When the camera angle changes to a view from behind, we see her head fall with great sadness under the weight of what she’s done.

What’s going on with Amy at that moment? It’s possible, I suppose, that she believes she shouldn’t have shot the villain, that she let her emotions run away with her, that she thinks she did the wrong thing. But I doubt that’s it. More likely is that when Amy heard the gunshots she decided that the right thing for her to do was return to town and help her husband in his desperate fight. But why then is Amy dismayed? If she performed the action she thought was right, shouldn’t she feel only moral contentment with what she has done?

Studio publicity still of Grace Kelly for the film Rear Window (1954). Public domain via Wikimedia Commons.

Grace Kelly could have played it differently. She could have whooped with delight at having offed the bad guy, perhaps dropping some “hasta la vista”-like catchphrase along the way. Or she could have set her ample square jaw in grim determination and gone after the remaining villain, signaling to us her decision to discard namby-pamby pacifism for the robust alternative of visceral western justice. But Amy Kane’s actual reaction is psychologically more plausible — and morally more interesting. While she believes she’s done what she had to do, she’s still dismayed. Why?

What Amy’s reaction shows, I believe, is that morality is pluralist, not monist.

Monistic moral theories tell us that there is one and only one ultimate moral end. If monism is true, in every situation it will always be at least theoretically possible to justify the right course of action by showing that everything of fundamental moral importance supports it. Jeremy Bentham is an example of a moral monist: he held that pleasure is the single ultimate end. Another example is Immanuel Kant, who held that the single basis for all of morality is the Categorical Imperative. According to monists, successful moral justification will always end at a single point (even if they disagree among themselves about what that point is).

Pluralist moral theories, in contrast, hold that there is a multitude of basic moral principles that can come into conflict with each other. David Hume and W.D. Ross were both moral pluralists. They believed that various kinds of moral conflict can arise — justice can conflict with beneficence, keeping a promise can conflict with obeying the law, courage can conflict with prudence — and that there are no underlying rules that explain how such conflicts are to be resolved.

If Hume and Ross are right and pluralism is true, even after you have given the best justification for a course of action that it is possible to give, you may sometimes have to acknowledge that to follow that course will be to act in conflict with something of fundamental moral importance. Your best justification may fail to make all of the moral ends meet.

With that understanding of monism and pluralism on board, let’s now return to Grace Kelly as Amy Kane. Let’s return to the moment her righteous killing of the bad guy causes her to bow her head in moral remorse.

If we assume monism, Amy’s response will seem paradoxical, weird, in some way inappropriate. If there is one and only one ultimate end, then to think that a course of action is right will be to think that everything of fundamental importance supports it. And it would be paradoxical or weird — inappropriate in some way — for someone to regret doing something in line with everything of fundamental moral importance. If the moral justification of an action ends at a single point, then what could the point be of feeling remorse for doing it?

But Amy’s reaction is perfectly explicable if we take her to have a plurality of potentially-conflicting basic moral commitments. Moral pluralists will explain that Amy has decided that in this situation saving Kane from the villains has a fundamental moral importance that overrides the prohibition on killing, even while she continues to believe that there is something fundamentally morally terrible about killing. For pluralists, there is nothing strange about feeling remorse toward acting against something one takes to be of fundamental moral importance.

Indeed, feeling remorse in such a situation is just what we should expect. This is why we take Amy’s response to be apt, not paradoxical or weird. We think that she, like most of us, holds a plurality of fundamental moral commitments, one of which she rightly acted on even though it meant having to violate another.

The upshot is this. If you think Grace Kelly played the scene right — and if you think High Noon captures something about our moral lives that “hasta la vista”-type movies do not — then you ought to believe in moral pluralism.

Headline image: General Store Sepia Toned Wild West Town. © BCFC via iStock

The post Moral pluralism and the dismay of Amy Kane appeared first on OUPblog.

19. Practical wisdom and why we need to value it

By David Blockley


“Some people who do not possess theoretical knowledge are more effective in action (especially if they are experienced) than others who do possess it.”

Aristotle was referring, in his Nicomachean Ethics, to an attribute called practical wisdom – a quality that many modern engineers have, but one that our western intellectual tradition has completely lost sight of. I will describe briefly what Aristotle wrote about practical wisdom, argue for its recognition and celebration, and state that we need consciously to utilise it as we face up to the uncertainties inherent in the engineering challenges of climate change.

What follows is necessarily a simplified account of complex and profound ideas. Aristotle saw five ways of arriving at the truth – he called them art (ars, techne), science (episteme), intuition (nous), wisdom (sophia), and practical wisdom – sometimes translated as prudence (phronesis). Ars or techne (from which we get the words art and technical, technique and technology) was concerned with production but not action. Art had a productive state, truly reasoned, with an end (i.e. a product) other than itself (e.g. a building). It was not just a set of activities and skills of the craftsman but included the arts of the mind and what we would now call the fine arts. The Greeks did not distinguish the fine arts as the work of an inspired individual – that came only after the Renaissance. So techne as the modern idea of mere technique or rule-following was only one part of what Aristotle was referring to.

Episteme (from which we get the word epistemology or knowledge) was of necessity and eternal; it is knowledge that cannot come into being or cease to be; it is demonstrable and teachable and depends on first principles. Later, when combined with Christianity, episteme as eternal, universal, context-free knowledge has profoundly influenced western thought and is at the heart of debates between science and religion. Intuition or nous was a state of mind that apprehends these first principles and we could think of it as our modern notion of intelligence or intellect. Wisdom or sophia was the most finished form of knowledge – a combination of nous and episteme.

Aristotle thought there were two kinds of virtues, the intellectual and the moral. Practical wisdom or phronesis was an intellectual virtue of perceiving and understanding in effective ways and acting benevolently and beneficently. It was not an art and necessarily involved ethics, not static but always changing, individual but also social and cultural. As an illustration of the quotation at the head of this article, Aristotle even referred to people who thought Anaxagoras and Thales were examples of men with exceptional, marvelous, profound but useless knowledge because their search was not for human goods.

Aristotle thought of human activity in three categories: praxis, poiesis (from which we get the word poetry), and theoria (contemplation – from which we get the word theory). The intellectual faculties required were phronesis for praxis, techne for poiesis, and sophia and nous for theoria.

Sculpture of Aristotle at the Louvre Museum. Photo by Eric Gaba, CC-BY-SA-2.5 via Wikimedia Commons

It is important to understand that theoria had total priority because sophia and nous were considered to be universal, necessary and eternal, while the others were variable, finite, contingent and hence uncertain, and thus inferior.

What did Aristotle actually mean when he referred to phronesis? As I see it phronesis is a means towards an end arrived at through moral virtue. It is concerned with “the capacity for determining what is good for both the individual and the community”. It is a virtue and a competence, an ability to deliberate rightly about what is good in general, about discerning and judging what is true and right but it excludes specific competences (like deliberating about how to build a bridge or how to make a person healthy). It is purposeful, contextual but not rule-following. It is not routine or even well-trained behaviour but rather intentional conduct based on tacit knowledge and experience, using longer time horizons than usual, and considering more aspects, more ways of knowing, more viewpoints, coupled with an ability to generalise beyond narrow subject areas. Phronesis was not considered a science by Aristotle because it is variable and context dependent. It was not an art because it is about action and generically different from production. Art is production that aims at an end other than itself. Action is a continuous process of doing well and an end in itself in so far as being well done it contributes to the good life.

Christopher Long argues that an ontology (the philosophy of being or nature of existence) directed by phronesis rather than sophia (as it currently is) would be ethical; would question normative values; would not seek refuge in the eternal but be embedded in the world and be capable of critically considering the historico-ethical-political conditions under which it is deployed. Its goal would not be eternal context-free truth but finite context-dependent truth. Phronesis is an excellence (arête) and capable of determining the ends. The difference between phronesis and techne echoes that between sophia and episteme. Just as sophia must not just understand things that follow from first principles but also things that must be true, so phronesis must not just determine itself towards the ends but as arête must determine the ends as good. Whereas sophia knows the truth through nous, phronesis must rely on moral virtues from lived experience.

In the 20th century quantum mechanics required sophia to change and to recognise that we cannot escape uncertainty. Derek Sellman writes that a phronimo will recognise not knowing our competencies, i.e. not knowing what we know, and not knowing our uncompetencies, i.e. not knowing what we do not know. He states that a longing for phronesis “is really a longing for a world in which people honestly and capably strive to act rightly and to avoid harm,” and he thinks it is a longing for praxis.

In summary I think that one way (and perhaps the only way) of dealing with the ‘wicked’ uncertainties we face in the future, such as the effects of climate change, is through collaborative ‘learning together’ informed by the recognition, appreciation, and exercise of practical wisdom.

Professor Blockley is an engineer and an academic scientist. He has been Head of the Department of Civil Engineering and Dean of the Faculty of Engineering at the University of Bristol. He is a Fellow of the Royal Academy of Engineering, the Institution of Civil Engineers, the Institution of Structural Engineers, and the Royal Society of Arts. He has written four books including Engineering: A Very Short Introduction and Bridges: The science and art of the world’s most inspiring structures.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday, subscribe to Very Short Introductions articles on the OUPblog via email or RSS, and like Very Short Introductions on Facebook.


The post Practical wisdom and why we need to value it appeared first on OUPblog.

20. Rebooting Philosophy

By Luciano Floridi


When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.

Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amount of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.

Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.

The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.

The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile at the centre of the universe; we are not unnaturally separate and diverse from the rest of the animal kingdom; and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature, and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful for explaining our strong impression that something very significant and profound has recently happened to our self-understanding.

Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.

The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford. He was recently appointed as ethics advisor to Google. His most recent book is The Fourth Revolution: How the Infosphere is Reshaping Human Reality.

Subscribe to the OUPblog via email or RSS.
Subscribe to only philosophy articles on the OUPblog via email or RSS.
Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.

The post Rebooting Philosophy appeared first on OUPblog.

21. Approaching peak skepticism

This is the third in a four-part series on Christian epistemology titled “Radical faith meets radical doubt: a Christian epistemology for skeptics” by John G. Stackhouse, Jr.

By John G. Stackhouse, Jr.


We are near, it seems, “peak skepticism.” We all know that the sweetest character in the movie we’re watching will turn out to be the serial killer. We all know that the stranger in the good suit and the great hair is up to something sinister. We all know that the honey-voiced therapist or the soothing guru or the brave leader of the heroic little NGO will turn out to be a fraud, embezzling here or seducing there.

“I read it on the Internet” became a rueful joke as quickly as there was an Internet. Politicians are all liars, priests are all pedophiles, professors are all blowhards: you can’t trust anyone or anything.

Notre Dame philosopher Alvin Plantinga shrugs off the contemporary storm of frightening doubt, however, with the robust common sense of his Frisian forebears:

Such Christian thinkers as Pascal, Kierkegaard, and Kuyper…recognize that there aren’t any certain foundations of the sort Descartes sought—or, if there are, they are exceedingly slim, and there is no way to transfer their certainty to our important non-foundational beliefs about material objects, the past, other persons, and the like. This is a stance that requires a certain epistemic hardihood: there is, indeed, such a thing as truth; the stakes are, indeed, very high (it matters greatly whether you believe the truth); but there is no way to be sure that you have the truth; there is no sure and certain method of attaining truth by starting from beliefs about which you can’t be mistaken and moving infallibly to the rest of your beliefs. Furthermore, many others reject what seems to you to be most important. This is life under uncertainty, life under epistemic risk and fallibility. I believe a thousand things, and many of them are things others—others of great acuity and seriousness—do not believe. Indeed, many of the beliefs that mean the most to me are of that sort. I realize I can be seriously, dreadfully, fatally wrong, and wrong about what it is enormously important to be right. That is simply the human condition: my response must be finally, “Here I stand; this is the way the world looks to me.”

Thomas Reid. Public Domain via Wikimedia Commons.

In this attitude Plantinga follows in the cheerful train of Thomas Reid, the great Scottish Enlightenment philosopher. In his several epistemological books, Reid devotes a great deal of energy to demolishing what he sees to be a misguided approach to knowledge, which he terms the “Way of Ideas.” Unfortunately for standard-brand modern philosophy, and even for most of the rest of us non-philosophers, the Way of Ideas is not merely some odd little branch but the main trunk of epistemology from Descartes and Locke forward to Kant.

The Way of Ideas, roughly speaking, is the basic scheme of perception by which the things “out there” somehow cause us to have ideas of them in our minds, and thus we form appropriate beliefs about them. Reid contends, startlingly, that this scheme fails to illuminate what is actually happening. In fact, Reid pulverizes this scheme as simply incoherent—an understanding so basic that most of us take it for granted, even if we could not actually explain it. The “problem of the external world” remains intractable. We just don’t know how we reliably get “in here” (in our minds) what is “out there” (in the world).

Having set aside the Way of Ideas, Reid then stuns the reader again with this declaration: “I do not attempt to substitute any other theory in [its] place.” Reid asserts instead that it is a “mystery” how we form beliefs about the world that actually do seem to correspond to the world as it is. (Our beliefs do seem to have the virtue of helping us negotiate that world pretty well.)

The philosopher who has followed Reid to this point now might well be aghast. “What?” she might sputter. “You have destroyed the main scheme of modern Western epistemology only to say that you don’t have anything better to offer in its place? What kind of philosopher are you?”

“A Christian one,” Reid might reply. For Reid takes great comfort in trusting God for creating the world such that human beings seem eminently well equipped to apprehend and live in it. Reid encourages readers therefore to thank God for this provision, this “bounty of heaven,” and to obey God in confidence that God continues to provide the means (including the epistemic means) to do so. Furthermore, Reid affirms, any other position than grateful acceptance of the fact that we believe the way we do just because that is the way we are is not just intellectually untenable, but (almost biblically) foolish.

Thus Thomas Reid dispenses with modern hubris on the one side and postmodern despair on the other. To those who would say, “I am certain I now sit upon this chair,” Reid would reply, “Good luck proving that.” To those who would say, “You just think you’re sitting in a chair now, but in fact you could be anyone, anywhere, just imagining you are you sitting in a chair,” he would simply snort and perhaps chastise them for their ingratitude for the knowledge they have gained so effortlessly by the grace of God.

Having acknowledged the foolishness of claiming certainty, Reid places the burden of proof, then, where it belongs: on the radical skeptic who has to show why we should doubt what seems so immediately evident, rather than on the believer who has to show why one ought to believe what seems effortless to believe. Darkness, Reid writes, is heavy upon all epistemological investigations. We know through our own action that we are efficient causes of things; we know God is, too. More than this, however, we cannot say, since we cannot peer into the essences of things. Reid commends to us all sorts of inquiries, including scientific ones, but we will always be stymied at some level by the four-year-old’s incessant question: “Yes, but why?” Such explanations always come back to questions of efficient causation, and human reason simply cannot lay bare the way things are in themselves so as to see how things do cause each other to be this or that way.

Reid’s contemporary and countryman David Hume therefore was right on this score, Reid allows. But unlike Hume—very much unlike Hume—Reid is cheerful about us carrying on anyway with the practically reliable beliefs we generally do form, as God wants us to do. Far from being paralyzed by epistemological doubt, therefore, Reid offers all of us a thankful epistemology of trust and obedience.

But do Christians need to resort to such a breathtakingly bold response to the deep skepticism of our times? My last post offers an answer.

John G. Stackhouse Jr. is the Sangwoo Youtong Chee Professor of Theology and Culture at Regent College, Vancouver, Canada. He is the author of Need to Know: Vocation as the Heart of Christian Epistemology.

Subscribe to the OUPblog via email or RSS.
Subscribe to only religion articles on the OUPblog via email or RSS.

The post Approaching peak skepticism appeared first on OUPblog.

22. What is consciousness?

By Ted Honderich


The philosopher Descartes set out to escape doubt and to find certainties. From the premise that he was thinking, even if falsely, he argued to what he took to be the certain conclusion that he existed. Cogito ergo sum. He is as well known for concluding that consciousness is not physical. Your being conscious right now is not an objective physical fact. It has a nature quite unlike that of the chair you are sitting on. Your consciousness is different in kind from objectively physical neural states and events in your head.

This mind-body dualism persists. It is not only a belief or attitude in religion or spirituality. It is concealed in standard cognitive science or computerism. The fundamental attraction of dualism is that we are convinced, since we have a hold on it, that consciousness is different. There really is a difference in kind between you and the chair you are sitting on, not a factitious difference.

But there is an awful difficulty. Consciousness has physical effects. Arms move because of desires, bullets come out of guns because of intentions. How could such indubitably physical events have causes that are not physical at all, for a start not in space?

Some philosophers used to accommodate the fact that movements have physical causes by saying that conscious desires and intentions aren’t themselves causes but merely go along with brain events: epiphenomenalism is true. Conscious beliefs themselves do not explain your stepping out of the way of joggers. But epiphenomenalism is now believed only in remote parts of Australia, where the sun is very hot. I know only one epiphenomenalist in London, sometimes seen among the good atheists in Conway Hall.

A decent theory or analysis of consciousness will also have the recommendation of answering a clear question. It will proceed from an adequate initial clarification of a subject. The present great divergence in theories of consciousness is mainly owed to people talking about different things. Some include what others call the unconscious mind.

Crystal mind. By Nevit Dilmen (Own work). CC-BY-SA-3.0 via Wikimedia Commons.

But there are also the criteria for a good theory. We have two already — a good theory will make consciousness different and it will make consciousness itself effective. In fact consciousness is to us not just different, but mysterious, more than elusive. It is such that philosopher Colin McGinn has said before now that we humans have no more chance of understanding it than a chimp has of doing quantum mechanics.

There’s a lot to the new theory of Actualism, starting with a clarification of ordinary consciousness in the primary or core sense as something called actual consciousness. Think along with me just of one good piece of the theory. Think of one part or side or group of elements of ordinary consciousness. Think of consciousness in ordinary perception — say seeing — as against consciousness in just thinking and wanting. Call it perceptual consciousness. What is it for you to be perceptually conscious now, as we say, of the room you’re in? Being aware of it, not thinking about it or something in it? Well, the fact is not some internal thing about you. It’s for a room to exist.

It’s for a piece of a subjective physical world to exist out there in space — yours. That is something dependent both on the objective physical world out there and also on you neurally. A subjective physical world’s being dependent on something in you, of course, doesn’t take it out of space out there or deprive it of other counterparts of the characteristics you can assemble of the objective physical world. What is actual with perceptual consciousness is not a representation of a world — stuff called sense data or qualia or mental paint — whatever is the case with cognitive and affective consciousness.

That’s just a good start on Actualism. It makes consciousness different. It doesn’t reduce consciousness to something that has no effects. It also involves a large fact of subjectivity, indeed of what you can call individuality or personal identity, even living a life. One more criterion of a good theory is naturalism — being true to science. It is also philosophy, which is greater concentration on the logic of ordinary intelligence, thinking about facts rather than getting them. Actualism also helps a little with human standing, that motive of believers in free will as against determinism.

Ted Honderich is Grote Professor Emeritus of the Philosophy of Mind and Logic at University College London. He edited The Oxford Companion to Philosophy and has written about determinism and freedom, social ends and political means, and even himself in Philosopher: A Kind of Life. He recently published Actual Consciousness.

Subscribe to the OUPblog via email or RSS.
Subscribe to only philosophy articles on the OUPblog via email or RSS.

The post What is consciousness? appeared first on OUPblog.

23. Why metaphor matters

By James Grant


Plato famously said that there is an ancient quarrel between philosophy and poetry. But with respect to one aspect of poetry, namely metaphor, many contemporary philosophers have made peace with the poets. In their view, we need metaphor. Without it, many truths would be inexpressible and unknowable. For example, we cannot describe feelings and sensations adequately without it. Take Gerard Manley Hopkins’s exceptionally powerful metaphor of despair:

selfwrung, selfstrung, sheathe- and shelterless,
thoughts against thoughts in groans grind.

How else could precisely this kind of mood be expressed? Describing how things appear to our senses is also thought to require metaphor, as when we speak of the silken sound of a harp, the warm colours of a Titian, and the bold or jolly flavour of a wine.  Science advances by the use of metaphors – of the mind as a computer, of electricity as a current, or of the atom as a solar system. And metaphysical and religious truths are often thought to be inexpressible in literal language. Plato condemned poets for claiming to provide knowledge they did not have. But if these philosophers are right, there is at least one poetic use of language that is needed for the communication of many truths.

In my view, however, this is the wrong way to defend the value of metaphor. Comparisons may well be indispensable for communication in many situations. We convey the unfamiliar by likening it to the familiar. But many hold that it is specifically metaphor – and no other kind of comparison – that is indispensable. Metaphor tells us things the words ‘like’ or ‘as’ never could. If true, this would be fascinating. It would reveal the limits of what is expressible in literal language. But no one has come close to giving a good argument for it. And in any case, metaphor does not have to be an indispensable means to knowledge in order to be as valuable as we take it to be.

Metaphor may not tell us anything that couldn’t be expressed by other means. But good metaphors have many other effects on readers than making them grasp some bit of information, and these are often precisely the effects the metaphor-user wants to have. There is far more to the effective use of language than transmitting information. My particular interest is in how art critics use metaphor to help us appreciate paintings, architecture, music, and other artworks. There are many reasons why metaphor matters, but art criticism reveals two reasons of particular importance.

Take this passage from John Ruskin’s The Stones of Venice. Ruskin describes arriving in Venice by boat and seeing ‘the long ranges of columned palaces,—each with its black boat moored at the portal,—each with its image cast down, beneath its feet, upon that green pavement which every breeze broke into new fantasies of rich tessellation’, and observing how ‘the front of the Ducal palace, flushed with its sanguine veins, looks to the snowy dome of Our Lady of Salvation’.

One thing Ruskin’s metaphors do is describe the waters of Venice and the Ducal palace at an extraordinary level of specificity. There are many ways water looks when breezes blow across its surface. There are fewer ways it looks when breezes blow across its surface and make it look like something broken into many pieces. And there are still fewer ways it looks when breezes blow across its surface and make it look like something broken into pieces forming a rich mosaic with the colours of Venetian palaces and a greenish tint. Ruskin’s metaphor communicates that the waters of Venice look like that. The metaphor of the Ducal palace as ‘flushed with its sanguine veins’ likewise narrows the possible appearances considerably. Characterizing appearances very specifically is of particular use to art critics, as they often want to articulate the specific appearance an artwork presents.

A second thing metaphors like Ruskin’s do is cause readers to imagine seeing what he describes. We naturally tend to picture the palace or the water on hearing Ruskin’s metaphor. This function of metaphor has often been noted: George Orwell, for instance, writes that ‘a newly invented metaphor assists thought by evoking a visual image’.

Why do novel metaphors evoke images? Precisely because they are novel uses of words. To understand them, we cannot rely on our knowledge of the literal meanings of the words alone. We often have to employ imagination. To understand Ruskin’s metaphor, we try to imagine seeing water that looks like a broken mosaic. If we manage this, we know the kind of look that he is attributing to the water.

Imagining a thing is often needed to appreciate that thing. Knowing facts about it is often not enough by itself. Accurately imagining Hopkins’s despondency, or the experience of arriving in Venice by boat, gives us some appreciation of these experiences. By enabling us to imagine accurately and specifically, metaphor is exceptionally well suited to enhancing our appreciation of what it describes.

James Grant is a Tutorial Fellow in Philosophy at Exeter College, Oxford. He is the author of The Critical Imagination.

Subscribe to the OUPblog via email or RSS.
Subscribe to only philosophy articles on the OUPblog via email or RSS.
Image credit: Hermann Herzog: Venetian canal, by Bonhams. Public Domain via Wikimedia Commons.

The post Why metaphor matters appeared first on OUPblog.

24. Ethical issues in managing the current Ebola crisis

Until the current epidemic, Ebola was largely regarded as not a Western problem. Although fearsome, Ebola seemed contained to remote corners of Africa, far from major international airports. We are now learning the hard way that Ebola is not—and indeed was never—just someone else’s problem. Yes, this outbreak is different: it originated in West Africa, at the border of three countries, where the transportation infrastructure was better developed, and it was well under way before it was recognized. But we should have understood that we are “all in this together” for Ebola, as for any infectious disease.

Understanding that we were profoundly wrong about Ebola can help us to see ethical considerations that should shape how we go forward. Here, I have space just to outline two: reciprocity and fairness.

In the aftermath of the global SARS epidemic that spread to Canada, the Joint Centre for Bioethics at the University of Toronto produced a touchstone document for pandemic planning, Stand on Guard for Thee, which highlights reciprocity as a value. When health care workers take risks to protect us all, we owe them special concern if they are harmed. Dr. Bruce Ribner, speaking on ABC, described Emory University Hospital as willing to take two US health care workers who became infected abroad because they believed these workers deserved the best available treatment for the risks they took for humanitarian ends. Calls to ban the return of US workers—or treatment in the United States of other infected front-line workers—forget that contagious diseases do not occur in a vacuum. Even Ann Coulter recognized, in her own unwitting way, that we owe support to first responders for the burdens they undertake for us all when she excoriated Dr. Kent Brantly for humanitarian work abroad rather than in the United States.

We too often fail to recognize that all the health care and public health workers at risk in the Ebola epidemic—and many have died—are owed duties of special concern. Yet unlike health care workers at Emory, health care workers on the front lines in Africa must make do with limited equipment under circumstances in which it is very difficult for them to be safe, according to a recent Wall Street Journal article. As we go forward we must remember the importance of providing adequately for these workers and for workers in the next predictable epidemics — not just for Americans who are able to return to the US for care. Supporting these workers means providing immediate care for those who fall ill, as well as ongoing care for them and their families if they die or are no longer able to work. But this is not all; health care workers on the front lines can be supported by efforts to minimize disease spread—for example, conducting burials to minimize risks of infection from the dead—as well as by unceasing attention to the development of public health infrastructures, so that risks can be swiftly identified and contained and care can be delivered as safely as possible.

Ebola in West Africa. Three humanitarian experts and six specialists in dangerous infectious diseases of the European Mobile Lab project have been deployed on the ground, with a mobile laboratory unit to help accelerate diagnoses. © EMLab, European Commission DG ECHO, EU Humanitarian Aid and Civil Protection. CC BY-ND 2.0 via European Commission DG ECHO Flickr.

Fairness requires treating others as we would like to be treated ourselves. A way of thinking about what is fair is to ask what we would want done if we did not know our position under the circumstances at hand. In a classic of political philosophy, A Theory of Justice, John Rawls suggested the thought experiment of asking what principles of justice we would be willing to accept for a society in which we were to live, if we didn’t know anything about ourselves except that we would be somewhere in that society. Infectious disease confronts us all with an actual possibility of the Rawlsian thought experiment. We are all enmeshed in a web of infectious organisms, potential vectors to one another and hence potential victims, too. We never know at any given point in time whether we will be victim, vector, or both. It’s as though we were all on a giant airplane, not knowing who might cough, or spit, or bleed, what to whom, and when. So we need to ask what would be fair under these brute facts of human interconnectedness.

At a minimum, we need to ask what would be fair about the allocation of Ebola treatments, both before and if they become validated and more widely available. Ethical issues such as informed consent and exploitation of vulnerable populations in testing of experimental medicines certainly matter but should not obscure that fairness does, too, whether we view the medications as experimental or last-ditch treatment. Should limited supplies be administered to the worst off? Are these the sickest, most impoverished, or those subjected to the greatest risks, especially risks of injustice? Or, should limited supplies be directed where they might do the most good—where health care workers are deeply fearful and abandoning patients, or where we need to encourage people who have been exposed to be monitored and isolated if needed?

These questions of fairness occur in the broader context of medicine development and distribution. ZMAPP (the experimental monoclonal antibody administered on a compassionate use basis to the two Americans) was jointly developed by the US government, the Public Health Agency of Canada, and a few very small companies. Ebola has not drawn a great deal of drug development attention; indeed, infectious diseases more generally have not drawn their fair share of attention from Big Pharma, as least as measured by the global burden of disease.

WHO has declared the Ebola epidemic an international emergency and is convening ethics experts to consider such questions as whether and how the experimental treatment administered to the two Americans should be made available to others. I expect that the values of reciprocity and fairness will surface in these discussions. Let us hope they do, and that their import is remembered beyond the immediate emergency.

Headline Image credit: Ebola virus virion. Created by CDC microbiologist Cynthia Goldsmith, this colorized transmission electron micrograph (TEM) revealed some of the ultrastructural morphology displayed by an Ebola virus virion. Centers for Disease Control and Prevention’s Public Health Image Library, #10816. Public domain via Wikimedia Commons.

The post Ethical issues in managing the current Ebola crisis appeared first on OUPblog.

25. Happiness and high school humanities

I got a request this past year from my friends at Boston Green Academy (BGA) to help them consider their Humanities 4 curriculum, which focuses on philosophies, especially around happiness. This was a tough request for me, and certainly not one I had considered before. There aren’t any titles I can think of that say “Philosophy: Happiness” on their covers to pull me directly down this path.

But as I thought about it, I got more and more excited about how this topic is tackled in the YA world. The first set of books I considered was titles that dealt with “the meaning of life” in a variety of ways. Titles like Nothing by Janne Teller, Jeremy Fink and the Meaning of Life by Wendy Mass, and one of my personal favorites, The Spectacular Now by Tim Tharp, give lots of food for thought about where we expend our energy and the wisdom of how we prioritize our attention in life.

This, of course, led to stories about facing challenges and finding happiness despite (or because of) the circumstances in our lives. So we pulled texts like The Fault in Our Stars by John Green, It’s Kind of a Funny Story by Ned Vizzini, and Marcelo in the Real World by Francisco X. Stork, which all feature characters finding ways to cope with, and even prosper alongside, difficult circumstances.

Then we happened upon a set of titles that raise questions about whether you can be “happy” if you are or are not being yourself. We pulled segments of titles like Openly Straight by Bill Konigsberg, Aristotle and Dante Discover the Secrets of the Universe by Benjamin Alire Saenz, Tina’s Mouth by Keshni Kashyap, American Born Chinese by Gene Luen Yang, and Rapture Practice, which I’ve talked about here before.

And then there was a world of nonfiction possibilities, those written for young people and those not — picture books by Demi about various figures, Mihaly Csikszentmihalyi’s ideas about work and play, and any number of great series texts about philosophers and religions and such.

So I guess the (happy) moral of this story is that it was much easier than I thought to revisit old texts with these new eyes of philosophies of happiness. I left the work feeling as though every text is about this very important topic in one way or another, and I can’t wait to see how the BGA curriculum around it continues to evolve!

The post Happiness and high school humanities appeared first on The Horn Book.
