Blog Posts Tagged with: reasoning (most recent at top)
Results 1 - 5 of 5
1. Is “Nothing nothings” true?

In a 1929 lecture, Martin Heidegger argued that the following claim is true: Nothing nothings. In German: “Das Nichts nichtet”. Years later Rudolf Carnap ridiculed this statement as the worst sort of meaningless metaphysical nonsense in an essay titled “Overcoming of Metaphysics Through Logical Analysis of Language”. But is this positivistic attitude reasonable?
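Carnap's logical complaint can be made concrete in modern notation (this is a reconstruction of the standard positivist analysis, not a quotation from his essay): ordinary sentences containing "nothing" are negated existential claims rather than claims about an object named "Nothing".

```latex
% "Nothing is outside" -- the word 'nothing' disappears into a quantifier:
\neg \exists x\; \mathrm{Outside}(x)
% Heidegger's "Das Nichts nichtet" instead treats 'Nothing' as a name
% that can take a predicate:
\mathrm{Nichtet}(\text{das Nichts})
% On this analysis the second form has no rewriting in terms of
% \neg\exists, which is why Carnap classifies it as a pseudo-statement.
```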

The post Is “Nothing nothings” true? appeared first on OUPblog.

0 Comments on Is “Nothing nothings” true? as of 10/10/2015 4:05:00 AM
2. Why causality now?

Head hits cause brain damage, but not always. Should we ban sport to protect athletes? Exposure to electromagnetic fields is strongly associated with cancer development. Should we ban mobile phones and encourage old-fashioned wired communication? The sciences are getting more and more specialized and it is difficult to judge whether, say, we should trust homeopathy, fund a mission to Mars, or install solar panels on our roofs. We are confronted with questions about causality on an everyday basis, as well as in science and in policy.

Causality has been a headache for scholars since ancient times. The oldest extensive writings may be Aristotle's; he made causality a central part of his worldview. Then we jump 2,000 years until causality again became a prominent topic with Hume, a skeptic in the sense that he believed we cannot think of causal relationships as logically necessary, nor can we establish them with certainty.

The next major philosophical figure after Hume was probably David Lewis, who proposed a controversial counterfactual account saying, roughly, that something is a cause of an effect in this world if, in nearby possible worlds where that cause didn’t happen, the effect didn’t happen either. Most recently, we come to work in computer science originated by Judea Pearl and by Spirtes, Glymour, and Scheines and their collaborators.

All of this is highly theoretical and formal. Can we reconstruct philosophical theorizing about causality in the sciences in simpler terms than this? Sure we can!

One way is to start from scientific practice. Even though scientists often don’t talk explicitly about causality, it is there. Causality is an integral part of the scientific enterprise. Scientists don’t worry too much about what causality is – a chiefly metaphysical question – but are instead concerned with a number of activities that, one way or another, bear on causal notions. These are what we call the five scientific problems of causality:

Phrenology: causality, mirthfulness, and time. Photo by Stuart, CC-BY-NC-ND-2.0 via Flickr.
  • Inference: Does C cause E? To what extent?
  • Explanation: How does C cause or prevent E?
  • Prediction: What can we expect if C does (or does not) occur?
  • Control: What factors should we hold fixed to understand better the relation between C and E? More generally, how do we control the world or an experimental setting?
  • Reasoning: What considerations enter into establishing whether/how/to what extent C causes E?
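The contrast at the heart of the prediction and control problems – what we expect when C merely occurs versus when we make it occur – can be illustrated with a toy structural causal model in the style of Pearl (all variable names and probabilities below are invented for illustration):

```python
import random

random.seed(0)

def sample(do_c=None):
    """One draw from a toy causal model: U -> C, U -> E, C -> E.
    U is a hidden common cause; do_c forces C, cutting the U -> C arrow."""
    u = random.random() < 0.5
    c = (random.random() < (0.8 if u else 0.2)) if do_c is None else do_c
    e = random.random() < (0.7 if c else 0.1) + (0.2 if u else 0.0)
    return u, c, e

# Observational: P(E | C=1) mixes the causal effect with confounding by U.
obs = [sample() for _ in range(100_000)]
p_e_given_c = (sum(e for _, c, e in obs if c) /
               sum(1 for _, c, _ in obs if c))

# Interventional: P(E | do(C=1)) isolates the causal effect of C alone.
p_e_do_c = sum(sample(do_c=True)[2] for _ in range(100_000)) / 100_000

print(f"P(E | C=1)     ~ {p_e_given_c:.3f}")   # inflated by confounding
print(f"P(E | do(C=1)) ~ {p_e_do_c:.3f}")      # the causal effect
```

The gap between the two numbers is exactly what makes the inference and control problems hard: observed association overstates what intervening on C would achieve.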

This does not mean that metaphysical questions cease to be interesting. Quite the contrary! But by engaging with scientific practice, we can work towards a timely and solid philosophy of causality.

The traditional philosophical treatment of causality is to give a single conceptualization, an account of the concept of causality, which may also tell us what causality in the world is, and may then help us understand causal methods and scientific questions.

Our aim, instead, is to focus on the scientific questions, bearing in mind that there are five of them, and build a more pluralist view of causality, enriched by attention to the diversity of scientific practices. We think that many existing approaches to causality, such as mechanism, manipulationism, inferentialism, capacities and processes can be used together, as tiles in a causal mosaic that can be created to help you assess, develop, and criticize a scientific endeavour.

In this spirit we are attempting to develop, in collaboration, complementary ideas of causality as information (Illari) and variation (Russo). The idea is that we can conceptualize in general terms the causal linking or production of effect by the cause as the transmission of information between cause and effect (following Salmon); while variation is the most general conceptualization of the patterns of difference-making we can detect in populations where a cause is acting (following Mill). The thought is that we can use these complementary ideas to address the scientific problems.

For example, we can think about how we use complementary evidence in causal inference, tracking information transmission, and combining that with studies of variation in populations. Alternatively, we can think about how measuring variation may help us formulate policy decisions, as might seeking to block possible avenues of information transmission. Having both concepts available assists in describing this, and reasoning well – and they will also be combined with other concepts that have been made more precise in the philosophical literature, such as capacities and mechanisms.

Ultimately, the hope is that sharpening up the reasoning will assist in the conceptual enterprise that lies at the intersection of philosophy and science. And help decide whether to encourage sport, mobile phones, homeopathy and solar panels aboard the mission to Mars!

The post Why causality now? appeared first on OUPblog.

0 Comments on Why causality now? as of 1/18/2015 5:27:00 AM
3. Inferring the unconfirmed: the no alternatives argument

By Richard Dawid, Stephan Hartmann, and Jan Sprenger


“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” Thus Arthur Conan Doyle has Sherlock Holmes describe a crucial part of his method of solving detective cases. Sherlock Holmes often takes pride in adhering to principles of scientific reasoning. Whether or not this particular element of his analysis can be called scientific is not straightforward to decide, however. Do scientists use ‘no alternatives arguments’ of the kind described above? Is it justified to infer a theory’s truth from the observation that no other acceptable theory is known? Can this be done even when empirical confirmation of the theory in question is sketchy or entirely absent?

The Edinburgh Statue of Sherlock Holmes. Photo by Siddharth Krish. CC-BY-SA 3.0 via Wikimedia Commons.

The canonical understanding of scientific reasoning insists that theory confirmation be based exclusively on empirical data predicted by the theory in question. From that point of view, Holmes’ method may at best play the role of a side show; the real work of theory evaluation is done by comparing the theory’s predictions with empirical data.

Actual science often tells a different story. Scientific disciplines like palaeontology or archaeology aim at describing historic events that have left only scarce traces in today’s world. Empirical testing of those theories always remains fragmentary. Under such conditions, assessing a theory’s scientific status crucially relies on the question of whether or not convincing alternative theories have been found.

Just recently, this kind of reasoning scored a striking success in theoretical physics when the Higgs particle was discovered at CERN. Besides confirming the Higgs model itself, the Higgs discovery also vindicated the judgemental prowess of theoretical physicists, who had been fairly sure of the existence of the Higgs particle since the mid-1980s. Their assessment had been based on a clear-cut no alternatives argument: there seemed to be no alternative to the Higgs model that could render particle physics consistent.

Similarly, string theory is one of the most influential theories in contemporary physics, even in the absence of favorable empirical evidence and the ability to generate specific predictions. Critics argue that for these reasons, trust in string theory is unjustified, but defenders deploy the no alternatives argument: since the physics community devoted considerable efforts to developing alternatives to string theory, the failure of these attempts and the absence of similarly unified and worked-out competitors provide a strong argument in favor of string theory.

These examples show that the no alternatives argument is in fact used in science. But does it constitute a legitimate way of reasoning? In our work, we aim at identifying the structural basis for the no alternatives argument. We do so by constructing a formal model of the argument with the help of so-called Bayesian nets. That is, the argument is analyzed as a case of reasoning under uncertainty about whether a scientific theory H (e.g. string theory) is right or wrong.

A Bayes net that captures the inferential relations between the relevant propositions in the no alternatives argument. D = complexity of the problem, F = failure to find an alternative, Y = number of alternatives, T = H is the right theory.

We argue that the failure to find a viable alternative to theory H, in spite of many attempts by clever scientists, lowers our expectation of the number of existing serious alternatives to H. This, in turn, provides an argument that H is indeed the right theory. In total, the probability that H is right is increased by the failure to find an alternative, demonstrating that the inference behind the no alternatives argument is valid in principle.
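The shape of this inference can be checked numerically with a toy version of the model (all priors and likelihoods below are invented for illustration; the published model is more general). The key assumptions are that fewer existing alternatives make both the failure F and the truth of H more likely:

```python
# Toy no-alternatives Bayes net, with D (problem complexity) marginalized out.
# Y: number of serious alternatives to theory H (0..4), uniform prior
# F: scientists fail to find an alternative; rarer when many exist
# T: H is the right theory; likelier when few alternatives exist
ys = range(5)
p_y = {y: 1 / 5 for y in ys}
p_t_given_y = {y: 1 / (y + 1) for y in ys}   # H competes with y rivals
p_f_given_y = {y: 0.5 ** y for y in ys}      # harder to miss many rivals

prior_t = sum(p_t_given_y[y] * p_y[y] for y in ys)
p_f = sum(p_f_given_y[y] * p_y[y] for y in ys)
posterior_t = sum(p_t_given_y[y] * p_f_given_y[y] * p_y[y] for y in ys) / p_f

print(f"P(T)     = {prior_t:.3f}")
print(f"P(T | F) = {posterior_t:.3f}")   # higher: F is evidence for T
```

Conditioning on F shifts probability mass toward small values of Y, and small Y favours T; that covariation is all the argument needs to be valid in principle.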

There is an important caveat, however. Based on the no alternatives argument alone, we cannot say how much the probability of the theory in question is raised. It may be substantial, but it may also be tiny. In the latter case, the confirmatory force of the no alternatives argument may be negligible.

The no alternatives argument is thus a fascinating mode of reasoning that contains a valid core. However, determining the strength of the argument requires going beyond the mere observation that no alternatives have been found. This matter is highly context-sensitive and may lead to different answers for string theory, palaeontology, and detective stories.

Richard Dawid, Stephan Hartmann, and Jan Sprenger are the authors of “The No Alternatives Argument” (available to read for free for a limited time) in the British Journal for the Philosophy of Science. Richard Dawid is lecturer (Dozent) and researcher at the University of Vienna. Stephan Hartmann is Alexander von Humboldt Professor at the LMU Munich. Jan Sprenger is Assistant Professor at Tilburg University. Their work focuses on the application of probabilistic methods within the philosophy of science.

For over fifty years The British Journal for the Philosophy of Science has published the best international work in the philosophy of science under a distinguished list of editors including A. C. Crombie, Mary Hesse, Imre Lakatos, D. H. Mellor, David Papineau, James Ladyman, and Alexander Bird. One of the leading international journals in the field, it publishes outstanding new work on a variety of traditional and cutting edge issues, such as the metaphysics of science and the applicability of mathematics to physics, as well as foundational issues in the life sciences, the physical sciences, and the social sciences.


The post Inferring the unconfirmed: the no alternatives argument appeared first on OUPblog.

0 Comments on Inferring the unconfirmed: the no alternatives argument as of 4/27/2014 4:05:00 AM
4. Reasoning in medicine and science

By Huw Llewelyn


In medicine, we use two different thought processes: (1) non-transparent thought, e.g. slick, subjective decisions and (2) transparent reasoning, e.g. verbal explanations to patients, discussions during meetings, ward rounds, and letter-writing. In practice, we use one approach as a check for the other. Animals communicate solely through non-transparent thought, but the human gift of language allows us also to convey our thoughts to others transparently. However, in order to communicate properly we must have an appropriate vocabulary linked to shared concepts.

‘Reasoning by probable elimination’ plays an important role in transparent medical reasoning. The diagnostic process uses ‘probable elimination’ of rival possibilities and points to a conclusion through that process of elimination. Suppose one item of information (e.g. a symptom) is chosen as a ‘lead’ that is associated with a short list of diagnoses covering most people with that lead (ideally 100%). The next step is to choose a diagnosis from that list and to look for a finding that occurs commonly in those with the chosen diagnosis and rarely (ideally never) in at least one other diagnosis in the list. If such a finding is identified for each of the other diagnoses in the list, then the probability of the chosen diagnosis is high. If findings are found that never occur in each of the other possibilities in the list, then the diagnosis is certain. However, if none of this happens, then another diagnosis is chosen from the list and the process is repeated.
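The elimination loop just described can be sketched in code (the diagnoses, findings, frequencies, and thresholds below are invented for illustration; ‘commonly’ and ‘rarely’ become numeric cut-offs):

```python
# p[d][f]: assumed frequency of finding f among patients with diagnosis d
p = {
    "appendicitis": {"rlq_pain": 0.90, "dysuria": 0.05, "rash": 0.01},
    "uti":          {"rlq_pain": 0.08, "dysuria": 0.80, "rash": 0.01},
    "shingles":     {"rlq_pain": 0.05, "dysuria": 0.02, "rash": 0.90},
}

def supports(chosen, differential, findings, common=0.5, rare=0.1):
    """Probable elimination: for every rival diagnosis, some observed
    finding must be common under `chosen` but rare under that rival."""
    for rival in differential:
        if rival == chosen:
            continue
        if not any(p[chosen][f] >= common and p[rival][f] <= rare
                   for f in findings):
            return False   # this rival is not eliminated; try another lead
    return True

differential = ["appendicitis", "uti", "shingles"]
print(supports("appendicitis", differential, ["rlq_pain"]))  # True
print(supports("uti", differential, ["rlq_pain"]))           # False
```

In the first call the pain pattern eliminates both rivals, so confidence in the chosen diagnosis is high; in the second, no observed finding is common under the chosen diagnosis, so the process moves on to another candidate.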

Probabilistic reasoning by elimination explains how diagnostic tests can be assessed in a logical way using these concepts to avoid misdiagnosis and mistreatment. If clear, written explanations became routine, it would go a long way to eliminating failures of care that have dominated the media of late.


Reasoning by probable elimination is important in estimating the probability of obtaining similar outcomes by repeating a published study (i.e. the probability of replication). In order for the probability of replication to be high, the probability of non-replication for every other reason has to be low. For example, the estimated probability of non-replication due to poor reporting of results or methods (due to error, ignorance, or dishonesty) has to be low. Also, the probability of non-replication due to poor or idiosyncratic methodology, or to different circumstances or subjects in the reader’s setting, should be low. Finally, the probability of non-replication by chance, given the number of readings made, must be low. If, after all this, the estimated probabilities of all possible reasons for non-replication are low, then the probability of replication should be high. This assumes, of course, that all the reasons for non-replication have been considered and shown to be improbable!
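Under the strong assumptions that the listed reasons exhaust the ways non-replication could happen and are roughly independent, the combination is simple multiplication; the probabilities below are invented placeholders:

```python
# Estimated probability of non-replication for each candidate reason
p_non_rep = {
    "poor reporting (error, ignorance, dishonesty)": 0.05,
    "poor or idiosyncratic methodology":             0.10,
    "different circumstances or subjects":           0.08,
    "chance, given the number of readings":          0.05,
}

# Replication requires that every reason for non-replication fails to occur
p_replication = 1.0
for reason, prob in p_non_rep.items():
    p_replication *= 1 - prob

print(f"P(replication) ~ {p_replication:.2f}")
```

Note how even four individually small probabilities already pull the replication probability down toward 0.75, which is why each must be kept low.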

If the probability of replicating a study result is high, the reader will consider the possible explanations or hypotheses for that study finding. Ideally the list of possibilities should be complete. However, in a novel scientific situation there may well be some explanations that no one has considered yet. This contrasts with a diagnostic situation, where past experience tells us that 99% of patients presenting with some symptom have one of a short list of diagnoses. Therefore, the probability of the favoured scientific hypothesis cannot be assumed to be high or ‘confirmed’, because it cannot be guaranteed that all other important explanations have been eliminated or shown to be improbable. This partly explains why Karl Popper asserted that hypotheses can never be confirmed – that it is only possible to ‘falsify’ alternative hypotheses. The theorem of probable elimination identifies the assumptions, limitations, and pitfalls of reasoning by probable elimination.

Reasoning by probable elimination is central to medicine, science, statistics and other disciplines. This important method should have a central place in education.

Huw Llewelyn is a general physician with a special interest in endocrinology and acute medicine, who has had a career-long interest in the mathematical representation of the thought processes used by doctors in their day to day work during clinical practice, teaching and research. He has also been an honorary fellow in mathematics in Aberystwyth University for many years and has had wide experience in different medical settings: general practice, teaching hospital departments with international reputations of excellence and district general hospitals in urban and rural areas. His insight is reflected in the content of the Oxford Handbook of Clinical Diagnosis and the mathematical models in the form of new theorems on which that content is based.


The post Reasoning in medicine and science appeared first on OUPblog.

0 Comments on Reasoning in medicine and science as of 9/5/2013 5:53:00 PM
5. On Math

Jason Rosenhouse is Associate Professor of mathematics at James Madison University in Virginia and the author of The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser, which looks at one of the most interesting mathematical brain teasers of recent times.  In the excerpt below Rosenhouse explains what it is like to be a professional mathematician and introduces The Monty Hall Problem.
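For readers meeting the teaser for the first time, a standard simulation of the puzzle itself (a sketch, not material from the book) shows why switching doors wins roughly two thirds of the time:

```python
import random

def play(switch, doors=3):
    """One round of Monty Hall: pick a door, the host opens a goat door,
    the player optionally switches. Returns True on winning the car."""
    car = random.randrange(doors)
    pick = random.randrange(doors)
    # Host opens a door that is neither the player's pick nor the car
    opened = next(d for d in range(doors) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(doors) if d != pick and d != opened)
    return pick == car

random.seed(1)
n = 100_000
stay = sum(play(switch=False) for _ in range(n)) / n
swap = sum(play(switch=True) for _ in range(n)) / n
print(f"stay ~ {stay:.3f}, switch ~ {swap:.3f}")   # ~1/3 vs ~2/3
```

Switching wins exactly when the first pick was wrong, which happens two thirds of the time; the host's forced reveal does the rest.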

Like all professional mathematicians, I take it for granted that most people will be bored and intimidated by what I do for a living.  Math, after all, is the sole academic subject about which people brag of their ineptitude.  “Oh,” says the typical well-meaning fellow making idle chitchat at some social gathering, “I was never any good at math.”  Then he smiles sheepishly, secure in the knowledge that his innumeracy in some way reflects well on him.  I have my world-weary stock answers to such statements.  Usually I say, “Well, maybe you just never had the right teacher.”  That defuses the situation nicely.

It is the rare person who fails to see humor in assigning to me the task of dividing up a check at a restaurant.  You know, because I’m a mathematician.  Like the elementary arithmetic used in check division is some sort of novelty act they train you for in graduate school.  I used to reply with “Dividing up a check is applied math.  I’m a pure mathematician,” but this elicits puzzled looks from those who thought mathematics was divided primarily into the courses they were forced to take in order to graduate versus the ones they could mercifully ignore.  I find “Better have someone else do it.  I’m not good with numbers” works pretty well.

I no longer grow vexed by those who ask, with perfect sincerity, how folks continue to do mathematical research when surely everything has been figured out by now.  My patience is boundless for those who assure me that their grade-school nephew is quite the little math prodigy.  When a student, after absorbing a scintillating presentation of, say, the mean-value theorem, asks me with disgust what it is good for, it does not even occur to me to grow annoyed. Instead I launch into a discourse about all of the practical benefits that accrue from an understanding of calculus.  (“You know how when you flip a switch the lights come on? Ever wonder why that is?  It’s because some really smart scientists like James Clerk Maxwell knew lots of calculus and figured out how to apply it to the problem of taming electricity.  Kind of puts your whining into perspective, wouldn’t you say?”)  And upon learning that a mainstream movie has a mathematical character, I feel cheated if that character and his profession are presented with any element of realism.

(Speaking of which, do you remember that 1966 Alfred Hitchcock movie Torn Curtain, the one where physicist Paul Newman goes to Leipzig in an attempt to elicit certain German military secrets?  Remember the scene where Newman starts writing equations on the chalkboard, only to have an impatient East German scientist, disgusted by the primitive state of American physics, cut him off and finish the equations for him?  Well, we don’t do that.  We don’t finish each other’s equations.  And that scene in Good Will Hunting where emotionally troubled math genius Matt Damon and Fields Medalist Stellan Skarsgard high-five each other after successfully performing some feat of elementary algebra?  We don’t do that either.  And don’t even get me started on Jeff Goldblum in Jurassic Park.)

0 Comments on On Math as of 1/11/2010 10:27:00 AM