What is JacketFlap

  • JacketFlap connects you to the work of more than 200,000 authors, illustrators, publishers and other creators of books for Children and Young Adults. The site is updated daily with information about every book, author, illustrator, and publisher in the children's / young adult book industry. Members include published authors and illustrators, librarians, agents, editors, publicists, booksellers, publishers and fans.
    Join now (it's free).

Sort Blog Posts

Sort Posts by:

  • in
    from   

Suggest a Blog

Enter a Blog's Feed URL below and click Submit:

Most Commented Posts

In the past 7 days

Recent Comments

Recently Viewed

JacketFlap Sponsors

Spread the word about books.
Put this Widget on your blog!
  • Powered by JacketFlap.com

Are you a book Publisher?
Learn about Widgets now!

Advertise on JacketFlap

MyJacketFlap Blogs

  • Login or Register for free to create your own customized page of blog posts from your favorite blogs. You can also add blogs by clicking the "Add to MyJacketFlap" links next to the blog name in each post.

Blog Posts by Tag

In the past 7 days

Blog Posts by Date

Click days in this calendar to see posts by day or month
new posts in all blogs
Viewing: Blog Posts Tagged with: philosophy of science, Most Recent at Top [Help]
Results 1 - 10 of 10
1. Darwinism as religion: what literature tells us about evolution

From the publication of the Origin, Darwin enthusiasts have been building a kind of secular religion based on its ideas, particularly on the dark world without ultimate meaning implied by the central mechanism of natural selection.

The post Darwinism as religion: what literature tells us about evolution appeared first on OUPblog.

0 Comments on Darwinism as religion: what literature tells us about evolution as of 1/1/1900
Add a Comment
2. Against narrowness in philosophy

If you asked many people today, they would say that one of the limitations of analytic philosophy is its narrowness. Whereas in previous centuries philosophers took on projects of broad scope, today’s philosophers typically deal with smaller issues.

The post Against narrowness in philosophy appeared first on OUPblog.

0 Comments on Against narrowness in philosophy as of 1/1/1900
Add a Comment
3. Time and perception

The human brain is a most wonderful organ: it is our window on time. Our brains have specialized structures that work together to give us our human sense of time. The temporal lobe helps form long term memories, without which we would not be aware of the past, whilst the frontal lobe allows us to plan for the future.

The post Time and perception appeared first on OUPblog.

0 Comments on Time and perception as of 1/25/2016 6:20:00 AM
Add a Comment
4. The life of culture

Does culture really have a life of its own? Are cultural trends, fashions, ideas, and norms like organisms, evolving and weaving our minds and bodies into an ecological web? You hear a pop song a few times and suddenly find yourself humming the tune. You unthinkingly adopt the vocabulary and turns of phrase of your circle of friends.

The post The life of culture appeared first on OUPblog.

0 Comments on The life of culture as of 11/28/2015 4:29:00 AM
Add a Comment
5. The philosophical computer store

Once again, searching for unconventional computing methods as well as for a neurocomputational theory of cognition requires knowing what does and does not count as computing. A question that may appear of purely philosophical interest — which physical systems perform which computations — shows up at the cutting edge of computer technology as well as neuroscience.

The post The philosophical computer store appeared first on OUPblog.

0 Comments on The philosophical computer store as of 1/1/1900
Add a Comment
6. On ‘cookbook medicine,’ cookbooks, and gender

It is not a compliment to say that a physician is practicing “cookbook medicine.” Rather, it suggests that the physician is employing a "one size fits all" approach, applying unreflective, impersonal clinical methods that may cause patient suffering due to lack of nuanced, reflective, and humanistic care. The best physicians—just like the best cooks—make use of creativity, intuition, judgment, and even je ne sais quoi.

The post On ‘cookbook medicine,’ cookbooks, and gender appeared first on OUPblog.

0 Comments on On ‘cookbook medicine,’ cookbooks, and gender as of 7/24/2015 8:34:00 AM
Add a Comment
7. A new philosophy of science

One of the central concepts in chemistry consists in the electronic configuration of atoms. This is equally true of chemical education as it is in professional chemistry and research. If one knows how the electrons in an atom are arranged, especially in the outermost shells, one immediately understands many properties of an atom...

The post A new philosophy of science appeared first on OUPblog.

0 Comments on A new philosophy of science as of 3/23/2015 2:19:00 PM
Add a Comment
8. Efficient causation: Our debt to Aristotle and Hume

Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.

Aristotle_by_Raphael
Aristotle, by Raphael Sanzio. Public domain via Wikimedia Commons.

From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.

However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).

David_Hume
Portrait of David Hume, by Allan Ramsey (1766). Public domain via Wikimedia Commons.

Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactural analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.

And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.

Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.

Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr

The post Efficient causation: Our debt to Aristotle and Hume appeared first on OUPblog.

0 Comments on Efficient causation: Our debt to Aristotle and Hume as of 10/19/2014 6:03:00 AM
Add a Comment
9. Nick Bostrom on artificial intelligence

From mechanical turks to science fiction novels, our mobile phones to The Terminator, we’ve long been fascinated by machine intelligence and its potential — both good and bad. We spoke to philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, about a number of pressing questions surrounding artificial intelligence and its potential impact on society.

Are we living with artificial intelligence today?

Mostly we have only specialized AIs – AIs that can play chess, or rank search engine results, or transcribe speech, or do logistics and inventory management, for example. Many of these systems achieve super-human performance on narrowly defined tasks, but they lack general intelligence.

There are also experimental systems that have fully general intelligence and learning ability, but they are so extremely slow and inefficient that they are useless for any practical purpose.

AI researchers sometimes complain that as soon as something actually works, it ceases to be called ‘AI’. Some of the techniques used in routine software and robotics applications were once exciting frontiers in artificial intelligence research.

What risk would the rise of a superintelligence pose?

It would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future.

Would a superintelligent artificial intelligence be evil?

Hopefully it will not be! But it turns out that most final goals an artificial agent might have would result in the destruction of humanity and almost everything we value, if the agent were capable enough to fully achieve those goals. It’s not that most of these goals are evil in themselves, but that they would entail sub-goals that are incompatible with human survival.

For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching if off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.

Most possible final goals, it seems, would have similar implications to this example. So a big part of the challenge ahead is to identify a final goal that would truly be beneficial for humanity, and then to figure out a way to build the first superintelligence so that it has such an exceptional final goal. How to do this is not yet known (though we do now know that several superficially plausible approaches would not work, which is at least a little bit of progress).

How long have we got before a machine becomes superintelligent?

Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50% probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that – it could happen much sooner, or much later. Instead of thinking in terms of some particular year, we need to be thinking in terms of probability distributed across a wide range of possible arrival dates.

So would this be like Terminator?

There is what I call a “good-story bias” that limits what kind of scenarios can be explored in novels and movies: only ones that are entertaining. This set may not overlap much with the group of scenarios that are probable.

For example, in a story, there usually have to be humanlike protagonists, a few of which play a pivotal role, facing a series of increasingly difficult challenges, and the whole thing has to take enough time to allow interesting plot complications to unfold. Maybe there is a small team of humans, each with different skills, which has to overcome some interpersonal difficulties in order to collaborate to defeat an apparently invincible machine which nevertheless turns out to have one fatal flaw (probably related to some sort of emotional hang-up).

One kind of scenario that one would not see on the big screen is one in which nothing unusual happens until all of a sudden we are all dead and then the Earth is turned into a big computer that performs some esoteric computation for the next billion years. But something like that is far more likely than a platoon of square-jawed men fighting off a robot army with machine guns.

Futuristic man. © Vladislav Ociacia via iStock.
Futuristic man. © Vladislav Ociacia via iStock.

If machines became more powerful than humans, couldn’t we just end it by pulling the plug? Removing the batteries?

It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?

A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn’t want that to happen, take precautions to guard against that eventuality. By contrast to the plans that are made by AIs in Hollywood movies – which plans are actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn’t they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of our necks?

So should we stop building robots?

The concern that I focus on in the book has nothing in particular to do with robotics. It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess. Where there is a superintelligent will, there can most likely be found a way. For instance, a superintelligence that initially lacks means to directly affect the physical world may be able to manipulate humans to do its bidding or to give it access to the means to develop its own technological infrastructure.

One might then ask whether we should stop building AIs? That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that eventually may contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world.

To what extent have we already yielded control over our fate to technology?

The human species has never been in control of its destiny. Different groups of humans have been going about their business, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global coordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.

Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.

Featured image credit: Humanrobo. Photo by The Global Panorama, CC BY 2.0 via Flickr

The post Nick Bostrom on artificial intelligence appeared first on OUPblog.

0 Comments on Nick Bostrom on artificial intelligence as of 9/8/2014 10:30:00 AM
Add a Comment
10. The amoeba in the room

By Nicholas P. Money


The small picture is the big picture and biologists keep missing it. The diversity and functioning of animals and plants has been the meat and potatoes of most natural historians since Aristotle, and we continue to neglect the vast microbial majority. Before the invention of the microscope in the seventeenth century we had no idea that life existed in any form but the immediately observable. This delusion was swept away by Robert Hooke, Anton van Leeuwenhoek, and other pioneers of optics who found that tiny forms of life looked a lot like the cells that comprise our own tissues. We were, they showed, constructed from the same essence as the writhing animalcules of ponds and spoiled food. And yet this revelation was somehow folded into the continuing obsession with human specialness, allowing Carolus Linnaeus to catalogue plants and big animals and ignore the lilliputian majority. When microbiological inquiry was restimulated by Louis Pasteur in the nineteenth century, it became the science of germs and infectious disease. The point was not to glory in the diversity of microorganisms but exterminate them. In any case, as before, most of life was disregarded.

B0004773 Ameba, SEM

Things are changing very swiftly now. Molecular fishing expeditions in which raw biological information is examined using metagenomic methods have discovered an abundance of cryptic life forms. This research has made it clear that we are a very long way, centuries perhaps, from comprehending biodiversity properly.

Revelation of the human microbiome, the teeming trillions of bacteria and archaea in our guts that affect every aspect of our wellbeing, is the best publicized part of the inquiry. We are walking ecosystems, farmed by our microbes and dependent upon their metabolic virtuosity. There is much more besides, including the fact that a single cup of seawater contains 100 million cells, which are in turn preyed upon by billions of viruses; that a pinch of soil teems with incomprehensibly rich populations of cells; and that 50 megatons of fungal spores are released into our air supply every year. Even the pond in my Ohio garden is filled with unknowable riches: the most powerful techniques illuminate the genetic identity of only one in one billion of the cells in its shallow water.

Most biologists continue to be concerned with animals and plants, the thinnest slivers of biological splendor, and students are taught this macrobiology—with the occasional nod toward the other things that constitute almost all of life. Practical problems abound from this nepotism. Ecologists study things muscled and things leafed and conservationists worry most about animals, arguing for expensive stamp-collecting exercises to register the big bits of creation before they go extinct. This is a predicament of considerable importance to humanity. Consider: A single kind of photosynthetic bacterium absorbs 20 billion tons of carbon per year, making this minuscule cell a stronger refrigerant than all of the tropical rainforests.

Surveying our planet for its evolutionary resources, the perceptive extraterrestrial would report that Earth is swarming with viral and bacterial genes. The visitor might comment, in passing, that a few of these genes have been strung together into large assemblies capable of running around or branching toward the sunlight. It is time for us to embrace this kind of objectivity and recognize that the macrobiological bias that drives our exploration and teaching of biology is no more sensible than attempting to evaluate all of English Literature by reading nothing but a Harry Potter book. The science of biology would benefit from a philosophical reboot.

Nicholas P. Money is Professor of Botany and Western Program Director at Miami University in Oxford, Ohio. He is the author of more than 70 peer-reviewed papers on fungal biology and has authored several books. His new book is The Amoeba in the Room: Lives of the Microbes.

Subscribe to the OUPblog via email or RSS.
Subscribe to only earth, environmental, and life sciences articles on the OUPblog via email or RSS.
Image Credit: Scanning electron micrograph of amoeba, computer-coloured mauve. By David Gregory & Debbie Marshall, CC-BY-NC-ND 4.0, via Wellcome Images.

The post The amoeba in the room appeared first on OUPblog.

0 Comments on The amoeba in the room as of 4/24/2014 4:37:00 AM
Add a Comment