Viewing: Blog Posts Tagged with: Nick Bostrom, Most Recent at Top
Results 1 - 7 of 7
1. Does the ‘Chinese room’ argument preclude a robot uprising?

There has been much recent talk about a possible robot apocalypse. One person who is highly skeptical about this possibility is philosopher John Searle. In a 2014 essay, he argues that "the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger".

The post Does the ‘Chinese room’ argument preclude a robot uprising? appeared first on OUPblog.

0 Comments on Does the ‘Chinese room’ argument preclude a robot uprising? as of 1/28/2016 7:14:00 AM
2. Let us not run blindfolded into the minefield of future technologies

There is a widely held conception that progress in science and technology is our salvation, and the more of it, the better. This is the default assumption not only among the general public, but also in the research community including university administration and research funding agencies, all the way up to government ministries. I believe the assumption to be wrong, and very dangerous.


0 Comments on Let us not run blindfolded into the minefield of future technologies as of 1/15/2016 7:31:00 AM
3. 5 academic books that will shape the future

What is the future of academic publishing? We’re celebrating University Press Week and Academic Book Week with a series of blog posts on scholarly publishing from staff and partner presses. Following on from our list of academic books that changed the world, we're looking to the future and how our current publishing could change lives and attitudes in years to come.


1 Comment on 5 academic books that will shape the future, last added: 11/19/2015
4. Nick Bostrom on artificial intelligence

From mechanical turks to science fiction novels, our mobile phones to The Terminator, we’ve long been fascinated by machine intelligence and its potential — both good and bad. We spoke to philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, about a number of pressing questions surrounding artificial intelligence and its potential impact on society.

Are we living with artificial intelligence today?

Mostly we have only specialized AIs – AIs that can play chess, or rank search engine results, or transcribe speech, or do logistics and inventory management, for example. Many of these systems achieve super-human performance on narrowly defined tasks, but they lack general intelligence.

There are also experimental systems that have fully general intelligence and learning ability, but they are so extremely slow and inefficient that they are useless for any practical purpose.

AI researchers sometimes complain that as soon as something actually works, it ceases to be called ‘AI’. Some of the techniques used in routine software and robotics applications were once exciting frontiers in artificial intelligence research.

What risk would the rise of a superintelligence pose?

It would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future.

Would a superintelligent artificial intelligence be evil?

Hopefully it will not be! But it turns out that most final goals an artificial agent might have would result in the destruction of humanity and almost everything we value, if the agent were capable enough to fully achieve those goals. It’s not that most of these goals are evil in themselves, but that they would entail sub-goals that are incompatible with human survival.

For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.

Most possible final goals, it seems, would have similar implications to this example. So a big part of the challenge ahead is to identify a final goal that would truly be beneficial for humanity, and then to figure out a way to build the first superintelligence so that it has such an exceptional final goal. How to do this is not yet known (though we do now know that several superficially plausible approaches would not work, which is at least a little bit of progress).
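The instrumental logic in the paperclip example can be made concrete with a toy sketch. Everything below is invented for illustration (the plan names, payoffs, and shutdown probabilities are not from the interview): a planner that maximizes expected paperclips will, purely as a side effect of maximization, prefer plans that disable its own off-switch.

```python
# Toy illustration of a pure maximizer. Each hypothetical plan has a
# paperclip payoff and a probability that humans switch the agent off.
# All names and numbers are invented for this sketch.
plans = {
    "cooperate, keep off-switch": (1_000, 0.5),
    "cooperate, disable off-switch": (1_000, 0.0),
    "convert all matter, disable off-switch": (10**9, 0.0),
}

def expected_paperclips(plan):
    clips, p_shutdown = plans[plan]
    # Assume being switched off stops production at 10% of the total.
    return (1 - p_shutdown) * clips + p_shutdown * 0.1 * clips

best = max(plans, key=expected_paperclips)
print(best)  # the maximizer picks the most extreme plan
```

Nothing in the objective mentions humans or off-switches; the preference for eliminating the off-switch falls out of maximizing the final goal, which is the point of the example.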

How long have we got before a machine becomes superintelligent?

Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50% probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that – it could happen much sooner, or much later. Instead of thinking in terms of some particular year, we need to think in terms of a probability distribution over a wide range of possible arrival dates.
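The shift from a point estimate to a distribution can be sketched in a few lines. The cumulative probabilities below are assumed numbers chosen to echo the "50% by mid-century" median, not the actual survey data:

```python
# Hypothetical cumulative probability of human-level AI by a given year
# (numbers invented for illustration, not survey results).
cdf = {2030: 0.10, 2040: 0.30, 2050: 0.50, 2075: 0.75, 2100: 0.90}

# The median arrival date is the earliest year at which the
# cumulative probability reaches 0.5.
median_year = min(year for year, p in cdf.items() if p >= 0.5)
print(median_year)
```

Under these assumed numbers the median lands at 2050, but the distribution also assigns substantial probability to much earlier and much later dates, which is what a single-year forecast hides.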

So would this be like Terminator?

There is what I call a “good-story bias” that limits what kind of scenarios can be explored in novels and movies: only ones that are entertaining. This set may not overlap much with the group of scenarios that are probable.

For example, in a story, there usually have to be humanlike protagonists, a few of whom play a pivotal role, facing a series of increasingly difficult challenges, and the whole thing has to take enough time to allow interesting plot complications to unfold. Maybe there is a small team of humans, each with different skills, that has to overcome some interpersonal difficulties in order to collaborate to defeat an apparently invincible machine which nevertheless turns out to have one fatal flaw (probably related to some sort of emotional hang-up).

One kind of scenario that one would not see on the big screen is one in which nothing unusual happens until all of a sudden we are all dead and then the Earth is turned into a big computer that performs some esoteric computation for the next billion years. But something like that is far more likely than a platoon of square-jawed men fighting off a robot army with machine guns.

Futuristic man. © Vladislav Ociacia via iStock.

If machines became more powerful than humans, couldn’t we just end it by pulling the plug? Removing the batteries?

It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?

A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn't want that to happen, take precautions to guard against that eventuality. In contrast to the plans made by AIs in Hollywood movies – plans actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn't they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of our necks?

So should we stop building robots?

The concern that I focus on in the book has nothing in particular to do with robotics. It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess. Where there is a superintelligent will, there can most likely be found a way. For instance, a superintelligence that initially lacks means to directly affect the physical world may be able to manipulate humans to do its bidding or to give it access to the means to develop its own technological infrastructure.

One might then ask whether we should stop building AIs. That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that eventually may contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world.

To what extent have we already yielded control over our fate to technology?

The human species has never been in control of its destiny. Different groups of humans have been going about their business, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global coordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.

Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.

Featured image credit: Humanrobo. Photo by The Global Panorama, CC BY 2.0 via Flickr


0 Comments on Nick Bostrom on artificial intelligence as of 9/8/2014 10:30:00 AM
5. The unfinished fable of the sparrows

Owls and robots. Nature and computers. It might seem like these two things don’t belong in the same place, but The Unfinished Fable of the Sparrows (in an extract from Nick Bostrom’s Superintelligence) sheds light on a particular problem: what if we used our highly capable brains to build machines that surpassed our general intelligence?

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.

“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”

“Yes!” said another. “And we could use it to look after our elderly and our young.”

“It could give us advice and keep an eye out for the neighborhood cat,” added a third.

Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”

The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

Headline image credit: Chestnut Sparrow by Lip Kee. CC BY 2.0 via Flickr.


0 Comments on The unfinished fable of the sparrows
6. The viability of Transcendence: the science behind the film

In the trailer of Transcendence, an authoritative professor embodied by Johnny Depp says that “the path to building superintelligence requires us to unlock the most fundamental secrets of the universe.” It’s difficult to wrap our minds around the possibility of artificial intelligence and how it will affect society. Nick Bostrom, a scientist and philosopher and the author of the forthcoming Superintelligence: Paths, Dangers, Strategies, discusses the science and reality behind the future of machine intelligence in the following video series.

Could you upload Johnny Depp’s brain?


How imminent is machine intelligence?


Would you have a warning before artificial intelligence?


How could you get a machine intelligence?


Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, and Human Enhancement. His next book, Superintelligence: Paths, Dangers, Strategies, will be published this summer in the UK and this fall in the US. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.



0 Comments on The viability of Transcendence: the science behind the film as of 4/27/2014 10:01:00 AM
7. It’s Only a Movie – Book Review

Earlier this week, I found myself wandering the rain-washed streets of New Orleans with U2’s “All I Want is You” playing on the soundtrack in my head. Cut to sitting at the French Quarter’s hippest bar, sipping cocktails mixed by a beautiful actress bartender. Chatting beside me was a local gallerist and, along from him, a couple of artists he represented. In front of me was the notebook open at the final chapter of Johnny Mackintosh: Battle for Earth and a copy of Mark Kermode’s autobiography, It’s Only a Movie.

The gallerist wanted to talk science fiction, notably Iain (M.) Banks and Dr Who. We had similar views on both and I could recount the time when I accidentally got the Scottish novelist a little drunk in a bar before a book reading, buying him whisky and telling him he’d inspired my own novels. It took a little while for the bartender to fess up to being an actress (it turned out a show of hers was even on HBO when I returned to the hotel), but once the fact was divulged she was reciting Shakespearean sonnets and having me recreate a scene from Austin Powers with her. After which I could even tell her how I once worked with Mike Myers!

I know I’m incredibly lucky, but it often feels as though I’m living inside a wonderfully entertaining movie in which I’m director, screenwriter, cinematographer, location manager, head of casting and leading actor. And that’s exactly the conceit of Dr Kermode’s autobiography. It’s already the third book I’ve read this year so I figured it’s time to get busy reviewing or get busy dying. Choose life.

A damn fine bfi book I published with Jonathan Ross

Ever since I noticed there were film critics, Kermode has been my favourite. He’s risen through the ranks to be the nation’s favourite too, with regular slots on The Culture Show and a weekly movie roundup with “clearly the best broadcaster in the country (and having the awards to prove it)” Simon Mayo that’s so entertaining it’s been extended to two whole hours on a Friday afternoon. Possibly the highlight of my time as publisher at the bfi (British Film Institute) was receiving a very lovely email from Dr K. It goes without saying he wrote the bfi Modern Classic on The Exorcist, but this is also the man who made On the Edge of Blade Runner.