What happens when our computers get smarter than we are?

(16:31) The awakening of artificial intelligence. Nick Bostrom, a Swedish philosopher at the University of Oxford, discusses the threat of superintelligence, which he argues is bound to arrive. Bostrom is the author of 'Superintelligence: Paths, Dangers, Strategies', available from Amazon.co.uk and Amazon.com

Guest: one more time (1079 days ago)

A couple of comments.

1. Why should we care if humans are wiped out by machines? I can understand why we might not want individuals to suffer, but if, say, machines rendered us gradually less fertile and we just died out, to be replaced by machines with all our good qualities in greater abundance, why exactly would that be a bad thing? I think only because we have evolved to want our children to survive and prosper.

2. This talk discussed *one* machine intelligence. It reminds me of the report by M V Wilkes (in the 1950s?) which concluded that, yes, the UK probably should have its own computer, and it should be located at Crewe, which has good rail links to bring wagons of punched cards. We won't just have one machine intelligence. That means these beasties will be subject to competition for resources and natural selection. No-one knows where that evolution will lead, but it is a fair bet that it leads to more selfish, aggressive machines than we might like. Especially if we all want the same resources.

3. If we make a machine that is smarter than us, that machine will be able to make another smarter than it is. It might not be a smart thing to do, but if it happens, it will explosively lead to something as incomprehensible to us as we are to a mayfly (a toy sketch of this runaway dynamic follows).
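
For what it's worth, the "explosive" part of point 3 can be made concrete with a toy recurrence. Below is a minimal Python sketch, assuming (purely for illustration) that each generation's ability to improve its successor scales with its own capability; the constants c0, k and the step count are invented for the example. The growth this produces is hyperbolic rather than merely exponential: it idles for generations and then diverges almost at once.

# Toy recurrence, not a model of real AI: assume each machine designs a
# successor whose capability is multiplied by (1 + k * capability), i.e.
# the designer's ability to improve its successor scales with how capable
# the designer already is. c0, k and steps are illustrative assumptions.

def generations(c0=1.0, k=0.1, steps=20):
    """Yield (generation, capability) for successive self-improving machines."""
    c = c0
    for g in range(steps):
        yield g, c
        c = c * (1 + k * c)  # smarter designers make disproportionately smarter successors

for g, c in generations():
    print(f"generation {g:2d}: capability {c:10.3e}")

# The printout crawls along for roughly ten generations, then blows up
# within a few more -- the sense in which a take-off would be "explosive".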

Guest: one more time (1079 days ago)

... and see this: LINK

RELATED POSTS
Do You Trust This Computer?
Will AI make us immortal, or could it be the end of us?
Will AI cause WW3?
Neil deGrasse Tyson - AI vs machine learning
Gloomy Sunday