Daniel Dennett: AI doesn't need to be conscious

(4:22) If consciousness is ours to give, should we give it to AI? This is the question on the mind of American philosopher and cognitive scientist Daniel Dennett.

Guest: (189 days ago)

Dennett makes a good case for building smart tools rather than autonomous agents with their own agendas.

I'm curious about consciousness. Presumably it evolved for a reason. Even if it is just some kind of by-product, it seems so powerful a thing that it would be subject to evolutionary selection, or de-selection, very rapidly. Either way, consciousness looks like a major evolutionary asset. Why is this, exactly?

I think it is because consciousness enables us to behave in more complex ways. An insect appears to behave in complex ways, but it turns out to be quite mechanical and predictable when you analyse it. Humans behave in more complex ways because we are conscious of ourselves and others, and we are constantly estimating the impact of our actions and weighing the many options open to us. My hypothesis is that this level of complexity is not possible without consciousness.

Searle's Chinese Room thought experiment is relevant here. The argument is that the Room can behave intelligently even though no one inside understands what is going on. The big failing in discussions of this thought experiment is that it *is* just a thought experiment. It is taken for granted that the experiment could in principle be carried out, and that only our current level of technology falls short. If my hypothesis is true, this assumption is not warranted: the Room could not function as described, i.e. without consciousness.

What about Google Translate? As Dennett points out, and as anyone who has used it can attest, it's not perfect but it's quite good. Try translating a text into another language and back again, or through a chain of languages, to see for yourself. That rather shoots my hypothesis down, unless either Google Translate is conscious, of which it shows no other sign, or consciousness is not necessary for language processing.

RELATED POSTS
Robot Sophia represents new technologies at Web Summit 2017
AI learns to lie
Elon Musk's OpenAI is disrupting esports big time
7 Days of Artificial Intelligence
Two robots debate profound topics