Is AI a species-level threat to humanity?
Watch the newest video from Big Think: https://bigth.ink/NewVideo
Learn skills from the world's top minds at Big Think Edge: https://bigth.ink/Edge
----------------------------------------------------------------------------------
When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.
In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach, and Sophia the Robot herself—weigh in on the nuances of the debate and the degree to which AI threatens humanity. Even if it's not a species-level threat, AI will still upend our world as we know it.
What's your take on this debate? Let us know in the comments!
----------------------------------------------------------------------------------
TRANSCRIPT:
MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It'll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You'll talk to your car. You'll argue with your car. Your car will give you the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance, and servicing of robots. Not to mention robots that are software programs that you talk to and that make life more convenient. However, let's not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.
SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light; it shines because it reflects sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.
MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and a lot of others think it's the central thing: we have to worry about machines getting conscious, and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether that missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the missile does, not how it feels. And that shows it's a complete red herring to think you're safe from future AI if it's not conscious. Our universe didn't use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up, and now our universe is aware of itself.
BILL GATES: I do think we have to worry about it. I don't think it's inherent that, as we create superintelligence, it will necessarily always have the same goals in mind that we do.
ELON MUSK: We just don't know what's going to happen once there's intelligence substantially greater than that of a human brain.
STEPHEN HAWKING: I think the development of full artificial intelligence could spell the end of the human race.
YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning. It's an idea inspired by the brain, a little bit: constructing a machine that has a very large network of very simple elements, very similar to the neurons in the brain. The machine then learns by basically changing the efficacy of the connections between those neurons.
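To make LeCun's description concrete, here is a minimal sketch in Python of a single artificial neuron learning the logical OR function. Everything in it (the toy task, the learning rate, the numbers) is illustrative rather than anything from the video, but the core move is exactly what he describes: learning happens by nudging the efficacy (the weights) of the connections.

import math, random

# One artificial neuron: output = sigmoid(w1*x1 + w2*x2 + b)
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Inputs and targets for logical OR (an illustrative toy task).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = random.random(), random.random(), random.random()
lr = 0.5  # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error, passed back through the sigmoid.
        grad = (out - target) * out * (1 - out)
        # "Changing the efficacy of the connections": the weight updates.
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target:", target)

After training, the printed outputs sit close to their 0/1 targets. Nothing was stored except the updated connection strengths, which is the whole point.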
MAX TEGMARK: AGI—artificial general intelligence—that's the dream of the field of AI: to build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And if that happens, you have to ask yourself whether that might lead the machines to get not just a little better than us but way better at all goals—having superintelligence. The argument for that is actually really interesting and goes back to the '60s, to the mathematician I.J. Good, who pointed out that the task of building an intelligent machine is, in and of itself, something you could do with intelligence. So once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines. Except they might do it thousands or millions of times faster...
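To get a feel for the arithmetic behind Good's argument, here is a deliberately toy Python model. The 1.5x per-generation gain and the 12-month starting design time are pure assumptions for illustration, not claims from the video: each AI generation is assumed to design a successor that is both more capable and faster at designing the generation after it.

# Toy model of I.J. Good's intelligence-explosion argument.
# All numbers below are illustrative assumptions, not predictions.
speedup = 1.5          # assumed per-generation improvement factor
capability = 1.0       # capability relative to human engineers
design_time = 12.0     # months the first generation takes to design

elapsed = 0.0
for gen in range(1, 11):
    elapsed += design_time
    capability *= speedup    # each generation is more capable...
    design_time /= speedup   # ...and designs its successor faster
    print(f"gen {gen:2d}: {capability:5.1f}x capability after {elapsed:4.1f} months")

Because the design time shrinks geometrically, the total time for all generations stays bounded (under these toy numbers the series converges to 36 months), which is the mathematical core of the "explosion" intuition: capability compounds without bound inside a finite window.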
Read the full transcript at https://bigthink.com/videos/will-evil-ai-kill-humanity