Busting the myth of ‘neutral’ AI | Ramesh Srinivasan
Watch the newest video from Big Think: https://bigth.ink/NewVideo
Join Big Think Edge for exclusive videos: https://bigth.ink/Edge
----------------------------------------------------------------------------------
AI isn't "just technology," says Professor Ramesh Srinivasan. We have to bust the myth that AI is neutral and has no biases. We encode our biases into artificial intelligence. That fact will become more apparent as 5G 'smart cities' become a reality.
Business leaders must develop awareness and ask themselves: What are the data sets my technologies are learning from and what are the values that are influencing the development of these technologies?
The American public, across every demographic and both sides of the aisle, supports doing something about big technology issues that are creating an unequal future, says Srinivasan. We are at an inflection point, and good AI is possible if tech leaders act on these issues.
----------------------------------------------------------------------------------
RAMESH SRINIVASAN:
Ramesh Srinivasan is Professor of Information Studies and Design Media Arts at UCLA. He makes regular appearances on NPR, The Young Turks, MSNBC, and Public Radio International, and his writings have been published in the Washington Post, Quartz, Huffington Post, CNN, and elsewhere.
Check out Professor Ramesh Srinivasan's latest book Beyond the Valley: How Innovators around the World are Overcoming Inequality and Creating the Technologies of Tomorrow at https://amzn.to/2v4bgoF
----------------------------------------------------------------------------------
TRANSCRIPT:
RAMESH SRINIVASAN: Technology can really amplify biases because we create technologies based on who we are. Just like when we write a poem or write a book. My new book, Beyond the Valley, it reflects my biases. I'm willing to admit that. And many of us like to think of ourselves as unbiased but that's part of being a human being, is being biased. That doesn't mean we're bad. That doesn't mean we're wrong, but all of us carry unconscious biases. So we encode our ways of seeing the world based on who we are into technology. That's part one and that's a major issue and that's an issue that every business leader should be aware of and conscious of.
But the second point is, increasingly, technology companies, including the big ones, describe their companies as AI companies; artificial intelligence companies. Why is that? Well, artificial intelligence is kind of real now in a way that it wasn't before, and I, as a former artificial intelligence developer, have seen this change. And the reason why is we built faster machines, we've been able to store exponentially more data all at lower costs, and those phones in our pocket which are tracking us 24/7, 365 in ways we have no knowledge about. What I'm getting at here is technologies built upon biases are learning from data sets that are out there and they're learning from an unequal world. Because our world, we still have to try to perfect our union. We have to think about artificial intelligence in aspirational ways rather than this myth that it's somehow neutral or scientific or it's "just technology". So that's the second issue that I would encourage business leaders to be aware of. What are the data sets your technologies are learning from and what are your own values that are influencing the development of your technologies? Those are one and two.
And the third is being transparent and understanding when you're using an algorithmic or AI system. It turns out that, as Americans, and actually people across the world, we are always interacting with AI systems and we don't even know it. For example, there's a lot of discussion around 5G networks which is actually configured for things to communicate with one another. Like, imagine your sidewalk communicating with your shoes. I mean, who knows why we want that but that is basically the smart city concept, is a layered infrastructure for 5G. But what are the languages by which those things are communicating? What are the algorithms that are determining what those forms of communication are? So, basically, all the time we're interacting with AI systems it's not disclosed to us; we don't know what those systems know about us, we don't know what are the values that guide their decisions, we don't know how that might shape our lives, we don't know what alternatives we might provide. All of that is a black box and all of that should be opened up. So a lot of business leaders I've actually spoken to are kind of in the diversity and inclusion space, but they're also trying to think about alternatives at this moment where the American public—bipartisan, it's incredible...
Read the full transcript at https://bigthink.com/videos/artificial-intelligence-bias