Event Recap: AI Experts Answer NYC Teens’ Questions and Concerns About the Technology and Its Impact on Their Lives
Recently, Iridescent and The Cooper Union hosted a panel featuring influential women working in artificial intelligence (AI) across a variety of industries. Drawing on their diverse experience and perspectives in AI research, neuroscience, and art, the panelists addressed NYC teens’ questions and concerns about the impact of AI on their lives in Cooper Union’s Great Hall.
Panelists Provide Diverse AI Perspectives and Experiences
Each panelist had a different relationship to artificial intelligence and human-technology interaction, and each brought a unique perspective to the discussion to help NYC teens learn about AI. Panelists included: Dr. Susan Epstein, a professor at Hunter College and the CUNY Graduate Center Doctoral Program in Computer Science who develops knowledge representations and machine learning algorithms to support programs that learn to be experts; Karen Palmer, an award-winning international artist, speaker, and activist specializing in film experiences at the intersection of AI, neuroscience, consciousness, and implicit bias; and Dr. Alexandra Ochoa Cohen, a neuroscientist at New York University studying emotional learning and memory in adolescents and co-chair of New York City’s 2019 Brain Awareness Week.
Moderated by Jennifer Strong, a journalist at the Wall Street Journal and host and creator of The Future of Everything podcast, the panelists discussed questions about the differences between humans and machines, whether an “AI-based being” can feel emotions or develop a personality, and how AI will affect our worldviews in the future.
What NYC Teenagers Want to Know About AI
Guided by questions submitted by NYC high school students (including the ones below), the discussion covered differences between humans and machines, current collaborations between humans and AI systems, potential threats and advantages of AI, and finally, what the future with AI might hold.
Where do we draw the lines between humans and machines?
Dr. Cohen explained that there are key differences between humans and machines: humans are born knowing how to do and learn things that we cannot teach machines, and humans have psychological traits that machines cannot replicate. Dr. Epstein added that humans learn about their environment from birth: our senses help us make sense of the world, and we process immense amounts of data without being told to do so. Ms. Palmer noted that the human brain is the most powerful computer we know of and pointed to self-awareness as a distinctly human quality.
Can AI think for itself? Can it feel emotions?
Dr. Epstein described machines that can be programmed to be cautious or risk-taking, curious or not, and asked the audience to consider how blurry our notions of personality really are. Ms. Palmer, in turn, asked the audience to consider the question of self-awareness in terms of what sort of poetry a computer might write.
Dr. Cohen explained that emotion is difficult to study even in a lab, so identifying emotions in a machine is more complicated still. The discussion also turned to the need for greater diversity in AI and machine learning fields, both in the people building the algorithms and tools and in the data used to train those tools. The panelists also touched on our responsibility to provide these systems with a wide range of experience in order to avoid replicating our own internal biases.
How will humans become more dependent on AI in the future? How will it impact our lifestyle and worldview?
Dr. Cohen and Ms. Palmer both shared examples of ways humans are using artificial intelligence to augment our abilities, along with the ethical questions that come with that sort of augmentation. Whether we’re discussing neuroprosthetics, advanced camera and tracking technology, or personal data collected by companies, there are many moral questions to consider. The panelists pointed to a few big ideas in their discussion:
- AI systems can carry the same biases as the humans who build them and the data used to train them, so it’s critical to be aware of our own values and biases and the ways we’re replicating them in AI.
- If an algorithm can make decisions, it should also be able to explain why it made that decision (an approach known as explainable AI).
For more on the event, check out the Scientific American podcast “60-Second Science” highlighting Dr. Cohen’s answer to a question about the effects of technology on our brains, or click through the slideshow below. Learn more about other women working in AI in our Women in AI Series.