Family Interest in Artificial Intelligence

Parents Look For Ways to “Tech-Proof” Their Family for Impact of Artificial Intelligence

A recent study commissioned by Iridescent reveals that 86% of parents want new ways for their children to learn critical computer skills outside traditional classrooms, such as taking a class, joining a club, or participating in events, along with more guidance on at-home education. The online study, conducted by VeraQuest, surveyed parents of 3rd – 8th graders to better understand their views of Artificial Intelligence (AI) and their children’s learning experiences. In addition to the desire for new approaches to learning, the study found that parents do not understand the extent to which AI is already integrated into their everyday lives. An overwhelming majority (92%), however, understand that technology such as AI is rapidly advancing and that their children need to learn about these new technologies to be prepared for the future.

Today, only 36% of children receive technology education outside of school, and parents expressed concern about the current gap between their child’s interest in learning about future technology and their preparedness for it. These trends are consistent with studies conducted by Google and Gallup, which found that interest in computer science learning continues to be strong, but that not all students yet have access to these learning opportunities in class. The education gap is especially prominent in low-income communities.

“We often talk to concerned parents who are wondering how to provide their children the tools and skills they need to have a bright future as technology, and the skill sets needed to succeed, rapidly evolve,” said Tara Chklovski, Founder and CEO of Iridescent. “We want to help these parents feel confident and optimistic about their family’s future in a world filled with new technologies. That’s why we created the Curiosity Machine AI Family Challenge.”

Through the Curiosity Machine AI Family Challenge, Iridescent is filling the education gap with an immersive AI curriculum for children and their families. The program introduces AI to underserved families in a way that fosters a deeper understanding of AI and its real-world applications, and makes technology education accessible to all communities. Parents learn alongside their children as they create AI-based products that solve problems in their community.

“My daughter very much likes science,” said a mother surveyed in the study. “I think [the Curiosity Machine AI Family Challenge] will give her an upper hand in the [AI] field as well as allow her to be as creative as she wants to be in building skills for her future.”

Additional study findings on AI education:

Fears and Misperceptions Around AI

  • Our research found that 85% of parents understand that new AI technology develops rapidly, but fewer than 20% of parents know that Facebook, targeted ads, and other recommendation engines use AI technology. There is a real danger that this lack of AI knowledge, amid rapid development, will widen the “digital divide,” or information gap, between parents and technology.

Interest in Exploring New Technology

  • Despite their concerns, we found that parents still have a positive outlook on the future of technology: 63% of parents believe AI will be used to make the world a better place, and 78% are especially interested in learning more about AI.

Join the conversation

Iridescent is hosting a series of panel conversations with leading AI and technology companies and researchers. Join us for a deeper dive into this new study and a thoughtful conversation about how to support families, parents, and communities in the face of a rapidly changing world.

# # #

About the Survey

Methodology

The survey was conducted online from January 11 to January 17, 2018. The sample comprised 1,566 respondents in the United States, ages 25+, who have a child in grades three through eight. The sample was constructed from U.S. Census proportions to be representative of the population based on age, income, education, race/ethnicity, and geography. Targets were also used for residential status and grade level of child. The low-income group (585 respondents) also had targets for each of the above variables. These targets were created to be specifically representative of families earning under $50K annually with a child in third to eighth grade.

Rationale

Iridescent, in partnership with the Association for the Advancement of Artificial Intelligence (AAAI), the League of United Latin American Citizens (LULAC), and NVIDIA Corporation, is encouraging families to learn about Artificial Intelligence technology through the Curiosity Machine AI Family Challenge. Over the next two years, the Curiosity Machine AI Family Challenge will invite 3rd – 8th grade students and their families to explore core concepts of AI research, apply AI tools to solve problems in their communities, and have an opportunity to enter their ideas into a global competition.

An Interview with Chelsea Finn: AI for Robotics

As part of our AI in Your Community series, I had the chance to speak to Chelsea Finn, a PhD student at UC Berkeley currently doing work with machine learning and robotics. Through her work she teaches robots how to perform tasks in multiple environments, with the goal of having these robots perform tasks that humans can’t perform, or that would be dangerous for humans to perform.

Chelsea Finn

Tara Chklovski: Tell me a bit about your work. What research are you working on?

Chelsea Finn: I am a PhD student at UC Berkeley, and I work on machine learning and AI for robotics. A lot of my work entails having physical robots learn how to do things in the world, like screw a cap onto a bottle, or use a spatula, or pick up objects and rearrange them. Our goal is to have systems that can learn to do these different tasks so that they can go into a variety of environments and perform those tasks for humans – or perform dangerous jobs that we don’t want humans to do.
Read more

An Interview with Julita Vassileva: Artificial Intelligence and Online Communities

As part of our AI in Your Community series, I recently spoke with Julita Vassileva, a professor of Computer Science at the University of Saskatchewan who is currently focused on building successful online communities and social computing applications. She is particularly interested in user participation and user modeling, as well as user motivation and designing systems that incentivize people to continue participating in online communities.

Julita Vassileva


Tara Chklovski: To start off, maybe you can tell us a little bit about what problem you’re working on and what area of research you’re excited about.

Julita Vassileva: I’ve been doing research for 35 years, so I’ve done a lot of things! I’m a very curious person – I’ve been following my nose and have explored all over the place. When I first started working with artificial intelligence, during my Master’s degree and my PhD, it was in education applications. When I started I didn’t have a particular interest in the area at all. I was in my 4th year studying mathematics at the University of Sofia, in Bulgaria, and when it came time to decide what to do next, all of my really smart and strong colleagues went into very theoretical, classical areas of mathematics. I went to one of my professors for advice, and he told me that mathematics is beautiful, and you could study it all your life and be fulfilled, but that it’s such an old area that every little stone has been turned over a hundred times by extremely smart people. You need to be extremely lucky and very smart, and work extremely hard, to be able to find something new. So why not go into a new area? So I decided to go into computer science, even though I didn’t really have any idea what computer science was. I wasn’t fascinated by it – we programmed on punch cards, which was quite unexciting. But I picked somebody to work with who was sympathetic, somebody I thought I could talk to: my supervisor, Dr. Roumen Radev, who was creating a “smart” tutoring system to help teach how to solve physics problems. So I decided to try to do that for my Master’s.

TC: Oh, tell me more about that. What were you doing, and what did you think of it?

JV: At the beginning I thought, “who is going to study physics with computers?” Physics is tough enough just by itself. That leads me to a message I actually think is very important for young people – you don’t necessarily need to be interested in the subject when you start. In the beginning everything is hard, but under the surface there could be a whole world waiting to be discovered! You’ll have to invest a lot of work, and sometimes you just have to grit your teeth and do the work, and then suddenly you discover that it’s becoming interesting. The deeper you get, the more interesting and fascinating it becomes, and you feel that the power of your knowledge gives you amazing opportunities. What flipped things for me was working on those tutors and realizing that it’s really hard to design tutoring systems. The tutor I created during my Master’s program coached students in solving problems using Ohm’s law, and it took me one year of hard work to develop it. It taught only how to solve problems related to one physics lesson on calculating electrical circuits, but the experience made me think about how to make it easier to design the generic software for these systems, so that teachers could create them more easily for different lessons and domains. And so I ventured into the area of authoring intelligent tutoring systems, which meant creating software that allowed other users (like teachers) to create their own tutoring applications. While computer-based training and authoring was already an “old” area by then (20 years old, to be precise), intelligent tutoring was new, which was so motivating, because it was a bit like homesteading – there were so many unexplored problems. You feel like the first person, the pioneer. Everything in the field is in front of you and you can do whatever you want. It’s a fantastic feeling.

TC: What was that transition like, moving from designing a tutor for one type of problem to creating generic tutoring systems and authoring tools?

JV: The really tricky thing was modeling the student, because for a system to be intelligent it has to understand how much the student understands and knows, and it has to adapt to it. If the student doesn’t understand a concept and the system continues giving the same advice it’s useless – the student will drop the system. How can you make the system intelligent so it can react to what the student understands? And how do you understand what the student knows?
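The question of estimating what a student knows has a classic, simple formalization in this field: Bayesian Knowledge Tracing, where the tutor keeps a probability that the student has mastered a skill and updates it after every observed answer. The sketch below is my own illustration of that idea, not code from Vassileva’s work, and all parameter values are made-up examples.

```python
# A minimal sketch of Bayesian Knowledge Tracing (BKT): the tutor holds
# a probability that the student has mastered a skill and updates it
# after each correct or incorrect answer. Parameter values are illustrative.

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated probability of mastery after one observed answer."""
    if correct:
        # P(known | correct answer), via Bayes' rule: a correct answer may
        # also be a lucky guess by a student who hasn't mastered the skill.
        evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        posterior = p_known * (1 - p_slip) / evidence
    else:
        # A wrong answer may be a "slip" by a student who does know the skill.
        evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
        posterior = p_known * p_slip / evidence
    # The student may also learn the skill during this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior belief that the student knows, say, Ohm's law
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))  # belief after two correct, one wrong, one correct answer
```

A tutor built this way can adapt exactly as described: when the mastery estimate stays low, it changes its advice instead of repeating the same hint.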

Microsoft’s Clippy. Image from Mashable, 2017.

This is an area called student modeling, and at the time it was a new field of study, just starting. My focus during my PhD work was on creating a generic architecture, knowledge representation schemes, and planning algorithms that could be used both for domain models and for student models. But then I found out that the application area of these architectures and methods is bigger than student models, since if one wants to create a “smart” system that supports the user, it needs to understand and anticipate the user’s needs, interests, knowledge and skills… which leads to user models. For example, Microsoft Office in the mid-90s introduced Clippy – a little cartoon agent that was based on user modeling. It was watching what you were doing and trying to predict your goals and offer you advice based on those predictions. That’s user modeling, and that’s what I was working on for the first five to six years after my PhD.

Then I heard about multi-agent systems – a completely different area of AI from knowledge representation and planning, the areas I was already familiar with. And suddenly, for me, it was like a revelation. It triggered my curiosity because I come from Bulgaria, which was a communist country. I left Bulgaria as soon as communism collapsed and it was possible to travel abroad. I went to Germany, and then came to Canada. And all the time I was trying to figure out: why did communism collapse economically? Of course, the reasons were many and complex. But I was looking for one simple, basic principle, fundamental to the system… Eventually, I realized it’s the incentives. People didn’t have the incentive to work because everything was divided based on your needs. You work as hard as you can, but then you don’t get as much as you worked for. Somebody else who did not work as hard but who has bigger needs (or connections to influential people) will get more than you. Then I realized that multi-agent systems, at that time a budding new area of artificial intelligence, allowed you to explore exactly those kinds of questions.

TC: Can you explain Multi-Agent Systems a little bit more, and how they allow us to explore those sorts of questions?

JV: You build a society of agents, where each agent is like a little person with very simple reasoning. But they can talk to each other. They can interact. They are autonomous. They can pursue their own goals and respond to rules and rewards set in the system or by other agents, and then you can let them loose and see what happens with this society. If you set the rules of interaction between agents in particular ways, if you put laws and punishments into the society in a particular way, you get completely different behaviors in the overall system. You can see some societies collapse and some societies thrive, and you can build simulations of multi-agent systems. That was my focus for another five or six years. Marvin Minsky, an AI pioneer from the 1960s, created the concept of the “Society of Mind,” and I was totally thrilled by the idea of building software systems with a given purpose as a society of agents, not as a deterministic machine. To create “social glue” in such an artificial society, one needs to explore notions of trust and reputation, agent coalitions, emerging hierarchies and self-organization, communities of similar agents, and so on. And then the Web “exploded”: people started blogging, sharing posts and videos, forming interest-based newsgroups and social networks, and I thought, well, the agent simulation is good, but what about the real world? How can these concepts be transferred into the real world? Because we design systems and then put those systems out there, and people don’t use them. For every successful social system, there were thousands that failed… Just like the communist planned economies, perhaps their incentives were wrong? So how do you design for people and build incentives into the actual software to reward people so as to make the systems more engaging, and more addictive? And that became my area of research, which I’ve been working on since 2001.
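The dynamic described here – set the rules of interaction, let the agents loose, and watch the society thrive or collapse – can be sketched as a toy public-goods simulation. This is my own illustration, not code from Vassileva’s research; the payoff multiplier, learning rate, and sanction value are all invented parameters.

```python
# Toy multi-agent simulation: each round, every agent either contributes
# to a shared pot or free-rides. The pot is multiplied and split equally,
# and each agent nudges its strategy toward whichever action paid off.
# Changing one incentive rule (a sanction on free-riding) flips the
# society between collapse and sustained cooperation.
import random

def simulate(rounds=200, n_agents=20, punishment=0.0, seed=1):
    random.seed(seed)
    coop = [0.5] * n_agents          # each agent's probability of contributing
    for _ in range(rounds):
        acts = [random.random() < c for c in coop]
        pot = sum(acts) * 1.5        # contributions are multiplied...
        share = pot / n_agents       # ...and the pot is split equally
        for i, acted in enumerate(acts):
            # Contributing costs 1; free-riding may incur a sanction.
            payoff = (share - 1.0) if acted else (share - punishment)
            # Reinforce the chosen action in proportion to its payoff.
            delta = 0.05 * payoff
            coop[i] += delta if acted else -delta
            coop[i] = min(1.0, max(0.0, coop[i]))
    return sum(coop) / n_agents      # average cooperation at the end

print(simulate(punishment=0.0))  # no sanction: cooperation decays toward zero
print(simulate(punishment=1.5))  # with sanctions: cooperation is sustained
```

Without sanctions, free-riding always pays better than contributing, so the simulated society collapses; with a sufficiently costly sanction, cooperation becomes the rewarded strategy and the society thrives.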
It’s about understanding how to incentivize participation in online communities, which could be discussion forums, social networks, or just enterprise systems where operators need to type in reports. For example, nurses who report on the state of a patient – how do you incentivize them to write a more detailed, better report? How do you reward desirable behaviors to enable the system as a whole to achieve a certain purpose, while maintaining its sustainability, quality, fairness, etc.? So then I started studying motivation, drawing on social psychology and behavioral economics. Why do people do certain things? It’s a fascinating area.

TC: So what have you learned in your research about what motivates people? For instance, people know they have to exercise and eat healthy, but how do you actually get them to do it? How do you keep people from getting bored and ignoring the technology once they’re used to it?

JV: I think that’s a one-million-dollar question – or maybe a multi-million-dollar question – because people are very easily bored, and we’re getting bored more and more easily. There are a lot of different strategies, and there’s no one answer, but probably the best answer is that it depends on the person. The one important thing is that the person has to care about the behavior, and that they should set their own goal. To get them to set an appropriate goal, you can help them, you can educate them. So personalization is the key, and this leads again to user modeling. Here, again, we go back to my original research into educational systems, AI and education – how do you teach people what is important? How do you convince them that something is important, so that they can set a behavioral goal related to that thing? Changing habits depends on deciding on the new behavior you want to adopt, and then overcoming all the barriers which pull you back to your old behavior, which is very sticky. So how do you do that? I’ll give you a couple of examples. Those personal monitoring devices like Fitbit and Apple Watch all rely on self-monitoring. People are curious about their own behavior. Showing them statistics is important, to teach them how they’re doing and whether they’re improving. If they’re doing better than yesterday, it makes people feel good. Psychologists call this self-efficacy. It’s a very powerful motivation. People have a built-in drive to become better at whatever they’re doing. You can encourage this by visually showing them that they’re getting better, or if they’re not getting better, by nudging them or relying on social influence. The key is to know your user, which leads back to user modeling, which means collecting data. Data, data, data.
Collecting and understanding the data about your user comes, of course, with huge privacy implications: once the data is there, it can be copied, it could fall into the wrong hands, or it can be used for the wrong purposes. So ethically it’s very, very questionable. But like every technology, it has positive and negative uses.

TC: So what excites you about the potential of AI in the future?

JV: What excites me? I would very much like to see AI in a chip which people can plug in. I don’t believe that AI will be evil. I don’t think the fantasy of a HAL-like computer that is much smarter than us and will take our place could happen. I think AI is very fragmented. Of course computers can have better skills than us for storing data. They’re faster than us. But these are very narrow areas in which they can be better than us, for very narrow tasks. I see a positive future in augmenting our capabilities with the abilities of AI.

TC: That is so cool. What is a good way for children and maybe parents who don’t know about these kinds of technologies to start learning about them?

JV: There are so many tools out there. It’s really easy to create an app. There are environments where you don’t even need to code to be able to create a very simple app. I would recommend that they start with something that would enhance their life. For example, I live in a new neighborhood where nobody knows anybody. I had an undergraduate student who also lives in the same area who was looking for a project and I suggested that he make an app to allow people who live in the same neighborhood to just meet each other for a good reason. People are busy and need a good reason. Maybe they have something to give away, or cooked too much. Instead of throwing it away, why not offer it to the neighbors who didn’t have time to cook this evening? It’s also a reason to meet a neighbor, to encourage real face-to-face interaction, to get people to meet each other through the mediation of technology. And he was very excited. He did it and it worked well – he managed to actually get people on the application. So that’s a very good entry point. Once you start with something simple – and perhaps it’s not intelligent at all – then you can start adding “smarts” to it. But start simple, to solve a specific problem. And use the web. The web is big and there is so much to learn there.

TC: I think that is great advice, to start simple. And then you get excited.

JV: Exactly. Don’t wait to become interested because if you wait, there are so many interesting areas which you never get exposed to in school. So how would you know about this? How would you know if you’re interested? So start somewhere, work hard to become good in it, and then you’ll get interested. You’ll get excited, and it becomes a passion. Once you get passionate you will be good at it.

TC: That’s awesome advice. Thank you so much.

Elizabeth Clark: Creative Writing with AI

As part of our AI in Your Community series, I spoke to Elizabeth Clark, who won the Amazon Alexa Prize for her work with Sounding Board, a social bot. Elizabeth is studying natural language processing and working on tools for collaborative storytelling.

Elizabeth Clark

Tara Chklovski: Tell me a little bit about what you’re working on.

Elizabeth Clark: Very broadly, I’m working on natural language processing – looking at how language and computers interact, and helping computers process language, either written text or speech. More specifically, I’ve been looking at collaborative writing systems, which give people support and offer suggestions to them as they write. I’m exploring how we can build models that will generate suggestions that are helpful to people as they try to write, say, a short story. There are different levels at which to offer help to people as they write. You could point out grammatical errors or spelling mistakes, or you could offer suggestions about structure. The type of suggestions we’re interested in are focused on the actual content of your story.

Our goal is to look at what type of suggestions people want, and determine how we can give them suggestions that are coherent with the story so far, but are still creative and surprising – all to try and spark their creativity as they write. As for what makes a useful suggestion, we’ve found that it really depends on who is using the system. Different people want different things out of these suggestions. Some people really like silly suggestions with unexpected elements, and they’ll work really hard to find a way to work them into their story, embracing it as a challenge… whereas other people know exactly what they want to write, and if the suggestion isn’t in line with that, they will just delete it and write their own story. There does seem to be a tradeoff between the level of unexpectedness of a suggestion and how coherent it is with what has come before.
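One common way this coherence/unexpectedness dial shows up in language generation is as a sampling “temperature.” The sketch below is my own toy illustration of that general idea, not Clark’s actual system; the candidate words and their scores are invented.

```python
# Toy illustration of the coherence/unexpectedness tradeoff: sample the
# next word from a model's scores through a softmax with a temperature
# knob. Low temperature sticks to the safest, most coherent continuation;
# high temperature makes surprising suggestions much more likely.
import math
import random

def sample_word(scores, temperature=1.0, rng=random):
    """Sample one word from a {word: score} dict via softmax with temperature."""
    words = list(scores)
    logits = [scores[w] / temperature for w in words]
    m = max(logits)                               # subtract max for stability
    weights = [math.exp(l - m) for l in logits]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical next-word scores after "The knight opened the door and saw a ..."
scores = {"dragon": 3.0, "staircase": 2.5, "toaster": 0.5}

random.seed(0)
print(sample_word(scores, temperature=0.2))  # almost always the top choice
print(sample_word(scores, temperature=5.0))  # surprising words become likely
```

Turning the temperature up trades coherence for surprise, which mirrors what different writers want: safe, on-theme suggestions versus unexpected ones they can embrace as a challenge.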

Read more

Natural Language Processing and Bias: An Interview with Maarten Sap

As part of the AI in your Community series, I recently spoke with Maarten Sap, a PhD student at the University of Washington. Maarten is interested in natural language processing, and social science applications of AI. Maarten is also the 2017 winner of the Alexa Prize, an Amazon competition to further conversational artificial intelligence. Maarten […]

Virtual humans and decision-making: A conversation with Stacy Marsella

As part of our ongoing AI In your Community series, I talked to Stacy Marsella, professor in the College of Computer and Information Science at Northeastern University and the Psychology Department. Professor Marsella’s research is grounded in computational modeling of human cognition, emotion, and social behavior as well as evaluation of those models. Tara Chklovski: […]

Machine Learning and combining common sense with data: An Interview with Fabio Cozman

As part of our AI in Your Community project, I spoke to Fabio Gagliardi Cozman, who is a Full Professor at the Engineering School at the University of Sao Paulo, Brazil. He works in the Department of Mechatronics and Mechanical Systems, in the Decision Making Lab, which focuses on Artificial Intelligence. Tara Chklovski: Tell us […]

Game Theory and Machine Learning: An Interview with Fei Fang

As part of our AI in Your Community series, I sat down with Fei Fang, an Assistant Professor in the Institute for Software Research in the School of Computer Science at Carnegie Mellon University. She works on game theory and machine learning, researching the strategic behavior of multiple agents, which has applications to many societal challenges […]