As part of our ongoing AI in Your Community series, I sat down with Dr. Carolyn Rosé, Professor at Carnegie Mellon University’s Language Technologies Institute and Human-Computer Interaction Institute, to learn about her artificial intelligence research and her advice for children and families starting to learn AI.

Dr. Carolyn Rosé

Tara Chklovski: What is your current work?

Carolyn Rosé: I work in the area of education, trying to apply artificial intelligence techniques to support students’ learning. In particular, I’m trying to identify characteristics of people that would make them work well together and then connect them. So we are very much not trying to replace people with technology; we want to use technology that actually connects people and gives them more opportunities to benefit from their human interaction. For example, we have done several projects now where we first give individual students an activity in which they share the work they’ve done on their own with the community, in a public space like a discussion forum.

Then, we encourage students to find opportunities within that context to offer feedback to three other students. We observe who they pick and how they interact with those people, and we look for signs of curiosity, mutual respect, and interest in each other’s ideas. What we find is that there’s something we actually haven’t formalized behind those interactions—a sense of mutual interest and respect that would underlie a good collaboration. What we want in groups of students working together is that they listen and compare what the other person thinks to what they themselves think—that they see there are other ways of looking at the same information and that they can find richness in that comparison. That’s why we look for those signs of curiosity, interest, and respect in the exchange, and then we use that as a basis to assign people to groups in which they should have chemistry, even though we don’t know what it is that actually sparked their interest. The thinking is that if we’ve seen it in one context, then we can use it as the basis for matching people into groups where they will perform better, because they’re with the people who inspire them.

TC: What inspires you?

CR: I would say I’m really inspired by commitment to excellence and persistence, even in the face of adversity. I like to run after a big question where there might not be an answer, but it’s fascinating. I’m also really inspired by people who are committed to doing something impactful in the world.

When I was an undergraduate, I had to decide what area of computer science I was going into. I was most intellectually stimulated in my machine learning course, and I had an invitation to stay and do my PhD with the professor who taught that class. But I wasn’t sure the work was going to have any impact in the world, and the person I admired more was a different professor who was doing work in educational technology. I became interested in that field, because I really feel that education is the key to advancing society.

My dad always said that “education is what makes people free.” I’ll never forget that expression, because I really enjoy the experience of learning something new and wanted to give that joy to people. So I decided to teach a machine learning course every semester, and I apply machine learning in my research, which is aimed toward impact in education to try to bring people together in groups where they can learn and achieve together.

TC: What makes a good problem?

CR: I’m a researcher, so I try to find a problem that gives me the opportunity to learn something new to answer questions that haven’t been answered before. I would love children to get that sense of wonder and realize that scientists live in a world where there aren’t answers and that we’re comfortable with that.

But at the same time, I think a weakness of science is that it sometimes can exist in a world for its own sake without connecting with the real, felt needs of people in the communities we are trying to serve. So I think a good problem is a good bridge between all the questions we’re trying to solve and the needs of the people whose problems bring about those questions.

TC: That leads me to my next question: What do you think makes a good product?

CR: A good product is one that’s well designed to meet real needs. For example, I was working with the Smithsonian on an online course about superheroes. There was a version of the class where individual students would learn about superheroes, as a genre, and design a superhero that would try to solve a problem in the world. There was also a group version, where people who finished the individual version could choose to do a team project and put their superheroes together on a superhero team. We looked at what was different in the stories they constructed about individual superheroes versus superhero teams, and what was interesting was that when they created one superhero, they would have it solve a personal, individualized problem. Those were important problems, but they were important to individual people. When they were in teams, they had to think about broader societal problems and how to work as a team.

TC: How do you think AI will help strengthen society and communities?

CR: I don’t think AI in itself has the ability to achieve anything good; it’s all in how it’s applied. It’s in picking the right problems to solve, because AI gives us tools that fit certain parts of problems’ solutions. AI is like a magnifying glass: it extends our ability to see things, and in particular, models allow us to summarize a huge amount of data. So it’s a means of giving the technology a little bit of agency—which I hesitate to say, because it makes it sound like AI is going to be self-aware and take over, which is not at all what’s going to happen.

TC: I love the analogy of AI being a magnifying glass—that’s very powerful. What do you find difficult and how do you overcome that difficulty?

CR: I think again, the most difficult thing in my work is trying to bridge areas of expertise. A lot of the work that I do is to create opportunities for communities to come together. For the past few years, I’ve been working to get several different research societies that work in the area of learning to team together, which can be challenging for organizations with different focuses. What helps to work through the challenges is remembering that we are building connections between technology research and what actual recipients of AI want and need.

TC: What advice would you give children and families as they try to find a problem in their communities that they might solve using technology?

CR: One initial step could be becoming aware of the ways in which AI technology already figures into all of the things we interact with every day, like apps on your phone that recommend things for you. I think what makes it hard for children is that they aren’t in a position to separate the picture of AI in movies from reality. It is very valuable for children to understand that there’s nothing mystical about artificial intelligence; it’s really just about patterns.

That’s why we do not have to fear AI taking over humanity. It’s never going to capture the real complexity of the world, which is so intricate and so continuous that we will never be able to fully model it. But using AI to make a problem-solving model, even a simple one, can produce useful predictions we didn’t have before. Take music recommendations, for example: sometimes AI will recommend a song you already play on repeat, but other times it exposes you to something you really like and might never have discovered without it. It’s not magic; it’s just patterns. If children and families can work together to start to see these common examples, then they can start to be creative with their own learning.
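The idea that recommendation is “just patterns” can be made concrete with a toy sketch. The listening data, names, and scoring rule below are all made up for illustration; real recommender systems are far more sophisticated, but the core intuition is the same: count which songs tend to co-occur among listeners with overlapping taste, and suggest the most common one you haven’t heard yet.

```python
from collections import Counter

# Toy listening history: each listener's set of liked songs (invented data).
history = {
    "ana":  {"song_a", "song_b", "song_c"},
    "ben":  {"song_a", "song_b"},
    "cara": {"song_b", "song_c", "song_d"},
}

def recommend(user, history):
    """Suggest the song most often liked by listeners who share a song with `user`."""
    liked = history[user]
    counts = Counter()
    for other, songs in history.items():
        if other == user:
            continue
        # Shared taste: if this listener likes any of our songs,
        # count the songs they like that we haven't heard yet.
        if liked & songs:
            for song in songs - liked:
                counts[song] += 1
    return counts.most_common(1)[0][0] if counts else None

print(recommend("ben", history))  # song_c is co-liked by both other listeners
```

No mystery is involved: the “intelligence” here is nothing more than counting overlaps in data, which is the pattern-finding the interview describes.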