As part of our ongoing AI In your Community series, I sat down with Dr. Marie desJardins, Associate Dean and Professor of Computer Science at University of Maryland Baltimore County, to learn about her artificial intelligence research and curriculum development. Marie was chosen to be one of the eight fellows at AAAI-18: the Thirty-Second AAAI Conference on Artificial Intelligence, where Iridescent will host an “AI for Social Good” design hackathon on Friday, February 2.

Named UMBC Presidential Teaching Professor (2014-2017) and a dedicated mentor herself, Marie offered advice for how children and families can solve problems using technology.

Tara Chklovski:
What problems are you currently working on?

Marie desJardins: The main problem that my lab is working on is intelligent learning: robots and software agents that can learn how to solve complex tasks in very complicated domains. It requires paying attention to the part of the world that’s relevant for what you’re trying to do right now. So for example, when I’m driving my car down the road, I don’t worry about little pebbles in the road, right? Because I can’t really do anything about them and that’s not the level of granularity that I need to think about. I need to think about exit ramps and other traffic, but I don’t really need to think about all of the buildings and other things that are surrounding me.

So, we’re looking at different ways where you can automatically identify what’s the right abstraction or what’s the right granularity to look at for different kinds of tasks and then be able to use that information to transfer knowledge from one kind of problem to another.
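The idea of choosing the right abstraction can be sketched in a few lines of code. This is an illustrative toy only, not Dr. desJardins’s actual system: the driving features, the `abstract` function, and the tiny policy table are all assumptions made up for this example.

```python
# A minimal sketch of state abstraction for transfer learning.
# A detailed state is projected onto only the task-relevant features,
# so knowledge learned in one situation carries over to another.

def abstract(state, relevant):
    """Project a detailed state (a dict of features) onto the relevant ones."""
    return tuple(sorted((k, state[k]) for k in relevant))

# Two driving states that differ only in irrelevant detail (pebbles, buildings)
s1 = {"lane": 2, "exit_ahead": True, "pebbles": 17, "buildings": "offices"}
s2 = {"lane": 2, "exit_ahead": True, "pebbles": 3,  "buildings": "shops"}

relevant = ["lane", "exit_ahead"]

# A policy learned over abstract states applies to both concrete states,
# so experience gained in s1 transfers to s2 for free.
policy = {abstract(s1, relevant): "move_right"}
assert abstract(s1, relevant) == abstract(s2, relevant)
assert policy[abstract(s2, relevant)] == "move_right"
```

The hard research question her lab works on is, of course, choosing `relevant` automatically rather than hand-coding it, as the next answer explains.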

Dr. desJardins

TC: So do humans play a big role in creating the program that says, “Here are the right things to pay attention to,” or can an AI figure that out itself?

MD: The first thing that we worked on was if a human being tells the AI system, “Here are the things you should pay attention to,” can it then learn the task? Now what we’re working on is having the AI system automatically figure out what it should pay attention to.

TC: How do you think AI will help strengthen communities and society at large?

MD: The potential is there for AI systems to really augment people’s capacity and let them be more effective, with bigger impact. I just think about the GPS in my car, which is fundamentally AI technology. Every time something works, we stop calling it AI. So 20 years ago, route planning was AI. Now it’s the GPS in your car. It’s just Google Maps. This navigation system has completely freed and empowered me; I don’t have to unfold paper maps and try to figure out where I’m going. It’s like a little cognitive augmentation to my capabilities. I think these augmented capacities level the playing field for people.

The danger, as AI systems move into the marketplace, is that they will displace people from their jobs. On the upside, there will also be new jobs: somebody will need to build and maintain the factory robots that displaced human workers, for instance. And the company that owns the factory is going to use the increased productivity from AI to invest in growing its business. Maybe it will then need more salespeople, more technicians, or maybe it opens a whole new line of consulting businesses. Who knows?

TC: In your view, what makes a good problem?

MD: Finding a good problem is largely about balancing what’s possible with what matters, then bringing together the resources to actually put a solution in place. For example, there’s a new AP course called AP Computer Science Principles, and we had some NSF funding to develop a curriculum for that. It’s a course meant for a broad range of students. We’re trying to attract more women. We’re trying to attract more under-represented minorities. We’ve got this great course, and it doesn’t do us any good at all if teachers aren’t prepared to teach it. You have to have all the resources in place. So when we designed this curriculum, we worked with teachers to make sure it was actually teacher-driven, not just driven from my university perspective of saying what I think is important. Then, the teachers who helped to develop the curriculum became our master teachers, training other teachers to teach it. Thinking about all the resources to not just create a solution, but actually deploy it in your community, makes a good problem.

TC: What do you find difficult in your research and how do you overcome it?

MD: I think for me personally, I am more of a big-picture thinker. I like to think about big problems at high levels of abstraction, and I’m not so personally driven by the low-level implementation details. I’m impatient with those discussions.

The way that I mitigate this is that I try to find collaborators and students who are more interested in that part of the problem and are better at it. It turns out that I’ve had a couple of collaborators who are not particularly big-picture thinkers. They like to tinker and actually build the thing. When I get together with somebody like that, we can be a terrific team because I’m doing what I love and what I’m good at, and they’re doing what they love and what they’re good at; we complement each other. I think what’s hard for me is easy for somebody else and finding the set of complementary people that have those complementary perspectives is really important.

Another challenge that we try to tackle is getting our computer scientists and engineers to have experiences working with diverse teams. One of the ways that I’ve been working on this is designing and running our Grand Challenge Scholars Program (under the auspices of the National Academy of Engineering) at UMBC. I love seeing these students, and I’m most proud that our program, unlike many universities’, is open to students from all majors.

TC: That’s awesome.

MD: Yes, so I’ve had an ancient studies major, biology majors, a psychology major. Getting these students together from really different perspectives and having them talk about some of these hard problems is initially really exciting and also very hard.

Then, it gets easier. The initial barrier is often just one of language and perspective. I’m saying one thing and you’re saying a different thing, or we’re using the same words but we really mean something different, and then we start to butt heads about those differences without even realizing why we’re butting heads. Watching that play out with students is very rewarding, because it makes you realize how worthwhile it is to get past that difficult point where you disagree with one another and find common ground. Eventually you start to agree on ideas.

TC: What advice would you give children and families as they try to find a problem in their communities to solve using technology?

MD: The first step is to find a problem in your community to solve. I think technology can help with almost any problem, if you understand the problem and you understand the technology. When we don’t have both of those things working hand-in-hand, that’s when problem-solving doesn’t work out so well. There are a lot of cases where technologists come along and try to solve a problem for somebody else, even though they don’t really understand the nuances of the problem. There have been a lot of cases in third-world countries where teams of engineers will come in from some first-world country and say, “We’re going to provide access to clean water. We’re going to build a well.” After they come in and build a well, they go away, and a year later, the well doesn’t work anymore.

People are becoming more aware of this and are finally starting to think, “Okay. We can’t just come in and build a thing. We have to partner with the people in the community who need that thing and understand not just the engineering part of the solution, but the human part of the solution.”

One of the areas I’m really interested in is food recovery networks. Here at UMBC, for example, we have all kinds of catered events, and there’s always a ton of food left over. I think most of it goes in the trash. There’s a student group on campus that’s working to try to recover some of that food. I think it would be really useful if there were an app where people organizing events could put that information into a system, so that people who are food-deprived could make use of that food.
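The core of the app idea she describes is a simple matching problem: organizers post leftover food, and people nearby look up what is still available. Here is a minimal sketch of that data model; the `Posting` fields, event names, and `available` function are all hypothetical illustrations, not an existing system.

```python
# Hypothetical sketch of a food-recovery posting system: event organizers
# record leftover food, and anyone can query what is unclaimed nearby.
from dataclasses import dataclass

@dataclass
class Posting:
    event: str       # name of the catered event
    servings: int    # approximate servings left over
    location: str    # where the food can be picked up
    claimed: bool = False

postings = [
    Posting("Department colloquium", 25, "UMBC Commons"),
    Posting("Alumni reception", 40, "University Center"),
]

def available(postings, location):
    """Unclaimed postings at a given location, largest first."""
    hits = [p for p in postings if p.location == location and not p.claimed]
    return sorted(hits, key=lambda p: p.servings, reverse=True)

open_now = available(postings, "UMBC Commons")
assert open_now[0].event == "Department colloquium"
```

Even a sketch this small shows her larger point: the technology part is easy; the hard part is partnering with the community so the postings actually get made and the food actually gets picked up.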

If you’re not working with both people who understand the technology and people who understand the problem, you’re not going to be able to solve the problem. But that’s an example of a problem you find in almost any community: some people have excess resources and other people are under-resourced. How do you connect that excess with the need in a way that brings more dignity to people than digging through dumpsters? This is just one instance where technology could come in.

TC: Great answers, Marie. Thank you for speaking with me as we collect more advice from experts like you in preparation for our first-ever AI Family Challenge.

MD: It was really nice to meet you, thank you.