Posts

An Interview with Gabriel Torres: AI, Agriculture and Drones

Gabriel Torres


As part of our AI in Your Community series, I sat down with Gabriel Torres, an unmanned systems expert and CEO and co-founder of MicaSense, which brings unmanned systems and sensors to the agricultural industry. We discussed his work with MicaSense and how they’re working to help the agricultural industry take better advantage of tools and technologies.

Tara Chklovski: What problem are you working on?

Gabriel Torres: In general, optimization. Ever since I was in school I’ve been interested in efficiency. I see inefficiencies in the world, and I see opportunities for things to be easier, better, more transparent, more universal. I always like to see how I can impact somebody’s life for the better. I’ve done that throughout my career. It’s been really rewarding to make an impact and to touch lives in ways that I normally would not be able to.

TC: Tell me more about MicaSense.

GT: With MicaSense we set out to make it easier for the agricultural industry to catch up with and take advantage of technologies that haven’t really been available or accessible, and to provide a benefit in terms of quality, quantity, and yield.

Read more

Kasia Muldner: Artificial Intelligence and Student Learning

As part of our AI in your Community Series, I recently had the opportunity to sit down with Kasia Muldner, an assistant professor in the Institute of Cognitive Science at Carleton University. She works with intelligent tutoring systems to better understand student learning, problem solving, and creativity and the factors that affect them.

Tara Chklovski: Tell me a little bit about what kinds of problems you’re working on.

Kasia Muldner: I work in the field of learning and cognition, with applications to intelligent tutoring systems. My research focuses on student learning, including both cognitive and affective components. I’m particularly interested in factors that influence student learning and ways to improve it, which is where technology comes in.

Tara Chklovski: Can you explain what you mean by cognitive and affective components? What are the differences and how do you define them?

Kasia Muldner: Sure. Cognitive factors have traditionally been linked to domain knowledge – like the knowledge needed to isolate a variable in an algebra equation by subtracting some value from both sides of the equation.

Affect, on the other hand, is commonly used to refer to feelings, moods, or emotions – although these terms have distinct definitions, they are often used interchangeably. When it comes to computer tutors, the field used to focus on designing support for cognitive factors, like having the tutor give the student domain hints. However, there is a lot of evidence that how students are feeling when they’re learning really influences what they learn and even whether they learn at all. So there is now a lot more work developing tutors that can both detect how students are feeling and respond to that emotion.

Read more

An Interview with Manuela Veloso: AI and Autonomous Agents

As part of our AI in Your Community series, I sat down with Manuela Veloso, a renowned expert in artificial intelligence and robotics. Manuela Veloso is the Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University and a former President of the Association for the Advancement of Artificial Intelligence (AAAI). She is also the co-founder and a Past President of the RoboCup Federation. We discussed her work, how she started working with robots, and what advice she has for students about identifying good problems to solve.

Manuela Veloso


Tara Chklovski: Maybe you can start by telling us a little bit about what problem you are excited about and what you’re working on.

Manuela Veloso: I’ve been working in the field of AI for many, many years. In particular I look at AI research as the challenge to integrate what I call perception, which is the ability to interpret data, or assess the world through sensors – and then I combine it with reasoning, which is cognition, or the part of thinking to plan actions that allows an AI system to achieve some kind of goal. And then the third component of this kind of AI story is the action, where these robots, these intelligent agents, do take action in the world by changing the state of the world.

So, I have been working on this problem of integrating perception, cognition, and action. I can also usually think about these as autonomy – so, agents or AI systems exhibit autonomy in their integrations of assessing the world through their perceptions, eventually making decisions through their cognition, and finally actuating in the world. So, within that goal of autonomous agents, I’ve been working a lot with autonomous robots. But I also work with software agents.

TC: And have you created robots to try to accomplish a particular task?

MV: Yeah – basically, these autonomous robots can be thought of as designed to achieve tasks. I’ve also worked on robots that have been part of the robot soccer team. These soccer robots address multiple problems of working on a team – problems of coordination and trying to work as teammates, and in very uncertain environments like in the presence of an opponent. I’ve done robot soccer research since the mid-90s, always trying to make these teams of robots capable of addressing the complications of playing an adversary. Soccer captures all sorts of problems of teamwork at the physical-space level.

So, I’ve done that, and then I’ve also worked on autonomous robots called “service robots” which are capable of performing tasks for humans like driving people in a particular kind of building, taking them to particular locations, picking up and delivering objects. And so that involves navigation, but instead of being on roads it’s inside of buildings. I’ve been working on these mobile indoor service robots – not just service robots that will wash your dishes, but service robots that actually navigate our environments. And then I’ve done many autonomous robots for lots of education purposes, like NAO robots, humanoid robots, all sorts and shapes of robots, but basically they are all machines or artificial creatures with a way to perceive the world, like cameras or sensors, and then potentially some computing that enables them to think about what they should be doing, which they then actuate with their wheels or their legs or their arms or their gestures.

TC: That’s totally fascinating. And I love the robot soccer! I’m curious, how did you get interested in thinking about making robots play soccer?

MV: I actually was not interested in the soccer problem per se. I was working on planning and execution for robots when one of my students, Peter Stone, was exposed to this idea through a little demonstration of small robots playing soccer. And then we started doing this research on autonomous robots that would be able to play on a team. I don’t have an interest in soccer really, but it gave us a tremendously challenging and exciting infrastructure to think about the problems of autonomy. It’s hard to do autonomy – for instance, autonomous cars are expensive. Plus it’s complicated. And the soccer problem in the lab became a very good framework to actually study the problem of autonomy. And it’s still very important. It’s still very challenging as we add to the research challenges and scale up (by making the teams larger, for example). The soccer was always kind of an excuse to study these very difficult autonomy problems.

TC: It constrains it, right? What advice would you give children as they try to find a problem in their communities that they can solve using technology?

MV: I think that’s a very interesting question. I can’t really advise children to follow what they like to do, since that didn’t apply to me. I never thought about doing robotics. I never loved robots, although I liked math. I liked the rigor of mathematics very much. And then as time went on I became an electrical engineer, and then I started to be very interested in computers. But for children I think the important thing is to think currently. Many of the jobs that are exciting now, be it a medical doctor, an economist, a banker, a teacher, a computer scientist – there is a lot of data involved. So, these computers that we have now, the Fitbits that we wear, the little Alexas we talk to, everything is about accumulating data, accumulating information. Computers now have access to everywhere you go because you are carrying a cell phone that has GPS. So, children need to understand the beautiful concept of trying to make use of all this data. There is a kind of data skill-set that maybe kids could be exposed to. For example, count how many steps it takes to go from here all the way to the kitchen. How many times did you see a specific person this month? Let’s do a little counting here, and then they can start understanding probability, they can start understanding distributions, they can start understanding that a lot of the information to be processed is of this data nature. It’s beautiful to try to understand that these children are going to be, more and more, in a digital world in which a lot of the information is data. This concept of space, this concept of data, this concept of being aware that many things are numbers that become parts of a computer – I think it’s an interesting way of thinking. In addition to reading and writing and math there are these new skills that in some sense are data skills. They talk with Alexa, they can count things, they have this understanding of how these things get into a computer.
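The counting exercises Veloso describes are the first step toward probability and distributions. As a hypothetical illustration (the tallies below are made up, not from the interview), turning a week of counts into an empirical distribution takes only a few lines of Python:

```python
from collections import Counter

# Hypothetical week of "steps to the kitchen" tallies a child might record
steps = [12, 14, 12, 15, 12, 14, 13]

counts = Counter(steps)  # how often each tally occurred
total = len(steps)

# Empirical probability distribution: relative frequency of each tally
distribution = {value: count / total for value, count in counts.items()}

for value, prob in sorted(distribution.items()):
    print(f"{value} steps: {prob:.2f}")
```

Counting, then dividing by the total, is all it takes to move from raw tallies to a probability distribution – the shift in thinking Veloso is pointing at.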

TC: One last question before we go: in your research I’m sure you encounter a lot of obstacles. So, what are some strategies that have worked for you that help you to stay motivated and keep pursuing the end goal?

MV: Very good question. I think that the way to think about this, given that there are so many difficulties in research – there are a lot of bottlenecks and very few breakthroughs – is the ability to actually collaborate with other people. When you are stuck, there is a component of collaboration, to engage in thinking together about difficult problems rather than facing the difficult problem by one’s self. That always helped me. At the research level I’ve worked very closely with my students; it’s a joint pursuit of difficult problems. And in children, we need to also grow that skill of collaborating with other people. It helps to interact with others, and of course having depth and understanding and persistence and not giving up. Problems that are hard are the most interesting ones, but they are the ones that require more dedication, more persistence, more thoroughness. You become aware that when you are stuck on a difficult problem it is both an opportunity to make a great discovery and, at the same time, the frustration of being stuck. So, if life did not have any difficult problems then you would solve only simple things that would not have an impact. So, it’s good to have challenges, but you also know that you might have the potential to make a big difference if you persist in solving challenging problems.

TC: Yeah. And I think as a child, if you have not had that much experience being successful then it’s even harder because then it’s scary, right? Like do I have what I need to succeed? And so, you need to have the mentors and the parents all part of it.

MV: Yeah. So, Tara, one final thought then: it’s true that it’s difficult, but on the other hand somehow when you collaborate with other people, older people or just other people, by magic someone helps you break this difficult problem into steps and it becomes, “let’s see if we can do this thing, which is much simpler, not the actual big problem.” But it becomes an ability to let the child make incremental progress towards solving problems of a staggering nature. So, there’s almost no problem that can’t be solved by breaking it into pieces. And so, I think that’s what people should think and tell children. Even if it seems very hard, maybe there is a much simpler problem on the way to solving the hard problem that they can address now. That’s basically what we do in the research world, and that’s basically what we do in our education. We build up to the difficult concepts. We don’t just present them. They are broken down. It’s a confidence-building process, but it’s also about showing you are on the path to solve the bigger problem. That’s a skill that we develop as we become teachers, and as we become researchers and parents. The problems are difficult but it’s good that they are difficult. It’s just that you can make incremental progress as you go.

TC: Totally. All right, Manuela, thank you so, so much.

An Interview with Chelsea Finn: AI for Robotics

As part of our AI in Your Community series, I had the chance to speak to Chelsea Finn, a PhD student at UC Berkeley currently doing work with machine learning and robotics. Through her work she teaches robots how to perform tasks in multiple environments, with the goal of having these robots perform tasks that humans can’t perform, or that would be dangerous for humans to perform.

Chelsea Finn

Tara Chklovski: Tell me a bit about your work. What research are you working on?

Chelsea Finn: I am a PhD student at UC Berkeley, and I work on machine learning and AI for robotics. A lot of my work entails having physical robots learn how to do things in the world, like screw a cap onto a bottle, or use a spatula, or pick up objects and rearrange them. Our goal is to have systems that can learn to do these different tasks so that they can go into a variety of environments and perform those tasks for humans – or perform dangerous jobs that we don’t want humans to do.
Read more

An Interview with Julita Vassileva: Artificial Intelligence and Online Communities

As part of our AI in Your Community series, I recently spoke with Julita Vassileva, a professor in Computer Science at the University of Saskatchewan who is currently focused on building successful online communities and social computing applications. Julita Vassileva is particularly interested in user participation and user modeling, as well as user motivation and designing systems that incentivize people to continue participating in online communities.

Julita Vassileva


Tara Chklovski: To start off, maybe you can tell us a little bit about what problem you’re working on and what area of research you’re excited about.

Julita Vassileva: I’ve been doing research for 35 years, so I’ve done a lot of things! I’m a very curious person – I’ve been following my nose and have explored all over the place. When I first started working with artificial intelligence it was in education applications, while I was working on my Master’s degree and my PhD. When I started I didn’t have a particular interest in the area at all. I was in my 4th year studying mathematics at the University of Sofia, in Bulgaria, and when it came time to decide what to do next, all of my really smart and strong colleagues went into very theoretical, classical areas of mathematics. I went to one of my professors for advice and he told me that mathematics is beautiful, and you could study it all your life and be fulfilled, but that it’s such an old area that every little stone has been turned over a hundred times by extremely smart people. You need to be extremely lucky and very smart, and work extremely hard to be able to find something new. So why not go into a new area? So I decided to go into computer science, even though I didn’t really have any idea what computer science was. I wasn’t fascinated by it – we programmed on punch cards, which was quite unexciting. But I picked somebody to work with who was sympathetic, who I thought I could talk to, my supervisor, Dr. Roumen Radev, who was creating a “smart” tutoring system to help teach how to solve physics problems. So I decided to try to do that for my Master’s.

TC: Oh, tell me more about that. What were you doing, and what did you think of it?

JV: At the beginning I thought, “who is going to study physics with computers?” Physics is tough enough just by itself. That leads me to a message I actually think is very important for young people – you don’t necessarily need to be interested in the subject when you start. In the beginning everything is hard, but under the surface there could be a whole world waiting to be discovered! You’ll have to invest a lot of work, and sometimes you just have to grit your teeth and do the work, and then suddenly you discover that it’s becoming interesting. The deeper you get, the more interesting and fascinating it becomes, and you feel the power of your knowledge gives you amazing opportunities. What flipped things for me was working on those tutors and realizing that it’s really hard to design tutoring systems. The tutor I created during my Master’s program coached students in solving problems using Ohm’s law, and it took me one year of hard work to develop it. It only taught how to solve problems related to one physics lesson on calculating electrical circuits, but the experience made me think about how to make it easier to design the generic software for these systems so that teachers could create them more easily for different lessons and domains. And so I ventured into the area of authoring intelligent tutoring systems, which meant creating software that allowed other users (like teachers) to create their own tutoring applications. While computer-based training and authoring at that time was already an “old” area (20 years old, to be precise), intelligent tutoring, at that time, was new, which was so motivating, because it was a bit like homesteading – there were so many unexplored problems. You feel like the first person, the pioneer. Everything in the field is in front of you and you can do whatever you want. It’s a fantastic feeling.

TC: What was that transition like, moving from designing a tutor for one type of problem to creating generic tutoring systems and authoring tools?

JV: The really tricky thing was modeling the student, because for a system to be intelligent it has to understand how much the student understands and knows, and it has to adapt to it. If the student doesn’t understand a concept and the system continues giving the same advice it’s useless – the student will drop the system. How can you make the system intelligent so it can react to what the student understands? And how do you understand what the student knows?

Microsoft’s Clippy. Image from Mashable, 2017.

This is an area called student modeling, and at the time it was a new field of study, just starting. My focus during my PhD work was on creating a generic architecture, knowledge representation schemes, and planning algorithms that could be used both for domain models and for student models. But then I found out that the application area of these architectures and methods is bigger than student models, since if one wants to create a “smart” system that supports the user, it needs to understand and anticipate the user’s needs, interests, knowledge and skills… which leads to user models. For example, Microsoft Office in the mid-90s introduced Clippy – a little cartoon agent that was based on user modeling. It was watching what you were doing and trying to predict your goals and offer you advice based on those predictions. That’s user modeling and that’s what I was working on for the first five to six years after my PhD. Then I heard about multi-agent systems – a completely different area of AI from knowledge representation and planning, areas I was already familiar with. And suddenly, for me, it was like a revelation. It triggered my curiosity because I come from Bulgaria, which was a communist country. I left Bulgaria as soon as communism collapsed and it was possible to travel abroad. I went to Germany, and then came to Canada. And all the time I was trying to figure out, why did communism collapse economically? Of course, the reasons were many and complex. But I was looking for one simple, basic principle, fundamental in the system… Eventually, I realized it’s the incentives. People didn’t have the incentive to work because everything was divided based on your needs. You work as hard as you can, but then you don’t get as much as you worked for. Somebody else who did not work as hard but who has bigger needs (or connections to influential people) will get more than you.
Then I realized that multi-agent systems, which was, at that time, a budding new area of artificial intelligence, allowed you to explore exactly those kinds of questions.

TC: Can you explain Multi-Agent Systems a little bit more, and how they allow us to explore those sorts of questions?

JV: You build a society of agents, where each agent is like a little person with very simple reasoning. But they can talk to each other. They can interact. They are autonomous. They can pursue their own goals, respond to rules and rewards that are set in the system or by other agents, and then you can let them loose, and see what happens with this society. If you set the rules of interaction between agents in particular ways, if you put laws and punishments in the society in a particular way, you get completely different behaviors in the overall system. And you can see some societies collapse and some societies thrive, and you can build simulations of multi-agent systems. That was my focus for another five or six years. Marvin Minsky, an AI pioneer from the 1960s, created the concept of “Society of Mind” and I was totally thrilled by the idea of building software systems with a given purpose as a society of agents, not as a deterministic machine. To create “social glue” in such an artificial society, one needs to explore notions of trust and reputation, agent coalitions, emerging hierarchies and self-organization, communities of similar agents, and so on. And then, the Web “exploded”, people started blogging, sharing posts and videos, forming interest-based newsgroups and social networks, and I thought well, the agent simulation is good, but what about the real world? How can these concepts be transferred into the real world… because we design systems and then put those systems out there and people don’t use them. For every successful social system, there were thousands that failed… Just like the communist planned economies, perhaps their incentives were wrong? So how do you design for people and build incentives into the actual software to reward people so as to make the systems more engaging, and more addictive? And that became my area of research, which I’ve been working on since 2001.
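The kind of agent society Vassileva describes – autonomous agents responding to rules and rewards, with different rules producing thriving or collapsing societies – can be sketched in a few lines. This is a toy illustration of the incentive question she raises, not code from her research; the payoff function and hill-climbing adaptation rule are assumptions chosen for simplicity:

```python
import random

random.seed(42)

def payoff(reward_rule, efforts, i):
    # An agent's payoff: reward received minus the personal cost of its effort
    return reward_rule(efforts)[i] - 0.5 * efforts[i]

def simulate(reward_rule, n=20, rounds=1000, step=0.05):
    """Agents repeatedly tweak their own effort, keeping changes that pay off."""
    efforts = [random.uniform(0.3, 0.7) for _ in range(n)]
    for _ in range(rounds):
        i = random.randrange(n)  # one agent experiments per round
        trial = efforts[:]
        trial[i] = min(1.0, max(0.0, efforts[i] + random.choice([-step, step])))
        if payoff(reward_rule, trial, i) > payoff(reward_rule, efforts, i):
            efforts = trial  # keep the change only if it improved this agent's payoff
    return sum(efforts) / n  # mean effort in the final society

# Rule 1: total output is split equally, regardless of individual contribution
equal_split = lambda efforts: [sum(efforts) / len(efforts)] * len(efforts)
# Rule 2: each agent keeps exactly what it produced
proportional = lambda efforts: list(efforts)

print("equal split:", simulate(equal_split))    # effort tends to erode (free-riding)
print("proportional:", simulate(proportional))  # effort tends to climb
```

Under the equal-split rule an agent gains almost nothing from its own extra effort but still pays the full cost, so self-interested adaptation erodes effort; under the proportional rule effort pays for itself and rises – a miniature version of the incentive failure Vassileva saw in planned economies.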
It’s about understanding how to incentivize participation in online communities, which could be discussion forums, social networks, or just enterprise systems where operators need to type in reports. For example, nurses who report on the state of a patient – how do you incentivize them to write a more detailed, better report? How do you reward desirable behaviors to enable the system as a whole to achieve a certain purpose, while maintaining its sustainability, quality, fairness, etc.? So then I started studying motivation, drawing on social psychology and behavioral economics. Why do people do certain things? It’s a fascinating area.

TC: So what have you learned in your research about what motivates people? For instance people know you have to exercise and eat healthy, but how do you actually get them to do it? How do you keep people from getting bored and ignoring the technology once they’re used to it?

JV: I think that’s a one-million-dollar question, or maybe a multi-million-dollar question, because people are very easily bored, and we’re getting bored more easily. There are a lot of different strategies that exist, and there’s no one answer, but probably the best answer is that it depends on the person. The one important thing is that the person has to care about the behavior, and that they should set their own goal. To get them to set an appropriate goal, you can help them, you can educate them. So personalization is the key, and this leads again to user modeling. Here, again, going back to my original research into educational systems, AI and education – how do you teach people what is important? How do you convince them that something is important so that they can set a behavioral goal related to that thing? Changing habits depends on deciding on the new behavior you want to adopt, and then overcoming all the barriers which pull you to your old behavior, which is very sticky. So how do you do that? I’ll give you a couple of examples. Those personal monitoring devices like Fitbit and Apple Watch, they all rely on self-monitoring. People are curious about their own behavior. Showing them statistics is important to teach them about how they’re doing, and if they’re improving. If they’re doing better than yesterday, it makes people feel good. Psychologists call this self-efficacy. It’s a very powerful motivation. People have a built-in drive to become better in whatever they’re doing. You can encourage this by visually showing them that they’re getting better, or if they’re not getting better, nudging them or relying on social influence. The key is to know your user, which leads back to user modeling, which means collecting data. Data, data, data.
Collecting and understanding the data about your user, and of course this comes with huge privacy implications, since once the data is there, it can be copied, it could fall into the wrong hands, or can be used for wrong purposes. So ethically it’s very, very questionable. But like every technology, it has positive and negative uses.

TC: So what excites you about the potential of AI in the future?

JV: What excites me? I would very much like to see AI in a chip which people can plug in. I don’t believe that AI will be evil. I don’t think the fantasy of having a HAL computer which is much smarter than us and will take our place could happen. I think AI is very fragmented. Of course computers can have better skills than us for storing data. They’re faster than us. But these are also very narrow areas in which they can be better than us, for very narrow tasks. I see a positive future in augmenting our capabilities with the abilities of AI.

TC: That is so cool. What is a good way for children and maybe parents who don’t know about these kinds of technologies to start learning about them?

JV: There are so many tools out there. It’s really easy to create an app. There are environments where you don’t even need to code to be able to create a very simple app. I would recommend that they start with something that would enhance their life. For example, I live in a new neighborhood where nobody knows anybody. I had an undergraduate student who also lives in the same area who was looking for a project and I suggested that he make an app to allow people who live in the same neighborhood to just meet each other for a good reason. People are busy and need a good reason. Maybe they have something to give away, or cooked too much. Instead of throwing it away, why not offer it to the neighbors who didn’t have time to cook this evening? It’s also a reason to meet a neighbor, to encourage real face-to-face interaction, to get people to meet each other through the mediation of technology. And he was very excited. He did it and it worked well – he managed to actually get people on the application. So that’s a very good entry point. Once you start with something simple – and perhaps it’s not intelligent at all – then you can start adding “smarts” to it. But start simple, to solve a specific problem. And use the web. The web is big and there is so much to learn there.

TC: I think that is great advice, to start simple. And then you get excited.

JV: Exactly. Don’t wait to become interested because if you wait, there are so many interesting areas which you never get exposed to in school. So how would you know about this? How would you know if you’re interested? So start somewhere, work hard to become good in it, and then you’ll get interested. You’ll get excited, and it becomes a passion. Once you get passionate you will be good at it.

TC: That’s awesome advice. Thank you so much.

Elizabeth Clark: Creative Writing with AI

As part of our AI in Your Community series, I spoke to Elizabeth Clark, who won the Amazon Alexa Prize for her work with Sounding Board, a social bot. Elizabeth is studying natural language processing and working on tools for collaborative storytelling.

Elizabeth Clark

Tara Chklovski: Tell me a little bit about what you’re working on.

Elizabeth Clark: Very broadly I’m working on natural language processing, so looking at how language and computers interact, and helping computers process language – either written text or speech. More specifically I’ve been looking at collaborative writing systems, which give people support and offer suggestions to them as they write. I’m exploring how we can build models that will generate suggestions that are helpful to people as they try to write, say, a short story. There are different levels to offer help to people as they write. You could point out grammatical errors or spelling mistakes, or you could offer suggestions about structure. The type of suggestions we’re interested in are focused on the actual content for your story.

Our goal is to look at what type of suggestions people want, and determine how we can give them suggestions that are coherent with the story that has come so far, but are still creative and surprising – all to try and spark their creativity as they write. As for what are useful suggestions, we’ve found that it really depends on who is using the system. Different people want different things out of these suggestions. Some people really like silly suggestions that have unexpected elements, and they’ll work really hard to find a way to work them into their story, embracing it as a challenge – whereas other people know exactly what they want to write, and if the suggestion isn’t in line with that, then they will just delete it and write their own story. There does seem to be a tradeoff between the level of unexpectedness of the suggestion and how coherent it is with what has come before.

Read more

Natural Language Processing and Bias: An Interview with Maarten Sap

As part of the AI in your Community series, I recently spoke with Maarten Sap, a PhD student at the University of Washington. Maarten is interested in natural language processing, and social science applications of AI. Maarten is also the 2017 winner of the Alexa Prize, an Amazon competition to further conversational artificial intelligence. Maarten […]

Virtual humans and decision-making: A conversation with Stacy Marsella

As part of our ongoing AI In your Community series, I talked to Stacy Marsella, professor in the College of Computer and Information Science at Northeastern University and the Psychology Department. Professor Marsella’s research is grounded in computational modeling of human cognition, emotion, and social behavior as well as evaluation of those models. Tara Chklovski: […]