Posts

An Interview with Erin Bradner: Using AI to make construction easier

As part of our AI in Your Community series, I sat down to interview Erin Bradner, the Director of Robotics at Autodesk, a company that aims to solve complex design problems, from ecological challenges to smart design practices. Erin has researched topics ranging from the future of computer-aided design to novel ways of using robots to automate processes in manufacturing and construction.

Erin Bradner, Director of Robotics at Autodesk

Tara Chklovski: Tell me about what you’re working on.

Erin Bradner: I’m now the Director of Robotics at Autodesk, where we make professional software for architects, engineers, and animators. In manufacturing, our customers are looking to create more flexible manufacturing lines, and in construction they’re looking to automate aspects of the work that have not been automated before.

In that sense, construction is aiming to be more like manufacturing, with assembly happening off-site so you can bring pre-assembled parts onto the construction site and have the construction site become an open-air assembly line. The traditional sort of stick-built architecture, where you’re cutting timber on site, is inefficient. The construction industry has not seen the productivity gains that other fields have realized through technology over the last 20 years. It’s been flat, and we’re helping to address that. There are a lot of interesting startups at work in construction, too!

Tara Chklovski: Like which ones?

Erin Bradner: Well, startups are doing what startups do – they’re laser-focused on innovative technology. For example, Built Robotics is working on autonomous Bobcats that can grade a building site. Usually a Bobcat is operated by an engineer who comes in to clear the site, but by taking the technology used for autonomous vehicles, like LIDAR and vision sensing, they’ve developed an autonomous Bobcat that can clear the site on its own.

There’s another company called Canvas that’s just getting off the ground and is using soft pneumatic robots that are human-safe, applying them on the construction site to do dirty and repetitive jobs. Their robots are still in development, but they will likely require quite a bit of AI to integrate.

What Autodesk is looking to do, being a software provider, is not to make robots, but rather to connect our CAD software to robots and other machines to make it easier to build what has been designed, because CAD – computer-aided design software – is what’s used to specify nearly everything that is manufactured or engineered today. There is CAD to map terrain for those Bobcats, and there is CAD for the walls, the floors, and other elements of buildings. We want to bring CAD into these platforms, along with simulations, so you can simulate the robot in its environment before ever running an operation, and also use machine learning to train the robot to complete its tasks.

Read more

Hands-On AI Activities Welcome Parents to the World of Technology

“[Today’s activities] showed us that math and science can be very fun; when you work as a team you can accomplish anything.”  – Flodinita Santillan, Parent


Shifting From Fear to Fun

“We had fun racking our brains to figure out how to complete each activity and they liked it! This really gave me an idea of the engineering field.” – Bianca Loaiza, parent

While AI isn’t new, the media’s sudden focus on it – good and bad – has brought it to everyone’s attention. AI-driven machines are both embraced and shunned for their potential impact on society. Arizona has been especially gripped by fear after an incident in March in which a self-driving car killed a woman.

When Iridescent’s Founder and CEO, Tara Chklovski, asked the girls and their families who among them found AI mysterious and scary, a room full of hands shot into the air. Tara explained that we rarely notice positive instances of AI in action because these examples, such as video games or search engines, are so integrated into our everyday lives that we don’t recognize them as AI.

Read more

An Interview with Gabriel Torres: AI, Agriculture and Drones

Gabriel Torres

As part of our AI in Your Community series, I sat down with Gabriel Torres, an unmanned systems expert and CEO and co-founder of MicaSense, which brings unmanned systems and sensors to the agricultural industry. We discussed his work with MicaSense and how they’re working to help the agricultural industry take better advantage of tools and technologies.

Tara Chklovski: What problem are you working on?

Gabriel Torres: In general, optimization. Ever since I was in school I’ve been interested in efficiency. I see inefficiencies in the world, and I see opportunities for things to be easier, better, more transparent, more universal. I always like to see how I can impact somebody’s life for the better. I’ve done that throughout my career. It’s been really rewarding to make an impact and to touch lives in ways that I normally would not be able to.

TC: Tell me more about MicaSense.

GT: With MicaSense, we started to make it easier for the agricultural industry to catch up with and take advantage of technologies that haven’t really been available or accessible, and to provide a benefit in terms of quality, quantity, and yield.

Read more

Kasia Muldner: Artificial Intelligence and Student Learning

As part of our AI in your Community Series, I recently had the opportunity to sit down with Kasia Muldner, an assistant professor in the Institute of Cognitive Science at Carleton University. She works with intelligent tutoring systems to better understand student learning, problem solving, and creativity and the factors that affect them.

Kasia Muldner

Tara Chklovski: Tell me a little bit about what kinds of problems you’re working on.

Kasia Muldner: I work in the field of learning and cognition, with applications to intelligent tutoring systems. My research focuses on student learning, including both cognitive and affective components. I’m particularly interested in factors that influence student learning and ways to improve it, which is where technology comes in.

Tara Chklovski: Can you explain what you mean by cognitive and affective components? What are the differences and how do you define them?

Kasia Muldner: Sure. Cognitive factors have traditionally been linked to domain knowledge – like the knowledge needed to isolate a variable in an algebra equation by subtracting some value from both sides of the equation.

Affect, on the other hand, is commonly used to refer to feelings, moods, or emotions – although these terms have distinct definitions, they are often used interchangeably. When it comes to computer tutors, the field used to focus on designing support for cognitive factors, like having the tutor give the student domain hints. However, there is a lot of evidence that how students are feeling while they’re learning really influences what they learn, and even whether they learn at all. So there is now a lot more work on developing tutors that can both detect how students are feeling and respond to that emotion.
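
As a rough illustration of that idea (my own sketch, not a system from Muldner’s research), a tutor might pair a cognitive response, such as a domain hint, with an affective one triggered by a crude frustration signal like a streak of wrong answers:

```python
# A toy sketch (illustration only, not from Muldner's research) of a tutor
# that combines cognitive support (a domain hint) with affective support
# (encouragement when the student appears frustrated).

def detect_affect(recent_attempts):
    """Crudely infer frustration from three incorrect answers in a row."""
    return "frustrated" if recent_attempts[-3:] == [False, False, False] else "neutral"


def tutor_response(recent_attempts):
    hint = "Hint: try subtracting 5 from both sides of the equation."   # cognitive support
    if detect_affect(recent_attempts) == "frustrated":
        return "You're working hard - this one is tricky. " + hint      # affective support
    return hint


# A student who has just answered three problems incorrectly:
print(tutor_response([True, False, False, False]))
```

Real tutoring systems typically infer affect from much richer signals (response times, log data, sometimes sensors or language), but the division of labor between the hint and the encouragement is the same idea.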

Read more

An Interview with Manuela Veloso: AI and Autonomous Agents

As part of our AI in Your Community series, I sat down with Manuela Veloso, a renowned expert in artificial intelligence and robotics. Manuela Veloso is the Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University and a former President of the Association for the Advancement of Artificial Intelligence (AAAI). She is also the co-founder and a Past President of the RoboCup Federation. We discussed her work, how she started working with robots, and what advice she has for students about identifying good problems to solve.

Manuela Veloso

Tara Chklovski: Maybe you can start by telling us a little bit about what problem you are excited about and what you’re working on.

Manuela Veloso: I’ve been working in the field of AI for many, many years. In particular, I look at AI research as the challenge of integrating what I call perception, which is the ability to interpret data, or assess the world through sensors, with reasoning, which is cognition – the part of thinking that plans actions so an AI system can achieve some kind of goal. And then the third component of this AI story is action, where these robots, these intelligent agents, actually take action in the world by changing the state of the world.

So, I have been working on this problem of integrating perception, cognition, and action. I also usually think about these together as autonomy – agents or AI systems exhibit autonomy in their integration of assessing the world through their perception, eventually making decisions through their cognition, and finally actuating in the world. So, within that goal of autonomous agents, I’ve been working a lot with autonomous robots. But I also work with software agents.
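
To make the perception–cognition–action loop concrete, here is a minimal sketch (my own illustration, not code from Veloso’s lab; the Robot class and its numbers are hypothetical) of an agent that repeatedly senses the world, decides on an action, and acts:

```python
# A minimal sense-plan-act loop illustrating the perception / cognition /
# action cycle described above. The Robot class and its goal are hypothetical
# placeholders, not a real robotics API.

import random


class Robot:
    """Toy agent on a 1-D line whose goal is to reach position 0."""

    def __init__(self, position):
        self.position = position

    def perceive(self):
        # Perception: read a noisy sensor estimate of the world state.
        return self.position + random.uniform(-0.1, 0.1)

    def plan(self, sensed_position):
        # Cognition: decide which action moves the robot toward its goal.
        if abs(sensed_position) < 0.2:
            return 0.0                                  # close enough: stop
        return -0.5 if sensed_position > 0 else 0.5

    def act(self, step):
        # Action: change the state of the world (here, the robot's position).
        self.position += step


robot = Robot(position=3.0)
for _ in range(20):
    step = robot.plan(robot.perceive())
    if step == 0.0:
        break
    robot.act(step)

print(f"final position: {robot.position:.2f}")
```

The three methods mirror the three components she names: perceive() stands in for sensing, plan() for cognition, and act() for changing the state of the world.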

TC: And have you created robots to try to accomplish a particular task?

MV: Yeah – basically, these autonomous robots can be thought of as designed to achieve tasks. I’ve also worked on robots that have been part of robot soccer teams. These soccer robots address multiple problems of working on a team – coordination, trying to work as teammates, and operating in very uncertain environments, like in the presence of an opponent. I’ve done robot soccer research since the mid-90s, always trying to make these teams of robots capable of addressing the complications of playing against an adversary. Soccer captures all sorts of problems of teamwork at the physical-space level.

So, I’ve done that, and then I’ve also worked on autonomous robots called “service robots,” which are capable of performing tasks for humans like guiding people through a particular kind of building, taking them to particular locations, and picking up and delivering objects. And so that involves navigation, but instead of being on roads it’s inside of buildings. I’ve been working on these mobile indoor service robots – not just service robots that will wash your dishes, but service robots that actually navigate our environments.

And then I’ve built many autonomous robots for educational purposes, like NAO robots, humanoid robots, all sorts and shapes of robots, but basically they are all machines or artificial creatures with a way to perceive the world, like cameras or sensors, and then potentially some computing that enables them to think about what they should be doing, which they then actuate with their wheels or their legs or their arms or their gestures.

TC: That’s totally fascinating. And I love the robot soccer! I’m curious, how did you get interested in thinking about making robots play soccer?

MV: I actually was not interested in the soccer problem per se. I was working on planning and execution for robots when one of my students, Peter Stone, was exposed to this idea through a little demonstration of small robots playing soccer. And then we started doing this research on autonomous robots that would be able to play on a team. I don’t have an interest in soccer really, but it gave us a tremendously challenging and exciting infrastructure to think about the problems of autonomy.

It’s hard to do autonomy – for instance, autonomous cars are expensive, plus it’s complicated. And the soccer problem in the lab became a very good framework to actually study the problem of autonomy. And it’s still very important. It’s still very challenging as we add to the research challenges and scale up (by making the teams larger, for example). Soccer was always kind of an excuse to study these very difficult autonomy problems.

TC: It constrains it, right? What advice would you give children as they try to find a problem in their communities that they can solve using technology?

MV: I think that’s a very interesting question. I can’t really advise children to simply follow what they like to do, since that didn’t apply to me. I never thought about doing robotics. I never loved robots, although I liked math. I liked the rigor of mathematics very much. And then as time went on I became an electrical engineer, and then I started to be very interested in computers.

But for children I think the important thing is to think about the present. Many of the jobs that are exciting now – be it a medical doctor, an economist, a banker, a teacher, a computer scientist – involve a lot of data. So, these computers that we have now, the Fitbits that we wear, the little Alexas we talk to, everything is about accumulating data, accumulating information. Computers now have access to everywhere you go because you are carrying a cell phone that has GPS. So, children need to understand the beautiful concept of trying to make use of all this data. There is a kind of data skill set that maybe kids could be exposed to. For example, count how many steps it takes to go from here all the way to the kitchen. How many times did you see a specific person this month? Let’s do a little counting here, and then they can start understanding probability, they can start understanding distributions, they can start understanding that a lot of the information to be processed is of this data nature. It’s beautiful to realize that these children are going to live more and more in a digital world in which a lot of the information is data. This concept of space, this concept of data, this concept of being aware that many things are numbers that become part of a computer – I think it’s an interesting way of thinking.

In addition to reading and writing and math, there are these new skills that in some sense are data skills. They talk with Alexa, they can count things, they have this understanding of how these things get into a computer.
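
As a concrete (and entirely made-up) illustration of the counting exercise Veloso suggests, a child’s daily tallies can be turned into a simple frequency distribution with a few lines of code:

```python
# Turning everyday counts into an empirical distribution - a small
# illustration of the "data skills" idea; the numbers are invented.

from collections import Counter

# Suppose a child tallied how many steps the walk to the kitchen took,
# once a day for two weeks.
steps_per_day = [12, 14, 12, 13, 12, 15, 14, 12, 13, 12, 14, 13, 12, 15]

counts = Counter(steps_per_day)
total = len(steps_per_day)

# Print the counts as a simple probability distribution.
for steps, count in sorted(counts.items()):
    print(f"{steps} steps: {count} days ({count / total:.0%} of the time)")
```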

TC: One last question before we go: in your research I’m sure you encounter a lot of obstacles. So, what are some strategies that have worked for you that help you to stay motivated and keep pursuing the end goal?

MV: Very good question. I think the way to deal with this – given that there are so many difficulties in research, a lot of bottlenecks and very few breakthroughs – is the ability to actually collaborate with other people. When you are stuck, there is a component of collaboration, of engaging in thinking together about difficult problems rather than facing the difficult problem by yourself. That has always helped me.

At the research level I’ve worked very closely with my students; it’s a joint pursuit of difficult problems. And in children, we need to also grow that skill of collaborating with other people. It helps to interact with others, and of course it takes depth and understanding and persistence and not giving up.

Problems that are hard are the most interesting ones, but they are the ones that require more dedication, more persistence, more thoroughness. You become aware that being stuck on a difficult problem is both an opportunity to make a great discovery and, at the same time, the frustration of being stuck. If life had no difficult problems, you would only solve simple things that would not have an impact. So, it’s good to have challenges, and you also know that you have the potential to make a big difference if you persist in solving challenging problems.

TC: Yeah. And I think as a child, if you have not had that much experience being successful then it’s even harder because then it’s scary, right? Like do I have what I need to succeed? And so, you need to have the mentors and the parents all part of it.

MV: Yeah. So, Tara, one final thought then: it’s true that it’s difficult, but on the other hand, when you collaborate with other people – older people or just other people – somehow, as if by magic, someone helps you break this difficult problem into steps, and it becomes, “let’s see if we can do this thing, which is much simpler, not the actual big problem.” It becomes a way to let the child make incremental progress toward solving problems of a staggering nature. So, there’s almost no problem that can’t be solved by breaking it into pieces.

And so, I think that’s what people should think and tell children. Even if it seems very hard, maybe there is a much simpler problem on the way to solving the hard problem that they can address now. That’s basically what we do in the research world, and that’s basically what we do in our education. We build up to the difficult concepts. We don’t just present them; they are broken down. It’s a confidence-building process, but it’s also about showing you are on the path to solve the bigger problem.

That’s a skill that we develop as we become teachers, and as we become researchers and parents. The problems are difficult, but it’s good that they are difficult. It’s just that you can make incremental progress as you go.

TC: Totally. All right, Manuela, thank you so, so much.

Family Interest in Artificial Intelligence

Parents Look For Ways to “Tech-Proof” Their Family for Impact of Artificial Intelligence

A recent study commissioned by Iridescent reveals that 86% of parents want new ways to learn critical computer skills outside traditional classrooms, such as taking a class, joining a club, or participating in events, as well as more guidance on at-home education. The online study, conducted by VeraQuest, surveyed parents of 3rd – 8th graders to better understand their views of Artificial Intelligence (AI) and their children’s learning experiences. In addition to wanting new approaches to learning, the study found that parents do not understand the extent to which AI is already integrated into their everyday lives, but an overwhelming majority (92%) understand that technology such as AI is rapidly advancing and that their children need to learn about these new technologies to be prepared for the future.

Today, only 36% of children receive technology education outside of school, and parents expressed concern about the current gap between their child’s interest in learning about future technology and their preparedness for it. These trends are consistent with studies conducted by Google and Gallup, which found that interest in computer science learning continues to be strong, but that not all students yet have access to these learning opportunities in class. The education gap is especially prominent in low-income communities. “We often talk to concerned parents who are wondering how to provide their children the tools and skills they need to have a bright future as technology and the skill sets needed to succeed rapidly evolve,” said Tara Chklovski, Founder and CEO, Iridescent. “We want to help these parents feel confident and optimistic about their family’s future in a world filled with new technologies. That’s why we created the Curiosity Machine AI Family Challenge.” Through the Curiosity Machine AI Family Challenge, Iridescent is filling the education gap with immersive AI curriculum for children and their families. The program introduces AI to underserved families in a way that fosters a deeper understanding of AI and its real-world applications and makes technology education accessible to all communities. Parents learn alongside their children as they create AI-based products that solve problems in their community. “My daughter very much likes science,” said a mother surveyed in the study. “I think [the Curiosity Machine AI Family Challenge] will give her an upper hand in the [AI] field as well as allow her to be as creative as she wants to be in building skills for her future.”

Additional study findings for AI education:

Fears and Misperceptions Around AI

  • Our research found that 85% of parents understand that new AI technology develops rapidly, but fewer than 20% of parents know that Facebook, targeted ads, or other recommendation engines use AI technology. There is a real danger that the lack of AI knowledge, combined with its rapid development, will widen the “digital divide,” or information gap, between parents and technology.

Interest in Exploring New Technology

  • Regardless of their concerns, we found that parents still have a positive outlook on the future of technology: 63% of parents believe AI will be used to make the world a better place, and 78% are especially interested in learning more about AI.

Join the conversation

Iridescent is hosting a series of panel conversations with leading AI and technology companies and researchers. Join us for a deeper dive into this new study and a thoughtful conversation about how to support families, parents, and communities in the face of a rapidly changing world.

About the Survey

Methodology

The survey was conducted online from January 11th to January 17th, 2018. The sample was composed of 1,566 respondents in the United States, ages 25+, who have a child in grades three through eight. The sample was constructed from U.S. Census proportions to be representative of the population based on age, income, education, race/ethnicity, and geography. Targets were also used for residential status and grade level of child. The low-income group (585 respondents) also had targets for each of the above variables. These targets were created to be specifically representative of families earning under $50K annually with a child in third to eighth grade.

Rationale

Iridescent, in partnership with the Association for the Advancement of Artificial Intelligence (AAAI), the League of United Latin American Citizens (LULAC), and NVIDIA Corporation, is encouraging families to learn about Artificial Intelligence technology through the Curiosity Machine AI Family Challenge. Over the next two years, the Curiosity Machine AI Family Challenge will invite 3rd – 8th grade students and their families to explore core concepts of AI research, apply AI tools to solve problems in their communities, and have an opportunity to enter their ideas into a global competition.

An Interview with Chelsea Finn: AI for Robotics

As part of our AI in Your Community series, I had the chance to speak to Chelsea Finn, a PhD student at UC Berkeley currently working on machine learning and robotics. Through her work she teaches robots how to perform tasks in multiple environments, with the goal of having these robots take on tasks that humans can’t perform, or that would be dangerous for humans to perform.

Chelsea Finn

Tara Chklovski: Tell me a bit about your work. What research are you working on?

Chelsea Finn: I am a PhD student at UC Berkeley, and I work on machine learning and AI for robotics. A lot of my work entails having physical robots learn how to do things in the world, like screw a cap onto a bottle, or use a spatula, or pick up objects and rearrange them. Our goal is to have systems that can learn to do these different tasks so that they can go into a variety of environments and perform those tasks for humans – or perform dangerous jobs that we don’t want humans to do.
Read more

An Interview with Julita Vassileva: Artificial Intelligence and Online Communities

As part of our AI in Your Community series, I recently spoke with Julita Vassileva, a professor in Computer Science at the University of Saskatchewan who is currently focused on building successful online communities and social computing applications. Julita Vassileva is particularly interested in user participation and user modeling, as well as user motivation and designing systems that incentivize people to continue participating in online communities.

Julita Vassileva

Tara Chklovski: To start off, maybe you can tell us a little bit about what problem you’re working on and what area of research you’re excited about.

Julita Vassileva: I’ve been doing research for 35 years, so I’ve done a lot of things! I’m a very curious person – I’ve been following my nose and have explored all over the place. When I first started working with artificial intelligence it was in education applications, while I was working on my Master’s degree and my PhD. When I started I didn’t have a particular interest in the area at all. I was in my 4th year studying mathematics at the University of Sofia, in Bulgaria, and when it came time to decide what to do next, all of my really smart and strong colleagues went into very theoretical, classical areas of mathematics. I went to one of my professors for advice and he told me that mathematics is beautiful, and you could study it all your life and be fulfilled, but that it’s such an old area that every little stone has been turned over a hundred times by extremely smart people. You need to be extremely lucky and very smart, and work extremely hard, to be able to find something new. So why not go into a new area? So I decided to go into computer science, even though I didn’t really have any idea what computer science was. I wasn’t fascinated by it – we programmed on punch cards, which was quite unexciting. But I picked somebody to work with who was sympathetic, who I thought I could talk to: my supervisor, Dr. Roumen Radev, who was creating a “smart” tutoring system to help teach students how to solve physics problems. So I decided to try to do that for my Master’s.

TC: Oh, tell me more about that. What were you doing, and what did you think of it?

JV: At the beginning I thought, “who is going to study physics with computers?” Physics is tough enough just by itself. That leads me to a message I actually think is very important for young people – you don’t necessarily need to be interested in the subject when you start. In the beginning everything is hard, but under the surface there could be a whole world waiting to be discovered! You’ll have to invest a lot of work, and sometimes you just have to grit your teeth and do the work, and then suddenly you discover that it’s becoming interesting. The deeper you get, the more interesting and fascinating it becomes, and you feel that the power of your knowledge gives you amazing opportunities. What flipped things for me was working on those tutors and realizing that it’s really hard to design tutoring systems. The tutor I created during my Master’s program coached students in solving problems using Ohm’s law, and it took me one year of hard work to develop it. It taught only how to solve problems related to one physics lesson on calculating electrical circuits, but the experience made me think about how to make it easier to design the generic software for these systems so that teachers could create them more easily for different lessons and domains. And so I ventured into the area of authoring intelligent tutoring systems, which meant creating software that allowed other users (like teachers) to create their own tutoring applications. While computer-based training and authoring was already an “old” area at that time (20 years old, to be precise), intelligent tutoring was new, which was so motivating, because it was a bit like homesteading – there were so many unexplored problems. You feel like the first person, the pioneer. Everything in the field is in front of you and you can do whatever you want. It’s a fantastic feeling.

TC: What was that transition like, moving from designing a tutor for one type of problem to creating generic tutoring systems and authoring tools?

JV: The really tricky thing was modeling the student, because for a system to be intelligent it has to understand how much the student understands and knows, and it has to adapt to it. If the student doesn’t understand a concept and the system continues giving the same advice it’s useless – the student will drop the system. How can you make the system intelligent so it can react to what the student understands? And how do you understand what the student knows?

Microsoft’s Clippy. Image from Mashable, 2017.

This is an area called student modeling, and at the time it was a new field of study, just starting. My focus during my PhD work was on creating a generic architecture, knowledge representation schemes, and planning algorithms that could be used both for domain models and for student models. But then I found out that the application area of these architectures and methods is bigger than student models, since if one wants to create a “smart” system that supports the user, it needs to understand and anticipate the user’s needs, interests, knowledge, and skills… which leads to user models. For example, Microsoft Office in the mid-’90s introduced Clippy – a little cartoon agent that was based on user modeling. It was watching what you were doing and trying to predict your goals and offer you advice based on those predictions. That’s user modeling, and that’s what I was working on for the first five to six years after my PhD.

Then I heard about multi-agent systems – a completely different area of AI from knowledge representation and planning, the areas I was already familiar with. And suddenly, for me, it was like a revelation. It triggered my curiosity because I come from Bulgaria, which was a communist country. I left Bulgaria as soon as communism collapsed and it was possible to travel abroad. I went to Germany, and then came to Canada. And all the time I was trying to figure out, why did communism collapse economically? Of course, the reasons were many and complex. But I was looking for one simple, basic principle, fundamental to the system… Eventually, I realized it’s the incentives. People didn’t have the incentive to work because everything was divided based on your needs. You work as hard as you can, but then you don’t get as much as you worked for. Somebody else who did not work as hard but who has bigger needs (or connections to influential people) will get more than you. Then I realized that multi-agent systems, which was at that time a budding new area of artificial intelligence, allowed you to explore exactly those kinds of questions.

TC: Can you explain Multi-Agent Systems a little bit more, and how they allow us to explore those sorts of questions?

JV: You build a society of agents, where each agent is like a little person with very simple reasoning. But they can talk to each other. They can interact. They are autonomous. They can pursue their own goals, respond to rules and rewards that are set in the system or by other agents, and then you can let them loose and see what happens with this society. If you set the rules of interaction between agents in particular ways, if you put laws and punishments into the society in a particular way, you get completely different behaviors in the overall system. You can see some societies collapse and some societies thrive, and you can build simulations of multi-agent systems. That was my focus for another five or six years. Marvin Minsky, an AI pioneer from the 1960s, created the concept of the “Society of Mind,” and I was totally thrilled by the idea of building software systems with a given purpose as a society of agents, not as a deterministic machine. To create “social glue” in such an artificial society, one needs to explore notions of trust and reputation, agent coalitions, emerging hierarchies and self-organization, communities of similar agents, and so on. And then the Web “exploded”: people started blogging, sharing posts and videos, forming interest-based newsgroups and social networks, and I thought, well, the agent simulation is good, but what about the real world? How can these concepts be transferred into the real world? Because we design systems and then put those systems out there and people don’t use them. For every successful social system, there were thousands that failed… Just like the communist planned economies, perhaps their incentives were wrong? So how do you design for people and build incentives into the actual software to reward people so as to make the systems more engaging, and more addictive? That became my area of research, which I’ve been working on since 2001. It’s about understanding how to incentivize participation in online communities, which could be discussion forums, social networks, or just enterprise systems where operators need to type in reports. For example, nurses who report on the state of a patient – how do you incentivize them to write a more detailed, better report? How do you reward desirable behaviors to enable the system as a whole to achieve a certain purpose, while maintaining its sustainability, quality, fairness, and so on? So then I started studying motivation, which is studied in social psychology and behavioral economics. Why do people do certain things? It’s a fascinating area.
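
To make this concrete, here is a minimal toy simulation in the spirit of what Vassileva describes (the reward rules, payoff numbers, and adaptation scheme are my own illustrative assumptions, not her actual models), showing how the incentive rule alone can make a society of simple agents thrive or collapse:

```python
# A toy multi-agent society (illustrative assumptions only): agents expend
# effort, a reward rule distributes payoffs, and each agent keeps small
# changes to its effort only when they improve its own payoff.

import random


def equal_share(effort, total_effort, n):
    """Everyone receives the same share of total output, regardless of own effort."""
    return total_effort / n


def proportional(effort, total_effort, n):
    """Reward is proportional to the agent's own effort."""
    return effort


def simulate(reward_rule, rounds=200, n_agents=100, effort_cost=0.5):
    efforts = [random.uniform(0.3, 1.0) for _ in range(n_agents)]
    for _ in range(rounds):
        total = sum(efforts)
        for i, e in enumerate(efforts):
            # Try a small random change in effort; keep it only if the agent's
            # own payoff (reward minus cost of effort) would improve.
            trial = min(1.0, max(0.0, e + random.choice([-0.05, 0.05])))
            current = reward_rule(e, total, n_agents) - effort_cost * e
            candidate = reward_rule(trial, total - e + trial, n_agents) - effort_cost * trial
            if candidate > current:
                efforts[i] = trial
    return sum(efforts) / n_agents


print("average effort, equal share :", round(simulate(equal_share), 2))
print("average effort, proportional:", round(simulate(proportional), 2))
```

Under the equal-share rule an individual’s reward barely depends on its own effort, so effort drifts toward zero; under the proportional rule extra effort pays for itself, so average effort climbs. Different “laws” in the society produce completely different collective behavior.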

TC: So what have you learned in your research about what motivates people? For instance people know you have to exercise and eat healthy, but how do you actually get them to do it? How do you keep people from getting bored and ignoring the technology once they’re used to it?

JV: I think that’s a million-dollar question, or maybe a multi-million-dollar question, because people are very easily bored, and we’re getting bored more and more easily. There are a lot of different strategies, and there’s no one answer, but probably the best answer is that it depends on the person. The one important thing is that the person has to care about the behavior, and that they should set their own goal. To get them to set an appropriate goal, you can help them, you can educate them. So personalization is the key, and this leads again to user modeling. Here, again, going back to my original research into educational systems, AI, and education – how do you teach people what is important? How do you convince them that something is important so that they can set a behavioral goal related to that thing? Changing habits depends on deciding on the new behavior you want to adopt, and then overcoming all the barriers that pull you back to your old behavior, which is very sticky. So how do you do that? I’ll give you a couple of examples. Personal monitoring devices like Fitbit and the Apple Watch all rely on self-monitoring. People are curious about their own behavior. Showing them statistics is important to teach them about how they’re doing, and whether they’re improving. If they’re doing better than yesterday, it makes people feel good. Psychologists call this self-efficacy. It’s a very powerful motivation. People have a built-in drive to become better at whatever they’re doing. You can encourage this by visually showing them that they’re getting better, or, if they’re not getting better, nudging them or relying on social influence. The key is to know your user, which leads back to user modeling, which means collecting data. Data, data, data. Collecting and understanding the data about your user – and of course this comes with huge privacy implications, since once the data is there, it can be copied, it can fall into the wrong hands, or it can be used for the wrong purposes. So ethically it’s very, very questionable. But like every technology, it has positive and negative uses.
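
A tiny sketch of that self-monitoring loop (my own example, not one of Vassileva’s systems): show the user their statistics, reinforce improvement, and fall back on a nudge when progress stalls:

```python
# Illustrative self-monitoring feedback (invented thresholds and wording):
# surface the user's statistics, praise improvement, nudge otherwise.

def daily_feedback(steps_today, steps_yesterday):
    summary = f"Today: {steps_today} steps (yesterday: {steps_yesterday})."
    if steps_today > steps_yesterday:
        # Visible improvement supports self-efficacy.
        return summary + " Nice work - you're improving!"
    # Otherwise rely on a gentle nudge.
    return summary + " A short walk this evening would beat yesterday's total."

print(daily_feedback(6200, 5800))
print(daily_feedback(4100, 5800))
```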

TC: So what excites you about the potential of AI in the future?

JV: What excites me? I would very much like to see AI in a chip that people can plug in. I don’t believe that AI will be evil. I don’t think the fantasy of a HAL computer that is much smarter than us and will take our place could happen. I think AI is very fragmented. Of course computers have better skills than us for storing data, and they’re faster than us. But these are very narrow areas in which they can be better than us, for very narrow tasks. I see a positive future in augmenting our capabilities with the abilities of AI.

TC: That is so cool. What is a good way for children and maybe parents who don’t know about these kinds of technologies to start learning about them?

JV: There are so many tools out there. It’s really easy to create an app. There are environments where you don’t even need to code to be able to create a very simple app. I would recommend that they start with something that would enhance their life. For example, I live in a new neighborhood where nobody knows anybody. I had an undergraduate student who also lives in the same area who was looking for a project and I suggested that he make an app to allow people who live in the same neighborhood to just meet each other for a good reason. People are busy and need a good reason. Maybe they have something to give away, or cooked too much. Instead of throwing it away, why not offer it to the neighbors who didn’t have time to cook this evening? It’s also a reason to meet a neighbor, to encourage real face-to-face interaction, to get people to meet each other through the mediation of technology. And he was very excited. He did it and it worked well – he managed to actually get people on the application. So that’s a very good entry point. Once you start with something simple – and perhaps it’s not intelligent at all – then you can start adding “smarts” to it. But start simple, to solve a specific problem. And use the web. The web is big and there is so much to learn there.

TC: I think that is great advice, to start simple. And then you get excited.

JV: Exactly. Don’t wait to become interested because if you wait, there are so many interesting areas which you never get exposed to in school. So how would you know about this? How would you know if you’re interested? So start somewhere, work hard to become good in it, and then you’ll get interested. You’ll get excited, and it becomes a passion. Once you get passionate you will be good at it.

TC: That’s awesome advice. Thank you so much.