As part of our ongoing AI in Your Community series, I talked to Marek Rosa, CEO and CTO of GoodAI, an organization on a mission to develop general AI as quickly as possible to help humanity and understand the universe. They recently launched “Solving the AI Race” as part of their General AI Challenge.

Marek Rosa

Tara Chklovski: Thanks for doing this. Can you tell me a bit about the area you’re working in and what types of problems you’re looking to solve?

Marek Rosa: I have two areas: one is computer game development and the other is AI. Regarding AI, my interest lies in trying to develop general artificial intelligence (AGI), or general AI, which is not aimed or focused on some narrow target and is not just designed by an AI programmer or AI engineer. Instead, the AI learns how to learn: it learns gradually, accumulates skills gradually, eventually reaches a human-level skill set, and then uses this human-level skill set to solve anything we want it to solve.

TC: How did you choose this area, and how did you get into it?

MR: Every time I was thinking about the future, my future or the universe or anything, it always got to the point where I started to think that the ultimate intelligence could help us solve all other things. So, it’s like some kind of leverage that can be used on anything else.

And to me it didn’t make sense to be working on, let’s say, space travel, because I knew that if we solve AI, AI will solve space travel. So, I was thinking, “okay, better solve AGI and then use it for all the other things.” Maybe it’s riskier, and maybe you never know, but at least the possible outcome can enable everything. So this is why it appears to me to be the best thing to work on. Everything else is slightly below it, because it has less potential or it will get outpaced by AGI anyway.

There is also curiosity and all the other inspirations. I would like to know how AGI works and then use it to discover all the other things, and I would also like to help people. With AGI I think we can maximize this, if nothing else.

TC: And so what is your connection to GoodAI, and what do you want it to accomplish?

MR: GoodAI is a company that I started about four years ago and it’s still privately funded by me alone. We are trying to develop general AI and then use it to help people, and to explore the universe, to explore physics – basically, to understand this world a little bit better.

TC: What, in your opinion, makes a very good product?

MR: Okay, so a product – I would say it’s something that helps many people, either helping a lot of people a little bit, or helping a few people by some huge margin. Those two things are very important. And helpful can mean that it’s bringing them something new.

For example, with the games [I develop], of course we are not going to cure cancer with video games, but at least some people have fun with our games, and because they’re engineering games they learn a lot, and they learn to think and solve problems in a novel way that wasn’t possible with other games before. So, I would say this is what is important about a product: how useful it is to people.

TC: Going down to the game level, what are some features of a game that keep people coming back?

MR: There can be many elements. For example some people just want to collect things. Some people want to compete. Some people want to solve problems. Some people want a story. Some people want social interaction when playing multiplayer. So, there can be many, many motivations.

Our games like “Space Engineers” or “Medieval Engineers” are about building mechanical things and then playing with them in a three-dimensional world. For me, what is most important is the believability of the world, and that interaction with the world is intuitive. For example, when you as a player expect that if you build a spaceship it will be heavy, but with thrusters it will fly, then that is what should actually happen in the game. So the game has rules similar to those we have in real life, and it transfers [a player’s] experience to the game. And also, what is very important is that the interaction you have with the world must feel good.

Like bubble wrap – it’s just enjoyable to press and pop. Of course, that’s the most primitive level, but something like this, this kind of instant gratification, is also what we seek in games. For example, if there is a game where you have an ax and there are trees, you should enjoy cutting the trees. Even though it’s a very simple thing, just the fact that you are there and that when you hit the tree there is a nice visual effect – say the camera shakes – means you just feel it.

But also, what is very important is to give players something they cannot do in other games or in real life. I cannot fly or build spaceships in real life. So that’s why I would then play the game.

TC: Right. I talked to one AI researcher who said that maybe games can be one way in which children learn about AI, because they could actually make AI agents that play that game with them. It’s interesting to think that they could get some understanding of some basic AI principles or some models by actually creating agents to play with them. I don’t know if, while you engineer your worlds, you’ve thought about the fact that you could also create AI agents in the game.

MR: Of course. This is one of the things we are doing at GoodAI: we are thinking about some simple worlds and tasks, what skills agents can learn from these tasks, how to distinguish between tasks, and so on. So, it’s very important.
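For a flavor of what such simple worlds and tasks might look like, here is a toy, invented sketch – the environment, the two-task curriculum, and the policies below are illustrative assumptions, not GoodAI’s actual framework:

```python
# Illustrative only (not GoodAI's framework): a tiny "simple world" with two tasks
# of increasing difficulty, where a skill learned on the easy task transfers to the hard one.
import random

class LineWorld:
    """A 1-D world: the agent starts at 0 and must reach `goal` by stepping +1 or -1."""
    def __init__(self, goal):
        self.goal = goal

    def run(self, policy, max_steps=100):
        pos = 0
        for _ in range(max_steps):
            pos += policy(pos, self.goal)
            if pos == self.goal:
                return True   # task solved
        return False          # ran out of steps

def random_policy(pos, goal):
    """An untrained agent: wander randomly."""
    return random.choice([-1, 1])

def learned_policy(pos, goal):
    """The 'skill' acquired on the easy task: always step toward the goal."""
    return 1 if pos < goal else -1

# A minimal curriculum: an easy task, then a harder one that reuses the same skill.
curriculum = [LineWorld(goal=3), LineWorld(goal=40)]

for task in curriculum:
    print(f"goal={task.goal}: random agent solved it: {task.run(random_policy)}, "
          f"agent with the learned skill solved it: {task.run(learned_policy)}")
```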

TC: Earlier you talked a little bit about how you think AI can benefit society. In your world, how do you think AI can strengthen society in the next couple of decades? What can we be doing to accelerate or improve that?

MR: There are many, many ways AI can benefit us. One obvious example is that AI can help us get better technology, better research and so on. Another can be learning something from AI, or from how AI learns and then operates, and understanding a little bit better what we as a society want to achieve. What is the end goal? Is it the happiness of every person, or of just a few people? Figuring out what it actually is, and then trying to optimize from there. And by this, I mean trying to find strategies to get there, because AIs, especially today, usually work this way: you have some end goal, or some objective, and then you train the AI agent or some neural network to maximize that objective.

Maybe we as a society can also work in a similar way, so that instead of voting politicians into parliament, we would vote on what the end goal should be, and then we could actually find strategies that maximize this end goal – not by just arguing and fighting, but by actually running them in some simulation where, in the end, you know that these strategies, when combined together, will maximize that final objective. So that’s one possibility.

TC: Can you explain that? So, you’re saying, for instance, that you run a simulation because you know the end outcome you want, and the simulation then helps you back out what strategies to implement?

MR: Yes. And it’s not simple because the real world, of course, is very complex.

TC: Right.

MR: And even if you have a great and very high-detail simulation, the small differences between it and the real world can make it completely unusable. So, I’m not saying this will be some sort of utopia, but it can help us to better understand those strategies.
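To make the idea concrete, here is a minimal sketch of “agree on an end goal, then search for strategies in simulation.” Everything in it – the simulation dynamics, the candidate strategies, and the objective – is invented for illustration; it is not GoodAI’s model:

```python
# Purely illustrative sketch (not GoodAI's models): pick an end goal (objective),
# then search for the strategy that maximizes it inside a toy simulation.
import random

def simulate(strategy, steps=50, seed=0):
    """Run a tiny made-up 'society' model and return (well-being, inequality).

    A strategy is just two knobs in [0, 1]: (education_investment, redistribution).
    """
    education, redistribution = strategy
    rng = random.Random(seed)
    wellbeing, inequality = 50.0, 50.0
    for _ in range(steps):
        wellbeing += 0.3 * education + rng.uniform(-0.5, 0.5)
        inequality += 0.2 * (1.0 - redistribution) - 0.4 * redistribution
        inequality = max(inequality, 0.0)
    return wellbeing, inequality

def objective(wellbeing, inequality):
    """The 'voted' end goal: high well-being, penalized by inequality."""
    return wellbeing - 0.5 * inequality

# Candidate strategies on a coarse grid; a real system would search far more cleverly.
candidates = [(e / 4, r / 4) for e in range(5) for r in range(5)]

# Evaluate every candidate in the simulation and keep the one with the best score.
best = max(candidates, key=lambda s: objective(*simulate(s)))
print("best strategy (education, redistribution):", best)
print("objective score:", round(objective(*simulate(best)), 2))
```

As Marek notes above, the catch is the gap between the model and reality: the chosen strategies are only as good as the simulation they were tested in.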

TC: Right. Are you working on some of these models?

MR: Just one. It’s called “Solving the AI Race,” and it’s basically about how to mitigate or avoid an “AI race.” The “AI race” is the idea that, as we approach stronger and stronger AI, there will be players who will want to be the winner – the person, company, or country with the best AI in the world – and who will use that AI to seek even better AI, and then use it to do whatever they want.

There can be negative scenarios with this. Maybe the race will end up having only one winner, and then everyone will have lost. Or the race gets so heated that something very bad happens even before we actually get the AI – we kill each other in the process of getting the best AI, or something like that. There are many other negative scenarios like this. But the idea of the challenge we started is to ask people to propose solutions for how to mitigate this AI race so that it doesn’t end up in these bad scenarios.

GoodAI’s new initiative, the General AI Challenge

TC: Interesting. Very cool. What advice would you give children as they try to find a problem in their community to work on? I think you have some ideas, but this is specifically for children.

MR: Well, it depends on the child. There are children, I think, that will want some long-term, super high-reward, but high-risk problems. I’m one of these children. I don’t want to be working on simple problems. I want to be working on problems that are practically impossible to solve…but on the other hand, there are children that like doable problems. They want to solve one thing after another and they just want this flow.

So, I think that really depends on the child. They should realize what their thing is, otherwise they’ll be unhappy, and from that point they should also realize what will make them happy when working on this goal, because everyone can have different objectives. Some child might do this for ego, some child might do it for money, some child may want to help people, some child may want to collaborate with as many people as possible, and so on.

I also think the child should think about how useful it will be to other people. I think that’s very important. And money can come from this approach. It’s not impossible, because usually when you make something useful for many people, and you do it in a way that you can still get some money back, then you can actually make a lot of money.

So, I think children can think of this as something where they will be pushing their limits – not staying in a comfort zone, but actually being on the edge and pushing it.

TC: And so I think that leads to a very important point: How do you motivate yourself when you hit an obstacle? Because I think when you are younger you have less experience dealing with challenges. So how do you push forward?

MR: This is a good question. And actually, I think they should have taught us these things in school. I remember when I was younger and I would hit some wall, I was kind of determined to overcome it.

In our team we call them meta skills, because skills are skills – I can read, I can add numbers, and things like this – but meta skills are skills about how to think about those skills and how to improve them. I think it’s super important.

TC: I think some people call them soft skills, but I think it’s really meta skills because it’s metacognition, right? You are looking at how you are thinking.

MR: I think it’s very important, and it’s not really taught enough. So again, just to sum this up, when there are some obstacles, it’s not only about whether you are still enjoying it or something like this. Sometimes you need to go on some kind of autopilot and just do it.

So I would say that sometimes doing it even though you don’t like it anymore is still okay. But of course, in the long term, I would not be working on AI if I didn’t believe in it, and I would not be working on games if I didn’t believe in it. But if there is some trouble, some technical or management issue or whatever, then I’ll just think that I should probably solve this thing, because if I don’t solve it there will be a problem. I don’t like solving it because it’s a boring thing, but still, I don’t have any other option. So, I just go and do it.

TC: Because the end goal is worthy.

MR: Yes. And sometimes it’s really good. For example, in sports they say that if you feel pain and you are really tired and your body is telling you that you cannot do any more push-ups, you actually can still do at least twice as many. It’s just that the body is telling you that you can’t anymore, but in reality, you actually can.

TC: It’s like when we thought, okay, the marathon was the longest you can run, and then people came along and ran 150 miles.

MR: Exactly. That’s another good point. It is easy to say something cannot be done because it is “impossible,” but if someone does not think it is impossible, they have more of a chance of doing it. As Thomas Edison said, “Those who say it cannot be done should not interrupt those who are doing it.” It doesn’t make sense to take people’s feedback for granted before even trying yourself.

TC: That’s awesome. This was such a fun conversation. Thank you so much, Marek.