As part of our ongoing AI In your Community series, I talked to Ryota Kanai, Founder and CEO of Araya, about how they are using neuroscience to create new types of AI and also to improve our understanding of human consciousness.

Ryota Kanai

Tara Chklovski: What problem are you working on?

Ryota Kanai: We are an AI startup and we work with big companies to help them solve their problems. Most of the problems are related to image recognition. For example, one of the projects was a very Japanese one – to count the number of tuna in a tuna farm! We didn’t expect that we would need A.I. technology, but we did!

There’s a lot of AI hype right now and people are trying to apply popular methods like deep neural networks to solve different problems. One problem that we are tackling is real-time image recognition. For example, it would be cool if surveillance cameras could do real-time image processing, but then every camera would need a very expensive high-performance GPU. So there is a race to make deep learning lightweight, so that even smartphones can run deep learning image recognition algorithms. Another area is reducing the time we spend doing really simple work on computers. That is an area with room for improvement and we are thinking about ways of solving it.
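
Making deep learning “light” can be done in several ways. As one concrete illustration, here is a minimal sketch of post-training dynamic quantization in PyTorch, which stores a model’s weights as 8-bit integers so the network takes less memory and runs faster on CPU-only devices; the toy model and layer sizes are invented for the example and are not Araya’s actual approach.

```python
# Minimal sketch (not Araya's actual pipeline): post-training dynamic
# quantization stores weights as 8-bit integers, making the network smaller
# and faster to run on CPU-only devices such as smartphones.
import os

import torch
import torch.nn as nn

# A small illustrative classifier built from Linear layers, which dynamic
# quantization supports directly; the sizes are arbitrary toy values.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)
model.eval()

# Replace the float32 weights of the listed layer types with int8 versions.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path="tmp_model.pt"):
    """Serialize a model and report its size on disk in megabytes."""
    torch.save(m.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

print(f"float32 model:   {size_mb(model):.2f} MB")
print(f"quantized model: {size_mb(quantized):.2f} MB")
```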

These are more business-oriented problems, but actually our real interest is to understand human consciousness. We are trying to create something like consciousness in AI. Of course it’s a difficult problem because the very first step is to define consciousness. In our case we hypothesize that consciousness is the ability to simulate internally. So for example, we humans can think about what we want to do in the future. If you are about to reach for a glass of water you can mentally simulate how you will do it before you do it. So that is a kind of mental simulation and to do that you have to have some sort of a mental model of the environment and how your body interacts with the environment.
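
To make the idea of mental simulation a little more concrete, here is a minimal sketch of an agent that uses an internal forward model to “imagine” the outcomes of candidate plans before acting. The additive dynamics, the candidate plans, and the reach-for-a-glass setup are simplifications invented for this example, not Araya’s implementation.

```python
# Minimal sketch of mental simulation: before acting, the agent rolls out
# candidate plans through an internal model of the environment and picks the
# plan whose imagined outcome lands closest to the goal. The forward model
# here is a hand-written stand-in for one that would normally be learned.
import numpy as np

def forward_model(state, action):
    """Internal model of the world: predict the next hand position."""
    return state + action  # assumed simple additive dynamics

def imagine(state, plan):
    """Mentally simulate a whole plan without touching the real world."""
    for action in plan:
        state = forward_model(state, action)
    return state

def choose_plan(state, goal, candidate_plans):
    """Pick the plan whose simulated end state is nearest the goal."""
    return min(candidate_plans,
               key=lambda plan: np.linalg.norm(imagine(state, plan) - goal))

hand = np.array([0.0, 0.0])    # current hand position
glass = np.array([0.3, 0.5])   # position of the glass of water

candidate_plans = [
    [np.array([0.1, 0.1])] * 5,                    # slow diagonal reach
    [np.array([0.3, 0.0]), np.array([0.0, 0.5])],  # sideways, then forward
    [np.array([0.15, 0.25])] * 2,                  # direct two-step reach
]

best = choose_plan(hand, glass, candidate_plans)
print("imagined end position of chosen plan:", imagine(hand, best))
```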

Of course we can debate whether it’s related to consciousness or not! My background is neuroscience and I’ve seen a few cases where consciousness is needed to perform some tasks. For example, you have probably heard of classical Pavlovian conditioning, where you hear a beep and that sound is followed by an electric shock. After experiencing this kind of pairing a few times you will react to the sound even without the electric shock. Consciousness is important when we need to learn such an association and the sound and electric shock are temporally separated by a gap of a few seconds. You have to notice and become aware of that relationship to successfully learn the association. But if the sound and electric shock overlap in time, you don’t need consciousness for learning the association. There’s a reason for this. If two things happen at the same time or within a relatively short time interval, the evoked neural responses can meet each other in the brain. The responses happen without any additional processing, but when the events are temporally separated, then we have to retain the information of the first stimulus (the sound) until the second stimulus (the electric shock) is delivered. Learning associations between events separated over time is a computationally difficult problem because you can’t maintain all the information from the past to test for all possible combinations of association. You have to generate a hypothesis about which stimulus might predict which event. Bridging a temporal gap is one important function of consciousness.
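
A toy illustration of why the temporal gap matters: in the sketch below, a learner that can only link the sound and the shock when both are active at the same moment fails as soon as a gap is introduced, while a learner that keeps a decaying memory trace of the sound still picks up the association. The learning rule and the numbers are made up for the example; this is not meant as a model of what the brain actually does.

```python
# Toy model of associative learning with and without a memory trace.
# When the sound and the shock overlap (gap = 0), both learners succeed;
# when they are separated in time (gap > 0), only the learner that retains
# a decaying trace of the sound can still link the two events.
def run_conditioning(gap, use_trace, lr=0.5, decay=0.8, steps=20, trials=100):
    w = 0.0  # learned association strength: sound -> shock
    for _ in range(trials):
        trace = 0.0
        for t in range(steps):
            sound = 1.0 if t == 5 else 0.0
            shock = 1.0 if t == 5 + gap else 0.0
            # Either keep a decaying memory of the sound, or only "see" it
            # at the exact moment it is physically present.
            trace = decay * trace + sound if use_trace else sound
            w += lr * trace * shock  # strengthen the link when both are active
    return w

for gap in (0, 3):
    print(f"gap of {gap} steps:",
          f"no memory trace -> w = {run_conditioning(gap, False):.1f},",
          f"with memory trace -> w = {run_conditioning(gap, True):.1f}")
```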

There are many interesting clinical cases that shed light on possible functions of consciousness. One of the most important ones in this context is a patient called D.F. She had brain damage to a region called LO – the Lateral Occipital Cortex – an area that is responsible for shape recognition. She lost the ability to recognize shapes in some weird ways! So for example, she was not able to say how a line was oriented, but she could orient a sheet of paper to fit through a slit. There seemed to be some sort of unconscious brain process that was using shape information to complete the task, although she could not consciously report on the orientation. What was also interesting was that she couldn’t perform the action if she had to do the task a few seconds after she saw the slit. She could react to input in real time but she couldn’t act on her own mental representation of the slit.

So it seems that you need some sort of consciousness to maintain information from the past so that you can act on it later. From this example we hypothesized that consciousness is the ability to act on things that are not present at the moment. It is a little bit like short-term memory, but an important aspect is that this kind of mental representation doesn’t have to come from the past; even thinking about the future involves generating thoughts and creating mental representations. So we think that a special function of consciousness is to internally generate some sort of sensory representation even when the stimuli are not there.

And actually this is related to what people do with modern neural networks. People talk about things like generative models, and these days there’s a method called generative adversarial networks (GANs), which has become very popular for generating faces of people who do not exist! One of the most famous examples is the creation of photo-realistic bedroom images. The idea is that you can use high-level representations in the neural network to generate specific faces, or bedrooms! We are trying to connect this kind of ability to generate new information to consciousness. We are using this technique to let AIs generate many possible future events. So for example a robot could generate future action sequences or future sensory input sequences using generative models. This is actually similar to imagination, because the AIs have internal images of the future. We are training robots within a simulation to imagine what might happen in the future and adjust their behavior accordingly. We are linking this technique to consciousness research so that we can also better understand human consciousness.
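
For readers curious what a GAN looks like in code, here is a minimal sketch in PyTorch. To stay self-contained it learns to generate samples from a simple one-dimensional distribution rather than faces or bedrooms; it illustrates the general technique and is not Araya’s system.

```python
# Minimal GAN sketch: a generator learns to turn random noise into samples
# that a discriminator cannot tell apart from "real" data. The real data here
# is a simple 1-D Gaussian rather than faces or bedrooms, purely to keep the
# example self-contained.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the true distribution
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its samples real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the generator "imagines" new samples it has never observed.
print(generator(torch.randn(5, latent_dim)).detach().squeeze())
```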

TC:  That is absolutely fascinating and thank you for explaining the wide array of problems your startup is tackling! What advice would you give children as they try to find problems in their community that they can solve using technology?

RK:  I think they should apply technologies to what they really like. For example, an interesting dataset to analyze using neural networks could be one from professional sports, as they come with a lot of statistics. Many researchers have used deep learning to understand the movements of basketball and soccer players and what makes them successful. Another way to enjoy learning these kinds of methods could be designing an agent to play games. Of course it will be technically challenging, so they will need to know a lot of math and programming, but these days it’s getting easier and easier, and more accessible.

TC:  What in your opinion makes a good problem?

RK: Well, that’s already a very good problem! I think it’s important to start from actual needs and solve them using what we have. This may be slightly tangential, but we do many projects with for-profit organizations who want to try applying A.I. but don’t know to what problems! The important thing in meaningful applications is not the technology itself, but that it is actually solving a real problem. And so maybe when we think about problems we don’t really need to think about how we will solve the problem, but should rather spend time identifying an important problem to solve, and that’s a really difficult question!

TC: What inspires you and keeps you going?

RK: People inspire me! Personally I really want to apply AI to something great and interesting and find out more about how the human brain works. I have always had this passion but real life is really tough! There’s a lot of competition and I have to work really hard to achieve even a small thing. And sometimes it gets difficult to keep working hard. But then I see some people who keep working with such passion. I’m sure they also experience a lot of difficulties, but they just keep working hard and eventually achieve something significant. That inspires me. I feel as if I can do it too!

TC: What aspect or field of A.I. excites you the most?

RK: It’s weird to see deep learning becoming so useful these days, but I feel that people are focusing too much on just one approach. I want to see a lot more different approaches. The concept of Artificial General Intelligence, which is flexible A.I. that adapts to new environments and new things by itself, is very interesting. We are succeeding at some important parts of Artificial General Intelligence, but to progress further we need creative minds to come up with different ways to build A.I. So for example it would be cool if neural networks could convert sensory representations like camera images and video feeds into slightly more symbolic representations, such as the concept of a cat. Current deep learning can do this, but we would need a high-level reasoning model where we manipulate those symbolic representations to reason logically. That kind of symbolic A.I. used to be very popular, and we need to connect deep learning to symbolic A.I. to create human-level general intelligence. We may need to redesign symbolic A.I., but if different kinds of A.I. approaches are linked nicely, we may have a new kind of A.I. I would really like to see this happen and to be part of it.
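
As a rough sketch of what connecting perception to symbols might look like, the example below uses a standard pretrained classifier to turn an image into a label and then runs a tiny hand-written rule base over that label. The rules, the blank placeholder image, and the overall glue are invented for illustration; this is not a proposal for how the bridge between deep learning and symbolic A.I. should actually be built.

```python
# Rough sketch of chaining neural perception to symbolic reasoning:
# a pretrained classifier turns pixels into a label, and a tiny hand-written
# rule base then derives everything that label implies.
import torch
from torchvision import models
from PIL import Image

# 1) Neural part: a standard pretrained ImageNet classifier.
weights = models.ResNet18_Weights.DEFAULT
classifier = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def perceive(image):
    """Map raw pixels to a symbolic label (the classifier's top category)."""
    with torch.no_grad():
        logits = classifier(preprocess(image).unsqueeze(0))
    return weights.meta["categories"][logits.argmax().item()]

# 2) Symbolic part: hand-written rules over labels (invented for this sketch).
rules = {
    "tabby": ["cat"],
    "Egyptian cat": ["cat"],
    "cat": ["animal", "has_fur"],
    "animal": ["living_thing"],
}

def infer(symbol, facts=None):
    """Follow the rule chain to collect everything the symbol implies."""
    facts = set() if facts is None else facts
    for consequence in rules.get(symbol, []):
        if consequence not in facts:
            facts.add(consequence)
            infer(consequence, facts)
    return facts

# A real application would load a photo, e.g. Image.open("cat.jpg"); a blank
# image is used here only so the sketch runs without any external files.
label = perceive(Image.new("RGB", (224, 224)))
print(label, "->", sorted(infer(label)))
```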