An Interview with Erin Bradner: Using AI to make construction easier
As part of our AI in Your Community series, I sat down to interview Erin Bradner, the Director of Robotics at Autodesk, which aims to solve complex design problems from ecological challenges to smart design practices. Erin has researched topics ranging from the future of computer-aided design to novel ways of using robots to automate processes in manufacturing and construction.
Tara Chklovski: Tell me about what you’re working on.
Erin Bradner: I’m now the Director of Robotics at Autodesk, where we make professional software for architects, engineers, and animators. In manufacturing, our customers are looking to create more flexible manufacturing lines. And in construction, they’re looking to automate aspects of construction that have not been automated before.
In that sense, construction is aiming to be more like manufacturing, with assembly happening off-site so you can bring pre-assembled parts onto the construction site and have the site become an open-air assembly line. Traditional stick-built architecture, where you’re cutting timber on site, is inefficient. The construction industry has not seen the productivity gains that technology has brought to other fields over the last 20 years. It’s been flat, and we’re helping to address that. There are a lot of interesting startups at work in construction, too!
Tara Chklovski: Like which ones?
Erin Bradner: Well, startups are doing what startups do – they’re laser-focused on innovative technology. For example, Built Robotics is looking at autonomous Bobcats to grade a building site. Usually a Bobcat is operated by an engineer who comes in to clear the site, but by taking the technology used for autonomous vehicles, like LIDAR and vision sensing, they’ve developed an autonomous Bobcat that can clear the site on its own.
There’s another company called Canvas that’s just getting off the ground and is using soft pneumatic robots that are human-safe, applying them to dirty and repetitive jobs on the construction site. Their robots are still in development, but they will likely require quite a bit of AI to integrate.
What Autodesk is looking to do, being a software provider, is not to make robots, but rather to connect our CAD software to robots and other machines to make it easier to build what has been designed. CAD – computer-aided design software – is what’s used to specify nearly everything that is manufactured or engineered today. There is CAD to map terrain for those Bobcats; there is CAD for the walls, the floors, and the other elements of a building. We want to bring CAD into these platforms, along with simulation, so you can simulate the robot in its environment before ever running an operation, and also use machine learning to train the robot to complete its tasks.
Tara Chklovski: What does that training look like?
Erin Bradner: What takes the longest when programming robots is getting the robotic control right, meaning getting the robot’s motion where it needs to be. Usually this is done with what’s called a “teach pendant,” where you basically have a joystick that you use to move the robot from one point to another to another. But we wanted to know if it could move more fluidly and really mimic a person moving, so we’ve put a VIVE virtual reality headset and a VIVE controller on a person. When they move, the robot moves with them. That way, the robot can directly mimic the person’s behavior.
Take, for instance, a manufacturer doing what’s called “bin picking.” They have a bin of assorted objects: some oddly shaped, some squishy, some hard. A robot that is preprogrammed would be able to pick up a set of specific objects successfully and do it repeatedly, but it couldn’t pick up any others. But if you train the robot on a wide range of objects, then it starts to learn how to pick up every object you give it, and then new objects it hasn’t seen before. And that’s the convergence of AI and robotics.
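To make that convergence concrete, here is a minimal, purely illustrative Python sketch; the object features, the labeling rule, and the choice of a random-forest classifier are all invented for this example and are not Autodesk’s (or any startup’s) actual pipeline. The point is only that a model trained on varied objects can predict grasp success for an object it has never seen:

```python
# Illustrative sketch: learn grasp success from object features rather than
# memorizing a fixed list of preprogrammed parts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: each object described by simple features
# (width_cm, height_cm, stiffness 0..1, surface_friction 0..1).
X_train = rng.uniform([1, 1, 0, 0], [20, 20, 1, 1], size=(500, 4))
# Toy label rule standing in for real grasp trials: small-enough, grippy
# objects count as graspable with a given end effector.
y_train = ((X_train[:, 0] < 12) & (X_train[:, 3] > 0.3)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# An object the robot has never seen: the model still predicts an outcome,
# because it learned from feature variety, not from a catalog of parts.
novel_object = np.array([[7.5, 4.0, 0.2, 0.8]])
print("predicted graspable:", bool(model.predict(novel_object)[0]))
```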
If you have all that, then you can put a person in the environment the robot will be working in and have the person really grab the robot by the nose and move it around virtually with their hand and the VIVE controller. You can play back that motion in the digital environment, revise it, and send that program to the robot.
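As a rough sketch of that record-and-replay loop, here is some hypothetical Python; `read_controller_pose` and `send_target_pose` are invented placeholders for whatever VR-tracking and robot-controller APIs a real system would expose:

```python
# Sketch of recording a human demonstration and replaying it to a robot.
import time
from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float           # position in meters
    roll: float; pitch: float; yaw: float  # orientation in radians

def record_demonstration(read_controller_pose, duration_s=5.0, rate_hz=50):
    """Sample the tracked VR controller at a fixed rate into a motion program."""
    trajectory, period = [], 1.0 / rate_hz
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        trajectory.append(read_controller_pose())  # the person moves; we record
        time.sleep(period)
    return trajectory

def play_back(trajectory, send_target_pose, rate_hz=50):
    """Stream the recorded (and possibly revised) poses to the robot."""
    for pose in trajectory:
        send_target_pose(pose)  # the robot mimics the person's recorded motion
        time.sleep(1.0 / rate_hz)
```

The trajectory sits in between as plain data, which is what lets you replay and revise it in the digital environment before anything runs on hardware.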
Tara Chklovski: What role does the actual startup play when they are creating the robots? Are you working closely with them to create software solutions for that robot?
Erin Bradner: There are so many startups that we can’t provide custom solutions for every one, but what we are looking to do is provide a platform so companies can, first and foremost, use their CAD models in a way that’s meaningful and helpful to them. Beyond that, they can use the rich simulation tools that we have to create digital simulations of the robot and its environment.
Tara Chklovski: Interesting. And so how does this connect to your earlier generative design work?
Erin Bradner: So I came from the Generative Design Group, and I’m now on the robotics team. It’s not necessarily connected, because generative design is a new approach to creating designs. Instead of an engineer specifying a point, a line, a surface, and a single material, in a generative design tool you specify your goals. You’ll say, “I need this part to withstand a hundred newtons, and that force is coming in this direction. It could be made of aluminum, but it could also be polycarbonate or titanium. Show me the shape that would be generated if it needs to connect to this part and this part.”
For instance, a very simple design is a bracket that holds up a shelf, so it needs to connect to a shelf and a wall. What would that shape look like? The algorithm takes those given constraints and gives you hundreds of solutions in different materials that all satisfy your load constraints, but you ultimately choose which design you like best.
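Here is a deliberately toy Python sketch of that generate-and-filter idea for the shelf bracket. The geometry, material limits, and bending check are invented for illustration; real generative design uses far more sophisticated solvers than a grid of rectangles:

```python
# Toy generative design: enumerate candidate brackets, keep every one that
# satisfies the load goal, and leave the final choice to the designer.
import itertools

# Hypothetical material properties: allowable stress in megapascals.
MATERIALS = {"aluminum": 90.0, "polycarbonate": 55.0, "titanium": 400.0}

REQUIRED_LOAD_N = 100.0  # "withstand a hundred newtons"
LEVER_ARM_M = 0.05       # toy geometry: shelf load acts 5 cm from the wall

def survives(material, width_mm, thickness_mm):
    """Crude bending check: can the bracket's cross-section carry the load?"""
    w, t = width_mm / 1000.0, thickness_mm / 1000.0
    section_modulus = w * t * t / 6.0  # rectangle: w*t^2/6, in m^3
    stress_pa = (REQUIRED_LOAD_N * LEVER_ARM_M) / section_modulus
    return stress_pa <= MATERIALS[material] * 1e6

# Generate hundreds of candidates across materials and dimensions...
candidates = itertools.product(MATERIALS, range(10, 60, 5), range(2, 12))
solutions = [(m, w, t) for m, w, t in candidates if survives(m, w, t)]

# ...all of which satisfy the load constraint; the designer picks a favorite.
print(f"{len(solutions)} feasible designs, e.g. {solutions[:3]}")
```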
So that kind of intelligence is going into future design tools. I think it’s really exciting, and it’s not restricted to mechanical design – you can use this for building design as well. For example, we actually programmed the interior design of a building that Autodesk was going to occupy in Toronto. We knew where the bathrooms were; those were fixed. We knew where the building’s walls were; those were fixed. We used a generative design algorithm to create clusters of desks, common spaces, and conference rooms, and to run through thousands of alternatives.
The goals and constraints it used came from surveys individuals had completed about their preferences for light, distraction, and noise. Everyone completed the survey, and we also took into account which people worked with which other people and needed to be near them.
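A toy Python sketch of that survey-driven layout search might look like the following; the people, preferences, desk data, and scoring weights are all invented, and the real Toronto project explored a much richer space:

```python
# Toy layout search: generate thousands of alternative desk assignments and
# score each against (invented) survey preferences and adjacency needs.
import random

PEOPLE = ["Ana", "Ben", "Chloe", "Dev"]
# Hypothetical survey results: each person's preferred light level (0..1).
LIGHT_PREF = {"Ana": 0.9, "Ben": 0.2, "Chloe": 0.6, "Dev": 0.8}
# Who works with whom and should sit nearby.
WORKS_WITH = [("Ana", "Chloe"), ("Ben", "Dev")]
# Fixed building constraints: each desk's light level and (x, y) position.
DESKS = {0: (0.9, (0, 0)), 1: (0.7, (0, 1)), 2: (0.3, (5, 0)), 3: (0.2, (5, 1))}

def score(assignment):
    """Higher is better: match light preferences, keep collaborators close."""
    light_fit = sum(1 - abs(LIGHT_PREF[p] - DESKS[d][0])
                    for p, d in assignment.items())
    def dist(a, b):
        (xa, ya), (xb, yb) = DESKS[assignment[a]][1], DESKS[assignment[b]][1]
        return abs(xa - xb) + abs(ya - yb)
    proximity = -sum(dist(a, b) for a, b in WORKS_WITH)
    return light_fit + 0.3 * proximity

# Run through thousands of alternatives and keep the best-scoring layout.
best = max((dict(zip(PEOPLE, random.sample(list(DESKS), len(PEOPLE))))
            for _ in range(5000)), key=score)
print(best)
```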
Tara Chklovski: It’s such an awesome example of human augmentation with the computer: the creative options are endless, and a human mind cannot absorb all those constraints. I have two more questions: what do you find difficult in your work, and how do you overcome that difficulty?
Erin Bradner: Ooh, what do I find difficult? Well, as you can tell from this conversation, Autodesk tools are used by a very wide range of professionals, and these are professionals who are the best in their fields; they need four or five years of advanced schooling and certification to do what they do. So just being able to follow a conversation when I’m talking to an engineer in a field I’ve never been exposed to before is an incredible challenge. It also happens to be a challenge I love, because I love language. I like parsing sentences and building up my understanding of terminology in different industries, so each time I get better and better at following the conversation, until soon enough I’m contributing my own ideas.
So I think the hardest thing is just being exposed to so many domain-specific terms and really important concepts that I won’t grok the first time. I have no shame in asking, but sometimes there’s so much, I don’t even know what to ask initially. It’s not the surface problem of just, “I don’t understand that word you said”; it’s “I don’t even understand the big concept we’re trying to solve here, because you’ve had five years of schooling and I’m just hearing this for the first time.” But it’s fun. It’s a challenge. I love it.
Tara Chklovski: I think not being afraid to ask is the key. And that’s one message we definitely share, of parents saying “I don’t know; let’s find out together” when their child asks a question they don’t know the answer to. One last question: where do you see technology going?
Erin Bradner: Well, one thing to pick up on: you talked about human augmentation, and it might sound sci-fi, but really, software tools today, for all the mathematical intelligence that’s built into them, are dumb tools. They only do what we tell them to do. What I’m seeing with the introduction of AI into design software is that we’re able to develop tools that anticipate our needs better, predict errors, and help us dig ourselves out of design problems more readily, because we have this design partner working with us.
Tara Chklovski: That’s a cool term, design partner.
Erin Bradner: These expressions become trite after a while, but I genuinely believe that our software is going to be a quick-thinking design partner. It may not be as creative a thinker as me or you, but it’s quick. It has access to high-performance computing in the cloud, it can think faster than I can, and it can compute all those multi-objective design optimization problems for me. Then I can look at the solution set it provides and think, “Oh, you know what? I’d never thought of that before,” and identify that as a fruitful area to dive into more deeply. I can instruct the algorithm to chase those new areas some more on my behalf. So it really is a partnership.
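For readers curious what such a “solution set” looks like, here is a tiny illustrative Python sketch. With two competing goals, say weight and cost, a multi-objective optimizer returns the Pareto front (designs where no other design is better on every objective), and the human chooses among the trade-offs. The designs and numbers below are invented:

```python
# Toy Pareto-front filter over (name, weight_kg, cost_usd) candidates.
def pareto_front(designs):
    """Keep designs not dominated on both weight and cost by any other."""
    front = []
    for name, w, c in designs:
        dominated = any(w2 <= w and c2 <= c and (w2, c2) != (w, c)
                        for _, w2, c2 in designs)
        if not dominated:
            front.append((name, w, c))
    return front

designs = [("A", 1.2, 40), ("B", 0.9, 55), ("C", 1.5, 35), ("D", 1.3, 60)]
print(pareto_front(designs))  # D is dominated by A; A, B, C remain trade-offs
```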
Tara Chklovski: Very interesting. Well, thank you so much, this was such a pleasure.