
Robots That Build Homes, Help With Tasks, and Check Vital Signs – Inside Autodesk Scientist Hui Li’s Lab

A colorful illustration of Hui Li, Senior Principal Research Scientist at Autodesk Robotics Lab

Hui Li stands before a towering autonomous robotic arm. The arm, which can extend nearly seven feet in all directions, reaches for a wooden beam, grasps it, rotates it, and attempts to fit it into a notch in a second beam. With each touch and each adjustment, the robot learns more and more about the contact between the beams. Finally, the robot slides the beam smoothly into place. Thanks to the information the robot has gathered about the objects and their contact via touch sensors on its wrist and fingers, the next time it is asked to fit two beams together it will do it faster.

This process, where a robot independently interacts with objects around it, is known as Autonomous Robotic Manipulation. Its applications and potential societal impact are enormous.

“Think of a world where robots not only build custom products, houses, and buildings, but also load dishwashers and do our laundry, help move elders or patients and check their vital signs,” Li says. “Construction will be safer, chores will be easier, and care will be more accessible and sustainable.”

While there are many organizations researching Autonomous Robotic Manipulation, fewer than 10 labs worldwide focus on learning based on multi-modal sensory input, particularly the sense of touch. Li’s lab is one of them.

Today, Li is a Senior Principal Research Scientist in the Autodesk Robotics Lab. Her passion for robotics and AI, however, dates back to her childhood. Li was born and raised in Beijing, China, where some of her earliest memories were of watching programs like “Astro Boy,” “Doraemon” and “Transformers.” Li was mesmerized by the robots in the series—their superpowers, gadgets and creativity, and the way they cohabited and interacted with humans.

Curious about the mechanics of technology—but to the chagrin of her parents—Li spent hours after school taking apart the television and stereo and then putting them back together. In college in the late ‘90s, Li studied electrical engineering, which she enjoyed, but maintained an interest in AI.

“Today, AI is everywhere. It’s a hot topic. But back then, it wasn’t very popular,” Li says. “The opportunities to study AI in China were limited.”

Li decided to pursue her PhD abroad, and in 2001 she was admitted to MIT’s Computer Science & Artificial Intelligence Lab.

During her time in Boston, Li and her team worked with fake Mars rovers and a replica of a Mars landscape. They developed systems that allowed the robots to problem-solve independently if they encountered a roadblock or challenge while maneuvering across the faux Mars terrain.

After completing her doctorate, Li worked for Boeing on AI robots to assist in the construction of planes and then at Airware, a startup that built autonomous drones. In 2016, she joined Autodesk, a company headquartered in San Rafael, California, that, among other things, creates software for construction, engineering, and manufacturing.

At Autodesk, Li’s current focus is on the industrial applications of teaching robots to manipulate objects and adapt to real-time sensory feedback, like touch.

In the lab, she works with robotic arms of different sizes: the largest are the seven-footers that maneuver wooden beams; the smallest, about the size of a tabletop lamp, take on finer tasks like screwing on nuts. Large and small, the robots’ arms and “fingers” (picture kitchen tongs) are equipped with touch sensors that let them collect data on object interactions by feeling them. The robot can then use that understanding to maneuver objects into place.

“The goal of my work is to teach robots to be adaptive, so if they are presented with a task and a piece is misaligned or in an unexpected position, the robot can independently figure out, via touch, how to adjust the pieces in order to complete the task,” Li says. When a robot succeeds in properly fitting two pieces together, it stores its trial-and-error process so that the next time it’s assigned that task it can complete it more efficiently.
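The adjust-via-touch loop Li describes can be sketched in a few lines of Python. Everything below is a hypothetical stand-in, not the lab’s actual controller: the toy `lateral_force` sensor model and the correction gain are invented for illustration. The idea is simply that the sensed contact force drives the next adjustment until the piece seats.

```python
def lateral_force(offset_mm: float) -> float:
    """Toy touch-sensor model: contact force grows with misalignment.
    (The 2.0 N-per-mm gain is a made-up number for the sketch.)"""
    return 2.0 * offset_mm


def align_beam(offset_mm: float, gain: float = 0.4, tol_n: float = 0.1,
               max_steps: int = 100) -> tuple[float, int]:
    """Nudge the beam against the sensed force until it is nearly zero.
    Returns the final offset and how many adjustment steps were needed."""
    for step in range(max_steps):
        force = lateral_force(offset_mm)
        if abs(force) < tol_n:        # force near zero -> beam is seated
            return offset_mm, step
        offset_mm -= gain * force / 2.0  # proportional correction against the force
    return offset_mm, max_steps


# Start 5 mm off-target; the loop converges in a handful of steps.
final, steps = align_beam(5.0)
```

A real system would also log the sequence of adjustments, which is the “stored trial and error” that lets the robot repeat the task faster next time.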

According to Li, when it comes to manufacturing, people currently use machines only to complete repetitive, isolated tasks in controlled environments. Her line of research aims to add flexibility and customization to the building process.

“Construction is still an old industry, so the room for improvement and innovation with autonomous robots is huge,” Li says. “If the on-site building process in construction can largely be done with autonomous robots, projects will be safer, more punctual, and more eco-friendly. In time, completely new building designs that better suit being built by robots will emerge.”

Right now, Li’s robots rely only on touch, but in the coming months, Li will introduce vision to the robots’ toolbox. “This will allow a robot to move through a space, identify the piece it needs for a task (maybe it’s in a pile or in a bin with other pieces), and then combine sight and touch to select that piece and fit it with another one,” she says.

The goal is for a robot to independently shift between its senses, the way humans do, which will expand its capabilities when it comes to completing tasks and further reduce its reliance on people.

While Li’s work centers on autonomous robotic manipulation for industrial applications, she imagines a world where it will be used for everything from folding laundry to assisting with nursing and medical care. Li also sees autonomous robotic manipulation working in tandem with autonomous vehicles. “In the future, if we have autonomous vehicles, we might want them not only to travel from one place to the next, but also to deliver goods, load and unload cargo, install a car seat, or fill their own gas tanks. I could even see autonomous robotic manipulation applied to tow trucks, so that a truck could find your car and maneuver it onto a bed independently of humans. The scenarios are limitless.”

Li often thinks back to how significant her exposure to robotics as a child was in inspiring her future career. “When I’m asked about gender or diversity in robotics, I think that we have a long way to go, but that change starts with children,” Li says. “If kids learn about engineering and tinkering and AI at an early age, that has the potential to stick with them for the rest of their life.” In other words, bring on the Transformers.
