Researchers incorporate computer vision and uncertainty into AI for robotic prosthetics

Imaging devices and environmental context. (a) On-glasses camera
configuration using a Tobii Pro Glasses 2 eye tracker. (b) Lower
limb data acquisition device with a camera and an IMU chip. (c) and
(d) Example frames from the cameras for the two data acquisition
configurations. (e) and (f) Example images of the data collection
environment and terrains considered in the experiments. Credit:
Edgar Lobaton

Researchers have developed new software that can be integrated
with existing hardware to enable people using robotic prosthetics
or exoskeletons to walk in a safer, more natural manner on
different types of terrain. The new framework incorporates computer
vision into prosthetic leg control and includes robust artificial
intelligence (AI) algorithms that allow the software to better
account for uncertainty in its predictions.

“Lower-limb robotic prosthetics need to execute different
behaviors based on the terrain users are walking on,” says Edgar
Lobaton, co-author of a paper on the work and an associate
professor of electrical and computer engineering at North Carolina
State University. “The framework we’ve created allows the AI in
robotic prostheses to predict the type of terrain users will be
stepping on, quantify the uncertainties associated with that
prediction, and then incorporate that uncertainty into its
decision-making.”

The researchers focused on distinguishing between six different
terrains that require adjustments in a robotic prosthetic’s
behavior: tile, brick, concrete, grass, “upstairs” and
“downstairs.”

“If the degree of uncertainty is too high, the AI isn’t forced
to make a questionable decision—it could instead notify the user
that it doesn’t have enough confidence in its prediction to act, or
it could default to a ‘safe’ mode,” says Boxuan Zhong, lead author
of the paper and a recent Ph.D. graduate from NC State.
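To make that idea concrete, here is a minimal sketch of this kind of confidence-gated decision logic in Python. The six terrain labels come from the article; the use of predictive entropy as the uncertainty measure, the threshold value, and the function names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# The six terrain classes considered in the study.
TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

# Assumed cutoff; a real system would tune this empirically.
ENTROPY_THRESHOLD = 0.8

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a class-probability vector (higher = more uncertain)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def decide(probs: np.ndarray) -> str:
    """Return a terrain label, or fall back to a 'safe' mode when uncertainty is high."""
    if predictive_entropy(probs) > ENTROPY_THRESHOLD:
        return "safe_mode"  # don't force a questionable decision
    return TERRAINS[int(np.argmax(probs))]

# Example: a confident prediction acts; an ambiguous one defers.
print(decide(np.array([0.9, 0.02, 0.02, 0.02, 0.02, 0.02])))  # -> "tile"
print(decide(np.array([0.3, 0.25, 0.2, 0.1, 0.1, 0.05])))     # -> "safe_mode"
```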

The new “environmental context” framework incorporates both
hardware and software elements. The researchers designed the
framework for use with any lower-limb robotic exoskeleton or
robotic prosthetic device, but with one additional piece of
hardware: a camera. In their study, the researchers used both a
camera worn on eyeglasses and a camera mounted on the lower-limb
prosthesis itself. They evaluated how well the AI could make use of
computer vision data from each type of camera, separately and in
combination.
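As a rough sketch of how predictions from the two camera streams might be combined when used together, the snippet below averages the per-class probabilities from each camera. This is an assumed fusion strategy for illustration; the paper's actual method, the weight value, and the function names are not from the article.

```python
import numpy as np

def fuse_predictions(p_glasses: np.ndarray, p_leg: np.ndarray,
                     w_glasses: float = 0.5) -> np.ndarray:
    """Combine terrain-class probabilities from the two camera streams.

    A simple weighted average; w_glasses is an assumed, tunable parameter.
    """
    fused = w_glasses * p_glasses + (1.0 - w_glasses) * p_leg
    return fused / fused.sum()  # renormalize so the result stays a distribution

# Example: the glasses camera sees grass clearly, the leg camera is unsure.
p_glasses = np.array([0.05, 0.05, 0.05, 0.75, 0.05, 0.05])
p_leg     = np.array([0.20, 0.20, 0.20, 0.20, 0.10, 0.10])
print(fuse_predictions(p_glasses, p_leg))  # grass remains the top class
```

The fused distribution could then be passed through the same uncertainty check sketched earlier, so that disagreement between the two cameras naturally raises the entropy and pushes the system toward its safe mode.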


Originally published by Matt Shipman, North Carolina State University
May 27, 2020
TechXplore