Image: A neural network that varies its equations' parameters, enhancing its ability to analyze time series data. Credit: Jose-Luis Olivares, MIT
MIT researchers have developed a type of neural network that
learns on the job, not just during its training phase. These
flexible algorithms, dubbed "liquid" networks, change their
underlying equations to continuously adapt to new data inputs. The
advance could aid decision making based on data streams that change
over time, including those involved in medical diagnosis and
autonomous driving.
"This is a way forward for the future of robot control,
natural language processing, video processing - any form of time
series data processing," says Ramin Hasani, the study's lead
author. "The potential is really significant."
The research will be presented at February's AAAI Conference
on Artificial Intelligence. In addition to Hasani, a postdoc in the
MIT Computer Science and Artificial Intelligence Laboratory
(CSAIL), MIT co-authors include Daniela Rus, CSAIL director and the
Andrew and Erna Viterbi Professor of Electrical Engineering and
Computer Science, and PhD student Alexander Amini. Other co-authors
include Mathias Lechner of the Institute of Science and Technology
Austria and Radu Grosu of the Vienna University of Technology.
Time series data are both ubiquitous and vital to our
understanding of the world, according to Hasani. "The real world is
all about sequences. Even our perception - you're not
perceiving images, you're perceiving sequences of images," he
says. "So, time series data actually create our reality."
He points to video processing, financial data, and medical
diagnostic applications as examples of time series that are central
to society. The vicissitudes of these ever-changing data streams
can be unpredictable. Yet analyzing these data in real time, and
using them to anticipate future behavior, can boost the development
of emerging technologies like self-driving cars. So Hasani built an
algorithm fit for the task.
Hasani designed a neural network that can adapt to the
variability of real-world systems. Neural networks are algorithms
that recognize patterns by analyzing a set of "training"
examples. They're often said to mimic the processing pathways of
the brain; Hasani drew inspiration directly from the microscopic
nematode C. elegans. "It only has 302 neurons in its nervous
system," he says, "yet it can generate unexpectedly complex
dynamics."
Hasani coded his neural network with careful attention to how C.
elegans neurons activate and communicate with each other via
electrical impulses. In the equations he used to structure his
neural network, he allowed the parameters to change over time based
on the results of a nested set of differential equations.
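That nested-equation idea can be made concrete with a toy example. The sketch below is a minimal illustration under assumed weights and constants, not the authors' published code: a single neuron's state follows an ordinary differential equation in which the input gates the effective time constant, loosely following the liquid time-constant form dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A.

    import numpy as np

    # Minimal sketch of a "liquid" neuron: the state x obeys an ODE whose
    # decay rate depends on the current input, so the effective time
    # constant shifts as the data stream changes. All weights and constants
    # here are illustrative assumptions, not values from the paper.

    def gate(x, I, w_x=0.5, w_i=1.0, b=0.0):
        # Bounded nonlinearity (assumed sigmoid) driven by state and input.
        return 1.0 / (1.0 + np.exp(-(w_x * x + w_i * I + b)))

    def liquid_step(x, I, dt=0.01, tau=1.0, A=1.0):
        # One Euler step of dx/dt = -(1/tau + f) * x + f * A.
        f = gate(x, I)
        return x + dt * (-(1.0 / tau + f) * x + f * A)

    # Drive the neuron with a time-varying input and watch the state adapt.
    x = 0.0
    for t in np.arange(0.0, 5.0, 0.01):
        x = liquid_step(x, I=np.sin(np.pi * t))  # a toy time series input
    print(f"state after 5 time units: {x:.4f}")

Because the input enters the decay term itself, the neuron's effective time constant keeps changing as new data arrive, which is the "liquid" behavior described above.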
This flexibility is key. Most neural networks' behavior is
fixed after the training phase, which means they're bad at
adjusting to changes in the incoming data stream. Hasani says the
fluidity of his "liquid" network makes it more resilient to
unexpected or noisy data, such as when heavy rain obscures the view
of a camera on a self-driving car. "So, it's more robust," he
says.
There's another advantage of the network's flexibility, he
adds: "It's more interpretable."
Hasani says his liquid network skirts the inscrutability common
to other neural networks. "Just changing the representation of a
neuron," which Hasani did with the differential equations, "you
can really explore some degrees of complexity you couldn't
explore otherwise." Because the network uses a small number of
highly expressive neurons, it's easier to peer into the "black
box" of its decision making and diagnose why it made a certain
characterization.
"The model itself is richer in terms of expressivity," says
Hasani. That could help engineers understand and improve the liquid
network's performance.
Hasani's network excelled in a battery of tests. It edged out
other state-of-the-art time series algorithms by a few percentage
points in accurately predicting future values in datasets ranging
from atmospheric chemistry to traffic patterns. "In many
applications, we see the performance is reliably high," he says.
Plus, the network's small size meant it completed the tests
without a steep computing cost. "Everyone talks about scaling up
their network," says Hasani. "We want to scale down, to have
fewer but richer nodes."
Hasani plans to keep improving the system and ready it for
industrial application. "We have a provably more expressive
neural network that is inspired by nature. But this is just the
beginning of the process," he says. "The obvious question is
how do you extend this? We think this kind of network could be a
key element of future intelligence systems."
Originally published by MIT News. Daniel Ackerman, MIT News Office | January 28, 2021
This research was funded, in part, by Boeing, the National
Science Foundation, the Austrian Science Fund, and Electronic
Components and Systems for European Leadership.