Teaching artificial intelligence to adapt

Salk’s simulated system could help develop better artificial intelligence, treatments for brain disorders

From left: Terrence Sejnowski, Kay Tye and Ben Tsuda. Credit: Salk Institute

LA JOLLA—Getting computers to “think” like humans is the
holy grail of artificial intelligence, but human brains turn out to
be tough acts to follow. The human brain is a master of applying
previously learned knowledge to new situations and constantly
refining what’s been learned. This ability to be adaptive has
been hard to replicate in machines.

Now, Salk researchers have used a computational model of brain
activity to simulate this process more accurately than ever before.
The new model mimics how the brain’s prefrontal cortex uses a
phenomenon known as “gating” to control the flow of information
between different areas of neurons. It not only sheds light on the
human brain, but could also inform the design of new artificial
intelligence programs.

“If we can scale this model up to be used in more complex
artificial intelligence systems, it might allow these systems to
learn things faster or find new solutions to problems,”
says Terrence Sejnowski, head of Salk’s Computational Neurobiology
Laboratory and senior author of the new work, published on November
24, 2020, in Proceedings of the National Academy of Sciences.

The brains of humans and other mammals are known for their
ability to quickly process stimuli—sights and sounds, for
instance—and integrate any new information into things the brain
already knows. This flexibility to apply knowledge to new
situations and continuously learn over a lifetime has long been a
goal of researchers designing machine learning programs or
artificial brains. Historically, when a machine is taught to do one
task, it’s difficult for the machine to learn how to adapt that
knowledge to a similar task; instead, each related process has to be
taught individually.

In the current study, Sejnowski’s group designed a new
computational modeling framework to replicate how neurons in the
prefrontal cortex—the brain area responsible for decision-making
and working memory—behave during a cognitive test known as the
Wisconsin Card Sorting Test. In this task, participants have to
sort cards by color, symbol or number—and constantly adapt their
answers as the card-sorting rule changes. This test is used
clinically to diagnose dementia and psychiatric illnesses but is
also used by artificial intelligence researchers to gauge how well
their computational models of the brain can replicate human
behavior.
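
To make the task concrete, here is a minimal Python sketch of the card-sorting logic described above. It is an illustration only, not the Salk team’s published model: the class name CardSortingTask, the feature set, and the rule-switching scheme are assumptions chosen for brevity, and a real test administration matches cards against reference piles rather than naming a feature directly.

    import random

    FEATURES = ("color", "shape", "number")

    class CardSortingTask:
        """Toy WCST-style environment (hypothetical). The true sorting
        rule is hidden; after `switch_after` consecutive correct sorts
        it silently changes, so the sorter must detect the switch from
        feedback alone and adapt."""

        def __init__(self, switch_after=10, seed=0):
            self.rng = random.Random(seed)
            self.switch_after = switch_after
            self.rule = self.rng.choice(FEATURES)
            self.streak = 0

        def step(self, chosen_feature):
            """Score one sort; return True on a correct match."""
            correct = chosen_feature == self.rule
            self.streak = self.streak + 1 if correct else 0
            if self.streak >= self.switch_after:
                # Rule change: pick a different hidden feature.
                self.rule = self.rng.choice(
                    [f for f in FEATURES if f != self.rule])
                self.streak = 0
            return correct

    # A naive win-stay/lose-shift sorter: keep a feature until it fails,
    # then jump to another one at random.
    task = CardSortingTask()
    guess, errors = "color", 0
    for _ in range(200):
        if not task.step(guess):
            errors += 1
            guess = task.rng.choice([f for f in FEATURES if f != guess])
    print(f"errors over 200 trials: {errors}")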

Previous models of the prefrontal cortex performed poorly on
this task. The Sejnowski team’s framework, however, integrated
how neurons control the flow of information throughout the entire
prefrontal cortex via gating, delegating different pieces of
information to different subregions of the network. Gating was
thought to be important at a small scale—in controlling the flow
of information within small clusters of similar cells—but the
idea had never been integrated into a model spanning the whole
network.
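
The paper’s actual architecture is not reproduced here, but the general idea of network-wide gating can be sketched as a router that sends each input to one of several specialized subnetworks, in the spirit of a mixture of experts. The NumPy toy below is an assumption-laden illustration: GatedNetwork and its fixed random weights are invented for this example, and learning is omitted entirely.

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    class GatedNetwork:
        """Illustrative gated routing (hypothetical): a small gating
        layer assigns each input to one of several 'subregion' networks,
        so different pieces of information are handled by different
        parts of the model."""

        def __init__(self, n_in, n_hidden, n_out, n_subregions=3):
            self.gate_w = rng.normal(0, 0.1, (n_subregions, n_in))
            self.subregions = [
                (rng.normal(0, 0.1, (n_hidden, n_in)),   # input -> hidden
                 rng.normal(0, 0.1, (n_out, n_hidden)))  # hidden -> output
                for _ in range(n_subregions)
            ]

        def forward(self, x):
            gate = softmax(self.gate_w @ x)   # engagement of each subregion
            k = int(np.argmax(gate))          # hard gating: route to one
            w1, w2 = self.subregions[k]
            h = np.tanh(w1 @ x)
            return w2 @ h, k

    net = GatedNetwork(n_in=8, n_hidden=16, n_out=3)
    y, chosen = net.forward(rng.normal(size=8))
    print(f"output {np.round(y, 3)} produced by subregion {chosen}")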

The new network not only performed as reliably as humans on the
Wisconsin Card Sorting Test, but also mimicked the mistakes seen in
some patients. When sections of the model were removed, the system
showed the same errors seen in patients with prefrontal cortex
damage, such as that caused by trauma or dementia.
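
The lesion experiments can be mimicked in the same toy. Continuing the hypothetical GatedNetwork sketch above, "damaging" a subregion by zeroing its weights shows how removing part of a gated model selectively corrupts whatever is routed through it; the lesion helper below is, again, an illustration rather than the study’s actual procedure.

    def lesion(net, k):
        # Crudely "damage" subregion k by zeroing its weights, loosely
        # analogous to removing a section of the model (hypothetical).
        w1, w2 = net.subregions[k]
        net.subregions[k] = (np.zeros_like(w1), np.zeros_like(w2))

    lesion(net, chosen)
    y_damaged, k = net.forward(rng.normal(size=8))
    # If the gate still routes to the damaged subregion, the output
    # collapses to zero: an extreme stand-in for the errors seen after
    # prefrontal injury.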

“I think one of the most exciting parts of this is that, using
this sort of modeling framework, we’re getting a better idea of
how the brain is organized,” says Ben Tsuda, a Salk graduate
student and first author of the new paper. “That has implications
for both machine learning and gaining a better understanding of
some of these diseases that affect the prefrontal cortex.”

If researchers have a better understanding of how regions of the
prefrontal cortex work together, he adds, that will help guide
interventions to treat brain injury. It could suggest areas to
target with deep brain stimulation, for instance.

“When you think about the ways in which the brain still
surpasses state-of-the-art deep learning networks, one of those
ways is versatility and generalizability across tasks with
different rules,” says study coauthor Kay Tye, a professor
in Salk’s Systems Neurobiology Laboratory and the Wylie Vale
Chair. “In this new work, we show how gating of information can
power our new and improved model of the prefrontal cortex.”

The team next wants to scale up the network to perform more
complex tasks than the card-sorting test and determine whether the
network-wide gating gives the artificial prefrontal cortex a better
working memory in all situations. If the new approach works under
broad learning scenarios, they suspect that it will lead to
improved artificial intelligence systems that are more adaptable to
new situations.

Originally published by Salk News | December 16, 2020

Hava Siegelmann of the University of Massachusetts Amherst was
an additional author of the study. The work was supported by grants
from the Kavli Institute for Brain and Mind at UC San Diego, the
Office of Naval Research (N000141612829), the National Science
Foundation (1735004) and DARPA (W911NF1820).