Crossing the artificial intelligence thin red line?

Artificial intelligence shapes our modern lives. It will be one
of the defining technologies of the future, with its influence and
application expected to accelerate through the 2020s. Yet the
stakes are high: alongside the countless benefits that AI brings,
there is growing academic and public concern about its lack of
transparency and its potential for misuse in many areas of life.

It’s in this environment that the European Commission has
become one of the first political institutions in the world to
release a white paper that could be a game-changer towards a
regulatory framework for AI. In addition, this year the European
Parliament adopted proposals on how the EU can best regulate
artificial intelligence to boost innovation, ethical standards and
trust in technology.

Recently, an all-virtual conference on the ‘Governance Of and By
Digital Technology’, hosted by EPFL’s International Risk
Governance Center (IRGC) and the European Union’s Horizon 2020
TRIGGER Project, explored the principles needed to govern
existing and emerging digital technologies, as well as the
potential danger of decision-making algorithms and how to prevent
these from causing harm.

Stuart Russell, Professor of Computer Science at the University
of California, Berkeley, and author of the popular textbook
Artificial Intelligence: A Modern Approach, argued that AI has
huge upside potential, but that we are already seeing risks from
poorly designed AI systems, including the impacts of online
misinformation, impersonation and deception.

“I believe that if we don’t move quickly, human beings will
just be losing their rights, their powers, their individuality and
becoming more and more the subject of digital technology rather
than the owners of it. For example, there is already AI from 50
different corporate representatives sitting in your pocket stealing
your information, and your money, as fast as it can, and there’s
nobody in your phone who actually works for you. Could we rearrange
that so that the software in your phone actually works for you and
negotiates with these other entities to keep all of your data
private?” he asked.

Reinforcement learning algorithms, which select the content
people see on their phones or other devices, are a major problem,
he continued: “They currently have more power than Hitler or Stalin
ever had in their wildest dreams over what billions of people see
and read for most of their waking lives. We might argue that
running these kinds of experiments without informed consent is a
bad idea and, just as we have with pharmaceutical products, we need
to have stage 1, 2, and 3 trials on human subjects and look at what
effect these algorithms have on people’s minds and behavior.”
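
To make the objection concrete, here is a minimal sketch of the kind of engagement-maximizing loop Russell describes. It is not any platform’s actual system: the item names, click probabilities, and epsilon-greedy strategy are all assumptions chosen for illustration. What it shows is that an algorithm rewarded only for clicks converges on whatever content is most clickable, with no term anywhere for the user’s wellbeing or consent.

```python
import random

# Hypothetical content items; a real system ranks millions of candidates.
ITEMS = ["calm_news", "outrage_post", "cat_video"]

class EpsilonGreedyRecommender:
    """Toy epsilon-greedy bandit whose sole objective is engagement."""

    def __init__(self, items, epsilon=0.1):
        self.items = items
        self.epsilon = epsilon
        self.clicks = {item: 0 for item in items}  # observed clicks
        self.shows = {item: 0 for item in items}   # impressions served

    def click_rate(self, item):
        return self.clicks[item] / self.shows[item] if self.shows[item] else 0.0

    def select(self):
        # Explore occasionally; otherwise exploit the highest click rate.
        if random.random() < self.epsilon:
            return random.choice(self.items)
        return max(self.items, key=self.click_rate)

    def update(self, item, clicked):
        self.shows[item] += 1
        self.clicks[item] += int(clicked)

rec = EpsilonGreedyRecommender(ITEMS)
for _ in range(1000):
    item = rec.select()
    # Assumed click propensities: provocative content draws more clicks.
    clicked = random.random() < (0.6 if item == "outrage_post" else 0.2)
    rec.update(item, clicked)

# The loop steers impressions toward the most clickable content.
print(max(rec.shows, key=rec.shows.get))  # typically: outrage_post
```

Nothing in this loop measures the effect on the user, which is exactly the gap Russell’s call for staged trials is meant to close.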

Beyond regulating artificial intelligence aimed at individual
use, one of the conference debates focused on how governments might
use AI in developing and implementing public policy in areas such
as healthcare, urban development or education. Bryan Ford, an
Associate Professor at EPFL and head of the Decentralized and
Distributed Systems Laboratory (DEDIS) in the School of Computer
and Communication Sciences, argued that while powerful AI
technologies, used cautiously, can play many useful roles in
low-level mechanisms across many application domains, they have no
legitimate role to play in defining, implementing, or enforcing
public policy.

“Matters of policy in governing humans must remain a domain
reserved strictly for humans. For example, AI may have many
justifiable uses in electric sensors to detect the presence of a
car, how fast it is going, or whether it stopped at an
intersection, but I would claim AI does not belong anywhere near
the policy decision of whether a car’s driver warrants suspicion
and should be stopped by the Highway Patrol.”

“Because machine learning algorithms learn from data sets that
represent historical experience, AI-driven policy is fundamentally
constrained by the assumption that our past represents the right,
best, or only viable basis on which to make decisions about the
future. Yet we know that all past and present societies are highly
imperfect, so to have any hope of genuinely improving our
societies, governance must be visionary and forward-looking,”
Professor Ford continued.
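
Ford’s constraint is easy to illustrate. The sketch below uses an invented dataset (neighborhoods “A” and “B” and their stop records are pure assumptions, not real data) and reduces “training” to estimating a conditional frequency; however the estimator is dressed up, the learned policy can only echo whatever skew the historical decisions contained.

```python
# Toy illustration: a model fitted to historical decisions reproduces
# the policy embedded in its training data. All numbers are fabricated.

# Historical records: (neighborhood, was_stopped). Suppose past practice
# stopped drivers from neighborhood "A" far more often than from "B".
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 10 + [("B", 0)] * 90)

def learned_stop_rate(records, neighborhood):
    """'Training' reduced to its essence: estimate P(stop | neighborhood)
    from historical frequency."""
    outcomes = [stopped for (n, stopped) in records if n == neighborhood]
    return sum(outcomes) / len(outcomes)

# The learned policy simply replays the historical skew.
print(learned_stop_rate(history, "A"))  # 0.8
print(learned_stop_rate(history, "B"))  # 0.1
```

Nothing in this procedure can decide whether the historical stop rates were just; that judgment, on Ford’s argument, must remain with humans.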

Artificial intelligence is heterogeneous and complex. When we
talk about the governance of, and by, AI, are we talking about
machine learning, neural networks or autonomous agents, or the
different applications of any of these in different areas? Likely
all of the above, in many different applications. We are only at
the beginning of the journey when it comes to regulating artificial
intelligence, a journey that most participants agreed has
geopolitical implications.

“These issues may lead directly to a set of trade and
geostrategic conflicts that will make them all the more difficult
to resolve and all the more crucial. The question is not only to
avoid them but to avoid the decoupling of the US from Europe, and
Europe and the US from China, and that is going to be a significant
challenge economically and geo-strategically,” suggested John
Zysman, Professor of Political Science at the University of
California, Berkeley and co-Director of the Berkeley Roundtable on
the International Economy.

“Ultimately, there is a thin red line that AI should not cross,
and some regulation that balances the benefits and risks of AI
applications is needed. The IRGC is looking at some of the most
challenging problems facing society today, and it’s great to have
them as part of IC,” said James Larus, Dean of the IC School and
IRGC Academic Director.

Concluding the conference, Marie-Valentine Florin, Executive
Director of the IRGC, reminded participants that artificial
intelligence is a means to an end, not the end itself: “As societies
we need a goal. Maybe that could be something like the Green Deal
around sustainability, to perhaps give a sense to today’s digital
transformation. Digital transformation is the tool, and I don’t
think society has collectively decided a real objective for it
yet. That’s what we need to figure out.”

Originally published by Tanya Peterson | December 22, 2020 | EPFL News