Robot cleaners win MIT Sloan’s inaugural AI competition

The robots can help to safely clean high-traffic areas. Image: Adobe

The robots perform the task of spraying disinfectant at
consistent rates and speeds, enabling human cleaners to focus on
manually scrubbing biofilms and residues.

A developer of autonomous ground robots that safely disinfect
high-traffic spaces has been announced as the winner of MIT's
inaugural Collaborative Intelligence Competition. motorCortex.ai
beat out seven other finalists to win the $50,000 prize.

Launched by MIT last fall, the competition forms part of a
larger educational programme conceived by MIT's School of
Engineering dean Anantha Chandrakasan. The initiative aims to
advance the development of artificial intelligence (AI) to
complement, collaborate with, and augment humans rather than
replace them.

Robot cleaners

According to MIT, as Covid-19 necessitates the frequent
disinfection of public spaces, motorCortex.ai's offering
addresses companies' multiple cleaning challenges. Manual
application of sprayed disinfectants requires careful attention to
ensure the correct "soak time" to neutralise pathogens,
and disinfectant toxicity raises safety concerns for custodians
due to frequent exposure.

"AI and computational technologies will be a transformative
force in our society for the next decade"

Additionally, companies are struggling to find and train
sufficient staff to perform this critical function due to the
increased frequency of sanitisation. The autonomous disinfection
robots work collaboratively with cleaning staff to address these
challenges.

The robots perform the task of spraying disinfectant at
consistent rates and speeds, enabling human cleaners to focus on
manually scrubbing biofilms and residues. They also disinfect
surfaces with UV-C light and are equipped with sensors that identify
humans in their proximity while maintaining a safe distance for
application.
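
As a rough illustration of how such a proximity safeguard might work, the sketch below pauses spraying whenever a detected person is too close. The 2.5-metre threshold and the list-of-distances sensor output are assumptions for illustration, not motorCortex.ai's actual design.

    # Hedged sketch of a human-proximity safety check for a disinfection robot.
    # The threshold and the sensor interface are assumed, not the vendor's design.
    SAFE_DISTANCE_M = 2.5

    def spray_allowed(detected_human_distances_m):
        """Allow spraying only if every detected person is beyond the threshold."""
        return all(d >= SAFE_DISTANCE_M for d in detected_human_distances_m)

    print(spray_allowed([4.2, 6.0]))   # True: no one nearby, keep spraying
    print(spray_allowed([4.2, 1.8]))   # False: pause until the person moves away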


Read more

Originally published by SmartCitiesWorld news team | June 2, 2020 | Smart Cities World

These flexible feet help robots walk faster

An off-the-shelf six-legged robot equipped with the feet
designed by UC San Diego engineers can walk up to 40 percent faster
than when not equipped with the feet. Credit: University of
California San Diego

Roboticists at the University of California San Diego have
developed flexible feet that can help robots walk up to 40 percent
faster on uneven terrain such as pebbles and wood chips. The work
has applications for search-and-rescue missions as well as space
exploration.

"Robots need to be able to walk fast and efficiently on natural,
uneven terrain so they can go everywhere humans can go, but maybe
shouldn't," said Emily Lathrop, the paper's first author and a
Ph.D. student at the Jacobs School of Engineering at UC San
Diego.

The researchers will present their findings at the RoboSoft
conference, which takes place virtually from May 15 to July 15, 2020.

"Usually, robots are only able to control motion at specific
joints," said Michael T. Tolley, a professor in the Department of
Mechanical and Aerospace Engineering at UC San Diego and senior
author of the paper. "In this work, we showed that a robot that can
control the stiffness, and hence the shape, of its feet outperforms
traditional designs and is able to adapt to a wide variety of
terrains."

The feet are flexible spheres made from a latex membrane filled
with coffee grounds. Structures inspired by nature, such as plant
roots, and by man-made solutions, such as piles driven into the
ground to stabilize slopes, are embedded in the coffee grounds.

The feet allow robots to walk faster and grip better because of
a mechanism called granular jamming that allows granular media, in
this case the coffee grounds, to go back and forth between behaving
like a solid and behaving like a liquid. When the feet hit the
ground, they firm up, conforming to the ground underneath and
providing solid footing. They then unjam and loosen up when
transitioning between steps. The support structures help the
flexible feet remain stiff while jammed.
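
To make the jam/unjam cycle concrete, here is a toy model of how foot stiffness could switch over a gait cycle. The stiffness values and the 60 percent stance fraction are made up for illustration and are not measurements from the UC San Diego feet.

    # Toy model of granular jamming across a gait cycle: the foot is stiff
    # (jammed) while it bears load during stance and compliant (unjammed)
    # during swing. All numbers are illustrative only.
    STIFFNESS_JAMMED = 50.0    # arbitrary units
    STIFFNESS_UNJAMMED = 2.0

    def foot_stiffness(gait_phase):
        """gait_phase in [0, 1): assume the first 60% is stance, the rest swing."""
        return STIFFNESS_JAMMED if gait_phase < 0.6 else STIFFNESS_UNJAMMED

    for phase in (0.1, 0.4, 0.7, 0.9):
        print(f"phase {phase:.1f}: stiffness {foot_stiffness(phase)}")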

It’s the first time that such feet have been tested on uneven
terrain, like gravel and wood chips.


Read more

Originally published by University of California – San Diego | June 1, 2020 | TechXplore

Machine learning helps map global ocean communities

Image: Courtesy of the researchers, edited by MIT News.

A machine-learning technique developed at MIT combs through
global ocean data to find commonalities between marine locations,
based on interactions between phytoplankton species. Using this
approach, researchers have determined that the ocean can be split
into over 100 types of "provinces" and 12 "megaprovinces"
that are distinct in their ecological makeup.

An MIT-developed technique could aid in tracking the ocean’s
health and productivity.

On land, it’s fairly obvious where one ecological region ends
and another begins, for instance at the boundary between a desert
and savanna. In the ocean, much of life is microscopic and far more
mobile, making it challenging for scientists to map the boundaries
between ecologically distinct marine regions.

One way scientists delineate marine communities is through
satellite images of chlorophyll, the green pigment produced by
phytoplankton. Chlorophyll concentrations can indicate how rich or
productive the underlying ecosystem might be in one region versus
another. But chlorophyll maps can only give an idea of the total
amount of life that might be present in a given region. Two regions
with the same concentration of chlorophyll may in fact host very
different combinations of plant and animal life.

“It’s like if you were to look at all the regions on land
that don’t have a lot of biomass, that would include Antarctica
and the Sahara, even though they have completely different
ecological assemblages,” says Maike Sonnewald, a former postdoc
in MIT’s Department of Earth, Atmospheric and Planetary
Sciences.

Now Sonnewald and her colleagues at MIT have developed an
unsupervised machine-learning technique that automatically combs
through a highly complicated set of global ocean data to find
commonalities between marine locations, based on the ratios of and
interactions between multiple phytoplankton species. With their
technique, the researchers found that the ocean can be split into
over 100 types of “provinces” that are distinct in their
ecological makeup. Any given location in the ocean would
conceivably fit into one of these 100 ecological provinces.

The researchers then looked for similarities between these 100
provinces, ultimately grouping them into 12 more general
categories. From these "megaprovinces," they were able to
see that, while some had the same total amount of life within a
region, they had very different community structures, or balances
of animal and plant species. Sonnewald says capturing these
ecological subtleties is essential to tracking the ocean’s
health and productivity.
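
For readers who want a feel for the general approach, the sketch below clusters synthetic phytoplankton-composition data into provinces and then groups those provinces into a dozen broader categories. It uses off-the-shelf k-means on random placeholder data, not the MIT team's actual method or dataset.

    # Minimal sketch of the idea: cluster ocean locations by phytoplankton
    # community composition, then group the resulting provinces into a handful
    # of "megaprovinces". Synthetic data; not the study's actual workflow.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # rows = ocean locations, columns = relative abundance of phytoplankton types
    composition = rng.dirichlet(np.ones(6), size=5000)

    provinces = KMeans(n_clusters=100, n_init=4, random_state=0).fit(composition)
    megaprovinces = KMeans(n_clusters=12, n_init=10, random_state=0).fit(
        provinces.cluster_centers_)            # group the 100 province centres

    province_of_location = provinces.labels_                 # 0..99 per location
    mega_of_location = megaprovinces.labels_[province_of_location]
    print(np.bincount(mega_of_location))       # how many locations per megaprovince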

"Ecosystems are changing with climate change, and the
community structure needs to be monitored to understand knock-on
effects on fisheries and the ocean's capacity to draw down
carbon dioxide," Sonnewald says. "We can't fully
understand these vital dynamics with conventional methods that to
date don't include the ecology that's there. But our
method, combined with satellite data and other tools, could offer
important progress."

Read more

Originally posted by Jennifer Chu | MIT News Office | May 29, 2020 | MIT News

Hundreds of AI solutions proposed for pandemic, but few are proven

Image: Getty Images

Companies are employing AI to respond to the Covid-19 pandemic
in a number of different ways, including diagnosing Covid-19 cases,
identifying which patients would be at highest risk and discovering
potential treatments. But not all of these approaches have been
validated, experts warned.

In a rush to find solutions for the Covid-19 pandemic,
researchers are deploying machine learning algorithms to trawl
through data that might give us more clues about the virus. Some
claim to have identified potential treatments based on the data,
while others are using it to screen patients or identify those at
highest risk.

But, like their vaccine and drug counterparts, many of these
algorithms are still unproven. With hundreds of research articles
describing the use of artificial intelligence or machine learning
— many of them preprints — it can be difficult to sort
out which ones are most effective.

"I've heard a lot of hype about machine learning being
applied to battling Covid-19, but I haven't seen very many
concrete examples where you could imagine in the short- or
medium-term something that is going to have a substantial
effect," said John Quackenbush, chair of the Department of
Biostatistics at the Harvard T.H. Chan School of Public Health, in
a phone interview.

Any good model requires good data, and that can be a challenge
to find in healthcare. Because Covid-19 is a new disease, the
amount of information available to researchers is limited. On top
of that, most clinical data is locked up in health record systems,
which often record it in different ways.

"Everything that we're doing gets better with a lot
more well-annotated datasets," said Dr. Eric Topol, director
of the Scripps Research Translational Institute, who published a
book on AI in healthcare. "In the U.S., we don't have
centralized data. Here we are at the epicenter and all of our
healthcare data is fragmented."

On the other hand, as datasets get larger, they become
"noisier." For example, a model that screens Covid-19
patients for temperature might be reasonably effective. But
expanded to the general population, "it's a terrible
predictor," Quackenbush said.
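
Quackenbush's point is essentially about base rates: the same screen can look useful in a high-prevalence clinical population and useless in the general population. A quick back-of-the-envelope calculation with hypothetical sensitivity, specificity and prevalence values makes this concrete.

    # Illustration of why a screen that works in a clinic can fail in the
    # general population. All numbers are hypothetical, not real Covid-19 data.
    def positive_predictive_value(sensitivity, specificity, prevalence):
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Among symptomatic hospital patients (assume 30% actually have Covid-19):
    print(positive_predictive_value(0.80, 0.90, 0.30))   # ~0.77

    # Among the general population (assume 0.5% prevalence):
    print(positive_predictive_value(0.80, 0.90, 0.005))  # ~0.04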

Still, both were cautiously optimistic about using AI in some
settings, such as determining which patients face a higher risk
from Covid-19, opening an opportunity for communication with their
physician.

Searching for a treatment

In early April, drugmaker Eli Lilly announced it would launch a
trial of its existing rheumatoid arthritis treatment, baricitinib,
in severely ill Covid-19 patients.

The drug was identified by a British startup, BenevolentAI,
which used natural language processing to skim through millions of
papers and create a database of biological processes related to
the novel coronavirus. From there, they identified baricitinib as
a potential treatment because of two key characteristics: its
anti-inflammatory properties might help temper the body's
hyperactive immune response to the virus, and it seemed like the
drug might be able to prevent viral infection.
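
The sketch below shows the flavour of such literature mining with a trivially small example: counting co-mentions of candidate drugs and virus-related protein targets in paper abstracts. The abstracts and term lists are invented placeholders, and this is not BenevolentAI's actual pipeline.

    # Toy literature-mining sketch: count drug/target co-mentions in abstracts.
    # Abstracts and term lists are placeholders for illustration.
    import re
    from collections import Counter

    drugs = ["baricitinib", "ruxolitinib", "fedratinib"]
    targets = ["aak1", "gak", "jak1", "jak2"]

    abstracts = [
        "Baricitinib inhibits AAK1 and GAK, kinases that regulate endocytosis.",
        "JAK1/JAK2 inhibition by baricitinib dampens inflammatory signalling.",
    ]

    links = Counter()
    for text in abstracts:
        words = set(re.findall(r"[a-z0-9]+", text.lower()))
        for drug in drugs:
            if drug in words:
                for target in targets:
                    if target in words:
                        links[(drug, target)] += 1   # one co-mention found

    print(links.most_common())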

As a counterpoint, a group of rheumatologists who had treated
patients in Lombardy, Italy, cautioned about potential adverse
effects from the drug. Its FDA black box warning indicates that
patients taking the drug may face an increased risk of developing
serious infections.


Read more

Originally published by Elise Reuter | May 28, 2020 | MedCityNews

Eliminating confusion between AI and ML; AI doesn't exist without its subsets

Artificial Intelligence and Machine Learning are used
interchangeably as terms across all segments of technological
applications. Because the two are so closely related, AI is often
confused with ML, but the distinction should not be forgotten.
Of all the differences, one is surely the biggest: machine
learning is a subset of AI. Technology professionals must
understand this distinction; without clarity on AI versus ML,
professionals as well as their companies may be misled and
eventually lose their relevance in the market to fake or
misleading AI solutions.

According to award-winning writer Stephanie Overby, the most
significant misunderstanding is how AI relates to ML. Whereas
Artificial Intelligence is the umbrella term used to shelter many
technologies, ML is one of its subsets. "AI is the broad container
term describing the various tools and algorithms that enable
machines to replicate human behavior and intelligence," explains
JP Baritugo, director at management and IT consultancy Pace
Harmon. There are numerous subsets of AI, including machine
learning, natural language processing (NLP), deep learning,
computer vision, and more.

For those who prefer analogies, Timothy Havens, the William and
Gloria Jackson Associate Professor of Computer Systems in the
College of Computing at Michigan Technological University and
director of the Institute of Computing and Cybersystems, likens
the way AI works to learning to ride a bike: "You don't tell a
child to move their left foot in a circle on the left pedal in the
forward direction while moving your right foot in a circle… You
give them a push and tell them to keep the bike upright and
pointed forward: the overall objective. They fall a few times,
honing their skills each time they fail," Havens says. "That's
Artificial Intelligence in a nutshell."

Machine learning is one way to accomplish that. The technology
uses statistical analysis to learn autonomously and improve its
function, explains Sarah Burnett, executive vice president and
distinguished analyst at management consultancy and research firm
Everest Group.
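
A minimal sketch, assuming scikit-learn is available, of what "learning statistically from examples rather than from hand-written rules" looks like in practice; the toy data and task are invented for illustration.

    # Minimal machine-learning example: the program is given labelled examples,
    # not explicit rules, and infers a decision boundary statistically.
    from sklearn.linear_model import LogisticRegression

    # Toy data: points labelled 1 when the second coordinate exceeds the first.
    X = [[0, 1], [1, 3], [2, 0], [3, 1], [1, 5], [4, 2]]
    y = [1, 1, 0, 0, 1, 0]

    model = LogisticRegression().fit(X, y)   # fit the rule from the examples
    print(model.predict([[2, 4], [3, 0]]))   # apply it to unseen points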


Read more

Originally published by Smriti Srivastava | May 28, 2020 | Analytics Insight

Researchers incorporate computer vision and uncertainty into AI for robotic prosthetics

Imaging devices and environmental context. (a) On-glasses camera
configuration using a Tobii Pro Glasses 2 eye tracker. (b) Lower
limb data acquisition device with a camera and an IMU chip. (c) and
(d) Example frames from the cameras for the two data acquisition
configurations. (e) and (f) Example images of the data collection
environment and terrains considered in the experiments. Credit:
Edgar Lobaton

Researchers have developed new software that can be integrated
with existing hardware to enable people using robotic prosthetics
or exoskeletons to walk in a safer, more natural manner on
different types of terrain. The new framework incorporates computer
vision into prosthetic leg control, and includes robust artificial
intelligence (AI) algorithms that allow the software to better
account for uncertainty.

"Lower-limb robotic prosthetics need to execute different
behaviors based on the terrain users are walking on," says Edgar
Lobaton, co-author of a paper on the work and an associate
professor of electrical and computer engineering at North Carolina
State University. "The framework we've created allows the AI in
robotic prostheses to predict the type of terrain users will be
stepping on, quantify the uncertainties associated with that
prediction, and then incorporate that uncertainty into its
decision-making."

The researchers focused on distinguishing between six different
terrains that require adjustments in a robotic prosthetic's
behavior: tile, brick, concrete, grass, "upstairs" and
"downstairs."

"If the degree of uncertainty is too high, the AI isn't forced
to make a questionable decision—it could instead notify the user
that it doesn't have enough confidence in its prediction to act, or
it could default to a 'safe' mode," says Boxuan Zhong, lead author
of the paper and a recent Ph.D. graduate from NC State.
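
A minimal sketch of that decision rule, assuming the classifier outputs one probability per terrain; the 0.75 confidence threshold and the probability vectors are illustrative values, not parameters from the NC State system.

    # Uncertainty-aware terrain selection: act only when the prediction is
    # confident enough, otherwise fall back to a safe mode.
    TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]
    CONFIDENCE_THRESHOLD = 0.75   # assumed cut-off for acting on a prediction

    def choose_behavior(class_probs):
        best = max(range(len(class_probs)), key=lambda i: class_probs[i])
        if class_probs[best] < CONFIDENCE_THRESHOLD:
            return "safe_mode"        # too uncertain to force a decision
        return TERRAINS[best]

    print(choose_behavior([0.05, 0.04, 0.82, 0.05, 0.02, 0.02]))  # concrete
    print(choose_behavior([0.30, 0.25, 0.20, 0.15, 0.05, 0.05]))  # safe_mode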

The new "environmental context" framework incorporates both
hardware and software elements. The researchers designed the
framework for use with any lower-limb robotic exoskeleton or
robotic prosthetic device, but with one additional piece of
hardware: a camera. In their study, the researchers used cameras
worn on eyeglasses and a camera mounted on the lower-limb
prosthesis itself. The researchers evaluated how the AI was able
to make use of computer vision data from both types of camera,
separately and when used together.


Read more

Originally published by Matt Shipman, North Carolina State University | May 27, 2020 | TechXplore

Artificial intelligence for optimized mobile communication

AI will serve to develop a network control system that not only
detects and reacts to problems but can also predict and avoid them.
Credit: CC0 Public Domain

While many European states are currently setting up the 5th
generation of mobile communication, scientists are already working
on its optimization. Although 5G is far superior to its
predecessors, even the latest mobile communication standard still
has room for improvement: especially in urban areas, where a
direct line of sight between transmitter and receiver is difficult
to achieve, the radio link does not yet function reliably. Within
the recently launched EU project ARIADNE, eleven European partners
are researching how an advanced system architecture "beyond 5G"
can be developed by using high frequency bands and artificial
intelligence.

A major advantage of 5G is its high frequencies and consequently
its high transmission rate, which ensures an almost latency-free
connection and fast data transfer. However, high frequencies
require a directed system, which in most cases relies on a line of
sight (LOS). This means that transmitter and receiver must be able
to see each other. Unfortunately, the LOS principle can lead to
connection problems, especially in urban and heavily developed
areas.

One of the issues responsible for these connection problems in
local 5G networks is the cancelling effect. This effect occurs when
a signal is transmitted over a LOS connection and simultaneously
copied via reflections. The copy overrides the signal from the LOS
and cancels it. The result: the signal does not reach the receiver.
This multipath propagation via non-line of sight (NLOS) remains a
problem for 5G, as it did with its predecessor 4G. For this reason,
one of the main aims of ARIADNE is the development of new concepts
for better control of LOS and NLOS scenarios to massively improve
the reliability of mobile communication links.
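
The cancelling effect can be illustrated numerically: if a reflected copy of the carrier arrives half a wavelength later than the direct signal, the two sum to nearly zero at the receiver. The carrier frequency and path lengths below are chosen only for illustration and are not ARIADNE system parameters.

    # Toy demonstration of destructive multipath interference at D-band.
    import numpy as np

    f = 140e9                        # carrier frequency, 140 GHz (D-band)
    c = 3e8                          # speed of light, m/s
    wavelength = c / f

    d_los = 100.0                    # direct (line-of-sight) path, metres
    d_nlos = d_los + wavelength / 2  # reflection arrives half a wavelength late

    t = np.linspace(0, 4 / f, 1000)
    direct = np.cos(2 * np.pi * f * (t - d_los / c))
    reflected = np.cos(2 * np.pi * f * (t - d_nlos / c))

    print(np.max(np.abs(direct + reflected)))   # close to zero: the copy cancels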

Higher efficiency and reliability of 5G

The EU project, with the full title "Artificial Intelligence
Aided D-band Network for 5G Long Term Evolution," brings together
partners from research and industry from five countries. The aim is
to develop energy-efficient and reliable mobile communication links
based on frequencies in the D-band (130–174.8 GHz). With its
aggregated bandwidth of more than 30 GHz, the D-band is perfectly
suited for fast data transmission. However, this newly used band is
divided into several sub-bands and requires an adaptation of the
previously used system architecture and corresponding network
control.

ARIADNE aims to create an intelligent communication system
"beyond 5G" by combining an innovative high-frequency radio
architecture and a new network processing concept based on
artificial intelligence. By 2022, the project consortium plans to
realize and demonstrate a radio link with extremely high data rates
in the 100 Gbit/s range at almost zero latency. The European Union
supports the project as part of the Horizon 2020 program. ARIADNE
focuses on three major research areas: the development of hardware
components, the research of metasurfaces and the adaptation of the
network control based on artificial intelligence or machine
learning.


Read more

Originally published by Fraunhofer Institute for Applied Solid State Physics | May 25, 2020 | TechXplore

Crowd prevention system uses smart pole to create safer beaches

Omniflow uses smart pole and AI technology to try to prevent
overcrowded beaches

Omniflow’s technology analyses how busy the beach is and shows
real-time occupancy levels on the pole or an app so a user can make
an informed decision about where to go.

A smart pole that determines the occupancy and related safe
distance of beach areas is soon to be deployed in the Algarve
region of Portugal.

As well as evaluating and showing the number of beach goers at
any time, it also informs the public of best practice to ensure a
“safe beach season” when lockdown measures are relaxed.

Smart pole

The solution uses smart pole base technology from Omniflow that
generates its own energy from the sun and wind and stores it in
built-in batteries. The aim is for the poles to be as sustainable
and trouble-free as possible, which will facilitate implementation
both in terms of the infrastructure required and the time needed
for deployment.

The smart pole, which already features smart lighting and
telecommunications capabilities, will be equipped with artificial
intelligence and sensors to assess the occupancy of an entire beach
or of specific areas.


Read more

Originally published by SmartCitiesWorld news team | May 22, 2020 | SmartCitiesWorld

Sony, Microsoft unveil latest joint AI play

Sony’s semiconductor division and Microsoft announced a
collaboration to develop AI video analytics systems for enterprise
and industrial use, a year after the two inked a gaming-focused
partnership.

In a statement, the companies explained they plan to combine
Sony Semiconductor Solutions’ image and sensing chips with
Microsoft’s cloud and AI platform to produce advanced video
analytics systems.

The companies plan to embed Microsoft Azure AI capabilities into
Sony’s recently launched IMX500 vision sensor, designed for use
with enterprise smart camera systems. An app providing Azure IoT
software will be designed to work alongside the chip to provide
analytics.

“This integration will result in smarter, more advanced
cameras for use in enterprise scenarios as well as a more efficient
allocation of resources between the edge and the cloud to drive
cost and power consumption efficiencies,” the companies
added.

In addition to product developments, Microsoft and Sony will
also work with other partners on video analytics research at
Microsoft’s AI and IoT innovation labs.

The latest collaboration adds to a cloud gaming and AI
partnership designed to support their respective content streaming
and gaming plays.

At the time the two said they also planned to explore the
possibility of working together on intelligent imaging solutions,
leading to the latest announcement.

Originally published by Chris Donkin | May 19, 2020 | Mobile World Live

Debating The Future Of Autonomous Cars And Trucks

A recent debate on the topic of self-driving cars went beyond
technology issues to include societal, regulatory, economic and
ethical factors. (GETTY IMAGES)

A recently web-broadcast debate illuminated several key issues
confronting the advent of AI-based autonomous cars and trucks.

In the debate, Princeton’s Professor Alain Kornhauser opted to
take the position that it will be the best of times ahead, and his
clashing counterpart was Dr. Sven Beiker, Founder and Managing
Director of Silicon Valley Mobility based in Palo Alto, California,
opting to take the position that it will be the roughest of times
ahead for the emergence of viable and widespread self-driving
vehicles.

The overarching theme was whether there is the potential of a
"new normal" that might overtake the existing autonomous car
and truck efforts in a post-pandemic era. For information about
future such debates, visit the smartdrivingcar.com website, which
organized and announced the event.

The debate was joined by several panelists, consisting of
Richard Mudge as moderator (President of Compass Transportation and
Technology), Jim Scheinman (Founding Managing Partner at Maven
Ventures), Jane Lappin (Director of Government Affairs and Public
Policy at Toyota Research Institute), Brad Templeton (writer and
industry analyst at Robocars.com), and Michael Sena (automotive
industry expert, heralded especially for his newsletter The
Dispatcher).

I have crafted a recap of some selected points and provide those
highlights here.

Safety And Car-Related Deaths

Start with perhaps one of the most widely touted and yet
controversial topics, the safety of self-driving cars.

There are many in the media and the automotive industry who
justify the need for self-driving cars by emphasizing that
human drivers are "unsafe" and that AI-driven vehicles
will presumably be safer. The basis for asserting that human
drivers are unsafe is typically centered on the number of annual
deaths that occur due to human-driven car crashes, amounting to
about 40,000 deaths annually in the United States alone.

Of course, any death due to a car crash is one too many, and we
would all undoubtedly agree that averting deaths and injuries from
car crashes is a laudable goal.

How can there be any counterargument to the notion of seeking to
avoid car-related deaths, you might be wondering?

Some say that you need to keep your eye on the ball and not be
tricked into looking in the wrong places to solve the problem.

Self-driving cars, if they indeed turn out to be safer than
human drivers (which we do not yet know), would seem to be a
pretty costly solution to the problem of car-related deaths, some
say; there are other ways to address the causes of those deaths
right away and at a lower overall cost.

For example, consider the heartbreaking and dreadful outcomes of
drunk or intoxicated driving.


Read more

Originally published by Lance Eliot, the AI Trends Insider | May 14, 2020 | aitrends