Leasing: The New Ownership

ARAG experts on the car on a temporary basis

Mobility is a basic need, but meeting it no longer requires actually owning a car. There are plenty of alternatives: car sharing, car subscriptions, leasing. In 2019, according to Statista, as many as 42 percent of all new registrations in Germany were leased vehicles. If your savings do not currently allow for a car purchase, leasing can be an option. The ARAG experts explain.

Harvard: Robotic swarm swims like a school of fish

Fish-inspired robots coordinate movements without any outside control

These fish-inspired robots can synchronize their movements without
any outside control. Based on the simple production and detection of
LED light, the robotic collective exhibits complex self-organized
behaviors, including aggregation, dispersion and circle formation.
(Image courtesy of Self-organizing Systems Research Group)

Schools of fish exhibit complex, synchronized behaviors that
help them find food, migrate and evade predators. No one fish or
team of fish coordinates these movements, nor do fish communicate
with each other about what to do next. Rather, these collective
behaviors emerge from so-called implicit coordination: individual
fish making decisions based on what they see their neighbors
doing.

This type of decentralized, autonomous self-organization and
coordination has long fascinated scientists, especially in the
field of robotics. 

Now, a team of researchers at the Harvard John A. Paulson School
of Engineering and Applied Sciences (SEAS) and the Wyss Institute
for Biologically Inspired Engineering has developed fish-inspired
robots that can synchronize their movements like a real school of
fish, without any external control. It is the first time researchers
have demonstrated complex 3D collective behaviors with implicit
coordination in underwater robots.

“Robots are often deployed in areas that are inaccessible or
dangerous to humans, areas where human intervention might not even
be possible,” said Florian Berlinger, a PhD Candidate at SEAS and
Wyss and first author of the paper. “In these situations, it
really benefits you to have a highly autonomous robot swarm that is
self-sufficient. By using implicit rules and 3D visual perception,
we were able to create a system that has a high degree of autonomy
and flexibility underwater where things like GPS and WiFi are not
accessible.”

The research is published in Science Robotics.

The fish-inspired robotic swarm, dubbed Blueswarm, was created
in the lab of Radhika Nagpal, the Fred Kavli Professor of Computer
Science at SEAS and Associate Faculty Member at the Wyss Institute.
Nagpal’s lab is a pioneer in self-organizing systems, from its
1,000-robot Kilobot swarm to its termite-inspired robotic
construction crew.

However, most previous robotic swarms operated in two-dimensional
space. Three-dimensional spaces, like air and water, pose
significant challenges to sensing and locomotion. 

To overcome these challenges, the researchers developed a
vision-based coordination system in their fish robots based on blue
LED lights. Each underwater robot, called a Bluebot, is equipped
with two cameras and three LED lights. The on-board, fish-eye lens
cameras detect the LEDs of neighboring Bluebots and use a custom
algorithm to determine their distance, direction and heading. Based
on the simple production and detection of LED light, the
researchers demonstrated that the Blueswarm could exhibit complex
self-organized behaviors, including aggregation, dispersion and
circle formation.
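
To make that geometry concrete, here is a minimal sketch of how two detected LEDs on a neighbor can yield a distance and bearing estimate via the standard pinhole-camera model. The focal length, LED spacing and function names below are illustrative assumptions, not details taken from the paper.

```python
import math

# Minimal sketch (assumed values, not Blueswarm's actual calibration): if two LEDs on a
# neighbor are a known physical distance apart, their pixel separation gives the range
# via the pinhole model, and their image position gives a bearing.

FOCAL_LENGTH_PX = 800.0   # hypothetical camera focal length, in pixels
LED_SPACING_M = 0.05      # hypothetical vertical spacing between two onboard LEDs

def estimate_neighbor(led_a, led_b, image_center=(320, 240)):
    """led_a, led_b: (x, y) pixel coordinates of a neighbor's two LEDs."""
    pixel_sep = math.dist(led_a, led_b)
    distance_m = FOCAL_LENGTH_PX * LED_SPACING_M / pixel_sep
    mid_x = (led_a[0] + led_b[0]) / 2.0
    bearing_rad = math.atan2(mid_x - image_center[0], FOCAL_LENGTH_PX)
    return distance_m, bearing_rad

print(estimate_neighbor((300, 200), (302, 240)))  # ~1 m away, slightly left of center
```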

“Each Bluebot implicitly reacts to its neighbors’ positions,”
said Berlinger.  “So, if we want the robots to aggregate, then
each Bluebot will calculate the position of each of its neighbors
and move towards the center. If we want the robots to disperse, the
Bluebots do the opposite. If we want them to swim as a school in a
circle, they are programmed to follow lights directly in front of
them in a clockwise direction.”
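
As a rough illustration of those implicit rules, here is a minimal 2D sketch (the real Blueswarm operates in 3D, and this is not the published controller): each robot steps towards, or away from, the centroid of the neighbors it can currently see.

```python
# Minimal sketch of the implicit rules described above (not the published controller):
# each robot uses only the estimated positions of the neighbors it can currently see.

def aggregation_step(my_pos, neighbor_positions, step=0.05):
    """Move a small step towards the centroid of visible neighbors."""
    if not neighbor_positions:
        return my_pos
    cx = sum(p[0] for p in neighbor_positions) / len(neighbor_positions)
    cy = sum(p[1] for p in neighbor_positions) / len(neighbor_positions)
    dx, dy = cx - my_pos[0], cy - my_pos[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (my_pos[0] + step * dx / norm, my_pos[1] + step * dy / norm)

def dispersion_step(my_pos, neighbor_positions, step=0.05):
    """Dispersion is the mirror image: step away from the neighbors' centroid."""
    target = aggregation_step(my_pos, neighbor_positions, step)
    return (2 * my_pos[0] - target[0], 2 * my_pos[1] - target[1])
```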

The researchers also simulated a simple search mission with a
red light in the tank. Using the dispersion algorithm, the Bluebots
spread out across the tank until one comes close enough to the
light source to detect it. Once the robot detects the light, its
LEDs begin to flash, which triggers the aggregation algorithm in
the rest of the school. From there, all the Bluebots aggregate
around the signaling robot. 
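
The search mission can be read as a small per-robot state machine. The sketch below is an illustrative reconstruction with hypothetical sensing functions, not code from the study.

```python
# Sketch of the search behavior described above, as a per-robot state machine.
# detect_red_light() and see_flashing_neighbor() are hypothetical sensing callbacks.

def search_step(state, detect_red_light, see_flashing_neighbor):
    """state is 'disperse', 'signal', or 'aggregate'; returns the next state."""
    if state == "disperse":
        if detect_red_light():
            return "signal"          # found the target: start flashing own LEDs
        if see_flashing_neighbor():
            return "aggregate"       # someone else found it: converge on them
        return "disperse"
    return state                     # 'signal' and 'aggregate' are terminal here
```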

“Our results with Blueswarm represent a significant milestone
in the investigation of underwater self-organized collective
behaviors,” said Nagpal. “Insights from this research will help
us develop future miniature underwater swarms that can perform
environmental monitoring and search in visually-rich but fragile
environments like coral reefs. This research also paves a way to
better understand fish schools, by synthetically recreating their
behavior.”

The research was co-authored by Dr. Melvin Gauci, a former Wyss
Technology Development Fellow. It was supported in part by the
Office of Naval Research, the Wyss Institute for Biologically
Inspired Engineering, and an Amazon AWS Research Award.

Originally published by Leah Burrows | Press contact | January 13, 2021
Harvard John A. Paulson School of Engineering and Applied Sciences

YouTube star of the logistics industry Dirk Jakowski takes delivery of his new Mercedes-Benz Actros MP 5

Now on the road and on YouTube: Dirk Jakowski with his new Actros MP 5

Dirk Jakowski – full of energy on the road and on social media

For three years now, whenever Dirk Jakowski sets out on his daily drives, the camera comes along: he is not only a truck driver for the internationally active freight forwarder ExpoTrans, but also one of the best-known German YouTubers in the logistics industry.

Robot Displays a Glimmer of Empathy to a Partner Robot

An actor robot runs in a playpen trying to catch the visible
green food, while an observer machine learns to predict the actor
robot’s behavior purely through visual observations. Although the
observer can always see the green food, the actor, from its own
perspective, cannot, due to occlusions.

Columbia engineers create a robot that learns to visually
predict how its partner robot will behave, displaying a glimmer of
empathy. This “Robot Theory of Mind” could help robots get
along with other robots—and humans—more intuitively

New York, NY—January 11, 2021—Like a longtime couple who can
predict each other’s every move, a Columbia Engineering robot has
learned to predict its partner robot’s future actions and goals
based on just a few initial video frames.

As primates cooped up together for a long time, we quickly learn
to predict the near-term actions of our roommates, co-workers or
family members. Our ability to anticipate the actions
of others makes it easier for us to successfully live and work
together. In contrast, even the most intelligent and advanced
robots have remained notoriously inept at this sort of social
communication. This may be about to change.

The study, conducted at Columbia Engineering’s Creative
Machines Lab led by Mechanical Engineering Professor Hod Lipson,
is part of a broader effort to endow robots with
the ability to understand and anticipate the goals of other robots,
purely from visual observations.

The researchers first built a robot and placed it in a playpen
roughly 3×2 feet in size. They programmed the robot to seek and
move towards any green circle it could see. But there was a catch:
Sometimes the robot could see a green circle in its camera and move
directly towards it. But other times, the green circle would be
occluded by a tall red cardboard box, in which case the robot would
move towards a different green circle, or not move at all.

After observing its partner puttering around for two hours, the
observing robot began to anticipate its partner’s goal and path.
The observing robot was eventually able to predict its partner’s
goal and path 98 out of 100 times, across varying
situations—without being told explicitly about the partner’s
visibility handicap.
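
As a loose illustration of the observer’s learning problem, here is a minimal sketch of a network that maps a few initial frames to a predicted goal position in the playpen. The actual system predicts images of the actor’s future behavior rather than coordinates, and the architecture, frame count and names below are assumptions, not the published model.

```python
import torch
import torch.nn as nn

# Minimal sketch, not the published model: encode a short stack of grayscale frames
# and regress the actor robot's eventual goal position in the playpen.
class ObserverNet(nn.Module):
    def __init__(self, n_frames=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # predicted (x, y) goal coordinates

    def forward(self, frames):        # frames: (batch, n_frames, H, W)
        return self.head(self.encoder(frames))

# Training would minimise nn.MSELoss() between predictions and the goals the actor
# actually reached in the roughly two hours of recorded episodes.
model = ObserverNet()
print(model(torch.zeros(1, 4, 64, 64)).shape)  # torch.Size([1, 2])
```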

“Our initial results are very exciting,” says Boyuan Chen, lead author
of the study, which was conducted in collaboration with Carl
Vondrick, assistant professor of computer science, and published
today in Scientific Reports. “Our findings begin
to demonstrate how robots can see the world from another robot’s
perspective. The ability of the observer to put itself in its
partner’s shoes, so to speak, and understand, without being
guided, whether its partner could or could not see the green circle
from its vantage point, is perhaps a primitive form of
empathy.”


Predictions from the observer machine: the observer sees the
left-side video and predicts the behavior of the actor robot shown
on the right. With more information, the observer can correct its
predictions about the actor’s final behaviors.

When they designed the experiment, the researchers expected that
the Observer Robot would learn to make predictions about the
Subject Robot’s near-term actions. What the researchers didn’t
expect, however, was how accurately the Observer Robot could
foresee its colleague’s future “moves” with only a few
seconds of video as a cue.

The researchers acknowledge that the behaviors exhibited by the
robot in this study are far simpler than the behaviors and goals of
humans. They believe, however, that this may be the beginning of
endowing robots with what cognitive scientists call “Theory of
Mind” (ToM). At about age three, children begin to understand
that others may have different goals, needs and perspectives than
they do. This can lead to playful activities such as hide and seek,
as well as more sophisticated manipulations like lying. More
broadly, ToM is recognized as a key distinguishing hallmark of
human and primate cognition, and a factor that is essential for
complex and adaptive social interactions such as cooperation,
competition, empathy, and deception.

In addition, humans are still better than robots at describing
their predictions using verbal language. The researchers had the
observing robot make its predictions in the form of images, rather
than words, in order to avoid becoming entangled in the thorny
challenges of human language. Yet, Lipson speculates, the ability
of a robot to predict future actions visually is not unique:
“We humans also think visually sometimes. We frequently imagine
the future in our mind’s eye, not in words.”

Lipson acknowledges that there are many ethical questions. The
technology will make robots more resilient and useful, but when
robots can anticipate how humans think, they may also learn to
manipulate those thoughts.

“We recognize that robots aren’t going to remain passive
instruction-following machines for long,” Lipson says. “Like
other forms of advanced AI, we hope that policymakers can help keep
this kind of technology in check, so that we can all
benefit.”

Video: a short, high-level description of the Columbia Engineering
“Robot Theory of Mind” project (audio narration included).

Originally published by Holly Evarts | January 11, 2021

Columbia University | Engineering

RESEARCH IMAGE AND VIDEO CREDIT: CREATIVE MACHINES LAB/COLUMBIA
ENGINEERING | TEASER PHOTO CREDIT: SHUTTERSTOCK



###

Columbia Engineering

Columbia Engineering, based in New York City, is one of the top
engineering schools in the U.S. and one of the oldest in the
nation. Also known as The Fu Foundation School of Engineering and
Applied Science, the School expands knowledge and advances
technology through the pioneering research of its more than 220
faculty, while educating undergraduate and graduate students in a
collaborative environment to become leaders informed by a firm
foundation in engineering. The School’s faculty are at the center
of the University’s cross-disciplinary research, contributing to
the Data Science Institute, Earth Institute, Zuckerman Mind Brain
Behavior Institute, Precision Medicine Initiative, and the Columbia
Nano Initiative. Guided by its strategic vision, “Columbia
Engineering for Humanity,” the School aims to translate ideas
into innovations that foster a sustainable, healthy, secure,
connected, and creative humanity.

We wouldn’t be able to control superintelligent machines

Endowing AI with noble goals may not prevent unintended
consequences. © Iyad Rahwan

According to theoretical calculations of computer scientists,
algorithms cannot contain a harmful artificial intelligence

We are fascinated by machines that can control cars, compose
symphonies, or defeat people at chess, Go, or Jeopardy! While more
progress is being made all the time in Artificial Intelligence
(AI), some scientists and philosophers warn of the dangers of an
uncontrollable superintelligent AI. Using theoretical calculations,
an international team of researchers, including scientists from the
Center for Humans and Machines at the Max Planck Institute for
Human Development, shows that it would not be possible to control a
superintelligent AI.

Suppose someone were to program an AI system with intelligence
superior to that of humans, so it could learn independently.
Connected to the Internet, the AI may have access to all the data
of humanity. It could replace all existing programs and take
control of all machines online worldwide. Would this produce a utopia
or a dystopia? Would the AI cure cancer, bring about world peace,
and prevent a climate disaster? Or would it destroy humanity and
take over the Earth?

Computer scientists and philosophers have asked themselves
whether we would even be able to control a superintelligent AI at
all, to ensure it would not pose a threat to humanity. An
international team of computer scientists used theoretical
calculations to show that it would be fundamentally impossible to
control a super-intelligent AI.

“A super-intelligent machine that controls the world sounds
like science fiction. But there are already machines that perform
certain important tasks independently without programmers fully
understanding how they learned it. The question therefore arises
whether this could at some point become uncontrollable and
dangerous for humanity”, says study co-author Manuel Cebrian,
Leader of the Digital Mobilization Group at the Center for Humans
and Machines, Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a
superintelligent AI could be controlled. On the one hand, the
capabilities of superintelligent AI could be specifically limited,
for example, by walling it off from the Internet and all other
technical devices so it could have no contact with the outside
world – yet this would render the superintelligent AI
significantly less powerful, less able to answer humanity’s quests.
On the other hand, the AI could be motivated from the outset to
pursue only goals that are in the best interests of humanity, for
example by programming ethical principles into it. However, the
researchers also show that these and other contemporary and
historical ideas for controlling super-intelligent AI have their
limits.

In their study, the team conceived a theoretical containment
algorithm that ensures a superintelligent AI cannot harm people
under any circumstances, by simulating the behavior of the AI first
and halting it if considered harmful. But careful analysis shows
that in our current paradigm of computing, such an algorithm cannot
be built.

“If you break the problem down to basic rules from theoretical
computer science, it turns out that an algorithm that would command
an AI not to destroy the world could inadvertently halt its own
operations. If this happened, you would not know whether the
containment algorithm is still analyzing the threat, or whether it
has stopped to contain the harmful AI. In effect, this makes the
containment algorithm unusable”, says Iyad Rahwan, Director of
the Center for Humans and Machines.

Based on these calculations, the containment problem is
incomputable; that is, no single algorithm can determine whether an
AI would harm the world.
Furthermore, the researchers demonstrate that we may not even know
when superintelligent machines have arrived, because deciding
whether a machine exhibits intelligence superior to humans is in
the same realm as the containment problem.
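
The shape of that argument is the classic diagonalization behind the halting problem. The sketch below only illustrates that shape with hypothetical names; the whole point of the result is that the checking routine it relies on cannot actually be implemented.

```python
# Illustrative only: why a total "is_harmful" checker contradicts itself.

def cause_harm():
    print("(stand-in for a harmful action)")

def is_harmful(program_source: str, input_data: str) -> bool:
    """Hypothetical containment routine: decides, for ANY program and input,
    whether running it would harm humans, and always terminates.
    No such total procedure exists; that is what the proof shows."""
    raise NotImplementedError

def paradox(program_source: str) -> None:
    # Behave harmfully exactly when the checker declares this program safe.
    if not is_harmful(program_source, program_source):
        cause_harm()

# Feeding paradox its own source code makes any answer from is_harmful wrong,
# which is how the containment problem inherits the undecidability of halting.
```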

Originally published by Kerstin Skork, Press & Public Relations,
skork@mpib-berlin.mpg.de | January 11, 2021
Max Planck Institute for Human Development, Berlin

The study “Superintelligence cannot be contained: Lessons from
Computability Theory” was published in the Journal of Artificial
Intelligence Research. Other researchers on the study include
Andres Abeliuk from the University of Southern California, Manuel
Alfonseca from the Autonomous University of Madrid, Antonio
Fernandez Anta from the IMDEA Networks Institute and Lorenzo
Coviello.

 

Facial recognition identifies people wearing masks

The system homes in on uncovered features such as the eyes – Getty
Images

Japanese company NEC, which develops facial-recognition
systems, has launched one that it claims can identify people
wearing masks.

It homes in on parts of the face that are not covered up, such
as the eyes, to verify the wearer’s identity.

Verification takes less than one second, with an accuracy rate
of more than 99.9%, NEC says.

London’s Met Police uses NEC’s NeoFace Live Facial Recognition
to compare faces in a crowd with those on a watchlist.

Other clients include Lufthansa and Swiss International
Airlines.

And NEC is trialling the system for automated payments at a shop
in its Tokyo headquarters.

Shinya Takashima, assistant manager of NEC’s digital platform
division, told the Reuters news agency the technology could help
people avoid contact with surfaces in a range of situations.

It had been introduced as “needs grew even more due to the
coronavirus situation”, he added.


Before the coronavirus pandemic, facial-recognition algorithms
failed to identify 20-50% of images of people wearing face masks,
according to a report from the National Institute of Standards and
Technology.

But by the end of 2020, it reported a vast improvement in
accuracy.

Ruled unlawful

Facial recognition has proved controversial.

There have been questions over how well systems recognise darker
shades of skin, alongside ethical concerns about invasion of
privacy.

In August, the use of such systems by Welsh police forces was
ruled unlawful in a case brought by a civil-rights campaigner.

And in the US big technology companies, including Amazon and
IBM, have suspended the use of facial-recognition software by
police officers, to allow lawmakers time to consider legislation on
how it should be deployed.

Originally published by BBC News | January 7, 2021

 

Study: Social Media Trends 2021

Discover the most important trends and future topics for your social media communication in 2021 in the “Social Media Studie 2021: Trends, Tipps und Expertenprognosen” by Adenion and pressrelations. It draws on the latest analytics data from Blog2Social, future topics from FirstSignals® research, and forecasts from well-known social media experts and influencers.

How to Set Up a Solid Social Media Strategy for 2021

A social media presence will gain you nothing if you don’t have a well-defined social media strategy behind it. Here’s how to set up a solid social media strategy.

The post How to Set Up a Solid Social Media Strategy for 2021 first appeared on Blog2Social Blog – Tips for social media marketing, sharing, scheduling, cross-posting.

Source: How to Set Up a Solid Social Media Strategy for 2021

Karakuri’s robot chef: Brunch with Sifted EU

Karakuri control pad

“Will it be serving you chips?” my husband asks when I tell
him I am going to have brunch with a robot, a joke which, I point
out, has not been funny since 1976. If ever. My Sifted colleagues,
meanwhile, were wondering if we were going to have “Dalek
bread”. But no, my brunch with the Karakuri DK-One food-making
robot arm is, in fact, a bowl of muesli.

It’s a bowl of muesli and coconut yoghurt, with a helping of
mixed berries on top, to be precise. Before it starts making the
food, I can specify exactly how many grams I want of each thing.
Then the robot arm — the first automated canteen for making meals
—  whizzes around an enclosure the size of a small walk-in
wardrobe taking the bowl to the different slots that will dispense
the yoghurt, the muesli mix and the berries, before placing the
bowl back in a small cubicle with a sliding door from which I can
retrieve it. 

Created by the three-year-old London-based startup Karakuri, the
robot might be the closest we have got, so far, to the dream of a
machine that makes you a tailored meal at the press of a few
buttons. Like the Nutrimatic Drinks Dispenser in the Hitchhiker’s
Guide to the Galaxy. Only hopefully more successful. 

Though the winter dusk is deepening outside, the machine is
making us brunch. Bowls and bowls of yoghurty muesli in different
combinations that we can specify on a touch screen display that is
connected to the machine.  Equally it could be making us bowls of
dim sum, or poke, or salad — the components and ingredients
inside the machine can be changed to suit the menu, hot or cold.
But breakfast bowls are a simple thing for the prototype to
practice on.

Karakuri unveiled the prototype machine in December, along with
a £6.3m investment, led by firstminute capital, and supported by
Hoxton Ventures, Taylor Brothers, Ocado Group and the UK’s
government-backed Future Fund. 

It is easy to get mesmerised by the moving robot arm, but this,
points out Barney Wragg, Karakuri’s cofounder and CEO, is not the
clever part of the system. The arm is a pretty standard piece of
equipment that you can see in industrial production lines all over
the world.  The clever part is an array of sensors, motors and AI
behind the scenes. 

“You can get robot arms to mix and serve food. But the really
difficult part in the food industry is serving exactly the right
portions, weighing and measuring and dispensing them,” says
Wragg. 


The machines made by rival robot-kitchen company Moley Robotics,
for example, he says, still need a human to present them with all
the ingredients of the meal in the right weights and quantities.
This preparation and serving part is the bit that Karakuri wants to
solve, in order to make robot catering a truly useful and scalable
proposition. 

Prototypes for the clever innards of the DK-One are littered
around Karakuri’s spacious office-and-workshop space in
Hammersmith. There is a machine that squeezes yoghurt through a
plastic tube in a peristaltic motion, exactly the way your gut
moves food through. Wragg and his team have calibrated exactly how
much yoghurt comes out at each squeeze, but also they have had to
make sure that the parts of the machine that come into contact with
the yoghurt — e.g. the stainless-steel pot in which it is stirred
— are easy to remove and wash in a standard dishwasher. There is
no point in building a labour-saving machine that takes hours to
clean afterwards. 

Another type of dispenser, for nuts and berries, works a little
like the penny falls machines at an amusement arcade, gently
shaking the contents so that they fall in precise quantities onto
the serving bowl. There are sensors and scales that measure how
much has already been dispensed and adjust the shaking — more
vigorous to start with and tapering off as the bowl fills — so
that you get just the right amount. 
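
As an illustration of that weigh-and-taper idea, here is a minimal closed-loop sketch. The scale and shaker interfaces and the simple proportional rule are assumptions for illustration, not Karakuri’s implementation.

```python
# Minimal sketch of closed-loop dispensing by weight (assumed hardware interfaces):
# shake vigorously while far from the target, taper off as the bowl fills.

def dispense(target_g, read_scale, shake, tolerance_g=1.0):
    """read_scale() -> grams already in the bowl; shake(intensity) nudges the hopper
    with intensity in [0, 1]. Both are hypothetical stand-ins for the real hardware."""
    while True:
        remaining = target_g - read_scale()
        if remaining <= tolerance_g:
            return read_scale()
        shake(min(1.0, remaining / target_g))  # proportional: gentler near the target
```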

In another corner, a robot arm is being taught how to identify
varying bits of grilled chicken, pick them up and place them in
another dish. This is the mundane reality of AI. Newspaper
headlines suggest it is going to take over the world. Right now it
is being taught the work of a junior at a chicken shop. 

An automated canteen could be a useful solution for a
pandemic-scarred world where people are wary of human contact. The
DK-One can serve somewhere between 60 and 100 bowls of food an hour
to customers without the need for any customer-facing serving staff.
Wragg says there is interest in the machine from large restaurant
chains, big catering companies that serve hospitals, schools and
work canteens, as well as from supermarkets. 

The interest from supermarkets is possibly the most intriguing.
Supermarkets are increasingly getting into the meal kit game —
Morrisons has operated its Eat Fresh meal kit business since
2018 and Waitrose had plans to buy Mindful Chef earlier this year.
A meal-making machine would potentially make the meal-kit business
more scalable, although Wragg has nothing specific to announce at
the moment. Karakuri does have a relationship with Ocado, though,
which owns a minority stake in the startup.

At least as important as the touch-free food service is the
portion control the machine can give. Restaurants work on
razor-thin margins with protein being one of their biggest costs.
Accidentally give every customer a sliver too much chicken and you
can easily wipe out your profits, Wragg explains. 

This was what originally led him to found Karakuri. “I had a
couple of friends with restaurants and I was stunned at how little
data there was in the trade. I saw a restaurant as something of a
food manufacturing plant, and manufacturing is all about data and
controls. There is typically nothing like that in a
kitchen.” 

Getting accurate data on ingredient usage  — more than any
labour savings — is the biggest selling point for customers, says
Wragg. 

“It’s a known secret of the restaurant business that there
is a problem with portioning and wastage.”

One potential customer, for example, wants a solution that would
dole out bowls of biryani, a mix of rice and meat cooked together,
so that there are at least four but no more than six pieces of meat
in each portion. Doing that requires a mix of shaking machines to
serve out the food and machine vision to spot the number of meat
pieces. 
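
A hypothetical sketch of how counting and dispensing might interact for that constraint; the interfaces below are stand-ins, not Karakuri’s code.

```python
# Hypothetical sketch of the "at least four, at most six pieces of meat" rule:
# dispense in small increments and let a vision-based count decide when to stop.

def serve_biryani(dispense_increment, count_meat_pieces, min_pieces=4, max_pieces=6):
    """dispense_increment() shakes a small amount into the bowl; count_meat_pieces()
    is a stand-in for the machine-vision counter. Both are assumed interfaces."""
    while count_meat_pieces() < min_pieces:
        dispense_increment()
    pieces = count_meat_pieces()
    if pieces > max_pieces:
        raise ValueError(f"bowl over-served with {pieces} pieces; flag for manual fix")
    return pieces
```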

Wragg — who has a varied background that includes working
at Arm, the microchip architecture designer, and many years at
Andrew Lloyd Webber’s Really Useful Group — founded the
company at the end of 2017 and raised the first £7.2m seed round
in April 2019. A further round in December has allowed them to
complete work on the first prototype. Karakuri has now raised a
total of £13.5m in funding. 

The team has grown from 13 people this time last year to nearly
30. Wragg expects the company to grow to around 50 people within
the next two years. 

Covid has had a mixed impact on the business. On the one hand,
with the restaurant trade in severe distress, there are fewer
customers willing to take a risk on something as experimental as a
robot kitchen. On the other hand, it has made many bigger companies
aware of the need to automate their businesses. Selling to bigger
companies takes longer, but Wragg is hoping to have the first
installations this year. 

The first DK-One was supposed to be installed in the Ocado staff
canteen as a test run, but with only a skeleton staff working
physically at the offices because of Covid restrictions, it
didn’t seem worth doing. Instead Sifted is getting this
demonstration in the Karakuri workshop. Despite the machine
producing a prodigious number of breakfast bowls, nobody is
actually allowed to eat anything because it would violate various
food safety regulations (and also, after seeing the yoghurt being
squeezed through the intestine-like tubes, it has started to seem
less appetising).

Instead we make do with black coffee from the modest capsule
coffee machine in the corner of the office. That’s a catering
machine that was invented some 36 years ago, initially for the
professional, specialist market, but which has subsequently become
ubiquitous and changed our coffee habits for good. Wragg can only
hope that Karakuri’s machines will have anything like the same
impact.

Originally published by Maija Palmer | January 5, 2021
Sifted