Q&A: Artificial intelligence and the classroom of the future

Screenshot of the embodied avatar system “Diana.” Credit:
Brandeis University

Imagine a classroom in the future where teachers are working
alongside artificial intelligence partners to ensure no student
gets left behind.

The AI partner’s careful monitoring picks up on a student in
the back who has been quiet and still for the whole class, and it
prompts the teacher to engage the student. When called
on, the student asks a question. The teacher clarifies the material
that has been presented, and every student comes away with a better
understanding of the lesson.

This is part of a larger vision of future classrooms where human
instruction and AI technology interact to improve educational
environments and the learning experience.

James Pustejovsky, the TJX Feldberg Professor of Computer
Science, is working towards that vision with a team led by the
University of Colorado Boulder, as part of the new $20 million
National Science Foundation-funded AI Institute for Student-AI
Teaming.

The research will play a critical role in helping ensure the AI
agent is a natural partner in the classroom, with
language and vision capabilities, allowing it to not only hear what
the teacher and each student is saying, but also notice gestures
(pointing, shrugs, shaking a head), eye gaze, and facial
expressions (student attitudes and emotions).

Pustejovsky took some time to answer questions from BrandeisNOW
about his research.

How does your research help build this classroom of the
future?

For the past five years, we have been working to create a
multimodal embodied avatar system, called “Diana,” that interacts
with a human to perform various tasks. She can talk, listen, see,
and respond to language and gesture from her human partner, and
then perform actions in a 3-D simulation environment called
VoxWorld. This is work we have been conducting with our
collaborators at Colorado State University, led by Ross Beveridge
in their vision lab. We are working together again (CSU and
Brandeis) to help bring this kind of “embodied human-computer
interaction” into the classroom. Nikhil Krishnaswamy, my former
Ph.D. student and co-developer
of Diana, has joined CSU as part of their team.

How does it work in the context of a classroom
setting?

At first it’s disembodied, a virtual presence on an iPad, for
example, where it is able to recognize
the voices of different students. So imagine a classroom: Six to 10
children in grade school. The initial goal in the first year is to
have the AI partner passively following the different students, in
the way they’re talking and interacting, and then eventually the
partner will learn to intervene to make sure that everyone is
equitably represented and participating in the classroom.
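
To make the monitoring idea concrete, here is a minimal, purely illustrative sketch of how speaking time might be tracked per student and quiet students flagged for the teacher; the thresholds and data structures are assumptions for illustration, not details of the actual system.

    # Illustrative sketch only: a toy participation monitor in the spirit of the
    # AI partner described above. Thresholds are assumptions, not system values.
    from dataclasses import dataclass, field
    import time

    @dataclass
    class StudentState:
        speaking_seconds: float = 0.0
        last_active: float = field(default_factory=time.time)

    class ParticipationMonitor:
        def __init__(self, quiet_after_s=300, min_share=0.05):
            self.students = {}                    # name -> StudentState
            self.quiet_after_s = quiet_after_s    # flag after 5 quiet minutes (assumed)
            self.min_share = min_share            # flag below 5% of talk time (assumed)

        def record_utterance(self, name, duration_s):
            state = self.students.setdefault(name, StudentState())
            state.speaking_seconds += duration_s
            state.last_active = time.time()

        def students_to_engage(self):
            """Names the AI partner might suggest the teacher call on."""
            total = sum(s.speaking_seconds for s in self.students.values()) or 1.0
            now = time.time()
            return [name for name, s in self.students.items()
                    if s.speaking_seconds / total < self.min_share
                    or now - s.last_active > self.quiet_after_s]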

Are there other settings that Diana would be useful in
besides a classroom?

Let’s say I’ve got a Julia Child app on my iPad and I want her
to help me make bread. If I start the program on the iPad, the
Julia Child avatar would be able to understand my speech. If I have
my camera set up, the program allows me to be completely embedded
and embodied in a virtual space with her so that she can help
me.

How does she help you?

She would look at my table and say, “Okay, do you have
everything you need?” And then I’d say, “I think so.” So the camera
will be on, and if you had all your baking materials laid out on
your table, she would scan the table. She’d say, “I see flour,
yeast, salt, and water, but I don’t see any utensils: you’re going
to need a cup, you’re going to need a teaspoon.” After you had
everything you needed, she would tell you to put the flour in “that
bowl over there.” And then she’d show you how to mix it.

Is that where Diana comes in?

Yes, Diana is basically becoming an “embodied presence” in
the human-computer interaction: she can see what you’re doing, you
can see what she’s doing. In a classroom interaction, Diana could
help with guiding students through lesson plans, through dialog and
gesture, while also monitoring the students’ progress, mood, and
levels of satisfaction or frustration.

Does Diana have any uses in virtual learning in
education?

Using an AI partner for virtual learning could be a fairly
natural interaction. In fact, with a platform such as Zoom, many of
the computational issues are actually easier since voice and video
tracks of different speakers have already been segmented and
identified. Furthermore, in a Hollywood Squares display of all the
students, a virtual AI partner may not seem as unnatural, and Diana
might more easily integrate with the students online.

What stage is the research at now?

Within the context of the CU Boulder-led AI Institute, the
research has just started. It’s a five-year project, and it’s
getting off the ground. This is exciting new research that is
starting to answer questions about using our avatar and agent
technology with students in the classroom.


Link to original article

Originally published by Tessa Venell, Brandeis University 

November 20th 2020

 

Industry Voices—How cloud, AI and machine learning are transforming healthcare through COVID-19 and beyond

Cloud-enabled AI and machine learning are providing healthcare
stakeholders with the tools needed for a faster and smarter
approach to combatting the COVID-19 virus.
(WrightStudios/Shutterstock)

As COVID-19 began spreading across the U.S., healthcare
organizations were forced to quickly reassess their technology, and
pull future plans for digital transformation forward.

In record time, many organizations overhauled legacy systems to
better manage and care for the uptick in patient visits, while
safely storing data to ensure efficiency as the pandemic
evolved.

One of the most pressing priorities for healthcare organizations
was expediting their adoption of cloud technologies to more
efficiently manage the deluge of patient information, ensure
streamlined workplace practices and enable information sharing with
greater ease. As local leaders made decisions about how to keep
their populations safe, cloud infrastructure provided the ability
to collect, analyze, and share data securely across and among a
global network of organizations.

Through this period of rapid cloud adoption, there has also been
a swift uptick in the use of artificial intelligence (AI) and
machine learning technologies. From enabling information sharing
and analysis without sacrificing data privacy, to ensuring patients
with the most urgent needs are given the quickest response, these
technologies have revolutionized the COVID-19 healthcare response
and will remain critical well beyond the pandemic.

Here are just a few of the ways in which COVID-19 has spurred
lasting digital transformation within the healthcare industry:

De-identification of patient data

With machine learning capabilities, healthcare organizations are
better equipped to ensure the privacy of patient data, making it
easier to aggregate data across multiple sources and garner helpful
insights about the COVID-19 virus. De-identification, the process
of removing identifying information from patient data, is critical
to the sharing of health information with non-privileged parties
for research purposes, the creation of datasets from multiple
sources for analysis, and anonymizing data so it can be used in
advanced analytics and machine learning models.

As an example, the Google Cloud Healthcare API can detect
sensitive data, such as protected health information (PHI), and
mask, delete, or otherwise obscure it.
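
As a deliberately simplified illustration of masking (this is not the Google Cloud Healthcare API, which relies on trained detectors rather than hand-written patterns), a few regular expressions show what replacing identifiers with type tags looks like:

    # Simplified sketch of de-identification by masking; real systems detect PHI
    # with trained models, not just regular expressions.
    import re

    PHI_PATTERNS = {
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    }

    def mask_phi(text: str) -> str:
        """Replace matched identifiers with bracketed type tags."""
        for label, pattern in PHI_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    print(mask_phi("Patient DOB 04/12/1957, phone 312-555-0184, tested positive."))
    # -> Patient DOB [DATE], phone [PHONE], tested positive.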

To enable researchers to study critical COVID-19 information for
fighting the virus, patient identities from DICOM assets, such as
lung x-rays, can be removed at scale using the same type of machine
learning technology that scans YouTube for copyright infringement,
making the data usable for analytics in high-definition. Further,
testing data can be de-identified, accelerating discovery. When
properly hashed, such data can then be safely re-identified
allowing researchers to more effectively recruit for public health
programs like clinical trials.
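
The article does not spell out the hashing mechanism; one common approach consistent with it is keyed hashing, where only a trusted data steward holding the secret key can map pseudonyms back to identities. A hypothetical sketch:

    # Hypothetical sketch of keyed-hash pseudonymization. Anyone without the key
    # sees only opaque tokens; the key holder keeps a reverse map so eligible
    # patients can later be re-contacted, e.g. for clinical-trial recruitment.
    import hmac, hashlib

    SECRET_KEY = b"held-only-by-the-trusted-data-steward"   # placeholder key

    def pseudonymize(patient_id: str) -> str:
        return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

    reverse_map = {pseudonymize(pid): pid for pid in ["MRN-10023", "MRN-10024"]}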

Natural language processing for call center responses

All types of public health organizations today are inundated
with more patient requests than ever before, and many were not
initially equipped to manage this increase.

With cloud-based AI and machine learning models, however,
organizations can build the call center of the future. Using
natural language processing and sentiment analysis, healthcare
providers can automatically prioritize calls based on need.

This technology allows an organization to optimize its approach
to answering/prioritizing inquiries based on everything from the
distress of the voice to the age of the voice. And while they’re
smart, many of these APIs are engineered with privacy in mind. They
don’t store private data, helping ensure patient
confidentiality.
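
A rough sketch of triage by sentiment might look like the following; the sentiment_score placeholder stands in for whichever NLP service or model is used, and the weighting is an assumption for illustration.

    # Toy call-triage sketch: rank queued calls by an urgency score built from
    # sentiment (distress) and an assumed age-based risk factor.
    from dataclasses import dataclass

    @dataclass
    class Call:
        caller_id: str
        transcript: str
        estimated_age: int           # e.g., inferred from the voice

    def sentiment_score(text: str) -> float:
        """Placeholder: plug in a real sentiment model or API returning [-1, 1]."""
        distress_words = {"pain", "can't breathe", "emergency", "fever"}
        hits = sum(word in text.lower() for word in distress_words)
        return -min(1.0, 0.4 * hits)               # more distress -> more negative

    def urgency(call: Call) -> float:
        score = -sentiment_score(call.transcript)  # distress raises urgency
        if call.estimated_age >= 65:               # assumed risk factor
            score += 0.3
        return score

    def prioritize(calls):
        return sorted(calls, key=urgency, reverse=True)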

Supply chain decisions informed by predictive analytics

Cloud isn’t just supporting healthcare organizations through
research and treatment decisions. It is also helping them get ahead
of supply shortages at a time when equipment is more critical to
survival than ever before.

As organizations look to provide critical healthcare equipment
such as PPE and ventilators to those in need, cloud’s predictive
analytics can help those managing the supply chain better
understand where shortages exist, and where they will soon be, in
order to allocate before there is an issue.    

Matching algorithms are easily implemented alongside predictive
services to reduce waste in the supply chain, enabling real-time
visibility to both suppliers and procurers.
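
A toy version of that idea is sketched below; the naive burn-rate forecast and the pairing rule are assumptions for illustration, not the predictive services themselves.

    # Flag sites expected to run short of an item (e.g. PPE) using a naive
    # moving-average burn rate, then pair surplus sites with deficit sites.
    def days_of_supply(stock_on_hand, recent_daily_usage):
        burn_rate = sum(recent_daily_usage) / len(recent_daily_usage)
        return float("inf") if burn_rate == 0 else stock_on_hand / burn_rate

    def match_supply(sites, horizon_days=7):
        cover   = lambda s: days_of_supply(s["stock"], s["usage"])
        short   = sorted((s for s in sites if cover(s) < horizon_days), key=cover)
        surplus = sorted((s for s in sites if cover(s) > 2 * horizon_days),
                         key=cover, reverse=True)
        # naive pairing: best-stocked surplus site ships to the shortest-supplied site
        return list(zip(short, surplus))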

Cloud-enabled AI and machine learning are providing healthcare
stakeholders with the tools needed for a faster and smarter
approach to combatting the COVID-19 virus. While the mission today
is singular, this technology, along with the innovative ideas
coming from our nation’s top minds, will change the face of
healthcare as we know it, allowing for a greater patient experience
than ever before.

Originally published by Lisa Noon, Deloitte | Nov 23, 2020 | Fierce Healthcare

A neural network learns when it should not be trusted

A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.

MIT researchers have developed a way for deep learning neural
networks to rapidly estimate confidence levels in their output. The
advance could enhance safety and efficiency in AI-assisted decision
making. Image: iStock image edited by MIT News
 

Increasingly, artificial intelligence systems known as deep
learning neural networks are used to inform decisions vital to
human health and safety, such as in autonomous driving or medical
diagnosis. These networks are good at recognizing patterns in
large, complex datasets to aid in decision-making. But how do we
know they’re correct? Alexander Amini and his colleagues at MIT
and Harvard University wanted to find out.

They’ve developed a quick way for a neural network to crunch
data, and output not just a prediction but also the model’s
confidence level based on the quality of the available data. The
advance might save lives, as deep learning is already being
deployed in the real world today. A network’s level of certainty
can be the difference between an autonomous vehicle determining
that “it’s all clear to proceed through the intersection” and
“it’s probably clear, so stop just in case.” 

Current methods of uncertainty estimation for neural networks
tend to be computationally expensive and relatively slow for
split-second decisions. But Amini’s approach, dubbed “deep
evidential regression,” accelerates the process and could lead to
safer outcomes. “We need the ability to not only have
high-performance models, but also to understand when we cannot
trust those models,” says Amini, a PhD student in Professor
Daniela Rus’ group at the MIT Computer Science and Artificial
Intelligence Laboratory (CSAIL).

“This idea is important and applicable broadly. It can be used
to assess products that rely on learned models. By estimating the
uncertainty of a learned model, we also learn how much error to
expect from the model, and what missing data could improve the
model,” says Rus.

Amini will present the research at next month’s NeurIPS
conference, along with Rus, who is the Andrew and Erna Viterbi
Professor of Electrical Engineering and Computer Science, director
of CSAIL, and deputy dean of research for the MIT Stephen A.
Schwarzman College of Computing; and graduate students Wilko
Schwarting of MIT and Ava Soleimany of MIT and Harvard.

Efficient uncertainty

After an up-and-down history, deep learning has demonstrated remarkable performance
on a variety of tasks, in some cases even surpassing human
accuracy. And nowadays, deep learning seems to go wherever
computers go. It fuels search engine results, social media feeds,
and facial recognition. “We’ve had huge successes using deep
learning,” says Amini. “Neural networks are really good at
knowing the right answer 99 percent of the time.” But 99 percent
won’t cut it when lives are on the line.

“One thing that has eluded researchers is the ability of these
models to know and tell us when they might be wrong,” says Amini.
“We really care about that 1 percent of the time, and how we can
detect those situations reliably and efficiently.”

Neural networks can be massive, sometimes brimming with billions
of parameters. So it can be a heavy computational lift just to get
an answer, let alone a confidence level. Uncertainty analysis in
neural networks isn’t new. But previous approaches, stemming from
Bayesian deep learning, have relied on running, or sampling, a
neural network many times over to understand its confidence. That
process takes time and memory, a luxury that might not exist in
high-speed traffic.

The researchers devised a way to estimate uncertainty from only
a single run of the neural network. They designed the network with
bulked up output, producing not only a decision but also a new
probabilistic distribution capturing the evidence in support of
that decision. These distributions, termed evidential
distributions, directly capture the model’s confidence in its
prediction. This includes any uncertainty present in the underlying
input data, as well as in the model’s final decision. This
distinction can signal whether uncertainty can be reduced by
tweaking the neural network itself, or whether the input data are
just noisy.
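
In the published deep evidential regression formulation, those extra outputs parameterize a Normal-Inverse-Gamma distribution, so a prediction and both kinds of uncertainty fall out of a single forward pass. A minimal PyTorch-style sketch of such a head (layer sizes and the backbone are placeholders, not the authors' code):

    # Evidential regression head: one linear layer emits the four parameters of a
    # Normal-Inverse-Gamma distribution (gamma, nu, alpha, beta) per prediction.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EvidentialHead(nn.Module):
        def __init__(self, in_features):
            super().__init__()
            self.out = nn.Linear(in_features, 4)

        def forward(self, x):
            gamma, log_nu, log_alpha, log_beta = self.out(x).unbind(dim=-1)
            nu    = F.softplus(log_nu)              # > 0
            alpha = F.softplus(log_alpha) + 1.0     # > 1
            beta  = F.softplus(log_beta)            # > 0
            return gamma, nu, alpha, beta

    def predict_with_uncertainty(head, features):
        gamma, nu, alpha, beta = head(features)
        prediction = gamma
        aleatoric  = beta / (alpha - 1.0)           # noise in the data itself
        epistemic  = beta / (nu * (alpha - 1.0))    # uncertainty of the model
        return prediction, aleatoric, epistemic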

Confidence check

To put their approach to the test, the researchers started with
a challenging computer vision task. They trained their neural
network to analyze a monocular color image and estimate a depth
value (i.e. distance from the camera lens) for each pixel. An
autonomous vehicle might use similar calculations to estimate its
proximity to a pedestrian or to another vehicle, which is no simple
task.

Their network’s performance was on par with previous
state-of-the-art models, but it also gained the ability to estimate
its own uncertainty. As the researchers had hoped, the network
projected high uncertainty for pixels where it predicted the wrong
depth. “It was very calibrated to the errors that the network
makes, which we believe was one of the most important things in
judging the quality of a new uncertainty estimator,” Amini
says.

To stress-test their calibration, the team also showed that the
network projected higher uncertainty for “out-of-distribution”
data — completely new types of images never encountered during
training. After they trained the network on indoor home scenes,
they fed it a batch of outdoor driving scenes. The network
consistently warned that its responses to the novel outdoor scenes
were uncertain. The test highlighted the network’s ability to
flag when users should not place full trust in its decisions. In
these cases, “if this is a health care application, maybe we
don’t trust the diagnosis that the model is giving, and instead
seek a second opinion,” says Amini.

The network even knew when photos had been doctored, potentially
hedging against data-manipulation attacks. In another trial, the
researchers boosted adversarial noise levels in a batch of images
they fed to the network. The effect was subtle — barely
perceptible to the human eye — but the network sniffed out those
images, tagging its output with high levels of uncertainty. This
ability to sound the alarm on falsified data could help detect and
deter adversarial attacks, a growing concern in the age of deepfakes.

Deep evidential regression is “a simple and elegant approach
that advances the field of uncertainty estimation, which is
important for robotics and other real-world control systems,”
says Raia Hadsell, an artificial intelligence researcher at
DeepMind who was not involved with the work. “This is done in a
novel way that avoids some of the messy aspects of other approaches
—  e.g. sampling or ensembles — which makes it not only elegant
but also computationally more efficient — a winning
combination.”

Deep evidential regression could enhance safety in AI-assisted
decision making. “We’re starting to see a lot more of these
[neural network] models trickle out of the research lab and into
the real world, into situations that are touching humans with
potentially life-threatening consequences,” says Amini. “Any
user of the method, whether it’s a doctor or a person in the
passenger seat of a vehicle, needs to be aware of any risk or
uncertainty associated with that decision.” He envisions the
system not only quickly flagging uncertainty, but also using it to
make more conservative decisions in risky scenarios, like an
autonomous vehicle approaching an intersection.

“Any field that is going to have deployable machine learning
ultimately needs to have reliable uncertainty awareness,” he
says.

Originally published by Daniel Ackerman | MIT News Office | November 20, 2020 | MIT

This work was supported, in part, by the National Science
Foundation and Toyota Research Institute through the Toyota-CSAIL
Joint Research Center.

Original article

AI research helps Soldiers navigate complex situations

ADELPHI, Md.— Researchers at the U.S.
Army’s corporate research laboratory developed an artificial
intelligence architecture that can learn and understand complex
events, enhancing the trust and coordination between human and
machine needed to successfully complete battlefield missions.

The overall effort, conducted in collaboration with the University of
California, Los Angeles and Cardiff University, and funded by the
laboratory’s Distributed Analytics and Information Science International
Technology Alliances, addresses the challenge of sharing relevant knowledge
between coalition partners about complex events using
neuro-symbolic artificial intelligence.

Complex events are compositions of primitive activities
connected by known spatial and temporal relationships, said U.S.
Army Combat Capabilities Development Command, now referred to as
DEVCOM, Army Research Laboratory researcher Dr. Lance Kaplan. For
such events, he said, the training data available for machine
learning is typically sparse.

To further understand complex events, imagine people in a crowd
taking pictures of an iconic government building. The act of
picture taking involves primitive events/actions. Now, imagine that
some of the people are coordinating their picture taking for the
purpose of a reconnaissance mission. A certain sequence of
primitive events such as picture taking occurs. Clearly, it would
be good for a force protection system to detect and identify these
complex events without generating too many false alarms due to
random primitive events acting as clutter, Kaplan said.

This new neuro-symbolic architecture enables injection of human
knowledge through symbolic rules (i.e. tellability), while
leveraging the power of deep learning to discriminate between the
different primitive activities.

This is accomplished following a neuro-symbolic architecture
where the lower layer is composed of neural networks that are
connected through a logical layer to form the complex event
classification decision, Kaplan said. The symbolic layer
incorporates known rules that enable learning the lower layers
without having to train labeled data for the primitive
activities.

Two different approaches have been developed to enable learning
at the neural layers by propagating gradients through the logic
layer.

The first, Neuroplex, uses a neural surrogate for the symbolic
layer. The second, DeepProbCEP, uses DeepProbLog to propagate the
gradients.
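
The gradient-through-logic idea can be sketched conceptually (this is neither Neuroplex nor DeepProbCEP code): a neural classifier produces primitive-activity probabilities, a differentiable soft-logic rule scores the complex event, and the complex-event label alone trains the neural layers.

    # Conceptual sketch: soft (product/max) logic over primitive-activity
    # probabilities keeps the complex-event score differentiable, so gradients
    # reach the neural layers without primitive-level labels. Shapes and the
    # example rule are assumptions.
    import torch
    import torch.nn as nn

    class PrimitiveClassifier(nn.Module):
        """Neural layer: per-frame probabilities over primitive activities."""
        def __init__(self, feat_dim, n_primitives):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_primitives))

        def forward(self, frames):                       # frames: (time, feat_dim)
            return torch.softmax(self.net(frames), -1)   # (time, n_primitives)

    def soft_sequence_rule(prim_probs, pattern):
        """Soft score that the primitives in `pattern` occur in order:
        product (soft AND) of max-over-remaining-frames (soft 'occurs later')."""
        score, start = torch.tensor(1.0), 0
        for prim in pattern:
            later = prim_probs[start:, prim]
            score = score * later.max()                  # differentiable conjunction
            start = start + int(later.argmax()) + 1      # move past the matched frame
        return score

A binary cross-entropy loss on the rule score against the complex-event label would then backpropagate through the soft-logic layer into the primitive classifier.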

Neuroplex was evaluated against pure deep learning methods over
three types of complex events formed by a sequence of images, a
sequence of sound clips and a nursing activity data set collected
from motion capture, meditag and accelerometer sensors.

The experiments and evaluation showed that Neuroplex is capable
of learning to efficiently and effectively detect complex events,
which cannot be handled by state-of-the-art neural network
models.



During the training, Neuroplex not only reduced data annotation
requirements by one hundred times, but also significantly sped up
the learning process for complex event detection by four times.

Similarly, experiments on urban sound clips demonstrated more than a
twofold improvement in complex event accuracy for DeepProbCEP
over a two-stage neural network architecture.

“This research demonstrates the potential for
neuro-symbolic artificial intelligence architecture to learn how to
distinguish complex events with limited training samples,” Kaplan
said. “Furthermore, it is demonstrated that the systems can learn
primitive activities without the need for annotations of the simple
activities.”

In practice, he said, the initial layers of the neuro-symbolic
architecture are pre-trained, but the amount of data to collect and
label can be greatly reduced, lowering costs.

In addition, the neuro-symbolic architecture can leverage the
symbolic rules to update the neural layers using a small set of
labeled complex activities in situations where the raw data
distribution has changed. For example, in the field it is raining,
but the neural network models were trained on data collected in
sunny weather.

The neuro-symbolic learning is also able to combine the
superior pattern recognition capabilities of deep learning with
high-level symbolic reasoning.

“This means the AI system can naturally provide explanations
of its recommendations in a human understandable form,” Kaplan
said. “Ultimately, this will enable better trust and coordination
with the AI agent and human decision makers.”

The symbolic layer also enables tellability. In other words, he
said, the decision maker can define and update the rules that
connect primitive activities to complex events, both at the
initialization stage, where a perception module is untrained, as
well as during the fine-tuning stage, where a perception module is
a pre-trained off-the-shelf model and needs to be fine-tuned to a
specific environment.

“This research is able to leverage the advancements
in deep learning and symbolic reasoning,” Kaplan said. “The
dimensionality of the sensor data, whether it is time-series data
(e.g., accelerometers) or video, is infeasible for pure symbolic
reasoning. Similarly, deep learning is unable to learn patterns
that manifest over large time and spatial scales inherent in
complex events. The hybrid of neuro-symbolic learning is necessary
for complex event processing.”

The research directly supports the Army Priority Research Area
of artificial intelligence by increasing the speed and agility with
which we respond to emerging threats, Kaplan said. Specifically, the
research is focused on AI for detecting and classifying complex
events using limited training data to adapt to changing
environments and learn emerging events.

The work also supports network command, control, communications
and intelligence by engendering trust between AI and human agents
representing different coalition partners: the AI incorporates
symbolic explanations, and the human agents use symbolic rules to
guide the reasoning of the AI, he said.

“Complex event processing is difficult, and the
research is still in its infancy,” Kaplan said. “Nevertheless,
we have made great advancements over the last two years to develop
the neuro-symbolic framework. The research still needs to consider
more relevant data sets, and there are still other questions to
answer regarding the human-machine interface. In the long term, I
am confident that a neuro-symbolic AI will be available to Soldiers
to provide superior situational awareness to that available to the
adversary.”

Originally published by

U.S. Army DEVCOM Army Research Laboratory Public Affairs

November 17, 2020

This research was recently presented at the virtual International Conference
on Logic Programming, and will be featured during the upcoming ACM Conference
on Embedded Networked Sensor Systems, or SenSys 2020, scheduled virtually for
Nov. 16-19.

 

UChicago scientists turn IBM computer into a quantum material

UChicago scientists programmed an IBM quantum computer to
become a type of material called an exciton condensate.  Photo by
Andrew Lindemann/IBM
 
Pioneering experiment could help design energy-efficient
materials

In a groundbreaking study, a
group of University of Chicago scientists announced they were able
to turn IBM’s largest quantum computer into a quantum material
itself.

They programmed the computer such that it turned into a type of
quantum material called an exciton condensate, which has only
recently been shown to exist. Such condensates have been identified
for their potential in future technology, because they can conduct
energy with almost zero loss.

“The reason this is so exciting is that it shows you can use
quantum computers as programmable experiments themselves,” said
paper co-author David Mazziotti, a professor in the Department of
Chemistry, the James Franck Institute and the Chicago Quantum
Exchange, and an expert in molecular electronic structure. “This
could serve as a workshop for building potentially useful quantum
materials.”

For several years, Mazziotti has been watching as scientists
around the world explore a type of state in physics called an
exciton condensate. Physicists are very interested in these kinds
of novel physics states, in part because past discoveries have
shaped the development of important technology; for example, one
such state called a superconductor forms the basis of MRI
machines.

Though exciton condensates had been predicted half a century
ago, until recently, no one had been able to actually make one work
in the lab without having to use extremely strong magnetic fields.
But they intrigue scientists because they can transport energy
without any loss at all—something which no other material we know
of can do. If physicists understood them better, it’s possible
they could eventually form the basis of incredibly energy-efficient
materials.


To make an exciton condensate, scientists take a material made
up of a lattice of particles, cool it down to below -270 degrees
Fahrenheit, and coax it to form particle pairs called excitons.
They then make the pairs become entangled—a quantum phenomenon
where the fates of particles are tied together. But this is all so
tricky that scientists have only been able to create exciton
condensates a handful of times.

“An exciton condensate is one of the most quantum-mechanical
states you can possibly prepare,” Mazziotti said. That means
it’s very, very far from the classical everyday properties of
physics that scientists are used to dealing with.

Enter the quantum computer. IBM makes its quantum computers
available for people around the world to test their algorithms; the
company agreed to “loan” its largest, called Rochester, to
UChicago for an experiment.

Graduate students LeeAnn Sager and Scott Smart wrote a set of
algorithms that treated each of Rochester’s quantum bits as an
exciton. A quantum computer works by entangling its bits, so once
the computer was active, the entire thing became an exciton
condensate.
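
As a rough illustration of the building block involved (and emphatically not the researchers' actual algorithm), a few lines of Qiskit are enough to entangle a small register of qubits into one collective state:

    # Qiskit sketch: a chain of CNOT gates after a single Hadamard puts every
    # qubit into one GHZ-style entangled state. The qubit count is arbitrary
    # (Rochester has 53; a smaller register keeps the sketch readable).
    from qiskit import QuantumCircuit

    n_qubits = 8
    qc = QuantumCircuit(n_qubits)

    qc.h(0)                          # superposition on the first qubit
    for q in range(n_qubits - 1):
        qc.cx(q, q + 1)              # entangle each qubit with the next

    print(qc.draw())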

“It was a really cool result, in part because we found that
due to the noise of current quantum computers, the condensate does
not appear as a single large condensate, but a collection of
smaller condensates,” Sager said. “I don’t think any of us
would have predicted that.”

Mazziotti said the study shows that quantum computers could be a
useful platform to study exciton condensates themselves.

“Having the ability to program a quantum computer to act like
an exciton condensate may be very helpful for inspiring or
realizing the potential of exciton condensates, like
energy-efficient materials,” he said.

Beyond that, just being able to program such a complex quantum
mechanical state on a computer marks an important scientific
advance.

Because quantum computers are so new, researchers are still
learning the extent of what we can do with them. But one thing
we’ve known for a long time is that there are certain natural
phenomena that are virtually impossible to model on a classical
computer.

“On a classical computer, you have to program in this element
of randomness that’s so important in quantum mechanics; but a
quantum computer has that randomness baked in inherently,” Sager
said. “A lot of systems work on paper, but have never been shown
to work in practice. So to be able to show we can really do
this—we can successfully program highly correlated states on a
quantum computer—is unique and exciting.”

Originally published by Louise Lerner | November 12, 2020 | UChicago News | The University of Chicago

Citation: “Preparation of an exciton condensate of photons on
a 53-qubit quantum computer.” Sager, Smart, and
Mazziotti, Physical Review Research, Nov. 9, 2020. DOI: 10.1103/PhysRevResearch.2.043205

Funding: U.S. Department of Energy Office of Basic Energy
Sciences, National Science Foundation, U.S. Army Research
Office.


Original article

First breakthrough in the use of DeepMind's AI

What structure does a protein have? For the first time, this question appears to have been solved, and the answer came not from a scientist but from a computer.

Continue reading: First breakthrough in the use of DeepMind's AI

Test with 2,000 characters

The range of social media services is vast, and new ones appear all the time. While shooting stars trigger genuine hype within a few weeks, others drift along for years and/or disappear again entirely. But which platforms and tools are best suited to my social media activities? Do I have to be present everywhere? Given the sheer number of offerings, a certain wariness is understandable.

Monitoring for PR publications

PR announcements today are shared across media channels: usually starting with a release on the organization's own press page, they are then spread with the help of distribution services via press portals, news sites, magazines, blogs, content platforms and social media networks, as well as to journalists and editorial teams.

Yet the many ways announcements can be distributed pose new challenges for PR professionals when it comes to PR monitoring. They have to watch the entire media landscape, from press portals to social media, while not losing sight of the traditional media: print, radio and TV.