How close are we to an accurate AI fake news detector?

In an ambitious effort to combat the harms of false content on social media and news websites, data scientists are getting creative.

The Large Language Models (LLMs) used to create chatbots like ChatGPT are still in their infancy, but they are already being recruited to detect fake news. With better detection, AI fake news checking systems could warn about, and ultimately counteract, serious harm from deepfakes, propaganda, conspiracy theories, and misinformation.

The next level of AI tools will personalize the detection of false content and protect us from it. For this ultimate leap into user-centered AI, data science must draw on behavioral science and neuroscience.

Recent research suggests that we may not always be aware when we are encountering fake news. Neuroscience helps uncover what goes on unconsciously. Biomarkers such as heart rate, eye movements and brain activity appear to change subtly in response to fake versus real content. In other words, these biomarkers can be “tells” that indicate whether we have been taken in or not.

For example, when people look at faces, eye tracking data shows that we look for blink rates and changes in skin color caused by blood flow. If such elements appear unnatural, it can help us decide that we are dealing with a deepfake. This knowledge can give AI an advantage – we can, among other things, train it to mimic what humans are looking for.

A personalized AI fake news checker is taking shape, built on insights from human eye movement data and electrical brain activity that reveal which types of false content have the greatest neural, psychological and emotional impact, and on whom.

Given our specific interests, personality, and emotional reactions, an AI fact-checking system could detect and predict which content would trigger the strongest reactions in us. This could help identify when people are being tricked and what type of material fools them most easily.

Counteracting harm

Next, the protective measures need to be worked out. To shield ourselves from the harms of fake news, we also need systems that can intervene – a kind of digital countermeasure against fake news. There are several ways to do this, such as warning labels, links to expert-approved credible content, and even encouraging people to consider different perspectives when reading.

Our own personalized AI fake news checker could be designed to give each of us one of these countermeasures to eliminate the harm caused by false online content.

Such technology is already being tested. Researchers in the US studied how people interact with a personalized AI fake news checker for social media posts. It learned to trim a newsfeed down to the posts it judged to be true. As a proof of concept, another study using social media posts attached additional news content to each post to encourage users to consider alternative perspectives.
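To make the filtering idea concrete, here is a minimal sketch of how such a checker might trim a newsfeed. Everything in it is a hypothetical stand-in – the `estimate_credibility` scorer, the red-flag phrases, and the 0.7 threshold are illustrative assumptions, not the models used in those studies.

```python
# Hypothetical sketch of a newsfeed filter like the one described above.
# estimate_credibility is a placeholder for whatever trained model a real
# checker would use; here it is a trivial stand-in for illustration.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def estimate_credibility(post: Post) -> float:
    """Placeholder scorer returning a value between 0 (fake) and 1 (true).

    A real system would use a trained classifier; this stub just
    penalizes a couple of sensationalist phrases for demonstration.
    """
    red_flags = ["miracle cure", "they don't want you to know"]
    flags_found = sum(flag in post.text.lower() for flag in red_flags)
    return max(0.0, 1.0 - 0.5 * flags_found)


def filter_feed(feed: list[Post], threshold: float = 0.7) -> list[Post]:
    """Keep only the posts the checker judges likely to be true."""
    return [post for post in feed if estimate_credibility(post) >= threshold]


feed = [
    Post("newsdesk", "City council approves new bike lanes."),
    Post("clickbait", "Doctors hate this miracle cure!"),
]
for post in filter_feed(feed):
    print(post.author, "-", post.text)
# Only the first post survives the filter.
```

The point of the sketch is the shape of the system, not the scoring rule: swap in any model that maps a post to a credibility score and the filtering step stays the same.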

Accurate detection of fake news

But whether this all sounds impressive or dystopian, before we get carried away, it might be worth asking a few basic questions.

Much, if not all, of the work on fake news, deepfakes, disinformation, and misinformation runs into the same problem that any lie detector faces.

There are many kinds of lie detector, not just the polygraph test. Some rely solely on linguistic analysis. Others are systems designed to read people's faces for micro-expressions that betray a lie. There are also AI systems that try to recognize whether a face is real or a deepfake.

Before any detection begins, we all need to agree on what a lie looks like in order to recognize one. In deception research this is actually easier, because you can instruct people when to lie and when to tell the truth. That gives you a way of knowing the ground truth before you train a human or a machine to tell the difference, since they are provided with examples on which to base their judgments.

How good an experienced lie detector is depends on how often they spot a lie when there is one (hits), but also on how rarely they judge someone to be telling the truth when they are actually lying (misses). They also need to recognize the truth when they see it (correct rejections) and avoid accusing someone of lying when they were telling the truth (false alarms). This is signal detection, and the same logic applies to fake news detection.

For an AI system to detect fake news very accurately, the hits have to be very high (say 90%), which means the misses have to be very low (say 10%), and the false alarms also have to stay low (say 10%), so that real news is not labeled fake. When an AI or human fact-checking system is recommended to us, signal detection gives us a way to assess how good it actually is.
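As a minimal sketch of this signal detection bookkeeping, the code below counts hits, misses, false alarms and correct rejections over a set of labeled articles. The labels and numbers are toy data for illustration only, not results from any published system.

```python
# Minimal sketch of the signal detection logic described above.
# "fake" is treated as the signal the detector is trying to catch.
# The labels and predictions here are toy data, not real measurements.

def signal_detection_rates(actual, predicted):
    """Compute hit, miss, false alarm and correct rejection rates.

    actual, predicted: equal-length lists of "fake" / "real" labels.
    """
    hits = misses = false_alarms = correct_rejections = 0
    for truth, guess in zip(actual, predicted):
        if truth == "fake" and guess == "fake":
            hits += 1                 # fake news correctly flagged
        elif truth == "fake" and guess == "real":
            misses += 1               # fake news that slipped through
        elif truth == "real" and guess == "fake":
            false_alarms += 1         # real news wrongly called fake
        else:
            correct_rejections += 1   # real news correctly left alone

    n_fake = hits + misses
    n_real = false_alarms + correct_rejections
    return {
        "hit_rate": hits / n_fake if n_fake else 0.0,
        "miss_rate": misses / n_fake if n_fake else 0.0,
        "false_alarm_rate": false_alarms / n_real if n_real else 0.0,
        "correct_rejection_rate": correct_rejections / n_real if n_real else 0.0,
    }


# Toy example: 10 articles, 5 fake and 5 real.
actual = ["fake"] * 5 + ["real"] * 5
predicted = ["fake", "fake", "fake", "fake", "real",   # one miss
             "fake", "real", "real", "real", "real"]   # one false alarm

print(signal_detection_rates(actual, predicted))
# -> hit rate 0.8, miss rate 0.2, false alarm rate 0.2, correct rejection rate 0.8
```

Note that the hit rate and false alarm rate are independent: a detector that flags everything as fake scores 100% on hits but also 100% on false alarms, which is why both numbers matter.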

As a recent review reported, there may be cases where news content is neither completely false nor completely true, but only partially accurate. We know this because the speed of news cycles means that what is thought accurate at one point in time may later turn out to be inaccurate, or vice versa. So a fake news checking system has its work cut out.

Even if we know in advance which news is fake and which is real, how accurately can biomarkers subconsciously signal which is which? The answer is: not very. Neural activity is mostly the same when we encounter real and fake news articles.

When it comes to eye tracking studies, it is important to know that these techniques collect different types of data (e.g. the duration our eyes fixate on an object, or the frequency with which our eyes move across a visual scene).

Depending on what is being analyzed, some studies show that we pay more attention when viewing false content, while others show the opposite.

Are we there yet?

AI fake news detection systems on the market already use insights from behavioral science to flag and alert us to fake news content. So it is not hard to imagine the same AI systems popping up in our news feeds, with protections tailored to our unique user profiles. The problem is that we still have a lot of ground to cover – both to find out what actually works and to ask whether this is what we want.

In the worst-case scenario, we treat fake news as purely an online problem and use that as an excuse to deploy AI to solve it. But false and inaccurate content is everywhere, and is discussed offline too. Nor do we automatically believe all fake news; we sometimes use it in discussions to illustrate bad ideas.

In a possible best-case scenario, data science and behavioral science accurately establish the magnitude of the various harms that fake news can cause. But even then, AI applications combined with scientific wizardry could still be a very poor substitute for less sophisticated but more effective solutions.

Magda Osman, Professor of Policy Impact, University of Leeds

This article is republished from The Conversation under a Creative Commons license. Read the original article.
