
TRUE OR FALSE. THE BIG INTERVIEW. How did post-truth come to occupy such an important place in our daily lives?

Misinformation, fake news, hoaxes, conspiracy theories… Fake news, widely distributed via social networks, is now part of our everyday lives. In January 2022, a report on the phenomenon commissioned by the Presidency of the Republic underlined the dangers of the “information disruption in the digital age”, which threatens to eliminate “the common space (…) that is necessary for the discussion of opinions, ideas and values, i.e. for democratic life”.

How does fake news spread? Why does it manage to divide our society? To answer these questions, franceinfo interviewed American researcher James Owen Weatherall, professor of logic and philosophy of science at the University of California. He is co-author of The Misinformation Age (Yale University Press, 2019), a book that analyzes the social dynamics involved in the spread of disinformation.

Franceinfo: Disinformation is not a new phenomenon, so why does the problem seem more worrying today?

James Owen Weatherall: For at least two reasons. The first is the emergence of new technologies – notably social networks and their algorithms – that make it easier to disseminate disinformation to a wide audience and harder to detect. The second is an evolution in the goals of disinformation. Increasingly, its purpose is to create uncertainty and instability and to undermine trusted institutions, rather than to convince people that a given piece of fake news is true. Although, of course, it sometimes succeeds at that too.

What are the main issues currently being targeted by misinformation?

They are numerous and vary from country to country. One of the main issues, especially in Europe, is the war in Ukraine. Both sides and their allies are trying to convince the world (or their own citizens) that they are winning the conflict. Another issue that continues to be targeted is vaccinations, be they traditional vaccines for children or, of course, those against Covid. In the United States, misinformation about the reliability of the electoral process is also a major problem.

Why do some people trust fake news? Is there a tipping point where they stop trusting traditional media?

For some people, it’s because the fake news tells them what they want to hear and fits their worldview. Others have stopped trusting traditional news sources and treat all media, good and bad, as equivalent. Finally, some people don’t actually believe the fake news, but share it to signal their political affiliation or to get reactions from others.

I don’t know if there’s a clear tipping point, but I think a lot of people feel that the mainstream media is biased and trying to mislead them.

What is the responsibility of the traditional media in this crisis of confidence?

Sometimes it’s simply because the media aren’t saying what people want to hear. But there are other factors. For example, in some areas of science journalism, there is a tendency to highlight studies with surprising or counterintuitive results. This can create the impression that scientific knowledge is constantly being overturned, which in turn undermines trust in science journalism.

In other areas, it is very difficult, if not impossible, to write an article without taking a specific point of view or siding with certain values. People who don’t share these values might conclude that the mainstream media is biased.

Does fake news affect certain groups of people more than others?

This phenomenon can affect anyone, regardless of their social background and education. First of all, I think that websites masquerading as legitimate journalism but spreading false information are only a very small part of the misinformation problem.

The real problem is more subtle: memes [images or videos repurposed, often for humorous ends] can spread misinformation without appearing to say anything false, while eliciting emotional responses. Politicians or economic actors can spread misleading information based on facts that are partially or even entirely accurate. Finally, misinformation can also be spread by “non-experts”: people who do their own “research” online without the background needed to put what they read in academic journals into context.

“Fake news can convince very well-educated people like Elon Musk. Consider how widespread the belief was that Iraq was developing weapons of mass destruction in the early 2000s. Or the number of very knowledgeable people who shared or believed false claims about hydroxychloroquine in 2020.”

James Owen Weatherall

to franceinfo

>> True or False: Will Elon Musk’s Takeover of Twitter Lead to More Fake News?

I also think that, given the politicized nature of disinformation, it is counterproductive to portray people who believe fake news as unintelligent or uneducated. What’s more, that portrayal is itself a tool of manipulation, designed to increase the polarization of society. Many producers of misinformation use it to persuade one group (those who don’t believe the misinformation) that the other group (those who share or believe the fake news) is stupid. Who is really falling into the trap in that case?

Fake news is often shared in communities, on social networks or in real life…

Small communities of like-minded people are often more receptive to information shared by other members of the group. The two key factors here are trust and conformity. Very often, these people trust the members of their group more than they trust the mainstream media. Sharing certain types of information can be a way to fit in with other people, but it can also signal belonging to a group. Online communities can amplify these effects by making it easy for like-minded people living in different places to find each other.

What role do the algorithms used in social networks play in the formation of information bubbles?

The existence of information bubbles is a complex issue: numerous studies have shown that most people are exposed to a variety of information sources. What is perhaps even more important is how they react to that information. The main job of the algorithms is to highlight content that generates high engagement, especially emotionally charged content, which in turn provokes strong reactions (positive or negative).

How do countries like Russia use disinformation to reach Western audiences?

Russian intelligence and the Internet Research Agency (IRA), a Russian-backed organization, study Western media and culture very closely. They create content – often memes – designed to sow division, and they tend to do this in the run-up to elections in the United States and Western Europe. The goal is partly to help a particular candidate, but also to destabilize Western political institutions. For example, Russia circulated content targeting supporters of the Black Lives Matter movement in the summer of 2016.

What differences or similarities do you see in Europe and the United States in relation to disinformation phenomena?

There are many similarities. In both cases, Russia is a major source of destabilizing disinformation. The political controversies are similar, with meme-based disinformation being one of the main tools of an ethno-nationalist right on the rise in several European countries and in the United States. We also see many conspiracy theories, particularly about the safety and effectiveness of vaccines or the severity of Covid, circulating on both sides of the Atlantic.

And yet there are differences. For example, it appears that the wave of misinformation observed during the 2020 US presidential election has not been repeated in Western Europe. In addition, the European Union and several member states are much more proactive than the United States when it comes to regulating social media. It will be very interesting to measure the effectiveness of these efforts.

How do you rate the effectiveness of initiatives to combat disinformation, such as fact checking or media literacy?

It’s difficult to evaluate these initiatives, but I haven’t seen much evidence of their effectiveness.

“Fact-checking often only succeeds in giving a wider audience to the fake news it is trying to refute.”

James Owen Weatherall

to franceinfo

Some of the more effective methods include removing problematic content from social media and making it harder for false information to be shared and viewed by adding steps to the posting process.

So what solutions should be implemented to reduce misinformation?

The biggest changes that need to be made are algorithmic. The companies that run social media platforms value user engagement and time spent on their sites. Sensational and emotionally charged information increases engagement, but this type of content is often misleading. The algorithms need to be changed so that they take the truth and accuracy of content into account.
