Fake News

Advances in technology have, over the years, multiplied the outlets through which news is shared. Information now spreads very fast through mediums such as social media platforms; a Facebook post or a tweet is often treated as news. The ease of access to these platforms has made it possible for some users to exploit them to spread fake and malicious content, and the negative consequences of fake news are tremendous. The executives of these social media channels therefore have an essential responsibility to limit the abuse of their platforms as outlets for fake news.

Artificial Intelligence (AI) is a powerful tool that developers can use to limit the spread of fake news on social media. They can design an automated AI system that quickly identifies and blocks content judged to be fake news.

The question of whether black-box models are appropriate for detecting fake news is a crucial one. A black-box model will label certain content as fake news even when the reasoning behind that conclusion is not understandable to humans. And since the traditional belief is that the most accurate models are the uninterpretable ones, we are expected to accept all of these predictions. This train of thought is absurd.

Recently, news about police brutality in Nigeria surfaced on the internet. Many users showed their solidarity with Nigeria by sharing photos of victims, protesters, and other related images. Instagram's algorithms incorrectly began flagging these posts as false information. The company later apologized for the error, but the damage had already been done. This is an example of what can happen if we continue treating complicated black-box models as synonymous with accuracy.

An interpretable model for detecting fake news would be ideal and effective in avoiding such mistakes. Human behavior is dynamic, and many factors can change in ways that limit the accuracy of a predictive model. For instance, the population on which the black-box model was trained may shift over time. If fake news mongers realize that their content is being blocked when they use certain words, they will refine it to try to beat the system. If there is open communication between the model and its users, such mistakes can be identified and corrected, making the system more accurate.

Similar to the Explainable AI competition, detecting fake news on outlets like social media platforms does not require a black-box model. It is possible to build an accurate system simply by analyzing data on previous cases of fake news shared on these sites. For a high-stakes decision such as declaring information fake, a model that is transparent to the developers, the source, and the audience is necessary. Such a model would give comprehensible reasons for why it classified particular content as fake news. It could do this by using a checklist of variables, such as verification of the source, the timeline, or the wording used, that are comprehensible to all parties involved. The result would be an AI model that is interpretable and, thus, better equipped to serve humans.
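The checklist idea above can be made concrete with a small sketch. The rules, weights, and field names below (such as `source_verified` or `hours_since_event`) are purely illustrative assumptions, not a real detection system; the point is only that every flag the model raises comes with a reason a human can read and contest.

```python
# Hypothetical checklist-style fake-news scorer.
# Every check is a transparent, human-readable rule, so any party
# (developer, source, or audience) can see exactly why a post was flagged.

def fake_news_checklist(post):
    """Return the list of human-readable reasons a post looks suspicious."""
    reasons = []

    # Rule 1: is the source verified? (illustrative field name)
    if not post.get("source_verified", False):
        reasons.append("source is not verified")

    # Rule 2: timeline check (illustrative threshold of 72 hours)
    if post.get("hours_since_event", 0) > 72:
        reasons.append("shared long after the reported event")

    # Rule 3: wording check against an illustrative word list
    sensational = {"shocking", "unbelievable", "miracle"}
    words = set(post.get("text", "").lower().split())
    if words & sensational:
        reasons.append("uses sensational wording")

    return reasons  # an empty list means no checks fired


post = {
    "source_verified": False,
    "text": "Shocking miracle cure found",
    "hours_since_event": 5,
}
print(fake_news_checklist(post))
```

Because each rule is independent and named, a user whose post is flagged can dispute a specific reason, which is exactly the open communication between model and users that the argument calls for.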
