https://ieeexplore.ieee.org/document/9392657
Electronic data provide urgent information about the patient and support better-quality treatment.
Challenges of information retrieval from heterogeneous and distributed databases:
(i) resolving structural or data-model variability,
(ii) bridging the various data-querying syntaxes, and
(iii) mitigating the semantic heterogeneity of data introduced at storage time.
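As a toy illustration of challenges (i) and (iii), the sketch below maps records from two hypothetical sources into one canonical schema; the source names, field names, and unit conversion are invented for the example, and a real mediator would also have to bridge query syntaxes (ii).

```python
# Illustrative schema mediation: two sources store the same patient
# attributes under different names and units; a mapping layer normalizes
# both into one canonical record. All names here are hypothetical.
FIELD_MAP = {
    "hospital_a": {"pt_name": "name", "temp_f": "temperature_c"},
    "hospital_b": {"patientName": "name", "temp_c": "temperature_c"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Translate a source-specific record into the canonical schema."""
    out = {}
    for field, value in record.items():
        canonical = FIELD_MAP[source].get(field)
        if canonical == "temperature_c" and field.endswith("_f"):
            value = round((value - 32) * 5 / 9, 1)  # unit reconciliation
        if canonical:
            out[canonical] = value
    return out
```

Records from either source can then be queried uniformly, e.g. `to_canonical("hospital_a", {"pt_name": "Ada", "temp_f": 98.6})` yields a record keyed by `name` and `temperature_c`.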
Certain AI applications can also detect fake news about the disease by applying machine-
learning techniques for mining social media information, tracking down words that are
sensational or alarming, and identifying which online sources are deemed authoritative for
fighting what has been called an infodemic. Facebook, Google, Twitter and TikTok have
partnered with the WHO to review and expose false information about COVID-19. (What if
we could fight coronavirus with AI?)
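A minimal sketch of the "tracking down sensational or alarming words" idea is shown below. The keyword list and threshold are illustrative assumptions, not drawn from any deployed system; production detectors rely on trained classifiers rather than a fixed word list.

```python
# Toy lexicon-based flagger: score a post by the fraction of its tokens
# that come from a (hypothetical) list of sensational/alarming words.
import re

SENSATIONAL = {"shocking", "miracle", "cure", "exposed", "secret", "hoax"}

def sensational_score(text: str) -> float:
    """Fraction of tokens drawn from the sensational-word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in SENSATIONAL)
    return hits / len(tokens)

def flag(text: str, threshold: float = 0.15) -> bool:
    """Flag a post whose wording leans sensational."""
    return sensational_score(text) >= threshold
```

Real systems combine such lexical signals with source credibility and propagation features rather than using word counts alone.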
The WHO has warned that the global distribution of COVID-19 information through online platforms may facilitate the spread of misinformation. The WHO has been countering the infodemic through its platform, the Information Network for Epidemics (EPI-WIN), which shares information with key stakeholders.
A study was conducted that analyzed the first 110 Google search results for “Wuhan
Coronavirus.” This was done using the Health on the Net Foundation Code of
Conduct (HONcode), the Journal of the American Medical Association (JAMA)
benchmark, and the DISCERN Instrument (Cuan-Baltazar et al., 2020). HONcode
aims to help standardize the reliability of medical and health information available on
the World Wide Web and provides certification (The Health on the Net Foundation,
2019), and JAMA uses 4 core standards to evaluate websites: authorship, attribution,
disclosure, and currency (Cassidy & Baker, 2016). DISCERN is “an instrument, or
tool, that has been designed to help users of consumer health information judge the
quality of written information about treatment choices” (DISCERN, n.d.). Of the 110
sites analyzed, only 1.8% had the HONcode seal, 39.1% met none of the standards
set out by the JAMA benchmark and only 10.0% met all 4 standards. Additionally, the
DISCERN instrument evaluated 70.0% of websites as having a low score and did not
give a high score to any website. This evaluation demonstrates the ease with which
misinformation can be spread across the internet, as the first 110 sites have shown
little to no reliability in the information being conveyed. Unfortunately,
misinformation is not only circulating throughout the internet; it is being
interacted with far more than credible information. The result is a wider
spread of COVID-19, with people opposing vaccines and refusing to follow
public health advice.
One problem shared by all AI-based systems built to fight misinformation is
difficulty detecting images that look alike to a person but not to a computer.
For example, if an image is screenshotted from an existing post, it appears
identical to a human, but its pixels differ substantially at the byte level,
so the system struggles to flag the repost.
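One common way around the screenshot problem is perceptual hashing, where visually similar images hash to nearby bit strings even when their raw pixels differ. The sketch below implements a simple average hash over toy 4×4 "images"; the data and sizes are illustrative, and real systems use larger images and more robust hashes.

```python
# Average-hash sketch: each pixel contributes one bit depending on whether
# it is above the image's mean brightness. Near-identical images (e.g. a
# screenshot with slightly shifted pixel values) produce the same hash.
def average_hash(pixels):
    """pixels: 2D list of grayscale values. Returns a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(a: str, b: str) -> int:
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# A "screenshot": every pixel value shifted slightly, so raw bytes differ
# even though the image looks identical to a person.
screenshot = [[p + 3 for p in row] for row in original]
```

Here `hamming(average_hash(original), average_hash(screenshot))` is 0, so the repost can be matched despite the pixel-level differences that defeat exact comparison.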
To tackle misinformation, first and foremost, it is important to ensure that the only
sources that are being consulted are credible ones. For COVID-19, this can include
government and public health websites, university pages such as that of Johns
Hopkins, or research articles found through a variety of online databases.
In 2018, growing concern about the impact of online disinformation (information that is
false and deliberately created to harm a person, a social group, an organization, or a country)
prompted the European Commission to issue a series of measures, including an EU-wide
code of practice on disinformation. In 2020, the World Health Organization (WHO)
listed the “uncontrolled dissemination of misinformation,” including in the field of
vaccination, among its urgent health challenges for the next decade.
Roozenbeek et al. have found that susceptibility to misinformation may make people less
likely to report willingness to get vaccinated against COVID-19, and less likely to recommend
vaccination to vulnerable people in their social circle.
https://royalsocietypublishing.org/doi/10.1098/rsos.201199
In this work, we propose a robust, dynamic fake-news detection system that not only
estimates the “correctness” of a claim but also provides users with pertinent information
regarding that claim. This is achieved using a knowledge base of verified information
that can be constantly updated.
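The knowledge-base idea can be sketched as follows; the entries and the token-overlap (Jaccard) matcher below are illustrative stand-ins for whatever retriever the actual system uses, and the claims are invented for the example.

```python
# Hypothetical claim checker: match an incoming claim against a small,
# updatable knowledge base of verified statements and return the closest
# entry's verdict and explanation.
KNOWLEDGE_BASE = [
    {"claim": "drinking hot water cures covid",
     "verdict": "false",
     "explanation": "No beverage has been shown to cure COVID-19."},
    {"claim": "vaccines reduce severe covid illness",
     "verdict": "true",
     "explanation": "Clinical trials show reduced rates of severe disease."},
]

def jaccard(a: set, b: set) -> float:
    """Token-overlap similarity between two word sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def check_claim(claim: str):
    """Return (verdict, explanation) of the best-matching verified entry."""
    words = set(claim.lower().split())
    best = max(KNOWLEDGE_BASE,
               key=lambda e: jaccard(words, set(e["claim"].split())))
    return best["verdict"], best["explanation"]
```

Because the knowledge base is just data, new fact-checked entries can be appended as they are verified, which is what makes the system dynamic.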
Fake news detection: The paper (Riedel et al., 2017) uses traditional approaches with a
simple classifier model that employs term frequency (TF), term frequency–inverse
document frequency (TF-IDF), and the cosine similarity between vectors as features to
classify fake news. The authors provide a baseline for fake-news stance detection on the
Fake News Challenge (FNC-1) dataset.
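A from-scratch toy version of that feature idea is shown below: TF-IDF vectors for a headline and a body, plus their cosine similarity. The smoothing choice and example texts are assumptions for illustration, not the FNC-1 authors' code.

```python
# TF-IDF + cosine-similarity features, the core of the stance baseline.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def tfidf_vector(tokens, corpus):
    """corpus: list of token lists, used for document frequencies."""
    n = len(corpus)
    tf = Counter(tokens)
    vec = {}
    for term, count in tf.items():
        df = sum(1 for doc in corpus if term in doc)
        idf = math.log(n / df) + 1  # smoothed so shared terms keep weight
        vec[term] = count * idf
    return vec

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

headline = tokenize("vaccine shown safe in large trial")
related = tokenize("the trial results show the vaccine is safe")
unrelated = tokenize("stock markets closed higher on friday")
corpus = [headline, related, unrelated]

sim_related = cosine(tfidf_vector(headline, corpus), tfidf_vector(related, corpus))
sim_unrelated = cosine(tfidf_vector(headline, corpus), tfidf_vector(unrelated, corpus))
```

A classifier then uses such similarity scores (among other features) to decide whether a body agrees with, discusses, or is unrelated to a headline.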
In (Nie et al., 2019), the authors present a connected system consisting of three
homogeneous neural semantic matching models that perform document retrieval, sentence
selection, and claim verification on the FEVER Dataset (Thorne et al., 2018) jointly for fact
extraction and verification. Their Neural Semantic Matching Network (NSMN) is a
modification of the Enhanced Sequential Inference Model (ESIM) (Chen et al., 2017),
adding skip connections from the input to the matching layer and changing the output
layer to a max-pool plus one affine layer with ReLU activation. They use a three-stage
pipeline: given a claim, they first retrieve candidate documents from the corpus, then
retrieve candidate sentences from the selected documents, and finally classify the
sentence into one of three classes. They use a bidirectional LSTM (BiLSTM) to
encode the claim and sentences with GloVe (Pennington et al., 2014) and ELMo (Peters et
al., 2018) embeddings.
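The three-stage pipeline above can be sketched schematically as below. Token-overlap scoring stands in for the paper's neural matching models, the classification rule is a stub heuristic where the NSMN would run, and the corpus and threshold are invented for illustration.

```python
# Schematic three-stage verification pipeline:
# document retrieval -> sentence selection -> claim classification.
LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")

def overlap(claim, text):
    """Toy relevance score: number of shared lowercase tokens."""
    a, b = set(claim.lower().split()), set(text.lower().split())
    return len(a & b)

def verify(claim, corpus, k_docs=2, k_sents=2):
    # Stage 1: retrieve the top-k candidate documents.
    docs = sorted(corpus, key=lambda d: overlap(claim, d), reverse=True)[:k_docs]
    # Stage 2: select the top-k candidate sentences from those documents.
    sents = [s for d in docs for s in d.split(". ")]
    sents = sorted(sents, key=lambda s: overlap(claim, s), reverse=True)[:k_sents]
    # Stage 3: classify the claim against the evidence (stub rule standing
    # in for the neural classifier).
    best = max((overlap(claim, s) for s in sents), default=0)
    return (LABELS[0] if best >= 3 else LABELS[2]), sents
```

The real system replaces each stage's overlap score with a trained semantic matching network and can also emit REFUTES when the evidence contradicts the claim.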
Dataset: a manually curated dataset specific to COVID-19. The proposed dataset consists of
5,500 claim–explanation pairs. We scraped data from “Poynter,” a fact-checking website
that collects fake news. We then manually rephrased these false claims to generate true
claims that align with the explanations, creating an equal proportion of true-claim and
explanation pairs. The system reached 85.5% accuracy.
This paper introduces a novel automatic fake news credibility inference model, namely
FAKEDETECTOR. Based on a set of explicit and latent features extracted from the textual
information, FAKEDETECTOR builds a deep diffusive network model to learn the
representations of news articles, creators and subjects simultaneously. Extensive
experiments have been done on a real-world fake news dataset to compare
FAKEDETECTOR with several state-of-the-art models, and the experimental results have
demonstrated the effectiveness of the proposed model.
Results: 10% of Ebola-related tweets contained false or partially false information and 72%
were related to health.
12. Misinformation: A Threat to the Public's Health and the
Public Health System
While digital channels are valuable mechanisms for local health departments (LHDs) to
communicate accurate health information and guidance to the public, they have recently
become gateways for the rapid spread of mis- and disinformation.
Misinformation proliferates partly because people are more likely to accept advice and
information from friends, family, and people their community trusts. Responding to the
ramifications of inaccurate or false information can be very resource-intensive, placing a
strain on health departments.
More than two-thirds of anti-vaccine Web sites represent information as “scientific evidence”
to support the idea that vaccines are dangerous, and nearly one-third use anecdotes to
reinforce that perception. By targeting certain groups—including parents of children with
autism spectrum disorder, grieving mothers who have lost children, immigrant communities,
and religious groups—anti-vaccine proponents have sowed skepticism about the safety of
vaccinations. (“Anti-vaccination websites use ‘science’ and stories to support claims, study
finds”: anti-vaccine positions are frequently embedded with information about positive
behaviors like healthy eating and breastfeeding. ScienceDaily.)
13.
Twitter ensures that when users search for vaccine-related topics in the
UK, the first result is for the National Health Service. In the USA, the same
search first turns up a link to the Department of Health and Human
Services.
17. Too little, too late: social media companies’ failure to tackle
vaccine misinformation poses a real threat
People wishing to undermine trust in the vaccine don’t use outright lies. Instead, they lead
campaigns designed to undermine the institutions, companies, and people managing the
rollout. They post vaccine-injury stories and provide first-person videos detailing side effects
that are difficult to fact-check.
18.
We find that 10.47% of search results promote misinformative health products. We also observe
ranking bias, with Amazon ranking misinformative search results higher than debunking search results.
21. The effects of source expertise and trustworthiness on
recollection: the case of vaccine misinformation