
Notes: vaccine misinformation-related work; look at seminext, not accuracy; implementation is important.
 A Simplified Architecture to Integrate and Interoperate
Heterogeneous and Distributed Healthcare Data

https://ieeexplore.ieee.org/document/9392657
Electronic health data provide urgent information about the patient and enable better-quality treatment.
Challenges of information retrieval from heterogeneous and distributed databases:
(i) resolving the structural or data model variability,
(ii) bridging among various data querying syntaxes, and
(iii) mitigating semantic heterogeneity of data created at data-storage time.

The paper proposes a simplified architecture to integrate and interoperate data among databases that have distinct natures and query languages and were developed by different vendors.
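The mediator idea behind such an architecture can be sketched roughly as follows. All class names, the patient table, and the query shapes are illustrative assumptions for the sketch, not details taken from the paper:

```python
# Hypothetical mediator sketch: one logical lookup is translated into each
# backend's native query syntax by a per-vendor adapter.

class SQLAdapter:
    """Wraps a relational source; translates a field/value lookup to SQL."""
    def translate(self, field, value):
        return f"SELECT * FROM patients WHERE {field} = '{value}'"

class MongoAdapter:
    """Wraps a document source; translates the same lookup to a Mongo-style filter."""
    def translate(self, field, value):
        return {field: value}

class Mediator:
    """Single entry point: dispatches one logical query to every adapter."""
    def __init__(self, adapters):
        self.adapters = adapters

    def query(self, field, value):
        return [adapter.translate(field, value) for adapter in self.adapters]

mediator = Mediator([SQLAdapter(), MongoAdapter()])
print(mediator.query("patient_id", "P-42"))
```

The point of the pattern is that callers never see vendor-specific syntax; adding a new backend means adding one adapter, not touching callers.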

 Coronavirus vaccines: Fake news and myths go viral


 Claim that the BioNTech/Pfizer vaccine alters human DNA
 Claim that Peru's government is carrying out forced vaccinations

1. Ethical Implications of the Use of AI to Manage the COVID-19 Outbreak
In this Research Brief, the current and potential uses for AI-based tools in managing
pandemics are outlined and the ethical implications of these efforts are discussed.

AI can be useful for information dissemination during a crisis. An infodemic is a phenomenon in which populations are overwhelmed with information, prevalently from non-reliable sources (Bullock, 2020). AI-based tools can also be used to identify deep fakes, fake news and misinformation and to model the spread of disinformation on social platforms (Bullock, 2020).

Certain AI applications can also detect fake news about the disease by applying machine-
learning techniques for mining social media information, tracking down words that are
sensational or alarming, and identifying which online sources are deemed authoritative for
fighting what has been called an infodemic. Facebook, Google, Twitter and TikTok have
partnered with the WHO to review and expose false information about COVID-19. (What if
we could fight coronavirus with AI?)
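The word-tracking idea above can be illustrated with a minimal sketch; the sensational-word list and threshold are invented for the example and far simpler than the machine-learning techniques these applications actually use:

```python
# Toy keyword flagger: mark a post when it uses sensational/alarming terms.
# The word list is illustrative, not a real production lexicon.

SENSATIONAL = {"miracle", "cure", "shocking", "secret", "exposed", "hoax"}

def flag_post(text, threshold=1):
    """Return True when a post contains at least `threshold` sensational words."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return len(words & SENSATIONAL) >= threshold

print(flag_post("Shocking secret cure the doctors won't tell you!"))   # flagged
print(flag_post("Official vaccination schedule updated for spring."))  # not flagged
```

A real system would combine such lexical signals with source-authority features and a trained classifier rather than a fixed list.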
The WHO has warned that the global distribution through online platforms of information
regarding COVID-19 might lead to the facilitated spread of misinformation. The WHO has
been facing the infodemic by using its platform, the Information Network for Epidemics (EPI-WIN), in order to share information with key stakeholders.

AI can be used to develop vaccines and treatment:


 For instance, DeepMind, Google’s AI division, is using its computing power to
analyze the proteins that might be components of the virus in order to support
researchers in developing treatments
 It can also be employed to develop individual treatment plans based on
patient history and response to treatments during care.

2. ROLES OF TECHNOLOGY DURING COVID-19


Modern technology is one of the key distinguishing factors between COVID-19 and
pandemics of the past. Misinformation is defined as false, inaccurate, or otherwise
misleading information, especially when it is meant to purposefully deceive an
audience. The role that the internet and media have played in the perpetuation of
misinformation has significantly harmed the fight against COVID-19. In fact, in the
month of April alone, fifty million posts about COVID-19 were disseminated on
Facebook, with two and a half million of these being ads for face masks, testing kits
and other products that were banned from advertising on the platform (Perry, 2020).

According to K. Viswanath, the Lee Kum Kee Professor of Health Communication at the Harvard T.H. Chan School of Public Health, “the sheer volume of COVID-19 misinformation and disinformation online is ‘crowding out’ the accurate public health guidance, ‘making [their] work a bit more difficult’”.

A study was conducted that analyzed the first 110 Google search results for “Wuhan
Coronavirus.” This was done using the Health on the Net Foundation Code of
Conduct (HONcode), the Journal of the American Medical Association (JAMA)
benchmark, and the DISCERN Instrument (Cuan-Baltazar et al., 2020). HONcode
aims to help standardize the reliability of medical and health information available on
the World-Wide Web and provides certification (The Health on the Net Foundation,
2019), and JAMA uses 4 core standards to evaluate websites: authorship, attribution,
disclosure, and currency (Cassidy & Baker, 2016). DISCERN is “an instrument, or
tool, that has been designed to help users of consumer health information judge the
quality of written information about treatment choices” (DISCERN, n.d). Of the 110
sites analyzed, only 1.8% had the HONcode seal, 39.1% met none of the standards
set out by the JAMA benchmark and only 10.0% met all 4 standards. Additionally, the
DISCERN instrument evaluated 70.0% of websites as having a low score and did not
give a high score to any website. This evaluation demonstrates the ease with which
misinformation can be spread across the internet, as the first 110 sites have shown
little to no reliability in the information that is being conveyed. Unfortunately,
misinformation is not only circulating throughout the internet, but is in fact being
interacted with far more than credible information. The result of this is an increase in
the spread of COVID-19, and people opposing vaccines and refusing to listen to
public health advice.

One problem that all AI-based systems built to fight misinformation face is difficulty detecting images that look identical to a person but not to a computer. For example, if an image is screenshotted from an existing post, it appears identical to a human; however, its pixel values differ enough that a computer has difficulty flagging the post as a duplicate.
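One common workaround (not necessarily the one any specific platform uses) is perceptual hashing, which summarizes image content rather than comparing raw bytes. A toy average-hash over a 4x4 grayscale grid shows why two slightly re-encoded copies can still match:

```python
# Toy average hash: one bit per pixel, 1 if the pixel is brighter than the
# image mean. Two images whose raw pixels differ slightly (e.g. a screenshot
# re-encoded with small noise) still produce the same hash, while a
# byte-for-byte comparison would call them different.
# This is a 4x4 illustration, not a production pHash.

def average_hash(pixels):
    """Return a bit string: 1 per pixel brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]

# A "screenshot": every pixel shifted slightly by re-encoding noise.
screenshot = [[p + 3 for p in row] for row in original]

assert original != screenshot                               # raw pixels differ
assert average_hash(original) == average_hash(screenshot)   # hashes still match
```

Because the hash depends on each pixel's brightness relative to the image mean, a uniform shift or small noise leaves the bit pattern unchanged.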

In addition to the
general anti-vaccine arguments that are often made by opposing groups, such as the
claim that they cause autism and a slew of other health complications, the COVID-19
vaccine has come with some rumours of its own. Some of the most prominent pieces
of misinformation currently making their way across the web are the rumours
surrounding Bill Gates’ involvement with the virus, with individuals and groups
claiming that he had created the virus and patented the vaccine or that he would use
these vaccines to track and control individuals (Ball & Maxmen, 2020). A study conducted in May 2020 found that 10% of adults and 31% of children and teenagers have shared fake news stories online (Watson, 2020).

To tackle misinformation, first and foremost, it is important to ensure that the only
sources that are being consulted are credible ones. For COVID-19, this can include
government and public health websites, university pages such as that of Johns Hopkins, or research articles found through a variety of online databases.

3. Online mis/disinformation and vaccine hesitancy in the era of COVID-19: Why we need an eHealth literacy revolution
eHealth literacy, a skill set that includes media literacy, is key to navigating the web in search of health information and processing the information encountered through social media. eHealth and media literacies should be viewed as fundamental skills that can empower citizens to better recognize online mis/disinformation and make informed decisions about vaccination, as about any other health matter. Norman and Skinner first introduced the term eHealth literacy in 2006, defining it as “the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to preventing, addressing or solving a health problem.” 18

In 2018, growing concern about the impact of online disinformation (information that is
false and deliberately created to harm a person, a social group, an organization or a country)
prompted the European Commission to issue a series of measures, including an EU-wide
code of practice on disinformation. 6 In 2020, the World Health Organization (WHO) listed the “uncontrolled dissemination of misinformation,” including in the field of vaccination, among its urgent health challenges for the next decade. 7

Roozenbeek et al. have found that susceptibility to misinformation may make people less likely to report willingness to get vaccinated against COVID-19, and less likely to recommend vaccination to vulnerable people in their social circle.
https://royalsocietypublishing.org/doi/10.1098/rsos.201199

4. Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking
The study developed a two-stage automated pipeline for COVID-19 fake news detection using state-of-the-art machine learning models for natural language processing. The first
model leverages a novel fact checking algorithm that retrieves the most relevant facts
concerning user claims about particular COVID-19 claims. The second model verifies the
level of “truth” in the claim by computing the textual entailment between the claim and the
true facts retrieved from a manually curated COVID-19 dataset. The dataset is based on a
publicly available knowledge source consisting of more than 5000 COVID-19 false claims
and verified explanations, a subset of which was internally annotated and cross-validated to
train and evaluate our models.

In this work, we propose a robust, dynamic fake news detection system that not only estimates the “correctness” of a claim but also provides users with pertinent information
regarding the said claim. This is achieved using a knowledge base of verified information
that can be constantly updated.
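The two-stage flow can be sketched in miniature, with simple word overlap standing in for both the retrieval model and the learned entailment model; the knowledge-base entries below are invented examples, not the paper's dataset:

```python
# Toy two-stage pipeline: (1) retrieve the most relevant verified fact for a
# claim, (2) score the claim against that fact. Jaccard word overlap is a
# crude stand-in for the transformer-based entailment model in the paper.

KNOWLEDGE_BASE = [
    "covid-19 vaccines do not alter human dna",
    "masks reduce transmission of respiratory viruses",
]

def retrieve_fact(claim):
    """Stage 1: pick the fact sharing the most words with the claim."""
    claim_words = set(claim.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda fact: len(claim_words & set(fact.split())))

def overlap_score(claim, fact):
    """Stage 2 stand-in for textual entailment: Jaccard word overlap."""
    a, b = set(claim.lower().split()), set(fact.split())
    return len(a & b) / len(a | b)

claim = "the vaccines alter human dna"
fact = retrieve_fact(claim)
print(fact, round(overlap_score(claim, fact), 2))
```

The structure mirrors the paper's design: a constantly updatable knowledge base feeds stage one, so the system stays current without retraining the verifier.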

Fake news detection: The paper (Riedel et al., 2017) uses traditional approaches with a simple classifier model that takes term frequency (TF), term frequency–inverse document frequency (TF-IDF), and the cosine similarity between vectors as features to classify fake news. It provides a baseline for fake news stance detection on the Fake News Challenge (FNC-1) dataset.
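The TF-IDF and cosine-similarity features named above can be computed in a few lines. The toy corpus and headline/body pair are invented for illustration; a real FNC-1 baseline would use the full training vocabulary and feed these features into a trained classifier:

```python
# Minimal TF-IDF + cosine similarity, pure Python.
import math

CORPUS = [
    "vaccine trials show strong safety data",
    "new vaccine does not alter human dna",
    "stock markets fall on inflation fears",
]

def build_idf(corpus):
    """Inverse document frequency over a small corpus."""
    n = len(corpus)
    vocab = {w for doc in corpus for w in doc.split()}
    return {w: math.log(n / sum(w in doc.split() for doc in corpus)) for w in vocab}

def tfidf(doc, idf_map):
    """TF-IDF vector as a sparse dict; out-of-vocabulary words get weight 0."""
    words = doc.lower().split()
    return {w: words.count(w) / len(words) * idf_map.get(w, 0.0) for w in set(words)}

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

idf_map = build_idf(CORPUS)
headline = "vaccine does not alter dna"
body = "new vaccine does not alter human dna"
score = cosine(tfidf(headline, idf_map), tfidf(body, idf_map))
print(round(score, 3))
```

In the stance-detection setting, a high headline-body cosine suggests the body actually discusses the headline's claim; the classifier learns the decision boundary on top of such features.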

In (Nie et al., 2019), the authors present a connected system consisting of three
homogeneous neural semantic matching models that perform document retrieval, sentence
selection, and claim verification on the FEVER Dataset (Thorne et al., 2018) jointly for fact
extraction and verification. Their Neural Semantic Matching Network (NSMN) is a
modification of the Enhanced Sequential Inference Model (ESIM) (Chen et al., 2017), where
they add skip connections from the input to the matching layer and change the output layer to only max-pool plus one affine layer with ReLU activation. They use a three-stage pipeline: given a claim, they first retrieve candidate documents from the corpus, then retrieve candidate sentences from the selected documents, and finally classify each sentence into one of three classes. They use a Bidirectional LSTM (BiLSTM) to encode the claim and sentences using GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) embeddings.
Dataset: A manually curated dataset specific to COVID-19. The proposed dataset consists of 5500 claim and explanation pairs. We scraped data from “Poynter”, a fact-checking website which collects fake news. We further manually rephrase these false claims to generate true claims that align with the explanations, so as to create an equal proportion of true-claim and explanation pairs. The pipeline reached 85.5% accuracy.

5. A Smart System for Fake News Detection Using Machine Learning
This paper demonstrates a model and methodology for fake news detection. With the help of machine learning and natural language processing, the authors aggregate news items and then determine whether each is real or fake using a Support Vector Machine. The results of the proposed model are compared with existing models; the proposed model performs well, reaching up to 93.6% accuracy.

6. FakeDetector: Effective Fake News Detection with Deep Diffusive Neural Network
This paper aims at investigating the principles, methodologies and algorithms for detecting
fake news articles, creators and subjects from online social networks and evaluating the
corresponding performance.
This paper addresses the challenges introduced by the unknown characteristics of fake
news and diverse connections among news articles, creators and subjects.

This paper introduces a novel automatic fake news credibility inference model, namely
FAKEDETECTOR. Based on a set of explicit and latent features extracted from the textual
information, FAKEDETECTOR builds a deep diffusive network model to learn the
representations of news articles, creators and subjects simultaneously. Extensive
experiments have been done on a real-world fake news dataset to compare
FAKEDETECTOR with several state-of-the-art models, and the experimental results have
demonstrated the effectiveness of the proposed model.

7. IFCN Fact Checking Organizations on WhatsApp

8. Google: Find fact checks in search results

9. Google patents: Toolbar/sidebar browser extension


Methods and other embodiments associated with a web browser extension are described. One
example browser extension includes a toolbar logic to provide a toolbar. The toolbar may include, for
example, a set of graphical user interface elements displayed in connection with a browser window.
The browser extension may also include a sidebar logic to provide a sidebar. The browser extension
may also include a coordination logic to coordinate the presentation and functionality of a combination
of the toolbar and the sidebar to be provided to a browser. The presentation and functionality may be
based, at least in part, on a selectable presentation mode and a selectable attachment mode.

10. The COVID-19 social media infodemic


The paper identifies information spreading from questionable sources,
finding different volumes of misinformation in each platform. However,
information from both reliable and questionable sources does not present different spreading patterns. Platform-dependent numerical estimates of
rumors’ amplification are provided in this paper.

11. Misinformation and the US Ebola communication crisis: analyzing the veracity and content of social media messages related to a fear-inducing infectious disease outbreak
The paper examined tweets from a random 1% sample of all tweets published September
30th - October 30th, 2014, filtered for English-language tweets mentioning “Ebola” in the
content or hashtag, that had at least 1 retweet (N = 72,775 tweets). A randomly selected
subset of 3639 (5%) tweets was evaluated for inclusion. The 3113 tweets that met the inclusion criteria were assessed by public-health-trained human coders for tweet characteristics (joke, opinion, discord), veracity (true, false, partially false), political context, risk frame, health context, Ebola-specific messages, and rumors. The proportion of tweets with specific content was assessed using descriptive statistics and chi-squared tests.
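The chi-squared comparison of content proportions can be illustrated on a 2x2 contingency table; the counts below are invented for the example, not the study's data:

```python
# Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]],
# e.g. false vs. true tweets across two topic groups (hypothetical counts).

def chi_square_2x2(a, b, c, d):
    """Return the chi-squared statistic comparing two proportions."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 30/100 false tweets in group one vs. 10/100 in group two
print(round(chi_square_2x2(30, 70, 10, 90), 2))  # -> 12.5
```

With one degree of freedom, a statistic this large is well past the usual 3.84 critical value at p = 0.05, so the two groups' false-tweet proportions would be judged significantly different.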

Results: 10% of Ebola-related tweets contained false or partially false information and 72%
were related to health.
12. Misinformation: A Threat to the Public's Health and the
Public Health System
While digital channels are valuable mechanisms for local health departments (LHDs) to
communicate accurate health information and guidance to the public, they have recently
become gateways for the rapid spread of mis- and disinformation.

Misinformation proliferates partly because people are more likely to accept advice and
information from friends, family, and people they feel their community trusts. In fact, the
ramifications of inaccurate or false information can be very resource-intensive, placing a
strain on health departments.

More than two-thirds of anti-vaccine Web sites represent information as “scientific evidence”
to support the idea that vaccines are dangerous, and nearly one-third use anecdotes to
reinforce that perception. By targeting certain groups—including parents of children with
autism spectrum disorder, grieving mothers who have lost children, immigrant communities,
and religious groups—anti-vaccine proponents have sowed skepticism about the safety of
vaccinations. A study finds that anti-vaccination websites use “science” and stories to support their claims: anti-vaccine positions are frequently embedded with information about positive behaviors such as healthy eating and breastfeeding (ScienceDaily).

13.

14. Officials gird for a war on vaccine misinformation


Polls have found as few as half of Americans are committed to taking the coronavirus
vaccine. In 2019, the World Health Organization (WHO) listed “vaccine hesitancy” as one of 10 major global health threats.
In West Africa, officials are deploying the same tools that spread rumors about vaccines to
counter them, says Thabani Maphosa, who oversees operations in 73 countries for Gavi, the
Vaccine Alliance, which supplies and promotes vaccines around the world. In Liberia, for
example, officials are using Facebook’s WhatsApp messaging app to survey people and to
address the rumors behind a drop in routine vaccinations.

15. Vaccine misinformation and social media


WHO defines vaccine hesitancy as a “delay in acceptance or refusal of
vaccines despite availability of vaccination services”.
A survey by the Royal Society for Public Health found that 50% of British
parents of children younger than 5 years regularly encountered negative
messages about vaccination on social media.

Twitter ensures that when users search for vaccine-related topics in the
UK, the first result is for the National Health Service. In the USA, the same
search first turns up a link to the Department of Health and Human
Services. 

16. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA
This study conducted a randomized controlled trial in the UK and the USA
to quantify how exposure to online misinformation around COVID-19
vaccines affects intent to vaccinate to protect oneself or others. Here it is
shown that in both countries—as of September 2020—fewer people would
‘definitely’ take a vaccine than is likely required for herd immunity, and that,
relative to factual information, recent misinformation induced a decline in
intent of 6.2 percentage points (95th percentile interval 3.9 to 8.5) in the UK
and 6.4 percentage points (95th percentile interval 4.0 to 8.8) in the USA
among those who stated that they would definitely accept a vaccine.

Some sociodemographic groups are differentially impacted by exposure to misinformation.

17. Too little, too late: social media companies’ failure to tackle
vaccine misinformation poses a real threat
People wishing to undermine trust in the vaccine don’t use outright lies. Instead, they lead
campaigns designed to undermine the institutions, companies, and people managing the
rollout. They post vaccine injury stories and provide first-person videos detailing side effects that are difficult to fact-check.

18.

19. Measuring the Impact of Exposure to COVID-19 Vaccine Misinformation on Vaccine Intent in the UK and US
We find evidence that socio-econo-demographic, political, and trust factors are associated
with low intent to vaccinate and susceptibility to misinformation: notably, older age groups in
the US are more susceptible to misinformation. 

20. Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation
Accounts performing actions on misinformative products are presented with more misinformation than accounts performing actions on neutral or debunking products. Interestingly, once a user clicks on a misinformative product, homepage recommendations become more contaminated than when the user merely shows an intention to buy that product.

We find that 10.47% of search results promote misinformative health products. We also observe ranking bias, with Amazon ranking misinformative search results higher than debunking search results.
21.The effects of source expertise and trustworthiness on
recollection: the case of vaccine misinformation

22.Vaccine Safety: Myths and Misinformation

23. A postmodern Pandora's box: Anti-vaccination misinformation on the Internet

24.The biggest pandemic risk? Viral misinformation

25.Representing vaccine misinformation using ontologies

26. Catching Zika Fever: Application of Crowdsourcing and Machine Learning for Tracking Health Misinformation on Twitter

27. Adapting and Extending a Typology to Identify Vaccine Misinformation on Twitter

28. A Simplified Architecture to Integrate and Interoperate Heterogeneous and Distributed Healthcare Data

29.COVID-19 infodemic and misinfodemic: A tale of India

30. Mapping the Landscape of Artificial Intelligence Applications against COVID-19

31. A Multiple Feature Category Data Mining and Machine Learning Approach to Characterize and Detect Health Misinformation on Social Media

32. Fake News Detection Using Source Information and Bayes Classifier

33. Fake News Detection Using Naive Bayes Classifier

34. Detecting fake news stories via multimodal analysis

35. Fake News Detection Using Passive-Aggressive Classifier

36. A Dynamic Approach for Detecting the Fake News Using Random Forest Classifier and NLP

37. Misinformation Analysis During Covid-19 Pandemic
