
International Journal of All Research Education and Scientific Methods (IJARESM), ISSN: 2455-6211

Volume 11, Issue 1, January-2023, Impact Factor: 7.429, Available online at: www.ijaresm.com

Design of Fake News Identification System Using Machine Learning

T. Akhila1, K. Aasritha2, G. Devesha3, Dr. Abishek Choubey4

1,2,3 Student, Department of Electronics and Communication Engineering, Sreenidhi Institute of Science and Technology, Ghatkesar, India
4 Professor, Department of Electronics and Communication Engineering, Sreenidhi Institute of Science and Technology, Ghatkesar, India

-----------------------------------------------------------*****************-----------------------------------------------------------

ABSTRACT

This paper presents the possibility of creating an information database that contains all legislation, official papers, news releases, material shared on social media by government institutions, and other real-time government-related information. In addition, with a system that monitors incorrect information spreading online using the APIs of various social media platforms, it is possible to keep an eye on social and online media, as well as content released by digital news media companies. An AI/ML-based approach can then compare the official information with the information being shared on social media. The system may identify inconsistencies, which can then be reported to senior levels in the relevant Ministries, Departments, or Organizations of the Government of India for any necessary corrective action.

Keywords—False news, Database, Machine Learning

INTRODUCTION

More and more individuals are choosing to search for and consume news from social media platforms rather than traditional news organizations, as we spend a growing amount of our time communicating online through social media platforms. The basic characteristics of these platforms explain this change in consumption habits: in comparison to traditional journalism, such as newspapers or television, news is frequently more timely and less expensive to consume on social media, and it is simpler to share and discuss the news with friends or other readers. For instance, 62 percent of U.S. adults received news on social media in 2016, compared to only 49 percent in 2012. Additionally, it was found that social media now outperforms television as the primary news source.

The quality of stories on social media is lower than that of traditional news agencies, despite the advantages that social media offers. A lot of fake news, i.e., news pieces containing purposefully incorrect material, is generated online for a variety of reasons, such as financial and political benefit, because it is cheap to supply news online and much faster and easier to promote it through social media. More than 1 million tweets were reportedly related to the "Pizzagate" fake news story by the end of the presidential election. Fake news is now so common a phenomenon that "fake news" was even selected by the Macquarie Dictionary as the word of the year in 2016. The widespread dissemination of fake news can have a seriously detrimental effect on people and society. First, fake news can upset the ecosystem's delicate balance of authenticity: during the 2016 presidential election in the United States, for instance, the most popular false news circulated even more widely on Facebook than the most widely believed legitimate mainstream news. Second, deceptive information deliberately persuades readers to adopt prejudiced or erroneous viewpoints. Propagandists frequently use fake news to promote misleading information or exert political influence; for instance, some reports indicate that Russia has created phoney accounts and social media bots for this purpose. Third, fake news affects how people interpret and react to actual news. For example, some fake news is produced simply to incite mistrust and confusion, hindering people's ability to tell what is true from what is false. To help reduce the harmful consequences of fake news, so that both the general public and the news ecosystem can benefit, we must develop tools that can automatically identify bogus news posted on social media. Access to news has become considerably more convenient and easy thanks to the internet and social media: online consumers can follow the events that interest them, and the proliferation of mobile devices has made this procedure much simpler.

However, great opportunities often bring great problems. The mass media has a significant impact on society, and there are those who wish to take advantage of this. Mass media may occasionally alter information in a variety of ways to achieve certain purposes, producing news stories that are only partly accurate or entirely fraudulent. Some websites exist that almost exclusively manufacture bogus news.


Intentionally publishing propaganda, half-truths, disinformation, and hoaxes under the guise of news, such sites frequently use social media to increase their reach and drive visitors to their websites. The objective of most fake news websites is to change public perceptions of certain issues (mostly political). Examples of these websites can be found in many nations, including China, Germany, Ukraine, and the United States of America. Fake news is therefore a global problem as well as a global task. Many scientists think that the problem of fake news might be solved with machine learning and AI. There is a reason for that: recently, AI algorithms have started to perform significantly better on numerous classification problems (image recognition, voice detection, and so on), thanks to cheaper hardware and larger datasets. A number of important articles have been written about automatic deception detection, whose authors give a broad review of approaches to the problem. One such method for fake news detection was assisted by the feedback for specific news items within microblogs. Other researchers created two fraud detection systems, based on support vector machines and a Naive Bayes classifier; they obtained their data by explicitly asking respondents whether certain statements on topics such as friendship, abortion, and execution were true or false. The detection accuracy of that system is in the range of 70%. This paper offers a simple technique for identifying fake news, backed by the Naive Bayes classifier, Random Forest, and Logistic Regression machine learning algorithms. Starting from a manually labelled news dataset, the research's objective is to examine how well these specific algorithms perform on this specific challenge and to determine whether or not utilizing AI to detect fake news is a good idea. The difference between this article and others on related topics is that here Logistic Regression was specifically used for detecting fake news. Additionally, the developed system was tested on a set of relatively recent data, giving the opportunity to assess how well it performs on current data.

Fake news articles tend to share several characteristics:

1. They frequently contain grammatical errors.
2. They frequently have an emotive tint.
3. They frequently attempt to sway readers' perceptions on certain issues.
4. Their information is not always accurate.
5. They frequently employ attention-grabbing language, sensational news formats, and click-bait.
6. Their claims are often improbable.
7. Most of the time, their sources are not reliable.

LITERATURE REVIEW

In their research [3], Mykhailo Granik et al. provide a straightforward method for identifying false news using a naive Bayes classifier. This strategy was implemented as a software system and evaluated using a set of Facebook news posts as the test set, gathered from three sizeable left-leaning and right-leaning Facebook pages as well as three sizeable mainstream political news pages (Politico, CNN, ABC News). A classification accuracy of about 74% was attained. Classification accuracy on fake news specifically was marginally worse, which may be due to the skewness of the dataset, as only 4.9% of it contains fake news.

Cody Buntain et al. [12] develop a way of automating fake news detection on Twitter by learning to predict accuracy assessments in two credibility-focused Twitter datasets: CREDBANK, a crowd-sourced dataset of accuracy assessments for events on Twitter, and PHEME, a dataset of potential rumors on Twitter together with journalistic assessments of their accuracy. They apply this technique to Twitter content sourced from BuzzFeed's fake news dataset. A feature analysis identifies the features that are most predictive of crowd-sourced and journalistic accuracy assessments, with results that are in line with previous work. They relied on identifying highly retweeted conversation threads and used the characteristics of these threads to rank stories, which limits the applicability of this work to a subset of popular tweets. Since most tweets are rarely retweeted, this method is only usable on a small number of Twitter conversation threads.

In this study, we seek to offer a categorization of the news story in the contemporary diaspora, as well as a discussion of the various kinds of news content and their effects on readers. Then, we examine current methods for detecting fake news that rely heavily on text-based analysis, and we discuss well-known fake news datasets. In order to direct future research, we outline four major open research problems in the paper's conclusion. It is a theoretical approach that illustrates how to identify fake news by examining psychological variables.

METHODOLOGY
The system, which was developed in three sections, is explained in this paper. The first section is static and uses a machine learning classifier; in order to select the optimal classifier for the final analysis, we studied and trained the model using four alternative classifiers. The second section is dynamic and uses the user's keyword or text to search online for information about the likelihood that the news is true. The final section verifies the legitimacy of a user-provided URL.

We utilised Python and its Sci-Kit libraries to create this work. Python has a substantial collection of libraries and add-ons that can be used with ease in machine learning, and the Sci-Kit Learn library is the best place to find machine learning algorithms, since nearly all varieties are readily accessible for Python.


Therefore, "fake news" and "genuine news" must make up the two components of the data collection procedure. Since
Kaggle released a fake news dataset with 13,000 articles written during the 2016 election cycle, gathering false news was
simple. The later section is now really challenging. In order to update the fake news dataset, this is how you receive the
genuine news. Because it was the sole method for web scraping thousands of articles from various websites, it involves
a great deal of labour around many sites. Real news dataset was created via web scraping 5279 items in total, the
majority of which came from media outlets (New York Times, WSJ, Bloomberg, NPR, and the Guardian) and were
published between 2015 and 2016. Consequently,the data-acquisition procedure must consist of two steps: "false.s.

A. System Design:

Figure 1: System Design


B. System Architecture:

i) Static Search:
The static portion of the false news detection system‟s architecture is rather straightforward and was designed with the
basic machine learning process flow in mind. flow. The system design is self-explanatory and is depicted below. The
primary design processes are

Figure 2: System Architecture


ii) Dynamic Search:
The second search field on the website asks for particular keywords to be looked up online; it then returns an appropriate result with the likelihood of that term actually appearing in an article, or in an article with similar content that makes use of those keywords.


iii) URL Search:


The third search form on the website allows users to enter a specific website domain name; the implementation then searches for that site in either the banned-sites database or our real-sites database. The domain names of websites that frequently publish accurate and reliable news are stored in the database of real websites, and vice versa.

If neither database contains the website, the implementation simply asserts that the news aggregator does not exist rather than classifying the domain.
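As an illustration of this lookup, a minimal sketch follows; the domain sets and their contents are hypothetical placeholders, not the actual databases used by the system.

# Illustrative sketch of the URL-search lookup (hypothetical data).
from urllib.parse import urlparse

# Hypothetical whitelist/blacklist of news domains.
REAL_SITES = {"nytimes.com", "npr.org", "theguardian.com"}
BANNED_SITES = {"fakenewsdaily.example", "hoaxwire.example"}

def check_url(url: str) -> str:
    # Extract and normalize the domain name from the URL.
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in REAL_SITES:
        return "Reliable source"
    if domain in BANNED_SITES:
        return "Known fake-news source"
    return "News aggregator does not exist in either database"

print(check_url("https://www.npr.org/some-article"))  # Reliable source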

IMPLEMENTATION

DATA COLLECTION AND ANALYSIS


The gathering of data is the first significant step in building a machine learning model. This stage determines how effective the model will be: the more and better data we can collect, the more effective our model will be. Data collection methods include web scraping, manual intervention, and others.

This fake news detection work utilizes data from Kaggle (https://www.kaggle.com/c/fake-news/data). Online news can be found through a variety of sources, including social media websites, search engines, news agency homepages, and fact-checking websites. A few publicly accessible datasets for the classification of fake news are available online, including those from BuzzFeed News, LIAR [15], BS Detector, and others. These datasets have been extensively utilized in numerous research articles to assess the credibility of news.

We provide a brief discussion of the dataset's origins in the sections that follow. Online news can be gathered from a variety of websites, including social networking platforms, search engines, and news agency homepages. However, manually judging the truth of news is a difficult task that typically calls for annotators with domain expertise who carefully examine claims along with supporting information, context, and reports from reliable sources. In general, news data with annotations can be obtained from fact-checking websites, crowd-sourced labourers, industry detectors, and expert journalists. Nevertheless, there are no established benchmark datasets for the fake news identification problem. Before the training process, the collected data needs to be pre-processed, that is, cleaned, converted, and integrated. The dataset we utilized is described below.
It consists of 20,800 individual records, and the dataset has five columns:

1. Unique Id
2. Title
3. Author
4. Text
5. Label
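As a brief illustration, the dataset can be loaded and inspected with pandas; a minimal sketch follows, assuming the Kaggle file was saved as train.csv (the filename is an assumption).

# Minimal sketch of loading and inspecting the Kaggle dataset.
import pandas as pd

df = pd.read_csv("train.csv")   # columns: id, title, author, text, label
print(df.shape)                 # expected: (20800, 5)
print(df.columns.tolist())      # list the five columns
print(df.head())                # first five records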

DEFINITIONS AND DETAILS

A. Pre-processing Data
The majority of social media data is casual conversation with typos, slang, and poor grammar, among other things. The quest for improved performance and dependability makes it essential to establish methods for exploiting such material to make wise judgements. Before using the data for predictive modelling, it must be cleaned in order to produce better insights. Basic pre-processing was performed on the news training data for this purpose. This process included:


Data Cleaning:
We get data in either a structured or unstructured format. A structured format has a clearly established pattern, whereas unstructured data lacks a proper framework; a semi-structured format falls in between the two, being better organized than unstructured data. To draw attention to the characteristics that we want our machine learning system to recognize, the text input must be cleaned.

The data cleaning (or preprocessing) process typically entails the following steps:

a) Remove punctuation
Punctuation gives a sentence grammatical context that aids our understanding. However, it adds no value to our vectorizer, which only counts words and ignores context, so we eliminate all special characters. For example: "What's going on?" -> "Whats going on".

b) Tokenization
Tokenization divides text into smaller pieces, such as phrases or words, giving structure to previously unstructured text. For instance: "Plata o Plomo" -> "Plata", "o", "Plomo".

c) Remove stopwords
Stop-words are frequent words that appear in almost all texts. We eliminate them because they provide little information about our data. For example, "I'm fine with silver or lead" becomes "silver", "lead", "fine".

d) Stemming
Stemming reduces a word to its stem form, so that related terms can be treated in the same way. It eliminates suffixes such as "ing," "ly," and "s" using a straightforward rule-based approach. Although this reduces the number of distinct words, the actual word forms are often lost; for instance, Entitling, Entitled -> Entitle. Notably, some search engines treat words with the same stem as synonyms.
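A minimal sketch of these four pre-processing steps follows, using NLTK for stop words and stemming; the specific library choice is an assumption, as the paper does not name the pre-processing toolkit it used.

# Pre-processing sketch: punctuation removal, tokenization,
# stop-word removal, and stemming (NLTK is one possible toolset).
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def clean_text(text: str) -> list[str]:
    text = re.sub(r"[^a-zA-Z\s]", "", text.lower())      # remove punctuation/digits
    tokens = text.split()                                # simple whitespace tokenization
    tokens = [t for t in tokens if t not in stop_words]  # drop stop-words
    return [stemmer.stem(t) for t in tokens]             # reduce to stems

print(clean_text("Entitling and entitled readers"))
# -> ['entitl', 'entitl', 'reader']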

B. Feature Generation
Text data can be used to produce several features, such as word count, frequency of uncommon words, frequency of big words, n-grams, etc. By building a representation of words that captures their meanings, semantic relationships, and the many contexts they are used in, we can enable computers to process text and perform tasks such as clustering and classification.

Vectorizing Data:
Vectorizing is the act of encoding text as integers, i.e., a numeric representation, so that machine learning algorithms can process our data.

1. Vectorizing Data: Bag-Of-Words


Bag of Words (BoW), or Count Vectorizer, describes the presence of words within the text data. It returns 1 if a word is in the sentence and 0 otherwise, producing for each text document a bag of words together with a document-term count matrix.
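A small illustration with scikit-learn's CountVectorizer; binary=True matches the presence/absence description above.

# Bag-of-Words sketch on a toy corpus.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the news is fake", "the news is real"]
bow = CountVectorizer(binary=True)        # 1 if word present, else 0
matrix = bow.fit_transform(docs)          # document-term matrix
print(bow.get_feature_names_out())        # ['fake' 'is' 'news' 'real' 'the']
print(matrix.toarray())                   # [[1 1 1 0 1], [0 1 1 1 1]]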

2. Vectorizing Data: N-Grams


N-grams are all combinations of adjacent letters or words of length n that can be found in the source text. Unigrams are n-grams with n=1; similarly, there are bigrams (n=2), trigrams (n=3), and so forth. Unigrams typically carry less information than bigrams and trigrams.

The fundamental idea behind n-grams is that they capture which letter or word is likely to follow a given word; the longer the n-gram (the greater the n), the more context you have to work with.

3. Vectorizing Data: TF-IDF


In TF-IDF, a term's weight represents its relative relevance within a document and across the corpus.

TF stands for Term Frequency:

It counts the number of times a term appears in a document. Because document sizes vary, a phrase might appear more frequently in a long text than in a short one. As a result, term frequency is frequently divided by the document length.


IDF stands for Inverse Document Frequency: if a word appears in every document, it is of little use. The words "a," "an," "the," "on," and "of," for example, are frequently used in documents yet carry little meaning. IDF increases the relevance of uncommon terms while decreasing the importance of common terms; the higher a word's IDF value, the more unique it is. After TF-IDF is applied to the body text, the relative weight of each word in the sentences is recorded in the document matrix.

TF-IDF(t, d) = TF(t, d) * IDF(t)
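As an illustration of this weighting, the sketch below applies scikit-learn's TfidfVectorizer to a toy corpus; note that scikit-learn adds smoothing and L2 normalisation on top of the basic TF * IDF formula.

# TF-IDF weighting sketch on a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["fake news spreads fast", "real news spreads slowly"]
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(corpus)
print(tfidf.get_feature_names_out())
print(weights.toarray().round(2))   # terms shared by both documents get lower weight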

C. Algorithms used for Classification


The classifier's training is covered in this section. A variety of classifiers were tested to determine the class of the text; we specifically looked at four machine learning techniques: Multinomial Naive Bayes, Passive Aggressive Classifier, Logistic Regression, and Random Forest. These classifiers were implemented using the Sci-Kit Learn Python module.

Brief introduction to the algorithms-

1. Naïve Bayes Classifier:


This classification method is based on the Bayes theorem, which assumes that the presence of a certain feature in a class is independent of the presence of any other feature. It offers a method for computing the posterior probability:

P(c|x) = P(x|c) * P(c) / P(x)

P(c|x) = posterior probability of class given predictor
P(c) = prior probability of class
P(x|c) = likelihood (probability of predictor given class)
P(x) = prior probability of predictor
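A minimal sketch of such a classifier with scikit-learn's MultinomialNB follows; the toy texts and labels are invented purely for illustration.

# Naive Bayes text-classification sketch (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["shocking secret cure", "parliament passes budget",
         "celebrity alien baby", "court upholds ruling"]
labels = ["FAKE", "REAL", "FAKE", "REAL"]

# Vectorize the text, then fit the Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["shocking alien budget"]))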

2. Random Forest:
Random Forest is a collection, or "forest," of decision trees. To classify a new object based on its attributes, each tree provides a classification, and we say the tree "votes" for that class; the forest selects the classification with the most votes (over all the trees in the forest). Random Forest attempts to produce an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree, by using bagging and feature randomness when growing each individual tree. Each tree in the random forest outputs a class prediction, and the class with the most votes becomes our model's prediction. The random forest performs better than any of its component models because it operates as a committee of many largely uncorrelated models (trees). So how does a random forest ensure that each tree's behaviour is not overly correlated with that of the other trees in the model? It employs the two strategies below:

Bagging (Bootstrap Aggregation):

Decision trees are very sensitive to the data they are trained on: even minor changes to the training set can result in radically different tree structures. Random forest takes advantage of this by allowing each individual tree to randomly sample from the dataset with replacement, producing different trees as a consequence. This method is referred to as bootstrapping, or bagging.

Feature Randomness:
When splitting a node in a typical decision tree, we consider every possible feature and choose the one that produces the greatest separation between the observations in the left node and those in the right node. In contrast, each tree in a random forest can pick only from a random subset of features. This forces even more variation among the model's trees, which ultimately leads to lower correlation between them and greater diversification.
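Both strategies map directly onto scikit-learn parameters; a minimal sketch on synthetic data:

# Random Forest sketch: bagging and feature randomness are controlled
# by bootstrap=True (the default) and max_features.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=20, random_state=7)
forest = RandomForestClassifier(
    n_estimators=100,      # number of trees voting in the committee
    max_features="sqrt",   # random feature subset considered at each split
    bootstrap=True,        # each tree samples the data with replacement
    random_state=7,
)
forest.fit(X, y)
print(forest.score(X, y))  # training accuracy of the ensemble vote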

3. Logistic Regression:
It is a classification algorithm rather than a regression one. It is used to estimate discrete values (binary values like 0/1, yes/no, and true/false) based on a set of independent variables. In plain English, it determines the likelihood that an event will occur by fitting data to a logit function, which is why it is also known as logit regression. Given that it forecasts probabilities, its output values range between 0 and 1, as expected.

In mathematics, the log odds of the outcome are modelled as a linear combination of the predictor variables:

odds = p / (1 - p) = probability of event occurrence / probability of event non-occurrence
ln(odds) = ln(p / (1 - p))
logit(p) = ln(p / (1 - p)) = b0 + b1*X1 + b2*X2 + b3*X3 + ... + bk*Xk
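The sketch below illustrates this relationship with scikit-learn's LogisticRegression on toy data: applying the sigmoid (inverse logit) to the fitted linear combination reproduces the model's predicted probabilities.

# Logistic Regression sketch: recover probabilities from the logit.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)
b0, b1 = clf.intercept_[0], clf.coef_[0][0]
log_odds = b0 + b1 * X.ravel()               # ln(p / (1 - p))
p = 1.0 / (1.0 + np.exp(-log_odds))          # sigmoid recovers the probability
print(np.allclose(p, clf.predict_proba(X)[:, 1]))  # True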

4. Passive Aggressive Classifier:


The passive aggressive algorithm is an online algorithm that is excellent for classifying huge data streams (e.g. Twitter). It is fast and simple to implement: it works by taking an example, learning from it, and then discarding it [24]. The algorithm remains passive for correct classifications but turns aggressive in the event of a misclassification, updating and adjusting the model. Unlike most other algorithms, it does not converge; its goal is to make updates that correct the loss while barely changing the norm of the weight vector.
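Its streaming nature can be illustrated with scikit-learn's partial_fit, which updates the model one mini-batch at a time; the mini-batches below are invented for illustration.

# Online-learning sketch with the Passive Aggressive classifier.
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.feature_extraction.text import HashingVectorizer

vec = HashingVectorizer(n_features=2**16)   # stateless, so safe for streams
pac = PassiveAggressiveClassifier()

stream = [(["breaking miracle cure"], ["FAKE"]),
          (["senate passes bill"], ["REAL"])]
for texts, labels in stream:
    # Learn from each mini-batch, then it can be discarded.
    pac.partial_fit(vec.transform(texts), labels, classes=["FAKE", "REAL"])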

IMPLEMENTATION STEPS

Step 1: In the first step, we install the required libraries using pip.

Code: pip install numpy pandas sklearn

Step 2: We have to install Jupyter Lab to run the code. Open a command prompt:
C:\Users\DataFlair>jupyter lab
Step 3: When the new browser window pops up, create a new console and press Shift+Enter to run multiple lines of code at once.

STEPS TO DETECT FAKE NEWS

Follow the steps below to detect fake news.

Step 1: Import the necessary libraries

import numpy as np
import pandas as pd
import itertools
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score, confusion_matrix


Step 2: Read the data into a DataFrame and observe the first 5 records.

Step 3: Now get the labels from the DataFrame.
Code: #DataFlair - Get the labels
labels = df.label
labels.head()

Step 4: Split the dataset into a training set and a testing set.
Code: #DataFlair - Split the dataset
x_train, x_test, y_train, y_test = train_test_split(df['text'], labels, test_size=0.2, random_state=7)

Step 5: Let's begin by initializing a TfidfVectorizer with English stop words and a maximum document frequency of 0.7 (terms with a higher document frequency will be discarded). Stop words are the most frequent words in a language and should be filtered out before processing natural-language data. The TfidfVectorizer builds a matrix of TF-IDF features from a collection of raw documents. Fit and transform the vectorizer on the training set, then transform the test set.

#DataFlair - Initialize a TfidfVectorizer


tfidf_vectorizer = TfidfVectorizer(stop_words='english', max_df=0.7)

#DataFlair - Fit and transform train set, transform test set
tfidf_train = tfidf_vectorizer.fit_transform(x_train)
tfidf_test = tfidf_vectorizer.transform(x_test)

Step 6: Next, we initialize a PassiveAggressiveClassifier and fit it on tfidf_train and y_train. We then predict on the test set and calculate the accuracy.

#DataFlair - Initialize a PassiveAggressiveClassifier
pac = PassiveAggressiveClassifier(max_iter=50)
pac.fit(tfidf_train, y_train)

#DataFlair - Predict on the test set and calculate accuracy
y_pred = pac.predict(tfidf_test)
score = accuracy_score(y_test, y_pred)
print(f'Accuracy: {round(score * 100, 2)}%')

Step 7: The accuracy we obtained is 92.9%. Now, let us print a confusion matrix to observe the false and true negatives and positives.


Code: #DataFlair - Build confusion matrix
confusion_matrix(y_test, y_pred, labels=['FAKE', 'REAL'])

RESULTS

The implementation is done as explained in the steps above. The figures below show, in order: the import of the necessary libraries, the first five records of the dataset, the labels of the DataFrame, the calculation of the accuracy, and the confusion matrix of our proposed model.

CONCLUSION

More and more people are getting their news from social media rather than traditional news sources as social media gains popularity. However, fake news is also spread through social media, with serious consequences for users and society as a whole. Current state-of-the-art multimodal methods lack the ability to learn from fake news detection as a primary task. To address this problem, we developed a fake news detection system built around a passive aggressive classifier, which outperforms the prevailing techniques by 9% on average. The issue of spotting fake news has been addressed in previous literature from a number of perspectives, including natural language processing, knowledge graphs, artificial intelligence, and user profiling. Performance can still be improved with a larger dataset and more intricate methods that account for how different modalities play a crucial part in the identification of fake news.

REFERENCES

[1] Sanani Divadkar, Akshat Sahu, Shalani Puri, "A Novel Approach to Ambiguous Fake News Classification through Machine Learning", 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), pp. 1-9, 2022.
[2] Abdullah-All-Tanvir, Mahir, E. M., Akhter, S., & Huq, M. R., "Detecting Fake News using Machine Learning and Deep Learning Algorithms", 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 2019, pp. 1-5.
[3] Ahmed, H., Traore, I., & Saad, S., "Detection of online fake news using n-gram analysis and machine learning techniques", Proceedings of the International Conference on Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments, pp. 127-138, Springer, Vancouver, Canada, 2017.
[4] Mykhailo Granik and Volodymyr Mesyura, "Fake news detection using naive Bayes classifier", 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON), pp. 900-903, 2017.
[5] Hadeer Ahmed, Issa Traore and Sherif Saad, "Detection of online fake news using n-gram analysis and machine learning techniques", International Conference on Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments, pp. 127-138, 2017.
[6] Della Vedova, M. L., Tacchini, E., Moret, S., Ballarin, G., DiPierro, M., & de Alfaro, L., "Automatic online fake news detection combining content and social signals", FRUCT'22: Proceedings of the 22nd Conference of Open Innovations Association FRUCT, 2018.
[7] Chih-Chung Chang and Chih-Jen Lin, "LIBSVM: A Library for Support Vector Machines", Google Scholar website, July 2018.
[8] Nikhil Sharma, "Fake News Detection using Machine Learning", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 4, Issue 4, June 2020, pp. 1317-1320.
