Introduction
Many people today use social media for entertainment and communication. These platforms have become enormously popular in recent years, with around 45% of the world's population using them, and for many individuals social media has turned into an addiction. The spread of smartphones has further increased social media use, and with it, offensive behavior towards others.
Offensive behavior on social media includes spreading hate, online bullying, writing aggressive
or toxic comments, and sharing inappropriate images and videos. This behavior can occur on
various online platforms like social media, messaging apps, gaming platforms, and more.
According to a survey, around 44% of children and young people spend an average of 180
minutes per day on social media, and 9% use it overnight. Among those who experience
offensive behavior, 65% encounter it on social media; bullying also occurs via mobile
phones (45%), online messaging (38%), online chat rooms (34%), and email (19%).
Alarmingly, 20% of teens have sent sexually suggestive messages to recipients who did
not ask for them.
Social media has become a platform for hostile behavior, and many parents worldwide believe
it's a major issue. Bullying even occurs in online games, affecting 53% of young adults who play
them. In the United States, 38% of internet users face trolling on social media daily.
Offensive behavior on social media is taking a toll on people's mental health, and in
extreme cases it has led to suicide. Detecting and removing such behavior promptly is
therefore crucial to making online platforms safer.
To address this issue, the researchers conducted a survey of articles related to
offensive behavior detection, gathering relevant work from databases such as IEEE Xplore,
ACM, ScienceDirect, Scopus, and Google Scholar. They focused on articles covering
cyberbullying, hate speech, offensive content detection, toxic comment detection, profanity,
and aggressive language identification in online social networks.
The survey distinguishes itself by covering a large number of research works (100) and
broadening the scope of the study. It summarizes the advantages, disadvantages, and open
issues of the different approaches for identifying offensive behavior on social media. It
also discusses the factors that drive offenders to engage in such behavior, proposes
preventive measures, and reviews the cyber laws of various countries that impose strict
punishments.
The paper concludes by pointing out the remaining challenges and areas that need further
attention in identifying and combating online offensive behavior on social media.
The rest of the paper is organized as follows:
1. Section 2 describes the different types of offensive behavior online.
2. Section 3 explains the approaches used to identify offensive behavior.
3. Section 4 reviews recent literature on recognizing offensive behavior.
4. Section 5 discusses the reasons why people engage in offensive behavior online.
5. Section 6 covers preventive measures and cyber laws from various countries to address offensive behavior.
6. Finally, Section 7 concludes the paper and highlights tasks that still need to be addressed.
Content-based Approach:
Chen et al. proposed the Lexical Syntactic Feature (LSF) approach to detect offensive
content and users in online social networks. They used traditional machine learning
methods with various features to improve prediction performance. However, the approach
supports only English and ignores other languages.
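To make the general shape of such content-based approaches concrete, here is a minimal sketch of a feature-based offensive-text classifier. This is not the LSF implementation from Chen et al.; it only illustrates the pattern they and similar works follow: extract lexical features from each message and train a traditional machine-learning classifier on them. The training examples and labels below are hypothetical toy data, and TF-IDF with logistic regression stands in for the richer feature sets used in the actual papers.

```python
# Illustrative sketch of a content-based offensive-text classifier
# (not the LSF method itself): lexical features + a traditional ML model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data (hypothetical): 1 = offensive, 0 = benign.
messages = [
    "you are a worthless idiot",
    "nobody likes you, get lost",
    "shut up you stupid fool",
    "I hate you and everyone like you",
    "thanks for the helpful answer",
    "great game last night, well played",
    "have a wonderful day everyone",
    "congratulations on your promotion",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Word- and bigram-level TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Predict labels for unseen messages.
print(list(model.predict(["you stupid idiot", "thanks, well played"])))
```

Real systems extend this skeleton with syntactic features, user-level signals, and larger corpora; the pipeline structure stays the same.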
Hee et al. developed schemes for annotating cyberbullying and classifying online posts into
different categories. The technique can be applied to multiple languages when annotated data
is available, but it does not address the detection of implicit cyberbullying events.
Samghabadi et al. proposed a Natural Language Processing approach to detect abusive posts and
cyberbullying. The model uses various features to distinguish negative text, but could be
improved with emotion features and user-relationship network features.
Shylaja et al. used document embeddings with supervised machine learning algorithms to detect
aggressive comments. The model identified aggressive comments efficiently, though additional
learning techniques could optimize it further.
Dadvar and Eckert explored cyberbullying detection using Deep Learning (DL) models and
compared their performance with traditional Machine Learning (ML) models. DL models with
transfer learning performed better, but the work did not incorporate user-profile
information from social media.
Agrawal et al. analyzed cyberbullying on multiple topics across social media platforms and
applied transfer learning to detect it. Deep learning models outperformed traditional ML models,
but the dataset lacked bullying severity information, which could have improved the detection
model.
Huang et al. proposed a cyberbullying intervention application based on a Convolutional
Neural Network (CNN) model. The application identifies cyberbullying in real time and gives
users early feedback so they can revise their messages. Adding a social-network relationship
graph might further improve cyberbullying identification.
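The intervention flow itself is simple to sketch: screen each message before it is posted and, if it looks like bullying, return feedback instead of publishing it. In the sketch below, a hypothetical keyword check stands in for the CNN classifier described by Huang et al.; the function names and term list are illustrative, not from the paper.

```python
# Minimal sketch of a real-time cyberbullying intervention loop.
# The keyword check is a hypothetical stand-in for a trained CNN classifier.
BULLYING_TERMS = {"idiot", "loser", "stupid", "worthless"}  # hypothetical list

def looks_like_bullying(message: str) -> bool:
    """Placeholder for the classifier's prediction on one message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BULLYING_TERMS)

def submit_message(message: str) -> str:
    """Screen a message before posting; give early feedback if it is flagged."""
    if looks_like_bullying(message):
        return "Your message may be hurtful. Please consider revising it."
    return "Message posted."

print(submit_message("You are such a loser!"))   # flagged, feedback returned
print(submit_message("Good luck in the tournament!"))  # posted normally
```

Swapping the placeholder for a real model changes only `looks_like_bullying`; the pre-posting feedback loop, which is the novelty of the intervention design, stays the same.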