
The Diffusion of Ignorance in Online Communities

ABSTRACT
This paper investigates how information-sharing mechanisms in online communities favor the
distribution of ignorance, such as fake data, biased beliefs, and inaccurate statements, on their
platforms. In brief, the authors claim that online communities provide more ways to connect
users to one another than ways to control the quality of the data they share and receive. This, in
turn, diminishes the value of fact-checking mechanisms in online news consumption. The authors
contend that while digital environments can stimulate the interest of groups of students and
amateurs in scientific and political topics, the diffusion of false, poor, and unvalidated data
through digital media contributes to the formation of bubbles of shallow understanding in the
digitally informed public. In brief, the present study is a piece of philosophical research,
embedded in the theoretical framework of the epistemologies of ignorance, that applies virtual
niche construction theory to the cognitive behavior of internet users as described by the current
psychological, sociological, and anthropological literature.
Keywords: Epistemology of Ignorance, Cognitive Niches, Affordance, Scientific
Communication, Social Media, Black Box Arguments, Filter Bubble, Epistemic Bubble.

INTRODUCTION
It is easy to think of online communities by focusing only on their role as social aggregators.
Someone could argue that social media, as particular instances of online communities, are not
meant to be places where accurate sharing of information happens because they are just socially
based Internet websites for catching up with old flames or sharing what you ate for breakfast.
While that may once have been true, it is now a gross oversimplification. Initially, Facebook and
other sites were indeed designed as personal spaces to gossip and share personal information.
Even so, the amount of news, scientific data, and political statements that users now share on
online platforms should force even the most skeptical person to consider them popular venues for
sharing – and for consuming and commenting on – external content with one's (actual and
virtually extended) network. Recently, the science writer Christie Wilcox (2012, p. 87) went
even further, asking scientists to be aware of these new tools for science communication,
deeming this effort "an integral part of conducting and disseminating science in today's world".
Online communities could be powerful instruments for education, but the current diffusion of
fake or, at best, "oversimplified" scientific reports, political statements, and news on online
platforms is the main reason to consider social networks actual ignorance spreaders.
Indeed, online communities distribute misinformation as well as news and high-quality
information, and the problem with this binary distribution is the lack of epistemological tools
users have to distinguish what is relevant and accurate from what is not (Bessi, Scala, Rossi,
Zhang, & Quattrociocchi, 2014). Thus, in this sense, the aim of the paper also incorporates the
question "how have socially oriented tools developed a mechanism for sharing news and data
that can also easily distribute misinformation and hoaxes?"
In the attempt to answer this question, the authors aim at investigating how information-sharing
mechanisms in online communities, such as social network websites, newsgroups, forums, and
blogs, favor the distribution of ignorance, such as fake data, biased beliefs, and inaccurate
statements, on their platforms. Thus, in the first section, the authors will briefly present their
research as following the precepts of the recently developed epistemology of ignorance, referring
to existent epistemological and moral frameworks (Proctor, 2005; Tuana, 2006; Sullivan &
Tuana, 2007; Davies & McGoey, 2012; Pohlhaus, 2012). They will also highlight the research
gap that exists in the epistemologies of ignorance, which concerns the diffusion of ignorance
through online media. In the second section, the authors will present online communities as
virtual cognitive niches following the account provided by Arfini, Bertolotti, & Magnani (2017)
and using basic definitions from cognitive niche construction theories, in order to analyze those
traits that make online communities particularly apt frameworks for the toleration of ignorance
distribution. In the third section, they will argue that the creation and use of online communities
as information sources promote biased epistemic judgments over the data users receive and
share. The authors will underline how this proves particularly interesting where online
communities are concerned, because they are engineered not only to be "foolproof," but also to
naturally co-opt the inferential patterns developed by human beings in settings of real-life
cognition (as reported by Bertolotti, Arfini, & Magnani (2017)), for instance social cognition and
one's natural disposition towards sharing (Simon, 1993). The promotion of biased judgments
happens inasmuch as the communication of data is adjusted to meet the interests and motivations
of individual users (who are subject to what Pariser (2011) calls the "filter bubble"). As an
example of this phenomenon, the authors will comment on the so far unsuccessful but tireless
campaign of the UNICEF Social and Civic Media Section (2012) aimed at countering the
diffusion of anti-vaccine sentiments throughout Eastern Europe.
In brief, the present study is a piece of philosophical research, embedded in the theoretical
framework of the epistemologies of ignorance, that applies virtual niche construction theory to
the cognitive behavior of internet users as described by the current psychological, sociological,
and anthropological literature.

EPISTEMOLOGY OF IGNORANCE BACKGROUND AND THE DISTRIBUTION OF MISINFORMATION IN ONLINE COMMUNITIES
At first sight, the connection between the topic of this paper and the “epistemologies of
ignorance” can be puzzling. Epistemologies of ignorance are mainly interested in the
development of feminist philosophies (Tuana, 2006), the understanding and countering of forms
of racism (Mills, 2007), and the definition and preservation of epistemic justice (Sullivan & Tuana,
2007). Nonetheless, the development of the "epistemologies of ignorance" has always been
inspired by the necessity of a contextualization of ignorance. This contextualization mainly aims
at the investigation of the epistemological background that generates it and the analysis of
information-sharing mechanisms that contribute to spread it in particular environments (Proctor,
2005; Sullivan & Tuana, 2007; Davies & McGoey, 2012; Pohlhaus, 2012). To comprehend
ignorance, one needs to investigate its means of diffusion and its "audiences," its consumers and
producers, and the relevant epistemic environments, which demand specific epistemic and power
relations between their occupants.
Accordingly, even if the study of the diffusion of ignorance in online communities does not
directly relate to forms of racism and feminist issues, it is nevertheless the study of contexts
where humans establish strong relations of power and knowledge and, in turn, also distribute
ignorance and epistemic inequality. Indeed, the Internet is one of the most powerful resources of
information currently available. According to the Pew Research Center, more than two-thirds of
the American population use online communities, most of them to get news about politics,
science, and technology (Oeldorf-Hirscha & Sundar, 2015). At the same time, it is widely reported that
these networks also distribute misinformation. For example, a study conducted by Bessi, Scala,
Rossi, Zhang, & Quattrociocchi (2014) shows that a large part of the Facebook population,
upon receiving an injection of evidently false information, cannot distinguish it from
grounded data. Again, the UNICEF Social and Civic Media Section (2012) conducted a study on the
diffusion of pseudoscientific rumors and ideological beliefs on online communities to understand
and counter the diffusion of anti-vaccine sentiments in Europe.
For these reasons, the authors believe it is time to include in the literature pertaining to the
epistemologies of ignorance a line of research interested in the diffusion of ignorance
through online media. Specifically, in this paper, the authors' investigation will be directed by a
comprehensive definition of the term ignorance. The authors claim that ignorance as generally
understood by analytic philosophers, as a "lack of knowledge" or "lack of true beliefs," fails to
capture the employment of the term in ordinary situations. Lack of knowledge, indeed, refers
to only a particular state of the ignorant cognition: the one that does not possess enough data or
the right information to be considered in a "knowledge state". The problem with this definition is
evident if one considers cases where all the relevant data are offered to the subject, who refuses
to believe the truthful information, misinterprets it, or fails to understand it. Ignorance, in
the authors' definition, is not limited to the situation where the agent does not have all the
relevant data to reach a particular epistemic goal, but also encompasses the cases where the
agent lacks the epistemic tools to recognize the appropriate, accurate, or useful information.
Moreover, this definition of ignorance also describes those situations where the agent has the
correct, relevant, and valuable data, but fails to believe them or refuses to use them to reach her
epistemic goals. The lack of epistemic tools is included in the definition of ignorance as a lack of factual or
procedural knowledge to gain new knowledge. In this sense, misinformation, fake data, biased
beliefs, and inaccurate statements stand as instances of ignorance: they are misinterpreted,
incomprehensible, or false data that are not recognized as such by the agents. And in the
development of the digital era, the diffusion of ignorance (in this sense) through social media not
only affects the analysis of ignorant people but also the proper philosophical definition of
informed citizens that one should adopt.
If it is true, as Pariser (2011, p. 15) wrote, that “the structure of our media affects the character
of our society,” then one must discuss whether Internet users may have the power to shape for
themselves and others the frame of the society, distributing both correct data and misinformation
in online communities. Thus, the authors claim that the other side of the coin of the
"democratization of information" via online media is the increased responsibility of the crowd
over the data they share, which are not controlled, fact-checked, or revised by any authority
greater than the one shared by the crowd. Being able to distinguish ignorance from knowledge,
truth from fantasy is, in this context, more power-related than anywhere else. So, by
understanding the ways online media have been employed to spread imprecise and flawed data,
to discuss and diffuse biased judgments and misinformation, and to promote and distribute
radical ideologies and fear messages, the authors mean to investigate how they have been used as
instruments of knowledge-related power against whoever lacks the necessary education to know
better.
THE TOLERATION OF IGNORANCE IN ONLINE COMMUNITIES
In order to analyze the diffusion of ignorance in online communities, the authors will examine
how false data are first tolerated by the users. In order to do so, they will embed the analysis of
online communities in a cognitive perspective, referring to them as virtual cognitive niches,
following the account provided by Arfini, Bertolotti, & Magnani (2017). Specifically, the authors
will present three main features of virtual cognitive niches that will be usefully employed in
order to discuss the distribution of both correct data and misinformation on online networks.
1. Arfini, Bertolotti, & Magnani (2017) argue that virtual cognitive niches are constructed by
human actors by externalizing knowledge into the surrounding environment. This conception
follows the description of cognitive niches presented by Andy Clark (2005), who also defines
cognitive niche construction as “the process by which animals build physical structures that
transform problem spaces in ways that aid (or sometimes impede) thinking and reasoning
about some target domain or domains.” (Clark, 2005, pp. 256–257). In Arfini et al.’s
perspective (2017), an online community is an externalization of knowledge in the sense that
users employ social media as information distributors and depositories: users externalize, that
is, put into the network, personal data and opinions. Through the externalization of data on
social platforms, they build digital structures that transform problem spaces (the limited
physical spaces), establishing communication and sharing of data between subjects, regardless
of the amount of data they may share, the distance between them, and their diversity of
culture, language, and status.
2. Virtual cognitive niches are also defined as structures in which human beings apply an
instrumental intelligence to uncover and exploit, in a persistent way, cause-effect
relationships in the external world. This particularly relevant feature was initially proposed
for cognitive niches by Pinker (2003) and Tooby & DeVore (1987). In online communities,
users employ specific patterns of behavior to establish fun, interesting and useful connections
with other people. These patterns of behavior depend on the accurate exploitation of the tools
and possibilities offered in those networks. In this way, users “uncover and exploit” cause-
effect relationships that emerge in the interaction on online platforms with other users.
3. Virtual cognitive niches are also sets of affordances. Affordances are defined by Gibson
(1986) as "opportunities for action". In this sense, online communities not only distribute
particular types of affordances proper to the digital domain (adding contents to the
"cyberspace" and its myths, cf. Mosco (2004)), but also let users generate specific
affordances for other users. Take for example the "share" button on Facebook: it affords the
user the possibility to show a particular content on her own wall and to comment on it briefly.
It is an affordance created by Facebook programmers and put at the disposal of Facebook's users.
Exploiting the definition of online communities as virtual cognitive niches, the authors will
start the analysis of the information-sharing mechanisms enacted in those environments to
uncover how and why they can also become depositories for fake data and misinformation for
their users.

The Social Relevance of the Virtual Domain


The first definition for cognitive niches, provided by Clark (2005, p. 256), depicts them as
constructed structures that affect users’ cognitive processes in order “to aid (or sometimes
impede) thinking and reasoning about some target domain or domains”. Specifically, the
development of online communities helps users to reason about both the actual, concrete, reality
to which the users, as physical people, belong and the digital reality that is embedded in the
digital environment. Users can spread and get data pertaining to these two separate but
connected realities. Sharing a post on the aftermath of the American presidential election can be
useful for a particular network of people to discuss the consequences that that event will have on
their lives. At the same time, it will also help them understand the political view of their friends
or, rather, whether and how much their friends are politically involved in the digital framework.
Even if the first direction of the reasoning can be considered more important, the second is much more
useful to discuss online communities: it determines the establishment of some relationships
between users, it leads to reinforcing or breaking social bonds, and it is the key to comprehend
the mechanism of a spontaneous information-based community. If a person realizes that her
friends are prone to comment and like politically-oriented Facebook posts she is going to expect
reactions to a post on the American presidential election. At the same time, her friends will
expect some comments from her after the election day, if she usually posts comments on political
news.
The capacity of online communities to aid reasoning about both the real and the digital
framework is thus asymmetrical. Indeed, for every post about a real event, every piece of
personal information, or piece of gossip that users share, they learn something about the
network, while they learn something about the external world only when approaching a
particular piece of shared information. Social media information-distribution mechanisms
provide more ways to connect users to one another than neutral ways to distribute data. The
digital domain is loaded with cognitive artifacts that support the communication and sociability
of the users who share a particular network (such as two-person and group chat rooms, more or
less public personal
pages and profiles, group selection sharing, and so on). Facebook, for example, is a structure that
points out the connections with friends and colleagues through the display of their data and
updates on the main page the users see, the News Feed page. It is a source of relevant data on the
user’s network that helps her to see it as a common ground for her interactions. It is a
“personalized newspaper featuring (and created by) your friends” (Pariser, 2011, p. 24), of which
the agent is both the center and the only target. The social relevance of the digital domain
emerges as a form of dominance over the data shared on the online platforms, altering the
agent's perspective on the real-world domain. In this sense, any information regarding other
users is interpreted from the first-person perspective of the user according to the beliefs she has,
first, about the person who shared that information and, only after, about the actual content of
that piece of data.
Moreover, while it is obvious that the digital domain of online communities encourages the
distribution of socially-oriented data (pieces of information regarding the users’ interests, goals,
and preferences), data regarding the external domain can be exchanged in order to support,
change and improve the quality of the communication on the online platform. News consumption
(as information-receiving and -sharing) is still increasing on social media platforms such as
Facebook and Twitter (Oeldorf-Hirscha & Sundar, 2015). There users can leave a comment
regarding a story found on a news website, post a link to a news story or even generate and
spread original news material (Purcell, Rainie, Mitchell, Rosenstiel, & Olmstead, 2010). Since
the purpose of sharing these types of data is to trigger the social mechanisms of the platform, the
important thing about content shared on an online community is who shared it, not what has
been shared. As also pointed out by Oeldorf-Hirscha & Sundar (2015):

The key factor is that news is coming from a trusted personal source: most
news links on Facebook (70%) are from friends and family rather than news
organizations that individuals follow on the site (Mitchell & Rosenstiel, 2012)
(as reported by Oeldorf-Hirscha & Sundar, 2015, p. 240).
Furthermore, the user-generated elaboration of news and the sharing activities on the network
provoke a “sense of agency”, the feeling that the agents have some control over the data they
share (Sundar, 2008). After all, the events, data, and facts of public interest are thought to be
freely chosen by the user, considered interesting and shared as the user prefers, with texts,
images, or links to other pages. Here too, the structure of online communities fosters the user's
reliance on the social dominance of the digital domain, which is represented by how she chooses
to share a content and not by the content itself. In this way, users become what Bruns, Highfield,
& Lind (2012) call "produsers", who are neither mere consumers of news content nor producers,
but exhibit a hybrid role in online media networks that permits them to share data created by
another source as if it were their own. Thus, online communities as virtual cognitive
niches become places where users can share interesting facts and items in order to display their
interests and opinions.
In sum, the social dominance of the digital domain promotes a different approach to the news
and events of external reality, which are perceived as not just passively received, but also
(re)produced by the users of the network. Opinions and reported facts become blurred categories
in the network, and the different relationship with external contents on online communities
implies a toleration regarding the degree of accuracy that users employ in sharing a particular
piece of information. Users can employ the same fact-checking mechanism to comment on both
a piece of gossip and a piece of national news without being subject to any sort of criticism.
Indeed, in an online community, the shared information is never neutral, i.e., it is neither
impersonal nor accidental: every user chooses what to share and when to share it on the basis of
her interests, her desires, and the effects she hopes to achieve through that particular sharing
within the online community. A recent report shows, for example, that just a fraction of Facebook
users follow news organizations, and those who do are generally news consumers also outside of
the network community (Wells & Thorson, 2005). This means that incidental exposure to
news – such as seeing news titles in friends' posts – rarely reaches the circle of users who are
not interested in news (McPherson, Smith-Lovin, & Cook, 2001).
Furthermore, every piece of information is bound to the user who shared it: any content,
personal or community-related, is present on the platform because a user uploaded it and she is
accountable for its presence. As the authors already mentioned, this places the key for
trustworthiness in the hands of the user: the reliance on data and news is double-tied with the
reliability of the agents who share them (Oeldorf-Hirscha & Sundar, 2015). Especially on a
platform like Facebook, where the information is personally identified (often the profile is linked
to an actual person without the involvement of aliases and nicknames), this does not imply the
trustworthiness of the information but the trustworthiness of the social connection between the
information and the virtual persona (Mitchell & Rosenstiel, 2012). The virtual user, as a vehicle
for data, traces a consistency vector between a datum and the adequacy of that particular datum
on her profile. In this way, users build an online community that provides socially-based
affordances which are epistemically unreliable, as they promote the maintenance of social bonds
rather than any deontological respect for truth.
Obviously, though, even exposure to trustworthy contents does not imply knowledge of
those contents (Hermida, Fletcher, Korell, & Logan, 2012). Indeed, the relation of trust that
could strengthen the information-sharing mechanisms of online networks at the same
time binds users to a relation of social dependency on one another for epistemic matters.
The problems of this socially-driven system emerge when online communities are chosen by
internet-users not just as a social domain but also as the main area for discussing significant
topics: for example, as grounds for the comprehension and diffusion of political ideas and
scientific news. In the next section, the authors intend to analyze whether the entanglement
between the two domains of online communities can lead to problematic phenomena of
misunderstanding of real-world events and data in the context of online networks.

ONLINE MEDIA AND THE DIFFUSION OF IGNORANCE


The Filter Bubble and the Implementation of the Confirmation Bias
So far, the authors have described online communities as promoting the social dominance of the
digital domain over the external one in terms of fostering more attention to the connections
between users than on the truthfulness of the data that are shared in those connections. This
feature is based on affordances implemented by social network and online community
developers through the use of "ranking" software, which increases the personalization of
websites by filtering the data that users can actually access. The generation of this software, in
2009, was an answer to the increasing amount of data distributed online. Initially it was
implemented by programmers of web search engines, with Google being the first, to establish
personalized filters for searches.
Thus, since 2009 different people have been accessing different contents when googling the
same term, depending on the more or less personal data Google has stored (where the users log
in from, what browser they use, their browsing history, etc. (Pariser, 2011)). Therefore, when
similar software was used to compact social network feeds into personalized frames, it affected
not only the sense of social gathering that these websites promoted, but also the contents that
users shared. On Facebook, the algorithm that now implements the personalization of the
default page of the site is EdgeRank, which ranks every interaction on the site. In other words,
the more contact you have with a person through Facebook and the more attention you pay to
her profile – chatting with her, commenting on her posts, liking her photos, spending some time
checking her profile, and so on – the more likely it is that Facebook will show you more of her updates.
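EdgeRank's exact implementation is proprietary, but its publicly described form ranks each story by the product of a viewer-author affinity score, an edge weight, and a time-decay factor. The following toy sketch, in which all names and numbers are invented for illustration, shows how such a score lets affinity dominate freshness:

```python
from dataclasses import dataclass

# Toy model of an EdgeRank-style feed ranker. Illustrative only: the real
# algorithm is proprietary; authors, weights, and decay are invented.
# Publicly described form: score = affinity * edge_weight * time_decay.

@dataclass
class Story:
    author: str
    hours_old: float

# Affinity: how often the viewer interacts with each author (hypothetical values).
affinity = {"close_friend": 0.9, "acquaintance": 0.2}

def edgerank_score(story: Story, viewer_affinity: dict) -> float:
    a = viewer_affinity.get(story.author, 0.1)  # viewer-author affinity
    w = 1.0                                     # edge weight (e.g. comment > like)
    d = 1.0 / (1.0 + story.hours_old)           # simple time decay
    return a * w * d

feed = [Story("acquaintance", 1.0), Story("close_friend", 5.0)]
ranked = sorted(feed, key=lambda s: edgerank_score(s, affinity), reverse=True)
# The fresher acquaintance post (0.2 * 1/2 = 0.10) still loses to the older
# close friend's post (0.9 * 1/6 = 0.15): affinity dominates recency.
print([s.author for s in ranked])  # -> ['close_friend', 'acquaintance']
```

The design choice worth noticing is that the affinity term is itself fed by past interactions, so every click strengthens the very signal that decides what is shown next.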
This tool amplifies the influence of peer opinion on these websites and the sense of being part of
an actual community (Acquisti & Gross, 2006), making it preferable for users to acquire socially
filtered news (Emmett, 2008). The consequence is straightforward: with this implementation,
users not only see the updates of all their "friendliest" friends but, given that consuming
information that conforms to one's ideas is easy and pleasurable, they are more and more pushed
to see such information, rather than information that challenges their opinions and questions
their assumptions. This feedback mechanism creates what Eli Pariser calls a "filter bubble,"
which is an extension of the confirmation bias through the means of social networks and online
communities. The confirmation bias is the tendency to consider and accept just the information
that confirms one's prior beliefs and opinions. Through personalized online platforms, this
psychological fallacy is reiterated in a web space constructed for social aggregation but
developed into an information-sharing site.
The distinction between what the agent sees because it is validated by many sources and what
she sees because her friends share her opinions is no longer visible. And the visibility of this
distinction is very important for how science is communicated. For example, in "The Panic
Virus", the journalist Seth Mnookin argues that Andrew Wakefield, a British gastroenterologist
who alleged that the measles-mumps-rubella vaccine might cause autism, remained very
successful in disseminating misleading data on vaccines through social media, where he
garnered fame for it, even after losing his medical license (Mnookin, 2011). His fame has been
spread by supporters of this argument and, through the mediation of a confirmation-driven
network, it has produced a sense of validation for the hypotheses of concerned parents.
According to a UNICEF Social and Civic Media Section (2012) report, the anti-vaccination
sentiment is hard to take down, notwithstanding the many scientific studies confirming that
there is no connection between inoculations and the occurrence of cases of autism, because the
networks that spread this information are hardly penetrable by contrary opinions.
Moreover, the fact that the vast majority of adults search for data on the Internet and at least
two-thirds of them are on social networks (Oeldorf-Hirscha & Sundar, 2015) renders this
situation more and more dangerous from an epistemological perspective. As Pariser (2011, p. 7)
highlights: “With Google personalized for everyone, the query ‘stem cells’ might produce
opposed results for scientists who support stem cell research and activists who oppose it. ‘Proof
of climate change’ might turn up different results for an environmental activist and an oil
company executive”.
Nonetheless, if data navigate through social media and spread by "homophily," that is, the
tendency to like what is similar to us (McPherson, Smith-Lovin, & Cook, 2001), why can the
popularization of science not counter the diffusion of misinformation by the same means? In
other words, why is it so much easier for a non-verified assumption to spread in a network than
for the corresponding accurate information?
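One way to make the question concrete is to simulate homophily-driven sharing. In the toy model below, whose clusters, tie counts, and probabilities are all invented for illustration, a claim that matches a cluster's prior attitude saturates that cluster, while a claim that contradicts the prior of its first recipient dies immediately; dense in-cluster ties and sparse bridges do the rest:

```python
import random

random.seed(7)

def simulate(claim_matches_cluster_a: bool, steps: int = 10) -> int:
    """Return how many of 100 users were exposed to a claim after `steps` rounds."""
    # Two homophilous clusters of 50 users each: users 0-49 agree with the
    # claim iff `claim_matches_cluster_a`; users 50-99 hold the opposite prior.
    agrees = [claim_matches_cluster_a] * 50 + [not claim_matches_cluster_a] * 50
    neighbors = {u: set() for u in range(100)}
    for u in range(100):
        own = range(0, 50) if u < 50 else range(50, 100)
        other = range(50, 100) if u < 50 else range(0, 50)
        neighbors[u] |= set(random.sample(list(own), 8)) - {u}  # dense in-cluster ties
        if random.random() < 0.1:                               # rare cross-cluster bridge
            neighbors[u].add(random.choice(list(other)))
    exposed = {0}  # the claim starts with a single user in cluster A
    for _ in range(steps):
        sharers = {u for u in exposed if agrees[u]}  # confirmation-driven resharing
        for u in sharers:
            exposed |= neighbors[u]
    return len(exposed)

# A claim congenial to cluster A saturates it; a contrary claim stops at user 0.
congenial = simulate(claim_matches_cluster_a=True)
contrary = simulate(claim_matches_cluster_a=False)
```

Under these invented parameters the contrary claim never leaves its first recipient, a crude rendering of the UNICEF report's observation that networks spreading anti-vaccine sentiment are hardly penetrable by contrary opinions.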

Sharing Data on Online Networks as Black Box Arguments


While explaining the relevance of affordance theory for the development of their account, Arfini,
Bertolotti, & Magnani (2017) wrote that, if sociality is the aim of online communities as virtual
cognitive niches, information is the only kind of currency. Right now, this is the core of both the
appeal of online communities and their problems regarding the transmission of accurate,
high-quality contents, such as political news or scientific data.
To be clearer, let us consider for example how science is communicated. Right now, hard-core
science is a monologue given by and transmitted to a very specific audience. Scientists aim at
achieving significant results, doing ground-breaking research and publishing it in the top journals
of their fields. But these publications are shared only among scientists and specialists. Rarely is
an important article published in an influential journal transmitted to the public in its original
form. This happens for two reasons. The first depends on the academic system of publication,
which is too expensive for non-academics. The second depends on, as well described by
Christie Wilcox (2012, p. 85), "jargon walls – the barriers that keep the people we want to
become more scientifically literate from understanding what we do because they do not know the
terminology". A scientific work is understood once the process that led to its results is
comprehended, and it is rare that the terminology used in a specific field of research is
widely accessible outside of it. Scientific sectors are now so specialized that, even within the
same field of research, two scientists may use different definitions for the same object. Thus,
with this considerable lexical problem at hand, what is transmitted to the public media?
Science journalists (and scientists who do try to communicate their research) give the public
what the latter is looking for: information. Often, they diffuse the results of scientific research
with little or no effort in explaining how the process was conducted. They release into the media
what Jackson (2008, p. 47) calls "black box arguments". A black box argument is, in the words
of the author, "a metaphor for modular components of argumentative discussion that are, within
a particular discussion, not open to expansion." They are the parts of an argumentation, often
conclusive, that stand for a complete explanation of the process that leads to that conclusion,
but that cannot be further elaborated by the listener. They resemble the fallacy ad auctoritatem,
the "appeal to authority," in the way that they are justified by the speaker as the abbreviation of
a complicated result found by competent people.
Often public media offer users oversimplified narratives instead of explaining the complicated
process that drove scientists to some conclusion. By using black box arguments, scientists and
science writers do not offer much more than what is distributed by other information sources. If
the rhetoric of the authors of online contents is the only discriminant between high-quality data
(scientific reports, political news, and so on) and poor, unvalidated data, then it is not surprising
that those who spread the latter are better prepared for communication on social networks. By
deploying intuitively simple but scientifically irrelevant black boxes of their own (conspiracy
theories, religious beliefs, concepts of alternative medicine, etc.), they can offer solutions better
suited to the nonacademic environment of public media. The recipients of public media may
encounter difficulties in understanding the process of science but could be profoundly religious,
political extremists, superstitious, etc.
Furthermore, the use of black box arguments does not only affect the appeal of science for a heterogeneous (not always science-interested) population; it can also bring about phenomena of shallow understanding even on the part of the public that is genuinely interested in expanding its comprehension of science, politics, and other difficult topics. Users, for example, may assume that the informational content they find (or share) online can lead them to a relative knowledge of the topic. But, as the authors already argued, black box arguments are not open to expansion. If you read an article on gravitational waves on Vice.com you may acquire some data you did not possess before about general relativity, but it does not transform you into an expert in the field, nor does it give you the same knowledge you would obtain by reading an essay on the same topic. Yet, in online networks, you could have the same sense of authority and control over the information you share, as if it were yours (Sundar, 2008).
The emergence of "produsers" also gave rise to the phenomenon of self-proclaimed experts, who become the acclaimed leaders of socially driven networks (Bruns, Highfield, & Lind, 2012). In fact, while socially shared data (for instance impressive, curious, or fun scientific tidbits) do serve an interactive and social purpose, they may delude users into believing they can acquire actual specific or complete knowledge with little effort. This phenomenon is at the core of the bubbles of shallow understanding that abound on the net, which derive from the particular type of affordances ("imagined affordances", analyzed by Nagy & Neff, 2015) that allow the distribution of black box arguments in online communities.
The Rise of the Easily Informed Expert
Arfini, Bertolotti, & Magnani (2017) claimed that online communities as virtual cognitive niches
distribute “imagined affordances” (Nagy & Neff, 2015), which are the implementation of “users’
perceptions, attitudes, and expectations” within the possibilities and boundaries of a given
technology. Nagy & Neff (2015, p. 1) highlight how, for Gibson (1986), imagination can be
considered as an “extension” of perceptual knowledge, which is not “so continuously connected
with seeing here-and-now as perceiving is”.
Without imagination, there is no rationality. [...] The point is not solely what
people think technology can do or what designers say technology can do, but
what people imagine a tool is for. (Nagy and Neff, 2015, pp. 4–5)
Now, the generation of imagined affordances may be one of the cognitive follow-ups to the distribution of black box arguments in online communities. Expectations about the functionality of a particular technology may not be encoded in those tools by design, but they become part of the users' perception. In this sense, the feeling of agency and control over the information shared on online media can be experienced as "epistemological power" over that information in that particular community (Sundar, 2008). People who share posts regarding the recent discovery of gravitational waves (posts that contain fancily disguised black box arguments) may believe that they effectively know something more than those who did not. But knowing and believing that one knows are two different cognitive states and, while believing is a pleasurable condition, it is also a fallible state not always recognized from the first-person perspective of the agent.
The entanglement between the pleasure of believing that one knows something and the incapacity to distinguish this from actual knowledge is the core of Woods' notion of the epistemic bubble (2005), which describes the inability to distinguish one's own ignorance from one's knowledge. An epistemic bubble is a phenomenon of epistemic self-deception by which the agent becomes unaware of the difference between knowing something and believing that she knows the same thing. It derives from the fact that believing oneself to have some knowledge is a pleasurable condition for the individual: it permits her to act according to her beliefs and to relieve the irritation that the lack of some important data may raise. Since this relief is experienced when knowledge is acquired, "feeling relieved" is taken as a clue to knowledge acquisition. Posting data in an online community indeed produces a sense of control and agency over it: this may also cause the delusion of having a special epistemic privilege over it, as if actual knowledge had been acquired.
This can explain why self-proclaimed experts on a variety of topics multiply in online communities. As an example, the authors will refer again to the diffusion of anti-vaccine sentiments in Europe. One problem that agencies like UNICEF have to face is the proliferation of medically unqualified opinion leaders who guide the anti-vaccine crusades (UNICEF Social and Civic Media Section, 2012). These leaders often have no college education, but they appear to have been well trained in alternative medicine. Some are simply show-business celebrities, like Jenny McCarthy, who has presented herself as an educated, "Internet-savvy" mother aiming to defy the medical establishment's data about vaccinations. Some proclaim themselves "experts" about vaccinations because of their experience as religious authorities, political experts, or "well-informed" parents: they typically present vaccinations as religiously problematic or as part of a conspiracy, also because they believe themselves to be well-informed experts on religious matters and conspiracy theories. Often, parents who proclaim themselves experts in the correlation between vaccination and the onset of autism highlight negative stories that focus on individual cases. These cases, like religious prohibitions on vaccination and conspiracy schemes, are black box arguments that delude opinion leaders into believing they have acquired particular knowledge about a sensitive issue, without any reference to the medical understanding of the practice of vaccination. They are in epistemic bubbles that entrap them in the self-delusion of possessing relevant knowledge about an issue without actually possessing it.
Online networks offer them the possibility of acting as competent opinion leaders thanks to the imagined affordances these networks distribute. By asking the network's opinion, targeting specific people, and sharing sensitive data, such users elicit greater involvement with the relevant content from the network and feel as if they were at the center of the movement for vaccination control. They use online tools because they perceive the audience they can reach as an imagined affordance. Entrapped in epistemic bubbles, sharing black box arguments and fomenting anti-vaccine sentiments, they instead only diffuse ignorance and misinformation in their online networks.
Summing up, as socially driven networks, online communities provide more ways to connect users to one another than to control the quality of the data they share and receive. The epistemic constraints imposed by the filter bubble, the diffusion of black box arguments, and the generation of epistemic bubbles cultivated by exploiting the imagined affordances of online communities can effectively spread misinformation and hoaxes that compromise the epistemic judgment of users and multiply phenomena of ignorance diffusion. Nevertheless, as long as the Internet remains the main information source for the global audience, it is essential to find ways to reduce the spread of ignorance on its platforms and to create at least an epistemic balance in information diffusion. One way to manage this situation could be implementing forms of data curation and gatekeeping on Internet platforms.
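The epistemic constraint imposed by personalization, discussed above, can be illustrated with a minimal simulation. The sketch below is purely hypothetical (the item pool, topics, and scoring rule are invented for illustration and do not reproduce the algorithm of any real platform): a ranker that favors topics a user has already engaged with, whose output feeds back into the user's history, quickly collapses the feed onto the initial preferences.

```python
from collections import Counter

def rank_feed(items, user_history, top_k=3):
    """Rank candidate items by how often the user already engaged
    with each item's topic: a crude stand-in for personalization."""
    liked_topics = Counter(item["topic"] for item in user_history)
    # Items whose topic the user engaged with most come first.
    scored = sorted(items, key=lambda it: liked_topics[it["topic"]],
                    reverse=True)
    return scored[:top_k]

# A hypothetical pool of items spread evenly over four topics.
pool = [{"topic": t, "id": i} for i, t in enumerate(
    ["politics", "science", "sports", "health"] * 5)]

# The user starts with a mild preference for one topic.
history = [{"topic": "politics"}, {"topic": "politics"},
           {"topic": "health"}]

# Simulate a few feed refreshes: shown items feed back into history.
for _ in range(3):
    shown = rank_feed(pool, history)
    history.extend(shown)

topics_seen = {item["topic"] for item in history}
print(topics_seen)  # only the initially preferred topics remain
```

Even in this toy version, "science" and "sports" never surface again after the first refresh: the feedback loop between ranking and engagement, not any deliberate censorship, is what narrows the user's informational horizon.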

LIMITATIONS OF THIS STUDY


Three different limitations are intrinsic to this philosophical study. These limitations, however, do not undermine the quality of the results: rather, on the one hand they illuminate how the results can be better understood and exploited, and on the other they point to new directions in which to continue this kind of research.
The first limitation is of a socio-geographical nature. Most of the psychological, behavioral, cognitive, and sociological studies on which this applied-philosophical research is grounded belong to the sphere of Western liberal democracies (namely Europe, North America, and Australia). Other areas see the diffusion of online communities as well, but usage there is impacted differently by the existing infrastructure for accessing information, the affordability of devices and bandwidth, the tolerated level of freedom of expression, and the values upheld by the population and the government. These traits might affect the diffusion and the perception of ignorance in those communities.
The second limitation is linked to the speed at which the object of this study develops and evolves. New technologies are released, and new usages emerge in response, at a pace that research can hardly keep up with. Considering also the inevitable delays of the reviewing process, a published article may deal with a technology that was hyped a few months earlier but has almost disappeared by the time of publication. Philosophical research, feeding on this literature, may suffer from a further lag. At the same time, educated philosophical reflection, which reads these results through an independent framework, may help, by appropriate generalization, to provide added value to the original research.
The last limitation is the most philosophical one, and relates to Kuhn's notion of "paradigm" (1962). Online communities are ever-present: the simple will to carry out research on online communities compels the researchers to enter and engage with online communities, beginning with the search engines providing results to their queries. In their individual lives, too, the researchers interact with and within online communities. Philosophical enquiry is optimally suited to take a step back and reflect, in amazement, on the whole phenomenon of online communities in order to better grasp its shortcomings in a human-centered way; nevertheless, this analysis might still be rooted within the paradigm, as it relies on its vocabulary and its toolset. This is not to say that such research is necessarily flawed, since the same holds for most human endeavors, but it is an intrinsic limitation that should be declared for the sake of philosophical honesty.

CONCLUSION
In this paper, the authors examined the problematic phenomenon of the diffusion of ignorance in online networks. They first analyzed the implications of considering online communities as virtual cognitive niches, that is, digitally encoded collaborative distributions of diverse types of data into the environment. In virtual cognitive niches, agents are invited to focus on the social bonds established in the digital domain they contribute to creating, and to exploit socially pregnant cause-effect relationships within it. These features of virtual cognitive niches generate a sort of toleration for ignorance in the users of online platforms, who are driven more to establish social connections with other users than to exchange accurate data with them. The authors then offered possible explanations for the widespread diffusion of ignorance on online networks that compromises the critical judgment of their users. They attributed the difficulty of reliably distributing information in these networks to the widely employed ranking software used to personalize platforms, which determines the emergence of "filter bubbles" (Pariser, 2011) that limit users' exposure to uncomfortable, because dissimilar, opinions and beliefs. They then commented on the problematic features of science communication on social networks, which distributes, instead of fully explained scientific reasoning and knowledge, black box arguments (Jackson, 2008) that are not open to expansion by the average online network user. Finally, they explained the self-assurance of users through the notion of the epistemic bubble (Woods, 2005), a reassurance mechanism normally enacted by human agents in order to act confidently according to their beliefs, but one taken to an extreme in the "closed" framework of social networks. The authors claim this is one of the main reasons for the epistemic delusion of network users regarding the data they receive and distribute on online platforms.

END NOTES
1 A terminological clarification should be introduced. The authors use the term "online communities" as a general definition that embraces different types of Internet-based frameworks, such as social networking websites, newsgroups, forums, blogs, and microblogs (even if they will specifically refer only to Facebook and the Google search engine). They employ this term to define a target broad enough to support different referents, such as social media, digital frameworks, and social networks, without being so general as to be equivalent to traditional media, such as newspapers and television programs.
2 In this context, information-sharing mechanisms encompass any form of exchange of data
between members of the same online community through the technical means of the shared
platform (e.g. one-to-one chatting, group chatting, public posting, posting in a secret group, etc.).
3 Cf. reports on http://www.pewInternet.org/2015/10/08/social-networking-usage-20052015/.

REFERENCES
Arfini, S., Bertolotti, T., & Magnani, L. (2017). Online communities as virtual cognitive niches.
Synthese, online-first. DOI: https://doi.org/10.1007/s11229-017-1482-0.
Bertolotti, T., Arfini, S., & Magnani, L. (2017) Cyber-bullies as cyborg-bullies. International
Journal of Technoethics Special Issue The Changing Scope of Technoethics in
Contemporary Society (forthcoming).
Bessi, A., Scala, A., Rossi, L., Zhang, Q., & Quattrociocchi, W. (2014). The economy of
attention in the age of (mis)information. Journal of Trust Management, 1(1), 1–12.
Bruns, A., Highfield, T., & Lind, R. A. (2012). Blogs, Twitter, and breaking news: The
produsage of citizen journalism. In R. A. Lind (Ed.) Produsing theory in a digital world:
The intersection of audiences and production in contemporary theory (pp. 15–32). New
York: Peter Lang Publishing Inc.
Clark, A. (2005). World, Niche and Super-Niche: How language makes minds matter more.
Theoria, 54, 255–268.
Davies, W., & McGoey, L. (2012). Rationalities of ignorance: On financial crisis and the
ambivalence of neo-liberal epistemology. Economy and Society 41(1), 64–83.
Emmett, A. (2008). Traditional news outlets turn to social networking web sites in an effort to
build their online audiences. American Journalism Review 1, 41–43.
Gibson, J. J. (1986). The ecological approach to visual perception. New York, NY: Taylor &
Francis.
Hermida, A., Fletcher, F., Korell, D., & Logan, D. (2012). Share, like, recommend. Journalism
Studies 13(5-6), 815–824.
Jackson, S. (2008). Black box arguments. Argumentation 22, 437–446.
Kuhn, T. S. (1962) The Structure of Scientific Revolutions, Chicago: The University of Chicago
Press.
McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social
networks. Annual Review of Sociology 27, 415–444.
Mills, C. W. (2007). White ignorance. In S. Sullivan & N. Tuana (Eds.), Race and Epistemologies of
Ignorance (pp. 13–38). New York: State University of New York Press.
Mitchell, A. & Rosenstiel, T. (2012). The state of the news media: An annual report on
American journalism. The Pew Research Center’s Project for Excellence in Journalism,
1-43.
Mnookin, S. (2011). The Panic Virus. MLA: Simon & Schuster.
Mosco, V., (2004). The Digital Sublime. Myth, Power, and Cyberspace. Cambridge, MA: The
MIT Press.
Nagy, P., & Neff, G. (2015). Imagined affordance: Reconstructing a keyword for communication
theory. Social Media + Society 1(2), 1–9.
Oeldorf-Hirscha, A. & Sundar, S. S. (2015). Posting, commenting, and tagging: Effects of
sharing news stories on Facebook. Computers in Human Behavior 44, 240–249.
Pariser, E. (2011). The Filter Bubble: What the Internet is hiding from you. UK: Penguin.
Pinker, S. (2003). Language as an adaptation to the cognitive niche. In M. H. Christiansen & S.
Kirby (Eds.) Studies in the Evolution of Language (pp. 16–37). Oxford: Oxford
University Press.
Pohlhaus, G. (2012). Relational knowing and epistemic injustice: Toward a theory of willful
hermeneutical ignorance. Hypatia 27(4), 715–735.
Proctor, R. N. (2005). Agnotology. A missing term to describe the cultural production of
ignorance (and its study). In R. N. Proctor (Ed.) Ignorance (pp. 1–36). Stanford: Stanford
University Press.
Purcell, K., Rainie, L., Mitchell, A., Rosenstiel, T., & Olmstead, K. (2010). Understanding the
participatory news consumer. Pew Internet & American Life Project 1, 19—21.
Simon, H. (1993). Altruism and economics. The American Economic Review 83(2), 156–161.
Sullivan, S., & Tuana, N. (2007). Race and Epistemologies of Ignorance. New York: SUNY
Press.
Sundar, S. S. (2008). Self as source: Agency and customization in interactive media. In E. A.
Konijn, S. Utz, M. Tanis, & S. B. Barnes (Eds.), Mediated interpersonal communication
(pp. 58–74). New York: Routledge.
Tooby, J., and DeVore, I. (1987). The reconstruction of hominid behavioral evolution through
strategic modeling. In W. G. Kinzey (Ed.), Primate Models of Hominid Behavior (pp.
183–237). Albany: Suny Press.
Tuana, N. (2006). The speculum of ignorance: The women’s health movement and
epistemologies of ignorance. Hypatia 21(3), 1–19.
UNICEF Social and Civic Media Section (2012). Tracking Anti-Vaccination Sentiment in
Eastern European Social Media Network. New York: UNICEF.
Wells, C. & Thorson, K. (2005). Combining big data and survey techniques to model effects of
political content flows in Facebook. Social Science Computer Review 35(1), 1–20.
Wilcox, C. (2012). It’s time to e-volve: Taking responsibility for science communication in a
digital age. Biology Bulletin 222, 85–87.
Woods, J. (2005). Epistemic bubbles. In S. Artemov, H. Barringer, A. Garcez, L. Lamb, & J.
Woods (Eds.), We Will Show Them: Essays in Honour of Dov Gabbay (Volume II) (pp.
731–774). London: College Publications.
