Can social media give you post-traumatic stress disorder?


Luke Mintz

26 SEPTEMBER 2018 • 5:41PM

https://www.telegraph.co.uk/health-fitness/mind/can-social-media-give-ptsd/

In 2014, the terrorist group Islamic State took their battle online, posting a series of gruesome videos in which they beheaded Western
prisoners. Their aim was to spread fear, and the vast majority of people tried not to look. But there was one group who was forced to
watch the footage, hour after hour, every day of their working week: the content moderators on Facebook, YouTube and
other networks employed to keep the sites free of illegal content, removing the videos wherever they popped up.

Every day, more than 300 million new photographs are uploaded onto Facebook, along with 714 million comments and 410 million
status updates. Most are perfectly benign, but a small minority of the content posted to Facebook each day depicts the very worst of
human behaviour. And it is this material – terrorism, child abuse, animal cruelty, racist hate speech – that Facebook’s 7,500-strong
army of moderators is forced to sift through, day-in, day-out.

Doctors have long warned of the psychological dangers of this line of work, and this week one Facebook employee decided to take
action. Selena Scola, who worked as a moderator for the social media giant in California, is suing the company, claiming to have
developed post-traumatic stress disorder (PTSD) after being “exposed to thousands of images, videos, and live-streaming broadcasts
of graphic violence” during her time there in 2017 and 2018.

Should social media companies be doing more to protect their staff? And can you really develop PTSD from looking at online content,
a condition usually associated with soldiers and victims of serious violence?

Absolutely yes, says Emma Carrington, a mental health counsellor at Rethink Mental Illness. PTSD is by no means unique to soldiers,
she says, naming car accidents, fires, and childhood abuse as common causes. She can easily see how spending each day sifting
through extreme content could traumatise a Facebook employee.

“Just seeing one terrorist event is sometimes enough to make somebody experience PTSD,” she says. “So to be seeing that all day,
every day obviously has the potential to. We all want the internet to be a safe place for children, but somebody has to moderate that.
There is no way I would ever look at those things on the internet … but somebody has to do that job. We often forget there are people
like me and you sitting there watching these things.”

Inevitably, certain jobs will require staff to face upsetting situations, she says, and in these cases employers have a duty to provide
“reasonable adjustments” to help, such as professional counsellors. The UK fire and ambulance services, she says, routinely organise
“de-briefs” for their staff after they face a potentially traumatising scenario. Why can’t Facebook do the same?

But for journalist Gareth Rubin, it was the sheer repetition of the job that proved the most difficult.

In 2015, Rubin spent six months as a moderator for OkCupid, one of the world’s most popular dating websites. He didn’t come across
the sort of horrifically violent content that Facebook moderators are forced to tackle, and didn’t leave the job feeling particularly
traumatised. Instead, Rubin struggled with the mind-numbing grimness of it all.

“It’s pretty boring after a while, because you see the same mistakes and problems and stupidity and moronic behaviour over and over
again. It’s just the sheer repetition that destroys your confidence in humanity.”

His biggest nemesis was the scammers – criminals, usually with poor English, who posed as attractive young women and struck up
conversations with lonely, older men. Eventually, the young ‘woman’ would ask for money.

“You see a lot of that and it’s quite unpleasant. First off, it’s kind of ridiculous, you think ‘how can anybody be taken in by this poor
attempt at English?’, and a profile that is so obviously [fake] – a 25-year-old model approaching a 50-year-old British man. And then
you start thinking about the man who’s getting scammed, and you think ‘well the reason that [the scammer is] successful is because
the chap is quite vulnerable in some way’. And that’s sad.

"Of course, it happens the other way, women get scammed as well.”

As a moderator, he says he was also forced to sift through a barrage of photos of men’s penises (“dick pics”, as they are known on the
website), which men would send in a usually misguided attempt to woo women. However, the thing that sticks in his mind today
is the frequency and sheer unoriginality of the ‘tiger profile picture’ – a trend in which male users would post photos of themselves
posing with sleeping tigers at an animal sanctuary in Thailand.

Rubin thinks moderating a social media platform would be far worse now than when he endured it in 2015, with the growing volume
of illegal content posted to Facebook and Twitter making the jobs of their moderators extraordinarily difficult. “Undoubtedly social
media is much worse than it was four years ago. I think Twitter in particular has normalised hate, and that’s now shifted into the real
world which is far worse.”

He agrees with Emma Carrington that social media companies should do more to care for their online moderators.

“I think there is certainly a responsibility on Twitter and Facebook, who are huge corporations, and certainly have the resources … to
provide support services for their staff, and do frankly a hell of a lot more for users in terms of taking down hateful content.”

“Facebook has 7,500 moderators – Facebook can afford to have 40,000 or 75,000 moderators, and I don’t know why it’s not doing it
other than financial reasons.”

It’s possible that Facebook’s ‘moderator problem’ will be solved in coming years not by the mass hiring of mental health counsellors,
but by improvements in Artificial Intelligence (AI), which will automatically recognise illegal content before a human has to lay eyes
on it.

Blanket rules can get it wrong, of course: in 2016, Facebook was criticised after removing the iconic ‘napalm girl’ photo, taken during
the Vietnam War, after it was flagged for child nudity. (It later reversed the decision, once human beings got involved.)

Despite these hiccups, AI could well come to provide the solution to Facebook’s moderator problem. That, or staff like Selena Scola
will be forced to spend their hours looking at ‘dick pics’ indefinitely; it’s Facebook’s choice.
