Population controllers kill more women

Posted on November 11, 2014


More women died in India this week, the latest victims of official government population control
programs.

Rich country advocates of third world population reduction like to present their programs as benign
efforts to offer humane support to poor women who want birth control pills or devices but can’t obtain
them. It’s all about filling “unmet demand,” they say. There may have been abuses in the distant past,
but that’s behind us now.

But when we move from the liberal-sounding fundraisers in the North to actual activity in the South,
a very different picture emerges. The population controllers are still imposing their ideology on the
very poorest women, denying them choice and control, and killing many.

In India, populationism is official government ideology, and campaigns to reduce the number of poor
people are official government policy. Government programs pay per capita bounties to doctors who
sterilize women en masse. Unsafe operations are performed by ill-trained doctors, using poor
equipment in unsterile conditions. So-called health-care workers get just over $3 for each woman they
persuade to be sterilized, creating a strong motivation for clinics to process large numbers as quickly
as possible.

As Simon Butler and I discussed in Too Many People, when birth control programs are motivated by
population-reduction goals, the inevitable result is a focus on meeting numeric objectives and driving
up the totals, regardless of the desires or needs of the ‘targets.’

Blackmail, bribery, and coercion target the very poorest women. In India today, women who agree to
the operation are paid the equivalent of $23, which is more than most rural women earn in a month
— if they can find work at all. As Kerry McBroom, director of the Reproductive Rights Initiative at
the Human Rights Law Network in New Delhi, says, “The payment is a form of coercion, especially
when you are dealing with marginalised communities.”

Yet another tragedy, caused by just such population reduction programs, is reported this week in
the Guardian.
“Eight women have died in India and dozens more are in hospital, with 10 in a critical condition, after
a state-run mass sterilisation campaign went tragically wrong.

“More than 80 women underwent surgery for laparoscopic tubectomies at a free government-run
camp in the central state of Chhattisgarh on Saturday. Of these, about 60 fell ill shortly afterwards,
officials in the state said. …

“The Indian Express daily said the operations in Chhattisgarh were carried out by a single doctor and
his assistant in about five hours.”

The death toll has since risen to 10, and 14 more women are reported to be in serious condition.

This is not an isolated incident. The health ministry admits to paying compensation for 568 deaths
resulting from sterilization between 2009 and 2012, a figure that independent observers believe
substantially understates the number of women who have actually died to help state officials meet
arbitrary population quotas.

Similar programs, with similar results, have killed or maimed poor women on every continent. As
David Harvey says, “Whenever a theory of overpopulation seizes hold in a society dominated by an
elite, then the non-elite invariably experience some form of political, economic, and social
repression.”

Ecosocialists support unrestricted access to all forms of birth control. We defend women’s absolute
right to choose whether to use birth control, and which kinds to use, free from all forms of coercion.
We oppose birth control programs based on populationist ideology because they consistently violate
those fundamental principles.

Facebook content moderators sue over psychological trauma

The World
December 12, 2019 · 4:00 PM EST
By Lydia Emmanouilidou

Silhouettes of laptop users are seen next to a screen projection of a Facebook logo, March 28, 2018. Credit: Dado Ruvic/Reuters

Editor’s note: This story references violent acts and scenes.  

In the summer of 2017, Chris Gray walked into Facebook’s Dublin office for his first day of work as a
content moderator.

“It's one of these very trendy, California-style open offices. Bright yellow emojis painted on the wall,”
Gray said. “It's all very pretty, very airy, seems very cool.”

Gray wasn’t a Facebook employee. He was a contractor hired by CPL Resources PLC in Dublin, one
of several outsourcing firms Facebook works with to moderate content on its platform. He took the
job hoping to move up the ranks and eventually work for Facebook. But that never happened. Instead,
Gray says, the nine months he spent at CPL Resources left him with lasting psychological trauma and
post-traumatic stress disorder (PTSD).

Gray and several other former contractors are now suing CPL Resources and Facebook in Ireland’s
High Court over the psychological trauma they say they endured because of poor training and lack of
adequate mental health resources on the job. The lawsuit, which was filed last week, is bringing new
scrutiny to the content moderation ecosystem that Facebook and other platforms rely on to police
what gets posted on their sites.

Gray started working as a content moderator in July 2017. He was one of thousands hired to
moderate flagged content on Facebook following a series of high-profile incidents. In April 2017, a
Cleveland, Ohio man uploaded a video of himself gunning down an elderly stranger on the street. It
stayed on Facebook for hours. Within days, a man in Thailand livestreamed the murder of his baby
daughter on Facebook Live.

Facebook was scrambling to prove it was taking steps to keep posts like this off the site and in May,
Facebook CEO Mark Zuckerberg announced his company would be adding 3,000 new members to
the team of people who moderate content for Facebook. (Today, there are about 15,000 people around
the world who review content for Facebook, according to a company spokesperson.)

At first, Gray’s job was to keep pornography off the site. When a user or Facebook’s technology
flagged a post that seemed to be in violation of Facebook’s “Community Standards,” it would go to
Gray, or to someone else on his team who would review the video, photo or text and decide what to
do with it — take it down, mark it with a warning or leave it up.

“After a few months, I was moved to the high-priority queue, which is hate speech, graphic violence,
bullying. Really all the nasty stuff you want to act on very quickly,” Gray said. “I really don't like to
talk in detail about [the things I was reviewing, but it included] executions. Terrorists beheading
people. Ethnic cleansing in Myanmar. Bestiality. I mean, you name it. All the worst of humanity,
really.”

On busy days, Gray would walk into work to find 800 of these posts waiting in his queue. On good
days, it was closer to 200. He had to sift through quickly, but also carefully because Facebook was
auditing the decisions he was making — and keeping score. Gray was working 37.5 hours a week,
making about $14 per hour.

Gray was under a strict nondisclosure agreement, so he did not speak with friends and family about
the disturbing things he was seeing at work. It wasn't until a full year after he left the company,
during a meeting with a journalist at a coffee shop, that Gray says he opened up about the work he
did for Facebook.

“This was the first time I'd ever talked about all the horrible stuff I had to see. I never even discussed
it with my wife. And I literally broke down and cried in a coffee shop. And I was absolutely shocked.
I was bewildered. I just did not know what was happening to me,” he said.

This incident prompted Gray to go see a doctor. He was diagnosed with PTSD.

CPL Resources did not respond to several interview requests about the lawsuit.

In a statement provided to The World, Facebook wrote, “We recognize this review work can be
difficult, and we work closely with our partners to ensure that the people who do this work are
supported. We require everyone who reviews content for Facebook go through an in-depth, multi-
week training program on our Community Standards and have access to extensive psychological
support to ensure their wellbeing.

“This includes 24/7, on-site support with trained practitioners, an on-call service, and access to private
healthcare from the first day of employment. We are also employing technical solutions to limit their
exposure to graphic material as much as possible. This is an important issue, and we are committed to
getting this right.”

Sean Burke, another former contractor who is suing, says that he did not receive the training and
support Facebook claims its partners provide to workers. Burke started working for CPL Resources in
2017.

“On the first day, one of my first tickets was watching someone being beaten to death with a plank of
wood with nails on it,” he said. “Within my second week on the job, it was my first time ever seeing
child porn.”

Burke says he saw videos of people being decapitated and people committing suicide. Not everything
he was reviewing was this disturbing, but these posts stuck with him.

Burke was working the night shift. He'd come in to hundreds of posts every day, do his work, get
home around 3 or 4 in the morning, and try to get some sleep. When he finally did, he'd have
nightmares about the things he saw on his computer screen at work.

“You're seeing the worst that humanity has to offer, and you just become completely disheartened,”
he said.

Burke felt like he needed help and support, so he paid a visit to CPL Resources’ Wellness Center. It
offered yoga, finger-painting classes, and people he could talk to.

“Unfortunately…they can't do anything to help you cope or manage with the material or the
environment,” Burke said. “They're kind of just, there's a shoulder to lean [on] and cry on.”

Burke sought outside help and was prescribed anxiety medication, which he says helped him cope.
But his accuracy rating fell below 98 percent, and CPL Resources did not extend his contract.
(Remember, every decision he made was audited to assess whether he did a good enough job
applying Facebook's rules — rules that Burke says were complicated and constantly evolving.)

Facebook has said it wants to eventually automate most of this crucial content moderation work — to
have it done by sophisticated algorithms. But the technology isn't there yet. In the meantime, Cori
Crider, who's with the London-based nonprofit Foxglove and is assisting with the lawsuit, says she
wants better conditions for the humans doing the work.

“Facebook and other social media platforms could not exist without the labor that these people
provide,” she said. “It would be unusable. You wouldn't touch it. You wouldn't set foot in it because it
would just be awash in abuse and pornography and violence. And the people who are on the front
lines of this battle making the platforms a place that's [usable] for us all — they're really paying the
price right now.”

Crider wants Facebook and its partners to provide better mental health support to employees, and to
limit how much toxic content moderators are exposed to.

“If you think, for example, about police investigating child abuse cases here in the UK, they all
have…very serious psychological support and actually limits on the amount of time they're permitted
to be exposed to that stuff,” Crider said. “So, if [Facebook] had just taken a little bit closer look at
comparative examples of other people who do this sort of work, they could've done better.”

Gray agrees. But he says that figuring out the best way to keep the internet and the people who
moderate it safe is not easy.

“It's an incredibly complex task,” Gray said. “And I don't think Facebook should be ashamed that they
haven't got it right yet. I think they just need to say, ‘OK, we're learning. We're doing our best. We've
made mistakes. And it appears that some people have suffered as a result of those mistakes. And we're
going to make that right.’”

In the meantime, he says, he's staying off Facebook, and all other social media.

Is there a ‘Nazi emergency’ in the German city of Dresden?

The World
December 25, 2019 · 12:00 PM EST
By Orla Barry

Supporters of the anti-Islam movement PEGIDA (Patriotic Europeans Against the Islamization of the West) attend a demonstration in Dresden, Germany, on Oct. 21, 2018. Credit: David W. Cerny/Reuters

Right-wing extremism is on the rise in Germany. 

The number of far-right extremists is up by a third this year, according to German intelligence
services. Dresden, in East Germany, moved to confront the problem by declaring a "Nazi
emergency" in November 2019.

But the announcement is simply a motion passed by the city council with no obvious solution,
residents say. The far-right party Alternative für Deutschland (AfD) said it is merely a political stunt by
disgruntled opposition parties. 

Are they right or does the city have a Nazi problem?

Once a month, a group of middle-aged and older men and women gather in the center of Dresden to
sing old German ballads and listen to speeches at a small makeshift stage. 

Some wave German flags and hold up homemade placards with pictures of Chancellor Angela
Merkel’s face pasted to the front. The only indication that this is not an innocuous community
gathering is the ring of police who stand silently nearby. 

This is PEGIDA, an anti-Islam movement that began in Dresden in 2014 and at one point
brought thousands of supporters to the streets of the East German city. PEGIDA stands for "Patriotic
Europeans Against the Islamization of the West." The posters of Merkel bear the words “terrorist” and
“war criminal.”

The AfD party in Dresden counts PEGIDA members among its supporters. AfD councilor Wolf
Hagen Braun said while he may not agree with everything they say, he defends the rights of PEGIDA
members to protest. Braun is dismissive of the motion declaring a "Nazi emergency," which was
passed by his fellow city councilors. It's all about party political differences, he said.

“There's been increasing polarization in the political atmosphere here. And this has led to our political
opponents using more radical measures to attack us,” Braun said.   

That’s not how Max Aschenbach, who proposed the motion, sees it. Aschenbach, a councilor with the
satirical party Die Partei, or "The Party," sees a steady rise in racism being expressed on the streets of
Dresden.

“Every day there are police reports about swastikas and Hitler salutes; it has become a part of
everyday life.”

What’s worse, he said, is that politicians both locally and nationally are also using racist language.

Max Aschenbach proposed the motion to declare a "Nazi emergency" in Dresden, Germany. Credit: Orla Barry/The World

“Whether it’s the cashier in the supermarket uttering racist rubbish when an apparent foreigner passes,
or whether these are statements from politicians — not just in Dresden but on the federal level, too —
politicians who talk about ‘asylum tourism’ and make tendentiously racist comments.”

The motion is controversial and made headlines around the world when it was first adopted. Some
councilors worry it will damage Dresden’s reputation and link the city forever with extremism.
Aschenbach shrugs. PEGIDA, he said, managed to do that already without his motion. He
wants other political parties to voice concern at the rise in popularity of far-right parties like the AfD
and accept the city “has a problem with Nazis.”

Does the AfD party have a Nazi problem? Political scientist Christoph Meisselbach at the Technical
University in Dresden thinks so. And it’s a “problem that should not be underestimated,” he said.

Meisselbach cites Björn Höcke as an example of the extremist element within the far-right party.
Höcke is a controversial figure in the AfD. He once said Germany was "crippled" by its "stupid"
politics of remembrance and dismissed the Holocaust memorial in Berlin as a monument of shame.
Meisselbach believes these views actually attract more people to the party. 

“The AfD doesn't get elected despite the fact that there are these radicals, but also because there are
these radicals.”

There is a danger, though, that the term Nazi gets thrown around all too frequently in relation to the
AfD. While extremists are attracted to the party, many of those who voted for the far-right party did
so because traditional right-wing parties like Merkel’s Christian Democrats (CDU) moved
increasingly to the center, Meisselbach said. 

The voters were looking for a more conservative right-wing alternative and the AfD stepped in.
Supporters also felt marginalized by the political system and the AfD presented itself as an anti-
establishment alternative. The support for the far-right party is much more pronounced in parts of East
Germany than in the West, a fact Meisselbach and others attribute to the poorer economic
circumstances many in the east found themselves in after the fall of communism. 

Regular PEGIDA marches and the rise of the AfD have made things uncomfortable for the Muslim
community in Dresden.

Imam Umer Malik said there is an issue with Nazis in the region and some people think nothing of
being openly racist on the streets. He is careful to emphasize the “good people of Dresden,” though.
He recounts that at one point, there were anti-Islam demonstrations outside the Islamic Cultural
Center in Dresden every Friday. But authorities quickly intervened and the protests stopped. The
center itself is called Marwa El Sherbini Zentrum, named after an Egyptian woman who was stabbed
to death in 2009 in Dresden. The court later ruled that her killer was partly motivated by a “hatred of
foreigners.”

Imam Malik is not worried about his own safety even if AfD support continues to rise. His own
parents fled from Pakistan because of religious and political persecution. They were granted asylum
and settled down in Wiesbaden. Malik said his parents were always made to feel welcome in
Germany and he feels safe because he was born there.

“I'm a German. They can't restrict me to live. This is a fundamental right. I'm a human being.”  

Meisselbach, the political scientist, is not convinced the "Nazi emergency" motion will achieve much.
There is a danger it may play into the hands of the far-right party, who can use it to their advantage,
he said. 

“They [AfD] will say to their supporters instead of trying to understand what made us successful, they
just call you Nazis again and again and delegitimize your demands. So, who will be there for you?
Answer: we will be.”

Weeks after the motion passed, however, intelligence services have changed the way they identify
right-wing extremists in Germany. The country’s federal domestic intelligence service (Bundesamt
für Verfassungsschutz or BfV) and state-level intelligence services for the first time included groups
affiliated with the AfD when providing figures in their 2019 report on extremism. 

The move is likely influenced by a series of deadly incidents in Germany this year, including an attack
on a synagogue in Halle, a city in central Germany, in October, when two people died, and the killing
of pro-immigration politician Walter Lübcke by a right-wing extremist in June.

For Aschenbach, the councilor with the satirical party Die Partei, his motion may finally be having
an impact.
 

https://www.ft.com/content/36f838c0-53c5-11ea-a1ef-da1721a0541e

The writer is international policy director at Stanford’s Cyber Policy Center

For years, big technology companies have acted as though they were above the law. More people use Microsoft, Facebook or Amazon than the populations of most countries. Their profits exceed the budgets of many states. Tech and social media companies have become powerful global actors and their corporate governance decisions already affect the rights and freedoms of billions of people.

But tech companies are now going a step further, by positioning themselves as governments. Last month, Microsoft announced it would open a “representation to the UN”, while at the same time recruiting a diplomat to run its European public affairs office. Alibaba has proposed a cross-border, online free trade platform. When Facebook’s suggestion of a “supreme court” to revisit controversial content moderation decisions was criticised, it relabelled the initiative an “oversight board”. It seems tech executives are literally trying to take seats at the table that has thus far been shared by heads of state.

At the annual security conference in Munich, presidents, prime ministers and politicians usually share the sought-after stage to engage in conversations about conflict in the Middle East, the future of the EU, or transatlantic relations. This year, executives of Alphabet, Facebook and Microsoft were added to the speakers list. Facebook boss Mark Zuckerberg went on from Munich to Brussels to meet with EU commissioners about a package of regulatory initiatives on artificial intelligence, data and digital services. Commissioner Thierry Breton provided the apt reminder that companies must follow EU regulations — not the other way around. But making sure EU lawmakers stay in the driver’s seat will require significant catching up.

Tech executives who initially resisted regulation now call it desirable. As long as regulations serve their interests, companies support them. Large tech companies have found regulations can help consolidate their market position, while smaller enterprises struggle. Apple supports global privacy regulations, Microsoft pushes restrictions on the use of facial recognition technologies, and Facebook looks to governments to regulate content online. But self-serving proposals should be distinguished from laws that ensure democracy is not disrupted.

It is important not to take the words of Silicon Valley leaders at face value. For one, the suggestion that regulations are the main drivers of corporate governance is misleading. It distracts from the power tech companies have in setting norms and standards themselves. Through their business models and innovations, they develop rules on speech, access to information and competition. If tech executives want change, there is no need to wait for government regulation to guide them in the right direction. They can start in their own “republics” today. As regulators of the domains they govern, nothing stops them proactively aligning their terms of use with human rights, democratic principles and the rule of law. When they deploy authoritarian models of governing, they should be called out.

Instead of playing government, they should take responsibility for their own territories. This means anchoring terms of use and standards in the rule of law and democratic principles, and allowing independent scrutiny from researchers, regulators and democratic representatives alike. Credible accountability is always independent. It is time to ensure such oversight is proportionate to the power of tech giants. Companies seeking to democratise would also have to give their employees and customers more of a say, as prime “constituents”. If leaders are serious about their state-like powers, they must walk the walk and treat consumers as citizens. Until then, calls for regulations will be seen as opportunistic, and corporations unfit to lead.
