You're Doing It Wrong: Notes on Criticism and Technology Hype | by Lee Vinsel | Feb, 2021 | Medium
You're Doing It Wrong: Notes on Criticism and
Technology Hype
Lee Vinsel · Feb 1 · 19 min read
Maybe more people are writing about the real and potential problems of technology
today than ever before. That is mostly a good thing. The list of books and articles from
the last few years that have nuanced and illuminating perspectives on the
contemporary technological situation is rich and long.
Recently, however, I’ve become increasingly aware of critical writing that is parasitic
upon and even inflates hype. The media landscape is full of dramatic claims — many of
which come from entrepreneurs, startup PR offices, and other boosters — about how
technologies, such as “AI,” self-driving cars, genetic engineering, the “sharing
economy,” blockchain, and cryptocurrencies, will lead to massive societal shifts in the
near-future. These boosters — Elon Musk comes to mind — naturally tend to
accentuate positive benefits. The kinds of critics that I am talking about invert boosters’
messages — they retain the picture of extraordinary change but focus instead on
negative problems and risks. It’s as if they take press releases from startups and cover
them with hellscapes.
At their most ridiculous, hype-filled criticisms become what historian David C. Brock
calls “wishful worries,” that is, “problems that it would be nice to have, in contrast to
the actual agonies of the present.” (See also science journalist John Horgan’s recent
Scientific American post on the topic.) Perhaps the most beautiful example of a wishful
worry is the article titled, “Hacked Sex Robots Could Murder People, Security Expert
Warns,” which, sadly for our culture, is not an April Fool’s prank. Part of Brock’s point
is that wishful worries are a kind of entertainment. We are, after all, a people that
regularly feasts upon dystopian science fiction. Imaginary fears can be fun.
But it’s not just uncritical journalists and fringe writers who hype technologies in order
to criticize them. Academic researchers have gotten in on the game. At least since the
1990s, university researchers have done work on the social, political, and moral
aspects of wave after wave of “emerging technologies” and received significant grants
from public and private bodies to do so. As I'll detail below, many (though certainly not
all) of these researchers reproduced and even increased hype, the most dramatic
promotional claims of future change put forward by industry executives, scientists, and
engineers working on these technologies. Again, at the worst, what these researchers
do is take the sensational claims of boosters and entrepreneurs, flip them, and start
talking about “risks.” They become the professional concern trolls of technoculture.
To save words below, I will refer to criticism that both feeds and feeds on hype as criti-
hype, a term I find both absurd and ugly-cute, like a pug. (Criti-hype is less mean than
the alternative, hype-o-crit, though the latter is often more accurate.)
This post moves through three stages: First, I examine a clear case of contemporary
criti-hype: how the film The Social Dilemma and Shoshana Zuboff’s book, The Age of
Surveillance Capitalism, overstate the abilities of social media firms to directly
influence our thoughts and provide near zero evidence for it. Second, I offer a
preliminary history of how criti-hype became an academic business model by taking a
look at the examples of the Human Genome Project, nanotechnology, “Al,” and a few
others. Third, I talk about some of the costs of criti-hype and offer some solutions
before ending on a thoroughly pessimistic note.
Before I proceed, I want to make one thing clear: My point isn’t that technology is risk
free. Not at all. For every technological risk that has been overstated in the past, one or
more risks have been underestimated and bit people — most often the poor and
marginalized — on the hind end. My first book, Moving Violations, is a history of
automobile regulation in the United States and examines how groups tried to make
cars safer, less polluting, and more fuel efficient. There are entire libraries of books on
legitimate and serious technological risks that harm and kill every day. More work can
and should be done on these topics. Indeed, I will argue at the end that one response to
criti-hype should be doing a better job of steering graduate students away from
“emerging technologies” which are little more than promissory notes towards actual
technological agonies.
The problems I explore below develop when people begin working on the ethics and
governance of technological situations that aren’t real — and not just “aren’t real” in
the sense aren’t yet real but aren’t even realistic projections of where the science and
technology is headed. Criti-hypers play up fantastic worries to offer solutions, and as
we'll see, often they do this for reasons of self-interest — including self-interest as in
$$$.

A famous song, which — little-known fact — is actually about scoring money from the NSF.
Criti-Hype Today
Some of the clearest examples of criti-hype today center on the role of social media in
our lives, especially the claim that its designers can directly and effectively influence
our behavior. Perhaps the two most striking examples of this criti-hype trend are
Shoshana Zuboff’s book, The Age of Surveillance Capitalism, and the film The Social
Dilemma, which includes Zuboff and another criti-hyper Tristan Harris as talking
heads.
Both the book and film liken social media companies to puppet masters who have users
on strings. Tristan Harris looks at the camera earnestly and says, “Never before in
history have 50 designers . . . made decisions that would have an impact on two billion
people. Two billion people will have thoughts that they didn’t intend to have” because
of the designers’ decisions. But Harris and the other people who appear in The Social
Dilemma provide no evidence that social media designers actually CAN purposefully
force us to have unwanted thoughts. (Now-old joke: have I ever had a thought on
purpose?) The films’ talking heads repeat spectacular claims that social media
companies, which are basically advertising companies, would love their customers to
believe.
A screenshot from the film The Social Dilemma. Tristan Harris tells the audience, “Two billion people
have thoughts that they didn’t intend to have” as the filmmakers show us an animation of puppet
masters controlling users like marionettes.
To some degree, it is unsurprising that Harris reproduces the digital technology
industry's hype-y claims about itself because he comes from it. Harris has a degree in
computer science from Stanford and worked at Google but has no training in
humanities/social science studies of technology, which might give him critical distance
from industry propaganda.
In a notorious scene of The Social Dilemma, Tristan Harris says, “No one got upset when
bicycles showed up. Right? Like if everyone’s starting to go around on bicycles, no one
said, ‘Oh, my God, we've just ruined society.’” Actually, the exact opposite is true. There
was a moral panic around the threat of bicycles, a well-known fact amongst people
who study the social dimensions of technology. An 1894 New York Times article told
readers, “There is not the slightest doubt that bicycle riding, if persisted in, leads to
weakness of mind, general lunacy, and homicidal mania.” Moreover, there have been
moral panics about film, radio, television, and nearly every other older media form,
including about how they supposedly manipulated users. For example, everyone
should own a copy of the 1980 book, The Clam-Plate Orgy and Other Subliminal
Techniques for Manipulating Your Behavior, which played a role in widespread worries
about “subliminal messages.” Harris has not taken the time to get perspective on the
bigger picture of technology and society.
Ooh. La. La.
What is less obvious is why Shoshana Zuboff, an emerita professor of Harvard Business
School, so uncritically repeats the digital industry’s marketing materials, or why she
never points to or assesses evidence that goes against her argument. Yet her writings
are full of hyperbole that sounds like she took press releases from Facebook's and
Google’s PR departments and rewrote them to be alarming.
In an editorial related to her book titled, “You Are Now Remotely Controlled,” Zuboff
wrote that social media and the like are a “a new ‘instrumentarian’ power that works
its will through the medium of ubiquitous digital instrumentation to manipulate
subliminal cues, psychologically target communications, impose default choice
architectures, trigger social comparison dynamics and levy rewards and punishments
—all of it aimed at remotely tuning, herding and modifying human behavior in the
direction of profitable outcomes and always engineered to preserve users’ ignorance.”
God, that sounds scary. But is it true?
You would think that in a 700-page book Zuboff would present mounds of evidence for
such an important and central claim: that “surveillance capitalism” firms are able to
influence our behavior directly, to the point where we lose the “will to will,” as she puts
it. But in fact, she puts forward very little evidence for this claim.
Her account primarily relies on a few pieces of verification: first, two studies on
emotional contagion that Facebook published. In these studies, people who were
shown more negative posts were more likely to make their own negative posts and
people shown more positive posts were more likely to make positive ones. But these
studies are controversial in some circles and hardly show a large impact anyway. The
findings were statistically significant because they had enormous sample sizes — in one
study, 689,003 people — but their effect sizes were small (in that same study,
Cohen's d = 0.02). This is no demonstration of puppet mastery.
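The gap between statistical significance and effect size is the crux here: with hundreds of thousands of participants, even a vanishingly small effect clears conventional significance thresholds. A minimal simulation makes the point — the numbers below are illustrative only, not the Facebook study's actual data:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Two simulated groups whose true standardized mean difference is only
# d = 0.02 -- the same order of magnitude as the contagion study's effect.
n = 350_000                                 # per group; illustrative scale
control = rng.normal(0.0, 1.0, n)
treated = rng.normal(0.02, 1.0, n)

# Two-sample z test on the difference in means (normal approximation,
# fine at this sample size).
se = math.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
z = (treated.mean() - control.mean()) / se
p = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = math.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")
```

The p-value comes out astronomically small, which looks decisive, but a d of roughly 0.02 means the two groups' distributions overlap almost completely. "Statistically significant" and "large" are different claims.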
The other bit of evidence Zuboff regularly relies on is . . . Pokémon GO. Zuboff
describes it in frightening ways: “Game players did not know that they were pawns in
the real game of behavior modification for profit.” But all that happened was that
businesses paid money for each player who showed up to their locations to acquire
virtual goods in the game. Are you scared now?
In fact, there is a great deal of countervailing evidence that cuts against hype about
online companies being able to influence our behavior. Zuboff and Co. never take this
evidence into account because it would undermine their cases. Articles such as “Ad
Tech Could Be the Next Internet Bubble” and “The new dot com bubble is here: it’s
called online advertising,” as well as Tim Hwang's book, Subprime Attention Crisis:
Advertising and the Time Bomb at the Heart of the Internet, detail how terrible online
advertisements are both at finding the right target and at influencing us even when they
do find sympathetic eyeballs. One study by business school professors that used six
different advertising platforms found that the targeting of ads performed worse than
random guessing. If your friends are like mine, you regularly see what has become a
genre of Facebook post, wherein people put up screenshots of ridiculous and
inappropriate ads that Facebook showed them. Contra Harris and Zuboff, it seems that
Mark Zuckerberg cannot sell me fucking socks, let alone purposefully/significantly
change my politics or self-concept.
To be clear, I am NOT saying that there’s nothing to worry about or study when it
comes to how social media use shapes behavior. There are many things to be
concerned about and try to better understand, including misinformation,
radicalization, the formation of mobs through online platforms, and more. There are
also plenty of reasons to question Facebook’s, Google’s, and other firms’ monopolistic
powers and potentially even to break them up. But none of these problems or our
criticisms of them have anything to do with social media companies being able to
control our minds.
Criti-hypers like Harris and Zuboff reproduce the most far-fetched claims of the online
advertising companies to such a degree that an old joke wonders if they are secretly
being paid by those companies to keep air in the online ad bubble. Mercifully, there are
many critical works on digital technology that do not engage in criti-hype and indeed
challenge senseless claims about powers of new technologies, including Evgeny
Morozov’s pioneering work on “solutionism”; Meredith Broussard’s questioning of
artificial intelligence; Morgan Ames’, Christo Sims’, and Roderic Crooks’ critically
examining claims around “EdTech”; Sarah Roberts’s, Tarleton Gillespie’s, Siddharth
Suri’s, and other folks’ unveiling of the hidden work behind online platforms; and
many other examples.
But we shouldn't take these counter-examples as evidence that academic research is
free of criti-hype. Indeed, at some point, criti-hype unquestionably became an
academic business model.
How Criti-Hype Became an Academic Business Model
How did criti-hype become an academic business model? This history has yet to be
written, and, in general, I think we need a more robust, self-reflexive understanding of
how funding has shaped research priorities and critical approaches in academic
science and technology studies.
I believe, however, that one stream of this business model arose out of the Human
Genome Project’s Ethical, Legal, and Social Implications (ELSI) program, which put 3% and
later 5% of the HGP’s budget towards studying moral and social issues. Since that time,
it has become the model wherein academic humanities and social science researchers
attach themselves to new “emerging technologies” to study the ethics and social
implications of speculative risks. Indeed, I think we can talk about the ELSIfication of
some fields, including science and technology studies.
(A common criticism of ELSI is that it effectively involved scientists buying off,
controlling, and/or domesticating social scientists and philosophers to stave off more
radical critiques. This is interesting and deserves more examination. But in this post, I
want to focus on the agency of the academic humanities and social science researchers
who played along with hype to score cash money and prestige.)
There were certainly things to criticize about the Human Genome Project. Friends who
were working during that period recount stories of scientists who were far too
overconfident about their abilities to control genetic manipulations once they were made.
But there was also a great deal of criti-hype around genetic engineering. One example
is the President’s Council on Bioethics’ 2003 report, Beyond Therapy: Biotechnology and
the Pursuit of Happiness, which included professors like Leon Kass, Francis Fukuyama,
and Robert George and worried about the dangers of designer babies, ageless bodies,
and that people might make themselves . . . too happy. Sounds like wishful worries to
me.
While we should not downplay the way genetic information is being used in medicine
today, it’s also clear that the most extraordinary of earlier visions of genetic
engineering have not come true — not even close. Even with CRISPR today, genetic
engineering is far more difficult than some people imagined.
After the Human Genome Project, the next emerging technology that criti-hype formed
around may have been nanotechnology, which, as Patrick McCray showed in The
Visioneers, boosters claimed would transform the world. Arizona State University
(ASU) president Michael Crow and a co-author wrote, “The first thing to say is that if
—as is variously claimed — nanotechnology is going to revolutionize manufacturing,
health care, travel, energy supply, food supply, and warfare, then it is going, as well, to
transform labor and the workplace, the medical system, the transportation and power
infrastructure, the agricultural enterprise, and the military.” The authors made a
number of recommendations, including that more funding should be directed to . . .
people like them: “If we wanted to be serious about preparing for the transformational
power of a coming nanotechnology revolution, we would need first get serious — at
this very early stage — about developing knowledge and tools for effectively
connecting R&D outputs with desired societal outcomes.”
In 2003, the US Congress passed the 21st Century Nanotechnology Research and
Development Act, which directed that some NSF money be used for research on the
societal, ethical and environmental concerns of nanotechnology research to “bring
about improvements in quality of life for all Americans,” much like the Human Genome
Project’s ELSI. Part of NSF's funds went to the creation of The Center for
Nanotechnology in Society at Arizona State University.
In 2008, another set of authors, including ASU faculty David Guston, Cynthia Selin,
and Erik Fisher, cited the Crow et al essay as a source of authority: “The widespread
understanding that nanotechnology constitutes an emerging set of science-based
technologies with the collective capacity to remake social, economic, and technological
landscapes (e.g., Crow & Sarewitz, 2001) has, in itself, generated tangible outcomes.”
(Why ASU is such a center of criti-hype will be the subject of a longer essay.)
Guston went on to edit a series titled the Yearbook in Nanotechnology and Society,
which ran for three years before nanotech exuberance evaporated and it was
discontinued, becoming no longer a . . . yearbook. The second yearbook edited by
Jameson Wetmore (ASU again) and Susan Cozzens (Georgia Tech) contained these
dramatic claims: “Nanotechnology is enabling applications in materials,
microelectronics, health, and agriculture, which are projected to create the next big
shift in production, comparable to the industrial revolution. Such major shifts always
co-evolve with social relationships. This book focuses on how nanotechnologies might
affect equity/equality in global society. Nanotechnologies are likely to open gaps by
gender, ethnicity, race, and ability status... .”
Many of these exaggerations about nanotech now seem outlandish to the point of
being LOL funny. But the point is that these worries about nanotechnology were a
black mirror for claims made by the technology's boosters, and there were clear
financial incentives for academic social science researchers to go along with the hype.
If nanotechnology was not as big a deal as its boosters claimed, there also wouldn’t be
reason to fund social science research on the topic.
More recently, “AI” is the area of technology that has likely experienced the greatest
amount of criti-hype. As Yarden Katz and others have argued, “AI” is really best
thought of as a rebranding exercise: around 2017-2018, corporations began using “AI” to
describe things that had previously been known by other faddish terms, like “Big Data.”
Most notably, Google renamed its Google Research division Google AI.
At about the same time, a number of academic centers opened up to look at the ELSI of
“AI,” often funded with money from private foundations and the digital technology
industry itself. (I think there are more of these “AI” centers than there ever were for
nanotech, and one hypothesis is that there is learning going on around criti-hype in
academia. The business model is diffusing.) For sure, these academic centers have put
out some nuanced work on problems of digital technology, but they have also, without
a doubt, engaged in criti-hype.
For example, in its 2017 report, the AI Now Institute, which is associated with New
York University, paraphrased another report from the consulting firm McKinsey
claiming that 60 percent of occupations would have 1/3 of their activities automated.
This would be an enormous increase in productivity from a single set of technologies,
probably one of the largest in history. These claims precisely mirrored the advertising
digital technology firms were putting out as well as visions coming out of organizations
like the World Economic Forum that we were on the cusp of a “Fourth Industrial
Revolution.”
The AI Now report argued, “To prepare for these changes, it will be essential that
policymakers have access to robust data on how advances in machine learning,
robotics and the automation of perceptual tasks are changing the nature and
organization of work, and how these changes manifest across different roles and
different sectors.” This of course means more funding for exactly the kinds of research
that these “AI” ELSI centers were doing. And AI Now called for funding in several
different ways throughout the report.
There are other examples of criti-hype around “AI,” too. I have watched people in
“critical AI studies” give conference presentations in which they spun out elaborate and
frightening dystopian futures based on no other evidence than a few Google Image
searches.
But, just as happened with nanotech, the wind appears to be going out of the sails of
“AI.” Some researchers suggest that we may be entering a new “AI Winter,” a period of
decreased funding in the area, or at least an “AI Autumn,” as exuberance for the
technology fades and expectations come back to earth. The claims made around
productivity and unemployment particularly appear to be bunk. Examining 40 “AI”
firms, Jeffrey Funk has estimated that it will take decades for them to have any marked
effect on productivity by, for example, increasing the efficiency of offices. Recent
reports predict that “AI” will not lead to significant near-term changes in employment.
(Keystone, MIT)
Sometimes the relationship between social science researchers and
scientists/engineers working on an emerging technology can become so cozy that it
undercuts criticism altogether. Once someone researching the social implications of
synthetic biology told me that the field was, in her estimation, mostly salesmanship
and bullshit. “That’s what you need to write then,” I told her. She said that if she told
the truth she would lose access to the people she was studying, and since she was
planning to do this research for much of the rest of her career, that wasn’t an option.
Here, social science becomes as bad as the worst forms of access-preserving
journalism.
Moreover, at times, it can appear that social scientists and humanities folks are trying
to create demand for something that no one wants. As Jane Flegal put it in her
dissertation on geo-engineering, “For one, the supply of research on solar
geoengineering — social scientific and otherwise — has outpaced any demand
function.” By doing criti-hype, researchers hope others will want to buy their wares.
It will be interesting to see what happens to researchers and centers currently
dedicated to — even named after — synthetic biology, geoengineering, and “AI.” Most
likely they'll just fade away. But here’s the depressing thing: no matter what bit of
science and technology becomes hot and faddish next and no matter how unrealistic
and hollow the claims made about its future are, some academic researchers will
emerge to say they are doing the “ethics” or “anticipatory governance” or “responsible
innovation” or whatever-the-fuck of that thing. And they will pull down big, stanky,
oozey chunks of cheese from funding bodies for doing it, too. You don’t even need to
say “maybe” this will happen. It’s a guarantee.
The Costs of Criti-Hype
If the only downside of criti-hype was folks blowing federal tax dollars making edited
volumes that no one reads, it wouldn’t be worth talking about. Welcome to modern life
— it’s rubbish.
But criti-hype comes with real costs. Here I will focus on two:
First, criti-hype helps create a lousy information environment and lends credibility to
industry bullshit. In Bubbles and Crashes, Brent Goldfarb and David Kirsch write about
the role of narratives in creating speculative bubbles around new technologies. When
academics engage in criti-hype, they lend more authority to these narratives.
Here is one example of how credibility-lending can work: McKinsey says 60 percent of
occupations would have 1/3 of their activities automated by “AI.” Let’s be real.
McKinsey says this because it sells consulting services to firms and wants executives in
those firms to believe they will soon be dealing with a radically transformed
environment. In other words, McKinsey wants to scare the shit out of us.
Then NYU’s AI Now Institute cites McKinsey’s report as a credible source (it wasn’t one)
and says that policymakers should take it seriously and put money into examining the
problems it identifies. “AI” startups making pitch decks and journalists writing hype-y
articles about fantastic changes just over the horizon can now cite something published
out of NYU. The narrative has become more plausible and compelling.
We need sound information for all aspects of life and culture, including decisions made
by business leaders, policymakers, and citizens participating in democracy. For
example, as a society, we experience real opportunity costs when policymakers waste
their time dreaming up solutions to massive job displacement from “AI” when it isn’t
coming. Indeed, friends of mine working in policy in Washington, D.C., believe that
criti-hype is just as damaging as positive hype from boosters when it comes to decision-
making.
This leads to our second problem: criti-hype distracts us from real world problems and
suffering that are happening right now. Recently, I wrote a post synthesizing a picture
of technology and the US economy that I have picked up reading work from multiple
fields over the past five years. In that picture, economically-significant technological
change has been slower since 1970 than in the preceding period, digital technology
has never had the economic impacts its boosters said it would, and, for a variety of
reasons including globalization, many people, especially those without college
degrees, have little-to-no access to good jobs. Moreover — despite the hype about, like,
apps — nothing about current technological change is likely to change any of these
economic conditions soon.
While writing the post, I kept wondering to myself: why have so few people from my
own field contributed to the understanding of these issues? There are lots of reasons
for this gap, I think, including what topics are popular and faddish, but it seems to me
that one important reason is that so many people in my field are looking at “emerging
technologies.”
Generally, every person working on the “anticipatory governance,” “sociotechnical
imaginaries,” or [add your own crappy neologism here] of an “emerging technology”
isn’t doing deep academic research into an existing technological problem. It’s an
enormous opportunity cost. It is outrageous that I can point to gobs of people in my
field working on synthetic biology, “AI,” self-driving cars, and blockchain but not a
single person researching septic tanks, mobile homes, trailer parks, or even housing
more generally, even though these latter topics are full of technological issues and true
human suffering that IS HAPPENING RIGHT NOW.
In some ways, it is another version of an argument that Andy Russell and I made in The
Innovation Delusion: innovation-speak distracts us from ordinary problems of
technology and infrastructure, including maintenance, repair, and mundane labor. We
need to be more honest and reflexive about how innovation-speak has shaped
academic social science and humanities research.
(I also believe that most academic criti-hype is so poorly done and has such a short
shelf-life before it rots that consuming it is actually harmful to your health. While
reading the nanotechnology stuff, I hammered shots of bourbon to wash down handfuls
of antidepressants and benzodiazepines. It hurt that bad.)
This leads to the one partial solution that I will consider in this post. Graduate
programs should do a better job training students to question claims made around
technologies. Our students should be bullshit detectors and hype slayers. Historian of
science and mathematics Michael Barany has made a similar point.
In my experience, young people enter graduate programs enthusiastic about some
pretty unrealistic and dramatic visions of near-term technological change, even
including things as ridiculous as the singularity and transhumanism. Nuanced
understanding of the history, sociology, and economics of technology is good medicine
for this condition. A lot of claims about the revolutionary potentials of present
technology will appear silly if you know how technology and society have and have not
changed from, say, 1850 to the present. We can begin to improve things by pointing
graduate students to technologies that have fully-emerged, that have diffused into
society, and that have and are creating real problems and actual agonies.
But we shouldn’t be optimistic. Universities will continue turning out criti-hype and
producing graduates who do it because it is so lucrative. I know of several academic
criti-hypers in the USA and Europe training graduate students to do similar work at
this very moment. Criti-hype is one of those phenomena that it’s important to be
mindful of without kidding ourselves that we'll ever be free of it. There are too many
lousy incentives at play; it won't go away.
This post is drawn from a longer essay I’m writing titled “Don’t Believe the Hype!:
Anticipatory Governors and the Political Economy of STS,” which examines how parts of
my professional field developed a business model of overplaying the risks of “emerging
technologies” to score money from national science and engineering bodies, private
foundations, and industry.
Technology Hype Social Science Digital Marketing AI