
I chose to read the OpenAI case study.

If I were a software engineer on that project, I think I probably would have ended up doing the same thing they did. A lot of developers want to solve complex problems like this because they are genuinely interesting, and I would have kept working until the computer was writing on the same level as a human. I probably wouldn't have thought about the consequences of what I made until I had finished it. The possibility of people using it to create fake news might not have occurred to me, but I definitely would have thought about how people could use it to cheat in school, and for that reason alone I wouldn't have released it, even as a closed-source option.

I think the OpenAI engineers and managers ended up doing what they did because they saw the potential that OpenAI's system had for creating fake news articles that could incite public panic or even conflict between countries. The example in the article, a generated story about a train full of radioactive material being stolen, is exactly the kind of piece I could see passing as a real news article and making people panic or worry. Taking extra time to verify every news article you read isn't realistic for most people; they would either keep believing everything they read or stop believing anything at all. While OpenAI's system could generate plenty of other fake written content, like Yelp reviews or open-ended homework answers, fake news articles have the most potential to do damage. I agree with their decision not to release the model to the public. Even though similar programs exist, reducing the number of tools available for malicious use is still a good thing.
