
Illuminae isn't about an intergalactic war.

It's about artificial intelligence and how it could backfire.

Set in the year 2575, Illuminae is a book centered on an interstellar war between mega-corporations. Caught in the crossfire is an insane AI in charge of a battle carrier full of zombies.

Artificial Intelligence Defense Analytics Network.

AIDAN, the damaged artificial intelligence inside the Alexander, is the ultimate greater-good personality. His primary concern is the protection of the fleet, not the lives of particular people. His willingness to destroy emphasizes the idea that purely greater-good thinking can be disastrous ... and AIs are purely greater-good thinkers.

He is the opposite of most of the characters in the book because of his lack of human emotions, even though he is constantly asking: "Am I not merciful?" But is he really merciful? He is a 100% greater-good thinker, and mercy isn't a part of that.

General Torrence

On the more human side of the spectrum is General Torrence, the seasoned commander of the battlecruiser Alexander. Torrence, like everyone else of importance on the Alexander, is in a way scared of AIDAN. And Torrence, like everyone else of importance on the Alexander, doesn't quite understand how far AIDAN will go for the greater good.


The Copernicus

The Copernicus, a heavy freighter, was the only ship to pick up refugees from near where BeiTech used its biological weapon. After monitoring the situation aboard the Copernicus closely, AIDAN decides to destroy it.

Why does AIDAN do this? He has been monitoring the progression of the Phobos virus (the mutated result of the biological weapon) and decides that it has reached the point where it is no longer safe to keep in his fleet. He arms his missiles, launches his fighter squadrons, and fires on the Copernicus.

What he didn't foresee? His fighter squadrons want human confirmation before destroying the Phobos-filled escape pods that launched before the missile struck. Because they don't fully trust AIDAN, the pods survive, and Phobos makes its way to the Alexander. Torrence, like any commander, immediately orders his crew to shut AIDAN down because of his unauthorized attack on a ship of their own fleet.

Bay 4

But AIDAN is reactivated soon enough. He is needed because the Lincoln has caught up and the Alexander needs to be in combat formation. They can't do that without AIDAN online, so they wait until the last minute and then reactivate him. One talented pilot drops a "logic bomb," as the crew calls it, on the Lincoln and deactivates its engines. After the battle, AIDAN responds to his deactivation by killing Torrence and all of the commanding officers on the Alexander.
Seems a little harsh, right? But this was not an act of revenge. It was an act of self-preservation for the sake of the fleet. AIDAN believes that he needs to be active for the fleet's survival, and Torrence and his officers are the only people who can shut him down. The only problem with this plan is that to kill Torrence, AIDAN releases the Phobos victims from Bay 4. Now the Alexander is a ship captained by an insane AI and filled with zombies.

All of this means that the human commander and the artificial intelligence have to work together, to trust each other to make the right decisions. Had AIDAN fully explained his reasoning for destroying the Copernicus, Torrence might even have done it himself. Had Torrence trusted that AIDAN's decision was the correct one and not shut him down, the logic bomb dropped by the human crew of the Alexander could have been much more effective. As AIDAN himself puts it: "With more time, I could have devised a way to neutralize its nuclear strike capability. With more time, I could have killed it once and for all. The fleet would have been safe. I could have made them all safe."

What does this mean for artificial intelligence?

AIDAN and Torrence represent opposite ends of the spectrum: one fully directed by the greater good, the other guided by human emotions. When is each better for survival? Is a balance the key?


As we see in this book, balance is definitely the key. The greater good should be the fleet's main focus, but purely greater-good thinking can be cruel and terrifying if not properly explained. When AIDAN explains why he destroyed the Copernicus, it makes sense. Torrence might even have done it himself if AIDAN had fully explained it.

If an AI explains everything it wants to do for the protection of the fleet to the commander, the commander can then act on that information. He -- a human with integrity -- is the one with the power to carry out military actions. Artificial intelligence needs a human element to provide morality and experience, and human commanders have to understand that even though the advice and decisions of AIs might seem cruel or unethical, sometimes humans have to think more about the greater good than about particular people or emotions.

How does this affect today's world?

In case you haven't noticed, we do not currently have true artificial intelligence. We have pseudo-AIs like Alexa: programs that can respond to conditions with preprogrammed actions, but no machines that can learn like AIDAN. In today's world, we need to find the balance between morality and greater-good thinking that preserves humanity while being as effective as possible. Morality is the biggest difference between humans and artificial intelligence.

So let's keep it that way.
