Illuminae
Set in the year 2575, Illuminae is a book centered around an interstellar war between mega-corporations. Caught in the crossfire is an insane AI in charge of a battle carrier full of zombies.
AIDAN
AIDAN, the damaged artificial intelligence inside the Alexander, is the ultimate greater-good personality. His primary concern is the protection of the fleet, not the lives of particular people. His willingness to destroy emphasizes the idea that purely greater-good thinking can backfire. He is the opposite of most of the characters in the book because of his lack of human emotions.
General Torrence
On the more human side of the spectrum is General Torrence, the seasoned commander of the battlecarrier Alexander. Torrence, like everyone else important on the Alexander, is in a way scared of AIDAN. And Torrence, like everyone else important on the Alexander, doesn't quite trust him.
The Copernicus, a heavy freighter carrying some of the refugees, was the only ship to pick up refugees from near where BeiTech used its biological weapon. AIDAN, after monitoring the ship, destroys it.
Why does AIDAN do this? He has been monitoring the progression of the Phobos virus (the
mutated result of the biological weapon) and decides that it has gotten to the point where it is no
longer safe to have it in his fleet. He arms his missiles, launches his fighter squadrons, and fires
on the Copernicus.
What he didn't foresee? His fighter squadrons want human confirmation to destroy the Phobos-filled escape pods that launched before the missile struck. Because they don't fully trust AIDAN, the pilots refuse to destroy the pods, and Phobos has made its way to the Alexander. Torrence, like any commander, immediately orders his crew to shut AIDAN down because of his unauthorized attack on a ship of their own fleet.
Bay 4
But AIDAN is reactivated soon enough. He is needed because the Lincoln has caught up and the Alexander needs to be in combat formation. They can't do that without AIDAN online, so they wait until the last minute and then reactivate him. One talented pilot drops a "logic bomb," as the crew calls it, on the Lincoln and deactivates its engines. After the battle, AIDAN responds to his deactivation by killing Torrence and all of the commanding officers on the Alexander.
Seems a little harsh, right? However, this was not an action of revenge. It was an action of
self-preservation for the sake of the fleet. AIDAN believes that he needs to be active for the fleet's survival, and Torrence and his officers are the only people who can shut him down. The only problem with this plan is that to kill Torrence, AIDAN releases the Phobos victims from Bay 4. Now the Alexander is a ship captained by an insane AI and filled with zombies.
All of this means that the human commander and the artificial intelligence have to work together, to trust each other to make the right decisions. Had AIDAN fully explained his reasoning behind destroying the Copernicus, Torrence might have even done it himself. Had Torrence trusted that AIDAN's decision was the correct one and not shut him down, the logic bomb by the human crew of the Alexander could have been much more effective. As AIDAN himself puts it, "With more time, I could have devised a way to neutralize its nuclear strike capability. With more time, I could have killed it once and for all. The fleet would have been safe. I could have…"
AIDAN and Torrence represent different sides of the spectrum: one fully directed by the greater good, and one more guided by human emotions. When is each better for survival? Emotion alone could endanger the fleet, but solely greater-good thinking can be cruel and terrifying if not properly explained. When AIDAN explains why he destroyed the Copernicus, it makes sense. Torrence might have even agreed.
If an AI explains everything it wants to do for the protection of the fleet to the commander, the commander can then act on this information. He, a human with integrity, is the one with the power to perform military actions. Artificial intelligence needs a human element to provide morality and experience, and human commanders have to understand that even though the advice and decisions of AIs might be cruel or unethical, sometimes humans have to think more about the greater good.
In case you haven't noticed, we do not currently have artificial intelligence. We have pseudo-AIs like Alexa: programs that can respond to conditions with programmed actions, but no machines that can learn like AIDAN. In today's world, we need to find that balance between morality and greater-good thinking that preserves humanity while being as effective as possible. Morality is