
The Minimum You Need to Know

About the Phallus of AGILE

By Roland Hughes

Logikal Solutions
Copyright ©2019 by Roland Hughes
All rights reserved

ISBN-13 978-1-939732-08-8

This book was published for the author by Logikal Solutions. Neither Logikal Solutions nor the author shall be held
responsible for any damage, claim, or expense incurred by the user of this book or any company, persons, or
individual that perceives any real or imaginary damages from it.

This is a collection of essays on IT and life in general. Make of it what you will.

These trademarks belong to the following companies:

Burgerville The Holland, Inc.

Chipotle Mexican Grill Chipotle Mexican Grill, Inc.
FaceBook Facebook, Inc.
IBM International Business Machines Corporation
Jaguar Jaguar Cars
Linux Linus Torvalds
Microsoft Microsoft Corporation
McDonald’s McDonald’s Corporation
OpenVMS Hewlett Packard Corporation
Selectric International Business Machines Corporation
The Minimum You Need to Know Logikal Solutions
Ubuntu Canonical Ltd.
Wal-Mart Walmart
Websphere International Business Machines Corporation
Windows Microsoft Corporation

All other trademarks inadvertently missing from this list are trademarks of their respective owners. The best effort
was made to appropriately capitalize all trademarks that were known at the time of this writing. Neither the publisher
nor the author can attest to the accuracy of this information. Use of a term in this book should not be regarded as
affecting the validity of any trademark or service mark.

Cover art courtesy of toonaday via

Table of Contents
The Terminal Days...........................................................................................................17
The A/B Switch................................................................................................................27
Tiered Storage..................................................................................................................39
How Was Your First Day?................................................................................................49
Organic Systems Development........................................................................................55
Software as a Competitive Advantage.............................................................................59
A Ribbon-Cutting Ceremony for a Sewer........................................................................69
The River of Souls...........................................................................................................75
Solve the Whole Problem.................................................................................................79
Post Hoc Ergo Propter Hoc..............................................................................................85
Define Forever.................................................................................................................91
Inner and Outer Joins Are a Red Flag of Failed Design..................................................97
The Four Holy Documents and Architecture.................................................................107
Relational database vendors stepped up to the plate........................................131
Karoshi – Do More with Less........................................................................................139
Rapid Application Development....................................................................................145
We Swung the Hammer Too Much................................................................................167
The Mythical Business Analyst......................................................................................177
Management by Crisis...................................................................................................189
A Prototype is Not a Product..........................................................................................207
Too Big to AGILE..........................................................................................................211
The Phallus of Scrum.....................................................................................................217
Ruminations and Observations......................................................................................232
The Network Software Appliance..................................................................................233
Consultant or Contract Coder........................................................................................255
The Non-Consumer Economy.......................................................................................261
Factoring Your Way Into Bankruptcy............................................................................275
Trickle Down with a Chainsaw......................................................................................285
Corporate Housing vs. Corporate Housing....................................................................291
7 Requirements for a Cluster...........................................................................297
How Do You Spot the Bottom Feeders?........................................................................301
Professional Day...................................................................................................301
Fixed Bid..............................................................................................................302
Flat-Rate Internet Phone.......................................................................................303
Requires a Test Before Interview..........................................................................304
Long Lead Time....................................................................................................306
Some Career Advice.......................................................................................................307
The Changing Game of Recruiting and Consulting.......................................................317
Your Legacy...................................................................................................................333
Killing Patients Wholesale.............................................................................................339
The More You Are Paid the Less You Are Worth..........................................................343
Cryptocurrency and the Coming Financial Apocalypse.................................................345
Security Via Obsolescence.............................................................................................351
Royalties – Every Stupid Idea Comes Around Again....................................................355
Calculating Your Minimum Hourly Rate.......................................................................359
Encryption – Last Great Bastion of the Damned...........................................................375
How Do You Backup the Human Race?........................................................................395
The Era of the Smart Phone is Over..............................................................................399
A Virtual Room Full of Geeks.......................................................................................409
A Jag and His Jag...........................................................................................................411

I shall be consistent: AGILE in all capitals when talking about the AGILE methodology,
and proper case when I’m just using the word. Too many of today’s publications simply
use Agile and leave the interpretation to the reader.
My use of the comma after the word “but” instead of before will offend every grammar
Nazi reading this book. It is done deliberately because the grammar is wrong. Grammar
codifies criminal fraud into our punctuation rules. The true skill of a “confidence man”
isn’t hiding the truth; it is making you believe a lie while telling you the truth. Consider
the following:

Yes! We will deliver the new system on time and under budget,

but it won’t work.

Yes, I believe elected officials should be held to a higher ethical standard,

but I’m still taking the $750,000 bribe and calling it a “speaking fee.”

Yes, we will pay taxes on the money we have stashed overseas,

but we will never tell you just how much that is.

I deliberately placed the “but” part of the sentence on a separate line because that is how
fraud artists deliver it. When most people read a comma aloud, they don’t even take a
breath. The flimflam artist takes a pregnant pause at the comma, completely changing
the meaning of the sentence for those who hear it.
Are you aware that most people, when told what they want to hear before a pregnant
pause, never hear the “but” at the end? It’s true. The mind veers off. It’s on to What’s
next? while you get screwed with your pants on.

Many reading this will have either watched a few episodes of or read the books behind
Game of Thrones.1 One of the running truisms throughout the series is a phrase many of
the characters manage to work into conversations.
Everything before the word “but” is shit.

This phrase is true when the comma is before “but” in the sentence. Placing the comma
prior to the word “but” shows intent to deceive. Placing it after, so the word “but”
occurs in the same breath as the preceding statement, keeps the listener’s mind engaged.
It gives fair warning of the impending screw job. It is more ethical to let them know they
are about to be screwed.
When I was in my early twenties, I worked for a DEC VAR (Value Added Reseller). I
was just a programmer, and the more senior developers held the title
Programmer/Analyst (PA). Our PAs developed a self-defense mechanism when talking
with the owner. He would ask if something was done and they would respond,
“It’s done, except…”

The dude had a bit of a temper, so this self-defense mechanism was understandable. It
was also intended to deceive. When the Big Guy heard, “It’s done,” he was already
sending the invoice to the client in his mind.
The fact the project really wasn’t done never reached his conscious thought because of
the comma and the pause that came before the word “except.” Of course, he found out a
week or two later when the client refused to pay the invoice on the grounds the work
was not done, but, there was always the possibility it could be by then so you might miss
the beating.
Many readers are familiar with the “Ruminations and Observations” chapter at the end
of each title in The Minimum You Need to Know book series. Ordinarily, I would save
each of these things as I write them and serve up a rip-roaring good chapter to end each
technical book but, some of this simply can’t be so restricted.


“Restricted?” you ask. Yes. Most of my technical books appeal to very specific technical
audiences. While I continue the tradition of including a Ruminations chapter at the end
of each, a company hawking their AGILE wares pissed me off so bad I had to create a
book that could appeal to a much broader audience.
My rant on AGILE being hawked as software engineering is far too long and involved
for a single essay someplace where few would find it. To get my point across and
hopefully save what is left of the world, I have to walk the reader through IT history and
point out some really nasty underwear hidden in its closets.
I will reuse “Solve the Whole Problem” and “The Mythical Business Analyst” that
originally appeared in The Minimum You Need to Know About Java on OpenVMS. That
book was targeted to a very narrow market, and those essays need a broader audience.
I was very tempted to also include “Grade 8 Bolt Syndrome,” but sacrifices had to be
made to keep this work within various binding limitations. There is an early version of
that essay available on my blog.2
This work, while somewhat technical, will have a much broader appeal than my
technology-specific books. Everyone involved in the management or employment of IT
services should read this book. I will sprinkle in some humorous essays to keep this
from being a dry treatise on software development history, management, and practices.
The historical walk through IT really should capture the interest of all readers.
Much of the knowledge about the times before the early 1980s came from my friend and
former manager, Perry Sugerman. He actually lived through IT during the ’60s and ’70s.
I just worked with some of the technology because it was still in widespread use.
Because of this decade-plus difference in our entry points, from time to time, it will
seem like there is conflicting information. Understand that is because I started with
DEC/PDP and DEC/VAX hardware as well as CP/M-based personal computers. He
started with IBM computers before we had what would later be called operating
systems. Hopefully, we have been successful in identifying which was IBM and which
was the rest of the world.


The only regret I have of our time together is that we didn’t get to do more projects. We
got to work on some far-reaching stuff, and it was always well architected. One system
we put together was so well architected that after my contract was over, it survived
countless business changes without anyone needing to touch the code for over ten years.
This was a core feeder system for the profit center of the company, not some barely used
hunk of software. Until one actually works on a project of that size and quality, one
cannot understand just how big a fraud AGILE really is.
For those of you just starting out, consider these 30+ years of IT industry experience
condensed into book form. The experience of others may differ. Many I know have only
one year of experience repeated 30+ times.
Note: You will find me using “off-shoring” and “offshore” in quotes because many
times, companies bring in vacation3 and other visa workers to replace their own citizens.
Bottom-feeding on price tends to get bottom-feeding talent that leads to a publicly
traded corporation with a reputation for zero quality.
A great many college professors and MBAs insist you start out with a list of bullet
points on an executive summary page. That is a very bad method of presenting a thesis.
For starters, if you have more than three bullet points, most MBAs, especially Keller
MBAs, have dozed off before making it to the fourth.
Creating a document in that format also sets up an adversarial situation. Clueless people
who have no idea how the points were reached zone out and start throwing out bullet
points of their own that, by and large, are fake news.
Beginning any thesis, proposal, or otherwise serious thought-piece with an executive
summary virtually ensures the piece will not be read. Management will assume the rest
of the document successfully defends the summary and make a knee-jerk decision based
on the summary and their personal preference. This is actually how some of the worst
decisions in the history of man happen. The executive summary is skimmed, and a
decision is made based on a summary and a pile of pages that don’t back it up.


News channels and people who call themselves journalists love the sound bite for this
very reason. Everyone hears the sound bite and forms an opinion without anything
behind it. The sound bite becomes fact even when it is pure fiction. Headlines suffered
from this first.
People who don’t agree with the summary will not read any supporting documentation.
Most who are paid to promote agendas go into full-on spin-doctor mode, which is why
they are called “Merchants of Doubt.”4
While it does not ensure your document will be read, burying the summary in it
(especially if you do it without a big heading entitled “Summary”) ensures the document
must be at least partially read before anyone can offer a challenge. In today’s society, we
don’t bother finding out how we or anyone else arrived at their point of view. If it is
different from ours, we ignore or defile it.
The people who really need to read this book—upper management and recent college
graduates who were only taught AGILE—most likely won’t. This book doesn’t start
with an executive summary, so that rules out management who got their degree from a
diploma mill. People who only know AGILE don’t want to learn that their skill set is
worthless, so they won’t bother to be exposed to this reality. Everyone else can still
benefit.

Hopefully, kids about to go to college and management who didn’t get a degree from a
diploma mill will both read and contemplate the essays within before making a decision
or forming an opinion.
The corporate IT industry has been around since, at least, 1965. Yes, people can point to
the Turing machine work of the 1930s5 and the Enigma codebreaker of World War II6
but, those weren’t corporate computing systems. They didn’t perform accounting,
process orders, calculate payroll, generate quarterly reports, or any of the other general
business functions commonly performed by computers today.


Many reading this will have been born too late in the life cycle of IT to remember a time
when all businesses used typewriters to create correspondence. It’s true. (A big
improvement from pen and paper.)
What you know of today as word processing didn’t catch on until the 1980s. Large
corporations had Word Processing Centers during the pioneering times around 1975 but
that was the natural morphing of “the typing pool.” Handwritten or tape-recorded
documents would be sent to “the typing pool,” and sometime later, a document would
come back for review. The concept of people typing their own documents didn’t start
taking hold until desktop computers took off. In large part, this was because only a small
percentage of humans could type. We took classes to learn the skill.
At the time of this writing, we are almost a full seven decades into the life of the
corporate computer industry. There have been stumbles, and there have been great
successes. One thing remains absolutely certain: every mistake made in the mainframe
computer world is consistently repeated by each line of smaller computers that followed.
That idiot phone many of you can’t put down is no exception; it proves the rule. Just
read up on all of the security breaches and other catastrophes that befall it.
“Experience is what you get when you don’t get what you want.

Wisdom is what you get from an awful lot of experience.” — Will Rogers

Given those two universal truths, this book is organized to give you both experience and
wisdom by taking a little walk through computing history on the way to the summary of
the thesis.
In life, the journey is the reward.

Note: You will find me dissing Keller MBAs regularly and sometimes DeVry. I attended
DeVry after having attended an excellent junior college computer science program.
Thankfully I attended the junior college first because it was almost impossible to get an
education at DeVry. I did get a diploma, though, because the checks cleared. I did learn
about student loan debt and how to work a full-time night shift job while attending
classes during the day, but not much else.

Later in my career, I worked with several Keller MBAs. Of those MBAs, one was great
and would have succeeded in any school. The rest, especially when anything IT was
involved, were wastes of oxygen. They didn’t have the skills to manage IT and had too
much pride to admit they couldn’t manage to fold a paper grocery bag.
About the time I attended DeVry, it was acquired by the same company that ran Keller.
Recently, both schools were “sold at no cost” to a new owner.7 If you do your research,
you will find other articles reporting the previous owner had to sink a bunch of money
into both schools before the new owner would agree to take them off their hands. In
March of 2020, a $100 million settlement was reached between DeVry and the FTC.8
You don’t have to pay people to take a quality product off your hands.

Based on my personal experience and the publicly available terms of sale, I would not
recommend any young person pursue a degree at either institution. That is my personal
opinion based on my experience, and I’m entitled to share it with you.
From time to time, you will see *nix in the text. This is how many in the industry
identify the pool of proprietary Unix platforms as well as all of the various Linux
distributions.

Occasionally you will see me use the phrase “my Inner Bill.” This is a euphemism for a
fictional character living somewhere in all of us but, who hopefully only visits when he
is needed. He has been built from many men I’ve encountered throughout my life.
Inner Bill started with my father, who was in the Navy and is from that generation who
believes there isn’t anything you can’t fix by cussing and hollering.
Add in a few cups of this old Russian guy I worked with for a year or so. Our design
sessions used to involve veins popping out of foreheads and spittle escaping mouths
with hands gesticulating wildly. People would get up and close the door of whatever
room we were in and then move a bit farther away. More than once, someone would
come in and tell us it was time to quit for the day.


One thing always amazed me about that guy: he could turn it off like a switch. All you
had to do was draw something on the board that was an irrefutable fact that kicked the
three-legged stool out from under his position and click, the thunder was gone. There
would be silence for a few seconds, followed by “Ooooh. This is a big problem.” He
was then totally focused on finding the correct solution to whatever it was we were
working on that day.
Later in my career, I encountered another consultant who, ironically, had also been
deployed in the Navy. I don’t know if he had just hit the cranky, old-man stage of life or
had been there his entire existence on this planet. Whenever he got on a really good roll,
and it seemed to be headed my direction, I would deploy the off-switch phrase:
“You know, if I wanted to work with a cranky, old man, I would have stayed on
the farm helping my dad.”

I came up with that one when my zinger9 factory was left unattended. It worked though;
he shut down. Well, the volume anyway. Spent the next hour grumbling at his desk and
banging on the keyboard—$6 keyboards can’t take that kind of abuse but, better
keyboards seem to tolerate it for days. No, I didn’t keep track of his keyboard failure
rate. It was more than mine, and that is all that matters.
So, my “Inner Bill” is an assembly of all these people. He has to be caged and cared for
so he can be brought out at opportune moments to good effect.
Getting in touch with my “Inner Bill” is always a risky thing. I have this zinger factory
in the back of my brain that tends to fire up without warning. When “Inner Bill” gets a
hold of it, the scene could make for a viral video but, it doesn’t make for a happy
workplace.

That zinger factory has left some “memorable” moments in my wake. Some are still
with me today. Most aren’t fit to print but, I will share one:
I worked with a bunch of twenty-somethings early in my career. It was a culture where
zingers were cherished. Another programmer (we’ll call him Bob) and I got along like
brothers, tormenting each other just for the enjoyment of it. (It’s a guy thing.)


Bob and another coworker made the mistake one day of telling us they were going
skydiving the following Sunday. Everybody was getting in a few digs, just in case we
wouldn’t have the opportunity later, and Bob spun the conversation around to some
snarky comment about the gallon tea jar I kept in the fridge. Without even thinking, I
turned and said:
“Just think, Bob. On Sunday, after that chute doesn’t open, we’ll be able to fit all
of you in that jar.”

Full points! Nuclear strike!

Bob and his buddy did spend part of Sunday morning trying to figure out how to get a
picture of his face on that tea jar, so it would be the first thing I saw. I know this because
someone who was helping them spilled the beans.
I know, the younger crowd can’t imagine it being difficult to tape an image to a gallon
jug but, we didn’t have color printers or copiers back then. We didn’t even have a fax
machine in the office, nor had anyone come up with a digital camera. The best you
could do was to find a Polaroid camera and try to take a close-up, then tape the photo to
the jar. As twenty-somethings, none of us owned such a camera. It was for parents
looking to store ammunition to embarrass their children later in life. Don’t believe that
last statement? Then you’ve never brought home a special someone and had the folks
pull out those baby pictures.
One final note. The “i” in “iPhone”™ stands for “idiot.” You spent north of $1,000 on a
device that isolates you from the real world and, by some estimates, costs well under
$400 to make.10 Your grandparents and Gen Z are buying flip phones; many cost under
$50, allowing them to put the rest of that $1,000+ toward having a real life. Guess what?
While they are out there having a real life, they can still make phone calls, and their real
life isn’t interrupted by tweets and texts.
Now that I’ve amused you, it is time to explain what we did have.


Figure 1: People used to celebrate a system going live


The Terminal Days

Figure 2: LA120 DECwriter III (courtesy of Richard Thomson)

Most of you expect me to start this book with a definition of AGILE, or more to the
point, my definition of AGILE. Well, get used to being wrong. Traditionally published
authors are forced to do that, and then they lose you five pages into the book. To really
understand what AGILE is in my eyes and the eyes of most actual IT professionals, you
have to understand how we got there. Actually, you have to understand how we never
really left.
Prior to terminal days, we had key-punch machines and key-punch operators.
Programmers wrote code on paper using coding forms. Key-punch operators
transformed the coding form into punched cards; then, the editing and debugging began.
Key-punch machines were largely phased out in the 1970s but, Perry had an IBM
Selectric™-type terminal (150 baud) on his desk in 1970. IBM 3270-type terminals
arrived by 1973 but, they lived in shared terminal rooms at first. They reached
programmers’ desks, rolling out through entire computer departments, from the late
1970s into the 1980s, and were themselves replaced by PCs emulating them in the mid-
to-late 1980s.
During the 1960s, most terminal devices were some form of keyboard and printer
combined—yeah, typewriters. IBM had the 2741, an industrial version of the
Selectric™, while other vendors “terminalized” Selectric™ typewriter innards to
create luggable terminals, some with built-in acoustic couplers. There were also
teletype terminals, originally used like fax machines to send short messages by wire.
These devices could operate in full-duplex mode and had an optional paper tape reader
and punch. The typewriter-based terminals were about 50% faster than the teletypes’
110 baud.
There were also some Cathode Ray Tube (CRT) terminals. The IBM 2250 was a large
screen graphics terminal, and the IBM 2260 was a monochrome CRT. The first model in
1964 displayed 240 characters in six rows of 40. Emulators of the 2260 were still in use
through the 1980s. The company Perry worked for converted its parts ordering and
distribution system to IBM CICS to get 3270 support because repairing the 2260s had
become impossible.
We also had the Conversational Programming System, or CPS, an early time-sharing
system offered by IBM that ran on System/360 mainframes circa 1967 through 1972 in
a partition of OS/360 Release 17 MFT II or MVT or above. CPS was implemented as an
interpreter, and users could select either a rudimentary form of BASIC or a reasonably
complete version of PL/I (Programming Language One).
An accounting mentality around CPS inevitably led to system failures. In one shop, CPS
was originally configured with twelve switched lines with auto-answering modems. It
ran smoothly for about nine months until a manager with an accountant’s mentality
noticed that only ten were being used. Of course, they removed two modems. The system became
unusable. Logins failed due to a lack of ports. People came in early to get logged in and
remained logged in throughout the day. No one else could use the system. Restoring the
two “unused” modems solved the problem, and the system went back to its previous
level of availability.

With a dial-in system, you have to have more available lines than normally needed for
things to work smoothly. If people always get a connection when dialing in, they will
log off when done. If they have to spend hours trying to get connected, they will
maintain that connection all day—even when they don’t need it—to avoid the pain of
trying to get another one.
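That feedback loop can be sketched as a toy model. This is purely my own illustration of the behavior described above, not code from the era; the function name and numbers are invented for the example:

```python
def concurrent_demand(users, lines, normal_peak):
    """Toy model of dial-in line contention.

    If there is headroom above the normal peak, people connect, do
    their work, and log off, so concurrent demand stays at the peak.
    If lines are scarce, everyone who manages to connect camps on the
    line all day, so demand balloons to the entire user population.
    """
    if lines > normal_peak:
        return normal_peak
    return users

# Twelve lines with a normal peak of ten concurrent users: smooth sailing.
assert concurrent_demand(users=30, lines=12, normal_peak=10) == 10
# Remove the two "unused" modems and everybody camps on a connection.
assert concurrent_demand(users=30, lines=10, normal_peak=10) == 30
```

The point of the model is that the two "idle" modems were not waste; they were the slack that kept sessions short.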
In 1970, a large money-lending company already had in place a terminal network of
over 1,000 terminals. There were eight concentrator controllers; each had many leased
multi-drop lines shared by thirty terminals. They even had a satellite link to an office in
Hawaii.

As usual, the satellite link exposed coding errors in the concentrator controller. A
simplified version of the problem: noise on the link would fool the controller into
thinking the terminal it had requested data from had answered with no data, so it went
on to poll the next terminal. However, the request had not even reached Hawaii yet. The
Hawaii terminal would then reply with a message that got assigned to a different
terminal polled later. This message was about the payment of funds on loans. There was
no message id, just the terminal id assigned by the poll, so the loan got posted to the
wrong account. This cost the company a lot of money to fix.
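The bug boils down to the controller trusting poll order instead of message identity. Here is a deliberately simplified sketch of my own making (the terminal names and payment strings are invented; real concentrator firmware was nothing like Python):

```python
def poll_cycle(terminals, line_noise, delayed_reply_from):
    """Toy model of the multi-drop polling bug.

    The controller attributes whatever reply arrives next to the
    terminal it most recently polled, because replies carry no
    message id of their own.
    """
    postings = {}
    pending = None  # a reply still in flight over the satellite link
    for t in terminals:
        if t == delayed_reply_from and line_noise:
            # Noise looks like an empty answer; the real reply is
            # still crossing the satellite link.
            pending = f"payment from {t}"
            continue
        if pending is not None:
            # The late reply arrives while the controller is polling
            # the next terminal and gets credited to the wrong account.
            postings[t] = pending
            pending = None
        else:
            postings[t] = f"payment from {t}"
    return postings

# Noise on the Hawaii drop: HNL's loan payment gets posted to LAX.
assert poll_cycle(["HNL", "LAX", "CHI"], True, "HNL") == {
    "LAX": "payment from HNL",
    "CHI": "payment from CHI",
}
```

A message-level id, rather than a poll-level terminal id, would have made the late reply detectable instead of silently misposted.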
During the late 1970s through the early 1980s, business computers came in two sizes:
mainframe and midrange. Mainframes tended to cost millions of dollars, while midrange
computers used to cost between $250,000 and a million. (No, I’m not going to get into
an argument here about capacity and pricing. These ballpark numbers are good enough
for our discussions.) The amount of software actually written for these computers is
astounding given the trivial (by today’s standards) amount of memory they had and the
massive cost of data storage.

Early on, we had punched cards, followed by paper terminals, followed by video
terminals (the VT in VT100). By the time I started in the industry, we had video
terminals on desks, and paper terminals were used as system consoles. Having said that,
we didn’t have a terminal on everyone’s desk.

Figure 3: DEC LA12 (courtesy of Richard Thomson)

Please take a good look at the first “laptop.” I actually had one of these. I wish I still
had it just for the nostalgia factor but, I would also have to carry around a phone with
the round ear and mouthpieces since they only exist in really old hotels. Those
funky-looking round things on the side were called acoustic couplers. A fancy name for
soft mushy things that fit snugly (when new) around the ear and mouthpiece of the
handset so the terminal could both send and receive. You were important when you had
one of these. Compared with today’s computer weight, this was a bag with a couple of
bricks in it but, you lugged it.
Terminals for the mainframe world were mostly smart, while those for the midrange
world were considered dumb terminals. While many a mainframe programmer tried to
denigrate the midrange via these terms, it meant only that the mainframe terminals had
some processing capability built into them while the other terminals did not.

Figure 4: DEC VT100 (courtesy of Jason Scott)

Today most of you can easily get a serious PC for what a “dumb” VT100-type serial
terminal cost back in the day. Yes, we paid hundreds of dollars for just a terminal. Then
we paid many hundreds of dollars for a 300, 1200, or (gasp!) 2400 baud modem so we
could connect to a computer somewhere. Mainframe people paid thousands for a
terminal and even more money for a bi-sync modem.
Modems for mainframe computers communicated via BISYNC or BSC, Binary Synchronous Communications. They operated in half-duplex mode. “Standard” modems were asynchronous, communicating in full-duplex mode. Either machine could use both means of communication. The BN Railroad had a large terminal network; many of the terminals were located at railroad terminals. IBM 3270 terminals and IBM 3287 printers were connected to DEC central computers.
On the other hand, start-stop asynchronous terminals were used by International Harvester to serve its dealers. Most of the IBM-supplied software was half-duplex, while the midrange computers opted for full-duplex processing. That is what made connecting midrange terminals to IBM mainframes a chore. IBM “customer support” was not educated in this area, so finding the needed software was a mystery hunt.

Figure 5: IBM 3278 (courtesy of Richard Thomson and terminals-

The half-duplex model allowed terminals to buffer transmissions, while the full-duplex model required the computer to be interrupted for every character received. IBM wasn’t a half-duplex purist, though. The IBM 1800 (process control) computer supported async and full-duplex terminals.
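The cost difference being described here is easy to put in numbers. A character-at-a-time full-duplex line interrupts the host once per character, while a buffered block-mode terminal sends a whole screen in one transmission. A back-of-the-envelope sketch, using the classic 24×80 screen (the figures are illustrative, not measurements from any specific system):

```python
# Host interrupts needed to receive one full 24x80 screen of input.
screen_chars = 24 * 80          # 1,920 characters on a classic terminal

async_interrupts = screen_chars # full-duplex async: one interrupt per character
block_interrupts = 1            # buffered block mode: terminal sends one block

print(async_interrupts, block_interrupts)   # 1920 vs 1
```

Multiply that 1,920-to-1 ratio by dozens of terminals on one machine and it is obvious why the buffering question mattered so much to an expensive, interrupt-poor host.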

By the way, the VT-52, a terminal preceding the VT-100 series, was the first “portable.” It was an all-in-one model, meaning the keyboard wasn’t detached. I worked at a DEC VAR where the owner and sales team would lug that terminal with a modem to potential customers all around the country. They would dial in and run through a demo of our software.

Figure 6: DEC VT52 (courtesy of Richard Thomson and terminals-

Most of you reading this are far too jaded. You grew up with either a PC or a smartphone, so none of this wows you, but people were wowed. This heavy thing was set on a conference room table along with a gaggle of wires and a modem. When it was all cabled up, they heard it dial out, followed by the screeching sounds of modems trying to agree on a speed, then silence. Suddenly, characters started appearing on the screen. That was the first wow.
A little while into the demo, the second wow hit: All of those paper files could go away
once they were entered into this system. After choosing a couple of menu options and
entering a value or two, they could page through every invoice ever sent to a customer
and see the amounts and the current balance.
Okay, other departments found other parts of the application to be wowed about. The point I’m trying to make is that wrecking one’s back to carry that monstrous terminal meant you had an awe-inspiring presentation. I don’t know if we ever did it, but I heard tell of some competitors leaving a terminal and modem at a potential customer site for a few days so the customer could dial into a demo system and really kick the tires.
Keep in mind this was going to be north of a quarter-million-dollar purchase to get the
computer and the software. Nobody made such a decision lightly. Also, keep in mind the
minimum wage was around $3.50/hr, and a decent-paying union factory job was $7.50–
$9.00/hr at the time.

In most management minds, that computer had to save them a quarter million in labor
costs to be justified. They wouldn’t base a purchase decision on pie-in-the-sky “future
benefits.” They had to see real calculable financial benefits.
Computers were so expensive back in the day that we had various time-share and lease
arrangements. One company or university would buy a computer system, then lease time
and a specific amount of storage on it. I had one customer that had a classic arrangement
with a local university. They bought a smaller version of the same computer system the
university had. The senior students used the university computer system to do software
development for the company as part of some internship. After each module was tested,
it got installed on the production machine at the business location. This kind of real-
world learning was incredibly common and highly valuable.
In today’s world, you will never find a company willing to put that kind of investment
into growing the youth of America. You will also not find a company willing to put
seven years into getting a custom ERP (Enterprise Resource Planning) system in place.
Today’s MBAs got their diploma from the for-profit diploma mill with the best
television commercial and can’t see past the end of this quarter. They won’t establish a
long-term relationship with a university to maintain a steady flow of highly skilled IT
workers. Instead, they will toss national security to the wind, bringing in workers from
third-world countries with one-tenth to one-quarter of the necessary skills, as long as
they are willing to work well below the prevailing wage. In short, the only people they
believe to have IT skills are those who are “priced right.”
As computers became cheaper, the camaraderie between companies failed. It is sad but true. The first fatality was a sense of community. The second fatality was a sense of obligation to the community. We were all too busy watching other things die to notice these deaths when they happened.
To fully understand the scope of that statement, you have to go all the way back to 1955
when SHARE, a volunteer-run user group for IBM mainframe computers, was founded
by Los Angeles-area users of IBM 701 computer systems. How many of you know that
operating systems, such as they were, only got distributed in source form back then?

GUIDE (Guidance for Users of Integrated Data-Processing Equipment) was a “users’ group” for users of IBM computer systems that started in 1956. SHARE and GUIDE operated as IBM’s mainframe user groups. IBM supported these groups with personnel and money. SCIDS (Session Concerning Interdisciplinary Studies) consisted of an open bar in the ballroom of the convention hotel where attendees could meet for informal information exchange.
SHARE was started by the scientific and computational IBM customers, and GUIDE was started by the business data processing (accounting) customers. Each organization met quarterly; SHARE met in the spring and fall for full meetings and summer and winter for organizational meetings. GUIDE was a season out of phase. In the 1970s, many companies belonged to and attended both organizations.
In truth, early computers didn’t come with an operating system at all, just what today, in
PC terms, would be called firmware or BIOS (Basic Input Output System). If the
machine had an assembler, it came with an assembler language library providing the
IOCS (Input Output Control System). If the computer you spent around a million dollars
on did not come with an assembler, you got a printed manual that some brands called the
“Principles of Operation.”
Paper tape or punched paper card decks were physically read in. There were various decks of cards lying around to load this or print that. The card decks for popular program loads were punched on special Mylar™-reinforced cards, increasing the number of reads before a card jam or other mechanical error from about 10 to 100. Printing was done from cards using tabulating machines like the IBM 407; programs for these were large wired and/or wire-able boards. Magnetic tape was introduced by UNIVAC (Universal Automatic Computer) in 1951, preceding magnetic core memory by about four years. (The trademark history listed in Wikipedia12 may well be accurate, but the current UNIVAC trademark has to do with artificial teeth.)


Operating systems emerged from a conversation about getting more useful time out of
expensive computers. In the early days, operator setup was of significant duration; the
computer was idle while waiting for the operator. Folks decided that the computer itself
could be used for setup between jobs. Even if it used 30% of the computer, more useful
computer time would be available for running the jobs. Eventually, programmers began
talking to one another, and certain “standard” card decks became common and were
rolled into the early form of an operating system.
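The arithmetic behind that decision is worth a quick sketch. The figures below are invented for illustration (a 20-minute job with 10 minutes of manual operator setup, versus a resident monitor that consumes 30% of the machine but cuts inter-job gaps to almost nothing), not historical measurements:

```python
# Illustrative throughput comparison (durations in minutes).
job_minutes = 20.0

# Manual operation: 10 idle minutes of operator setup between jobs,
# so only 20 of every 30 wall-clock minutes do useful work.
manual_useful_fraction = job_minutes / (job_minutes + 10.0)

# Resident monitor: negligible gap between jobs, but the monitor itself
# consumes 30% of the machine, leaving 70% for useful work.
monitor_useful_fraction = 0.70

print(round(manual_useful_fraction, 2), monitor_useful_fraction)  # 0.67 0.7
```

Even with a monitor as fat as 30%, the automated machine comes out ahead, and real monitor overhead was far lower than that, which is why the idea stuck.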
The SHARE Operating System was released in 1959 for the IBM 709 computer. They called it “commons-based peer production.” Today we call it “free and Open Source software.” As of 2013, SHARE, now SHARE Inc., was located in Chicago, Illinois.
What is the point of bringing up SHARE?
Prior to and during the early terminal days, computer vendors were trying to build their operating systems. To improve a substandard or marginal product, they opted to allow all developers to submit enhancements for peer review and possible inclusion. There was a limited pool of talent, and the vast majority of people involved were talented and highly skilled.
In the mid-1970s, IBM decided to rein in maintenance costs and lock down perceived competitive advantage by using a proprietary language to write system software and by refusing to allow access to the source or the compiler. This was a turning point: the start of IBM’s demise in system dominance.
Today we have the Open Source movement where everybody and their brother have
some Linux distro. Most people involved aren’t properly trained. In many cases, code
written by 12-year-old kids rolls right into the code base because it is for a part of the
distro the owner/host cared little about.
While I do not know the age of the person supporting fax-modem software for Ubuntu
back in the day, I do know they checked in source code that was “tested successfully”
and wouldn’t even compile. I don’t mean they forgot to check in a file (who hasn’t done
that?), I mean it didn’t have a prayer of compiling. There were blatant syntax errors in
the files the “maintainer” checked in.

It is too bad the Open Source world chose to embrace AGILE and unskilled labor
instead of learning from SHARE on how to do software correctly. The peer review there
was deep to the point of actually testing the code before it was passed on for inclusion.
I’ve heard stories that developers who routinely submitted trash didn’t even get
considered for inclusion after it became obvious they had no idea how to properly
develop software.
The A/B Switch

Figure 7: A/B switch (courtesy of Leonid Karelin and

Terminals were big, heavy, and expensive. Many of the smart terminals had IDs built
into them so the OS could provide even more security. While you may have a user
account that lets you access finance or inventory control, you couldn’t log on from just
any terminal. Only terminals with an ID in the list authorized to access one or more of
those subsystems were available to you.
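The terminal-ID scheme amounts to requiring both a valid account and an authorized terminal before a logon succeeds. A minimal sketch of that check, with invented subsystem names and terminal IDs:

```python
# Access requires BOTH a valid user account for the subsystem AND a
# terminal whose hardware ID is on that subsystem's authorized list.
# (All names and IDs below are made up for illustration.)
AUTHORIZED_TERMINALS = {
    "finance":   {"T-0042", "T-0043"},
    "inventory": {"T-0042", "T-0101"},
}
USER_SUBSYSTEMS = {"alice": {"finance"}, "bob": {"inventory"}}

def may_log_on(user, subsystem, terminal_id):
    """True only if the account AND the physical terminal are authorized."""
    return (subsystem in USER_SUBSYSTEMS.get(user, set())
            and terminal_id in AUTHORIZED_TERMINALS.get(subsystem, set()))

print(may_log_on("alice", "finance", "T-0042"))  # True: right user, right terminal
print(may_log_on("alice", "finance", "T-0101"))  # False: unauthorized terminal
```

Note how terminal T-0042 appears on every list: that is exactly the “dangerous terminal” the chapter goes on to describe.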
This bit of security led to the creation of the most useful and dangerous device ever invented: the A/B switch. This is the telecom version of the switches used to connect bus and tag devices to more than one computer in the same data center without being connected to both at once. You certainly didn’t want just any developer terminal able to log in to production and generate orders or “adjust” accounting numbers, but some did need that access.
The solution was to add a shared terminal in a common area with an A/B switch
toggling between the test system and production. Usually, this terminal was placed in
the cubicle occupied by your most trusted people, and only they were given accounts on
those systems. This was a dangerous terminal because its ID was entered into every
system. These people developed or tested everything in the company, so there was no
way around giving them access.

It just so happens an outside software vendor submitted changes to an order processing and invoicing system on a midrange computer to allow ten-digit values. This required changes to the IBM mainframe code that interfaced with said system. The coder was busy banging out his changes on the development system using his own terminal, then installing them on the test system and using the shared terminal to enter an order, making sure all of the printing lined up and everything flowed cleanly into accounting.
After one of his tests, he was banging away on his terminal when someone came to ask
his cubemate to look something up in production. The developer ignored what was
going on behind him. His cubemate rolled over to the shared terminal, turned the A/B
switch, and logged in to production to look up the information. When he was done, he
wheeled back to his own terminal without logging out because the person was going to
come right back. Eventually, the cubemate went to a meeting.
Later in this book, I will talk about the printers of the day. For now, you simply need to
know that testing these changes couldn’t allow a 1 as the first digit. The developer
needed to know the leading digit printed cleanly on the preprinted multi-part form. This
meant he was trying to come up with an order total along the lines of $8,765,432,109.12.
What the actual total was didn’t matter as much as the leading number being an 8 or 9 to
make sure the widest number really did fit in the box on the preprinted form.
Once everything looked good on the development system, our developer once again installed the code on the test system, then wheeled over to the shared terminal. He had been using that terminal for days. He didn’t even think to look at the A/B switch. It was still logged in, so why should he? He entered a whopping big heavy equipment parts order for the production test customer (which also existed in test). This was for a heavy truck manufacturer, so he ordered every engine at the main warehouse to get that total up fast. He might even have ordered some transmissions if there wasn’t enough engine inventory.
To understand just how funny this is, you need to understand his changes were for the
accounting system. Order processing could process the order; it currently just printed
garbage for the total. Picking and shipping didn’t care about dollars, only quantities, and
they could handle some really big numbers. The picking system also interfaced with
inventory management that had automatic reorder levels set for many items, particularly engines and transmissions that had a long lead time. (Lead time is the length of time between placing an order and the items actually arriving.)
The name of the test customer was DO NOT SHIP. Their city and state were
NOWHERE, USA, without a zip code. Ordinarily, the warehouse would just toss an
order from this customer, knowing it was a test order that slipped through. Today was
not to be an ordinary day.
It was the last day of a bonus period. Some fool in upper management had bought into the bogus theory that union labor costs less and works harder when you viciously negotiate a piecework contract. This meant union members got credit for every item picked, loaded, received, and put away. If they happened to exceed some arbitrary number(s), the workers received a bonus based on how much they exceeded the arbitrary number(s).
Needless to say, an order, however bogus, that emptied the warehouse of big-ticket items was like Christmas in June. The workers were willing to throw the order away if they got paid the bonus anyway; management, in full “cut costs” mode, refused, and nobody in management thought it through. So the order stood.
Trucks were called. Everything was picked, packed, and loaded onto the trucks.
Workers had the trucks pull away from the dock and sit there for a bit while someone
created a return order for all of that stuff. The trucks were then allowed to back into the
same loading dock they were now sitting only a few feet from. All of those big-ticket
items were unloaded and put away.
Just how many of you were paying attention?
Management was up in arms, refusing to pay the bonus, claiming it was fraud. The union heads, who had endured a brutal, lopsided negotiation with a management team who wanted to pay less than minimum wage for everything, stood their ground. Lawyers raced to get in front of judges and, guess what? You really do have to pay for labor in America.
Just how many of you were paying attention?
Have any of you figured out where the gopher went down the mountain on this one?

Okay, the company hired Keller MBAs, so they didn’t have management, just people chanting “cut costs,” hoping real management would show up and grow the business. Admittedly, they were screwed the day that team was hired, but that is what put the gopher in the sled at the top of a snow-covered mountain, not what shoved the gopher down the mountain. At least not by itself.
During the late 1980s and early 1990s, when fax machines became popular, office workers got in the habit of faxing one- or two-page cartoons to each other. Most of these cartoons would not even be within shipping distance of politically correct today, but humans were of a better quality back then. More than one secretary plopped her bare bottom down on the glass of that shiny new copy machine, then faxed the output to someone she knew at another company. It just happened. People knew it happened, and nobody got fired for it.
One of the favorite mantras from various cartoons making the fax rounds was:
To err is human. To really screw things up requires a computer.

Said mantra was a widespread belief, even among those of us making a living writing the software. Those of you born later are used to large-scale, off-the-shelf software for order entry, inventory management, accounting, etc. If you buy it all from one vendor and don’t customize it, the stuff is all integrated and mostly works. Prior to the mid-1990s, we were writing all of that stuff. Only a company with an IT department that spent an awful lot of time planning and paid top dollar for developers managed to get that stuff working before the mid-1990s.
Just how many of you were paying attention?
The picking system also interfaced with inventory management that had
automatic reorder levels set for many items, particularly engines and
transmissions, which had a long lead time.

Management was busy throwing the dice on paying bonuses, looking only at the end of this quarter, not the next. One by one, as each big-ticket item was scanned as picked and the inventory dropped below the reorder threshold, an automated re-stocking order went out for the standard order quantity. These were electronically communicated to the manufacturers’ systems. As each engine model required certain amounts of steel, aluminum, milled parts, and scheduled labor, those things were automatically ordered and scheduled as well by the manufacturer.
You couldn’t just cancel. There were contracts in place that made this house of cards
work. The union workers at the manufacturing plants had to be paid once scheduled,
whether you had anything for them to do or not.
Despite nothing having been shipped, each supplier wanted a hefty “restocking fee” to cancel the order because they were going to have to eat all of these other orders. It wasn’t as bad for them, since we are talking about production scheduled weeks, if not months, out. A few didn’t have to trigger production because they had enough standby inventory, but yes, the dominoes fell at a rather furious pace, and management was, for the most part, oblivious.
To understand just how far off the rails something could go, you have to understand that
this was the beginning of large integrated systems. Everybody involved signed all kinds
of contracts and had IT departments working hard together. From the supplier side,
nothing was unusual. Every time a new stock order came in, the quantity was between X
and Y units. Here comes a stock order for a quantity in range, so it must be good.
Inventory management did exactly what it should. The order quantity dropped below a
preset reorder point, so an order was generated.
What failed here was management trying to keep the numbers juiced for the end of a
quarter. Paying that bonus to labor would definitely dent the numbers. Nobody in
management stopped even for a nanosecond to think about what would happen if the
warehouse picked all of that stuff.
Had they taken that nanosecond, the job that periodically scanned inventory levels and
generated automatic reorder requests could have been stopped. It was a batch job.
Whenever something had to be tested or turned into production, it was routinely
stopped. Back then, few, if any, suppliers offered next-day delivery to warehouses.
Whether your reorder got to them tonight or tomorrow, it was still going to be days
before it showed up.

Please don’t think the much-maligned A/B switch is some barefoot-in-the-snow story you don’t have to worry about in the modern world. Most of you can’t even look at your A/B switch. It’s a DNS (Domain Name System) entry in a table somewhere. Maybe you have a local DNS server that is supposed to be at the front of your search list, but for some reason, it’s not.
When you are developing your website or service, something on your machine is gizzed up so its domain name points only to the local development site. After you get something working, you run some other script that promotes your code to integration testing and points the URL (Uniform Resource Locator) to your integration testing instance. The same happens for independent QA and production support. Production is the only instance the actual Internet knows about.
Are you gullible enough to believe automated stock orders aren’t triggered by inventory
management software today? Think again. Talk to some kid working at a big box store
about “regularly stocked items.” As each bar code is scanned at the register, the store
inventory is decremented. Once it gets below a certain point, the store inventory
software sends a request to the warehouse. If the warehouse is out of stock, it
automatically reorders from the supplier.
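That scan, decrement, reorder loop is still the heart of retail inventory software. A minimal sketch, with made-up item names, thresholds, and order quantities:

```python
# Reorder-point logic: each bar-code scan decrements on-hand stock; once
# the level drops below the reorder point, an automatic purchase order
# goes out for the standard order quantity. (All figures are invented.)
def scan_pick(stock, item, orders):
    """One scan: decrement stock and reorder when below the threshold."""
    rec = stock[item]
    rec["on_hand"] -= 1
    if rec["on_hand"] < rec["reorder_at"] and not rec["on_order"]:
        orders.append((item, rec["order_qty"]))  # automatic purchase order
        rec["on_order"] = True                   # don't reorder twice

stock = {"engine": {"on_hand": 3, "reorder_at": 2,
                    "order_qty": 10, "on_order": False}}
orders = []
for _ in range(3):            # a "bogus" order picks the warehouse clean
    scan_pick(stock, "engine", orders)
print(orders)                 # [('engine', 10)] -- the supplier never knows why
```

Notice the software has no way to ask whether the picks were legitimate; as the chapter's story shows, that judgment call belongs to the humans upstream.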
We could at least see our A/B switch. Now, if only management could see past the end
of the current quarter.
Here’s another shining example from post-2010 in case you think my previous example
was a bit too Dark Ages to worry about. It’s about a startup company operating in a
100% AGILE method. They were working on an embedded control system, and boy, it
looked sweet. Developers cobbled together some stuff on their laptops so they could
quickly flash the device while at a trade show. Everybody took their own laptops.
Once they arrived at the trade show, they found there were all kinds of problems trying
to use the WiFi available at the trade show site. Mainly, it was overloaded, but more
importantly, it didn’t follow the existing IP pattern they had at the office, so all of the
scripts would have to be hacked. After a quick run to a store for some gear and a bit of
technical hocus pocus, the laptop for the owner of the company was configured to
become a local network providing a gateway to the trade show’s WiFi so developers
could access various documentation sites.

Everybody thought the trade show was a big success. They got back to the office a few
days later after dark, all worn out. Everybody dropped off their gear and went home for
the night.
Early the next morning, the owner got a call from one of their customers experiencing a problem. He was on his way into the office already, so he informed the customer he would look into the matter in a few minutes. The owner arrived, connected his laptop, booted up, and tried to SSH (Secure Shell) into his local test device to check a few things. His laptop could see the machine but just couldn’t get in. After about an hour of trying to log into this device as the root (God) user, the developers and the building suddenly lost all connection to the Internet.
You see, during embedded systems development, you typically SSH into a device so you
can tail out logs, run utilities, and other things while remotely debugging. You almost
always log in as root, and it rarely has a password, or it has the same stupid password on
each test target. This is ordinary.
Online documentation, wiki pages, and blog posts are the greatest thing since sliced bread. Online documentation, wiki pages, and blog posts will be the death of the human race. IDEs (Integrated Development Environments) that look for documentation online before looking locally will hasten our demise, because that changeover doesn’t always work well, especially if it has to wait for a network timeout before looking locally.
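The remote-first lookup pattern being criticized here looks roughly like this. The functions are stand-ins for illustration, not any particular IDE’s API, and the timeout is shortened so the example runs quickly:

```python
import time

NETWORK_TIMEOUT = 0.2   # assumed timeout; real IDEs may block far longer

def fetch_remote(topic):
    """Stand-in for an online documentation lookup with no connectivity:
    it blocks for the full timeout, then fails."""
    time.sleep(NETWORK_TIMEOUT)
    raise TimeoutError("no route to documentation host")

def fetch_local(topic, local_docs):
    """Local copy, if the docs were ever installed."""
    return local_docs.get(topic)

def lookup(topic, local_docs):
    try:
        return fetch_remote(topic)     # remote-first: tried on EVERY lookup
    except TimeoutError:
        return fetch_local(topic, local_docs)

start = time.monotonic()
result = lookup("printf", {"printf": "formatted output to stdout"})
elapsed = time.monotonic() - start
print(result)   # found locally -- but only after eating the full timeout
```

With no local docs installed, the fallback returns nothing at all, which is exactly the punishment described in the paragraphs that follow.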
Companies doing software development correctly who allow employees to exhibit at
trade shows have a sacred set of machines. These machines are locked in a cabinet,
cupboard, closet, whatever until the next trade show approaches. At that point, they are
taken out, wiped, then have whatever is going to the trade show installed on them.
Nobody is allowed to take their own machine.
You see, nobody undid the magic-magic hocus pocus on the owner’s laptop. It was late, and everyone was exhausted. When the owner came in and started working on the customer’s issue before he consumed his first big container of caffeine, he didn’t even think about that. It seems that every network manager has either written or read the same wiki and blog pages using the same set of IP numbers and copied them verbatim into their own scripts. This included the network engineers working for the company’s Internet Service Provider (ISP).
When the laptop booted up and connected, it looked like one of the ISP’s own devices
on its own network, not a laptop on a local network. When he tried to SSH into his test
target, he was trying to access a machine at some other company with a network
manager who read the same wiki and blog pages. In other words, his test target had an
IP address that was valid at the other site, and intrusion detection screamed loud about
someone trying to SSH in as root.
A bit of tracing revealed the location of this intrusion. A phone call to the ISP, along
with a transfer of the trace information, led to the Internet services for the offender being
cut instantly. A few frantic phone calls by employees at this company managed to get an
appointment for someone from the ISP to come out and evaluate their site “sometime
later today.”
Perhaps you are familiar with all of those commercials and legends about cable TV companies agreeing to connect your home on a given day? You know, the ones where you take the entire day off, and nobody shows. Then you get a call saying they will come some other day.
Well, when it comes to evaluating your site so you can prove you aren’t an identity theft
hacking den, it’s almost as bad. Part of this is punishment, so you learn to never do
whatever you did again. The other part is because there are so few outside of law
enforcement skilled enough to make such an evaluation.
Speaking of law enforcement, the owner can be glad the IP address he was trying to root
into was not at a nuclear power plant, military contractor, or other sensitive site. This
one happened to be at “just another business.” In this day and age, one of the other sites
would have required a visit from someone in Homeland Security whose sense of humor
about such things is much what one would expect . . . nonexistent.
We need to also mention the level of punishment losing one’s Internet connection doles
out to an AGILE company. Everyone trying to look at documentation in their IDE has to
wait for a network timeout before it looks locally. If they never installed the
documentation, they can’t look up anything.

Most of today’s startups use Gmail or some other external email service, which means you cannot even send an email with a code snippet to a coworker somewhere else in the building. Oh, and how about those chat tools your AGILE company thought were such a productivity boon? Yeah, you can’t use them either. You have to copy something to a thumb drive and walk it over to whoever needs it.
Today, most companies don’t have binders of documentation about their systems on shelves. They have a private wiki hosted somewhere. Worse, they are using Google Docs or some other document collaboration site that is now also inaccessible. The wiki might actually be hosted internally, but not a chance when it comes to third-party document collaboration.
Adding insult to injury, your code is most likely hosted on GitHub or some other
Internet-hosted code-management site. Only those who did a pull before the trade show
can work. No joy for anyone else. Check-ins and code reviews cannot happen, which
means continuous integration builds cannot happen either.
This is why non-startup companies “waste all that money” buying equipment used only
at trade shows. They keep it locked away in a closet until a trade show is coming up
because a closet full of $300 laptops and other gear sitting idle is cheaper than an entire
development team just sitting around for a day.
How do you think today’s trendy office environment looks to an investigator? You
know, those long tables or high benches with bar stools in front and two monitors (or
more) in front of each bar stool. Tables are so close you can just turn around from your
stool and, without even reaching, touch the person behind you.
This may be the lowest cost per square foot in the world, but it looks like a hacker’s den. All of that noise forces people to “bud out” with headphones instead of collaborating. There are Harvard studies showing people working in this environment are more apt to send an email to someone in the same room than to disturb them.13 It doesn’t increase collaboration; it slaughters it.


Our investigator, sent by the ISP, showed up at roughly 9:30 pm, or so I was told. I did work at this client site, but this happened before I got there. Stories differed about how long it took for Internet services to be restored. Some say it was 2 am, others say it was 4 am, and others say it was around 2 pm a few days later.
Why such a wide variance?
This happened more than once.
As far as I know, the company never got around to buying a pile of gear used only for
trade shows. Each trade show venue always seems to require some wonky local WiFi
setup, and everyone is exhausted by the time they get back. At some point, the ISP is
going to stop buying the story, or at least that is my belief.
This company is not unique. AGILE startups tend to operate in a seat-of-the-pants manner. All of that automated continuous integration testing and all those source libraries requiring multiple code reviews before check-in are just plain hogwash. They make people feel like there is some control over the software development process, but there is none. You learn this at your first trade show or live install.
Don’t believe me? Go to a trade show, or better yet, let the sales team go, and you be one of the developers in the “fire team” back at the office. The first whale customer will say something along the lines of:

“If it only had this one additional feature, we would probably buy oceans of them.”

As soon as they walk away, a frantic phone call will come from the trade show floor with, at best, a 20-word description asking you to deliver that tomorrow. Worst case, your company brought along a few developers as an on-site fire team. Now you won’t even have a record of a phone call as they hack something out that looks like that feature so the sales team can show it tomorrow, or later that same day. Good luck finding any paper trail on that, and even better luck on them being able to get that code checked in before it is lost.

This happens with hardware, too. I once did a project at a vending machine company
where they allowed some hardware people to “blue sky” a completely new type of
refrigerated vending machine in the lab weeks before a trade show. The company took
the prototype to the trade show along with everything else. These guys weren’t working
from a formal spec and wrote little down. They were cutting and soldering on the fly as
ideas occurred to them.
Sales sold the prototype off the trade show floor and brought home an order for many
more. The engineers had planned on taking the thing apart to create formal
documentation when it returned. They had to reinvent that thing from a pile of metal,
motors, and tubing using only their foggy memories. Everyone was too embarrassed to
ask the customer for a loan of the machine they just bought.
The much-maligned A/B switch is both useful and horrible in equal measure. AGILE is
just plain horrible. Coupling AGILE with something along the lines of an A/B switch is
a nuclear strike waiting to detonate.

Figure 8: The good old days of terminals and patch panels
Tiered Storage

What goes around, comes around.

Those who do not study history are doomed to repeat it.
We have all heard both of those
sayings more than once in our lives.
When it comes to tiered storage, few
sayings could be more true. This
conversation comes up every couple
of years at client sites. Please allow
me to give the twenty-somethings a
tour down IT memory lane, so they
have some concept of why all of their
improvements have led right back to
this problem.
During the early- to mid-’70s, we had mainframe computers with massive computer
rooms. Data and programs were stored on both punched cards and paper tape. During
the late ’70s, we had 2,400-foot reels of magnetic tape of various storage densities and
removable disk packs.

Figure 9: Paper tape

Figure 10: Dorothy Whitaker works in the National Oceanographic Data Center (NODC)
magnetic tape library (courtesy of Arnold Reinhold)
During the early ‘80s, we had midrange computers with smaller computer rooms. They
had actual hard drives along with tape drives and removable disk packs. Reel tape was
eventually swapped for cartridge tape. In the 1990s, we got DAT (Digital Audio Tape)
and some other forms of cassette tape. Our hard drives moved into multi-drive cabinets
with additional controllers, such as HSC from Digital Equipment Corporation.
Eventually, these multi-drive enclosures became standalone disk subsystems, SANs
(Storage Area Networks), NAS (Network Attached Storage), and I don’t even know
what they are called today.

Throughout the 1970s and 1980s,

we had tiered data storage by
default. When a 1.2Meg (yes, I said
Meg, not Terabyte) removable pack
cost around $5,000 and a 2,400-foot
reel tape that could store hundreds
of Megs cost well under $200, you
stored what you didn’t need
immediate access to on tape. You
also used tape to make safety copies
of things that were on those fragile
removable disk packs.
Lowly paid computer operators implemented tiered data storage in many, perhaps most,
shops. I was one of them. They had procedures and “run schedules.” Batch jobs were
written, and operating systems supported request/reply operations.

Figure 11: Removable disk pack (courtesy of Deutsche Fotothek of the Saxon State
Library / State and University Library Dresden [SLUB])
Once your job issued a request for a
particular tape or pack to be mounted, the operating system would continually nag every
operator’s terminal until one replied to the mount request or aborted it along with the
job. It may not have been the most efficient or reliable, but it really did help contain
storage costs as well as power consumption.
Commercial disk storage started with IBM 305 Ramac in 1956. In 1961, the IBM 1301
disk storage unit was available for the IBM 7000 series equipment and IBM 1410
equipment. For the 7000 series equipment, the model one had a storage module size of
28 Mega-characters. Characters were 6-bit BCD (Binary Coded Decimal). The module
had 50 magnetic surfaces on 26 stainless steel disks, each of which was at least
three-eighths of an inch thick. There was a read head for each disk surface on an access
arm that moved using compressed air.

The “mod 2” (1963) had two modules and two access arms. It was so big that one of the
glass walls of the IIT Commons building had to be removed to get the 1301 in the
building. The movers did not land it perfectly flat on the machine room’s raised floor,
causing a support under the first corner down to crumble. A stack of punch cards
replaced this support. To the best of my knowledge, the cards continued to support the
floor until the machine room was moved to another building.
Tape of this era was 7-channel; 9-channel tape arrived in 1965 with the introduction
of the IBM 360 series computers and 8-bit bytes. The tape drives wrote in one of three
densities (200, 556, or 800 BPI, Bits Per Inch), in a plethora of formats, and with even
or odd parity. Just as IBM had a patent on the rectangular hole for punch cards, Univac
had a patent on odd-parity digital recording tape.
The disk modules of the 1301 were not removable except by qualified servicemen,
definitely not operators. The disk space was too valuable to be used by any single user.
So many jobs would copy runnable program files from tape to the 1301 and load them
from the disk while running.
The tape reels had a maximum size of 2,400 feet, but they could be smaller. There
were 600-foot reels and smaller ones still. One programmer had a program with several
overlays. The first time he ran it from the overlay tape (a 600-foot reel used for
distributing software), the tape broke from being read and rewound hundreds of times in
five minutes. It was quite a mess. That is when a program was written to read the tape to
a section of the 1301 that was available for scratch space. The fix was to have the
program load itself and all of its overlays to the 1301, and there were no more broken
tapes.
At IIT the 1301 mostly contained the operating system and a home brew print spooler so
student jobs could be read from a card reader and their output spooled to a printer
attached directly to the IBM 7040 instead of being batched on input and output tapes.
Note the natural storage tiers.
In the early 1970s, IBM created the Hierarchical Storage Manager. It was a part of the
operating system that moved and thereby backed up data existing on system non-
removable disks to tape. This allowed the data center to provide more disk space to
current users, but it also allowed the data center to recover old instances of files.

One of the things to understand about that time was the lack of typing skills. In the mid-
1970s, two clerical staff were trained to be programmers. (Yes, that is how many
programmers got their start, simply knowing how to type.) One of them asked how to go
about restoring the source of a program that she had accidentally changed. When told
the procedure, she said, “It will be easier for me to just reenter the code.” She did have
the source listing from before she changed it.
Fast forward to now. Few companies have the intelligence to still have a midrange or
mainframe. Many have a truckload of PCs, racks, or blades scattered over half of Hell
and Georgia. All of these machines are sharing storage on a SAN or other type of
standalone disk subsystem.
Cost is everything now. Instead of built-like-a-tank, lasts-for-eight-years SCSI drives,
those disk subsystems are trying to get away with cheap (well under $100/TB) IDE or
SATA drives. The dirty little secret about drives made for the consumer market is that
many only meet their advertised five-year MTBF (Mean Time Between Failure) if
they are turned off for at least four of those years. It’s sad but true; at least, that has been
my experience. When placed under high I/O load, they don’t seem to last a year.
The absence of midrange and mainframe computers at most companies also means the
absence of computer operators as well as people who are skilled in systems design and
integration. There is an ever-growing number of companies who start small and end up
installing some kind of SAN, or ISS (Integrated Storage System) if you prefer, yet have
no architecture plan or backup policy. Some feel that because there are lots of different
RAID levels happening in the cabinet, they should be fine. Most of these companies
subsist on low-skilled IT workers doing Java, JavaScript, or Windows platform
development. None of them can wrap their minds around the “big picture,” which is
why we continually get stories about massive identity/credit card information theft. In
short, these workers have never run with the big dogs; they’ve only seen life from the
porch.

Some companies purchase a second storage device, place it in a different location, then
have it mirror their primary device so they can have a hot or live failover. While this is a
dramatic improvement, it doesn’t address the security issue, nor does it even attempt to
answer the issue of storage tiers.

Let us also remember your live mirror storage device is just as corrupted as your
primary storage device when an erroneous I/O operation successfully completes on your
primary storage array. If you don’t believe that, consider what happens when a DBA
(Database Administrator) accidentally issues a DROP TABLE command followed by a
COMMIT in a production database instead of the test database where they thought they
were. Upon
successful completion of the command, that change will also be on your mirror, which is
not exactly what you (or they) hoped.
I’ve worked on small-scale embedded test systems where we could not lose a single test
reading. We had two storage devices, and the application wrote everything twice.
Mirroring could not be trusted because something bad happening on the primary would
get mirrored, so we wrote to two different databases on two different storage devices
and instantly reported the first failed read/write.
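That pattern is simple to sketch. Here is a minimal Python version, assuming two SQLite files stand in for the two physical storage devices; the paths, table name, and function names are illustrative, not from the actual test system:

```python
import sqlite3

# Two independent databases on (ideally) two different storage devices.
# Mirroring is not used: a bad write would be faithfully mirrored, so the
# application itself writes every reading twice and reports the first failure.
PATHS = ["/tmp/store_a.db", "/tmp/store_b.db"]  # illustrative paths

def open_stores():
    conns = [sqlite3.connect(p) for p in PATHS]
    for c in conns:
        c.execute("CREATE TABLE IF NOT EXISTS readings (id INTEGER, value REAL)")
    return conns

def record_reading(conns, reading_id, value):
    """Write the reading to every store; surface the first failure instantly."""
    for path, conn in zip(PATHS, conns):
        try:
            conn.execute("INSERT INTO readings VALUES (?, ?)", (reading_id, value))
            conn.commit()
        except sqlite3.Error as exc:
            # In the real system this raised an operator alert immediately.
            raise RuntimeError(f"write to {path} failed: {exc}")

conns = open_stores()
record_reading(conns, 1, 98.6)
```

The point of the design is that no single layer is trusted: the application verifies both writes itself rather than hoping the storage stack did the right thing.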
Many of the Integrated Storage Systems used to provide tape drives for backup
purposes. I don’t know of any that are built using 1TB or larger spindles providing tape
as a backup option. Most storage subsystems in today’s marketplace only do a disk-to-
disk backup. Each disk (spindle) is in a hot-swappable cartridge.
At any given time, half to two-thirds of the storage subsystem is powered off. Some of
the powered-off drives are there for hot-swap, and the rest are cold mirrors of currently
running drives or have just gone idle due to lack of I/O requests.
Cold mirror means the drive is a non-fresh (point in time) copy of another drive. Due to
the fragile state of these lower-cost disks (and their sensitivity to heat), the system
routinely rotates spindles. The cold mirror can be updated faster than a full-drive copy. It
also provides a recovery point if something bad happens.
Most MBAs today have little to no concept of tiered storage. They don’t understand that
much of the data on their storage subsystem is simply there for archival purposes. Since
the storage is now shared across so many operating systems and business units, it
becomes impossible to implement any meaningful storage policy.
Do you think your PC-based operating system could even handle the concept of waiting
for an operator to mount a tape to satisfy a single record read request? How about at the
end of a job appending a daily summary record to the end of a file on an archival tape?

Some database vendors have even tried to implement “cold storage” type solutions
within their products, but the bulk of these fail. Why? We have gone so far afield
claiming “proprietary bad; cheap, dysfunctional x86 based products good” that we no
longer have the ability to implement tiered storage correctly.
Remember how we used to do it? The operating system provided all of the tiers and a
request/reply system that allowed batch and interactive jobs to request media mounts
and operators performed them. Well, that was considered tightly coupled and bad. In
truth, it was tightly coupled and good. The currently popular and near worthless
platforms of today have yet to rise to a useful level.
Back when we had proprietary platform databases, they had the option of keeping data
off-line in cold storage. When a query needed that data, it could be brought briefly
online, used, then returned to cold storage. Most of these cold storage data were
historical, such as customer invoices from 1987. You had some legal reasons to keep it
but little in the way of actual use for it.
Please don’t try to make a case for Data Warehousing or Data Mining when it comes to
my invoices from the 1987 example. The customer number may still be the same, and
they may still have the same physical location, but other than the total, there is little
valid data remaining in 2019.
Whatever the item codes or product numbers were on those invoices, they don’t exist
anymore. Other than providing fodder for “Gee, a hundred years ago a gallon of milk
cost this” type nostalgia piece, there is no useful business information there. Not useful
enough to be consuming electricity and space on a spinning hard drive.
The main reason tiered storage doesn’t happen is the marketing scam about database
access speed. Vendors and supporters touted millisecond access speed and all other sorts
of speed claims for “any query,” along with dynamic query optimizers that would
automatically choose the fastest path to the data. All of the timings and tools got
tweaked in such a
way that we don’t have either the syntax or the concept of:
SELECT_LONG * FROM inv_hdr_1987 order by inv_dt desc;

The difference between SELECT and SELECT_LONG is that the tool recognizes a
response could take many minutes to come back with a result due to the “expense” of
bringing some part of cold (tiered) storage online.
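Since no shipping SQL dialect has SELECT_LONG, the concept can only be sketched at the application level. Here is a minimal Python sketch of the idea as a job-ticket pattern; all names, the fake result row, and the short delay are illustrative stand-ins for minutes of mounting cold-storage media:

```python
import threading
import time
import uuid

# No database ships SELECT_LONG, so the application fakes the concept:
# submit the expensive query, get a ticket at once, poll for the result later.
_results = {}

def _run_long_query(ticket, query):
    time.sleep(0.1)  # stands in for minutes of mounting cold-storage media
    _results[ticket] = [("INV-1987-001", "1987-03-14")]  # illustrative rows

def select_long(query):
    """Return a ticket immediately; the answer arrives whenever it arrives."""
    ticket = str(uuid.uuid4())
    threading.Thread(target=_run_long_query, args=(ticket, query)).start()
    return ticket

def fetch(ticket, timeout=5.0):
    """Poll for the result, tolerating a long wait instead of demanding one."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ticket in _results:
            return _results[ticket]
        time.sleep(0.05)
    raise TimeoutError("cold storage still mounting")

ticket = select_long("SELECT * FROM inv_hdr_1987 ORDER BY inv_dt DESC")
rows = fetch(ticket)
```

The design choice is the ticket: the caller is told up front that the answer is expensive, so nothing in the user interface has to pretend the data is milliseconds away.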
Can your storage subsystem vendor look you in the eye and bet the lives of their
children, spouses, and other family members on the survivability of their product when a
5.9 earthquake shakes your building like a piece of wet pasta? How about 6.9? Higher?
Don’t get me wrong. Storage subsystems are amazing at handling single and, in many
cases, multiple spindle failures. I have never seen one that could handle every spinning
spindle suffering splattered platter syndrome at the same time.
I live in Illinois. Most people don’t associate the Midwest with quakes, but we get them,
and some are really bad. We have had intensity VI and VII. We have had magnitude 5.x.
Ask the USGS what all of those measurements mean. All I know is that I was in a two-
story apartment building in the Fox Valley mall area when I lost my balance without
warning and that one is not even on this USGS chart.14
To bring back the benefits of tiered storage, we, as IT professionals, must create online
systems that accept user input and send the results expecting a delayed response. We can
develop these systems today. Some financial institutions already are. When you log in
around tax time and request all of your transactions for the year, they thank you and tell
you to check your email for a link to pull down the report when it is completed.
These request-response systems are more secure than existing systems. Don’t believe
me? What does your existing trading system do? You log in, it shows you all of your
positions, allows you to do trades, etc., simply by knowing a username and password.
In addition to username and password entry, what if they sent confirmations to the
account’s email address, encrypted with an email password that could not match
your login password? What if you had to decrypt, then click the link to confirm? It
would certainly be difficult to pull big scams. Some companies are already using this
very transaction security. They will be able to save oceans of money when tiered storage
becomes available, and they won’t have to change their user interface.


Some debit and credit card companies are already offering these services. Many of you
will have seen commercials on television where the card company touts the ability for
you to set an approval amount. Transactions below that amount continue with low
security as normal. Transactions at or above that amount require you to respond to a text
message or click a link in an email to approve. I currently have one of my cards set to
$500. I have to physically authorize transactions of that amount or more via one of these
two methods.
Such systems are sometimes called hub and spoke systems. A great number of small
systems with dedicated external (usually Internet) interfaces are exposed to the evil
outside world. When a user issues some kind of request, the small system converts that
request from whatever inbound form to a fixed-length, fixed-field-width proprietary
internal message and places it on the designated message queue for the hub. No SQL
injection or other overflow technique is physically possible. The hub has no connection
to the Internet. It exists on an internal private network. The exposed systems have
multiple NICs (Network Interface Cards). They exist on multiple networks, but only the
message queue software has access to the internal network.
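The spoke-side conversion can be sketched in a few lines. Here is a minimal Python sketch, assuming an invented four-field record layout; the real layouts were proprietary, and the field names and widths here are purely illustrative:

```python
# Spoke-side sketch: turn an arbitrary inbound request into a fixed-length,
# fixed-field-width message before it ever reaches the hub's queue. Because
# every field is truncated and padded to a fixed width, an over-long or
# SQL-laced value cannot overflow into the next field.
FIELDS = [("msg_type", 4), ("account", 10), ("amount", 12), ("memo", 30)]

def to_internal(request: dict) -> bytes:
    parts = []
    for name, width in FIELDS:
        value = str(request.get(name, ""))[:width]  # hard truncate
        parts.append(value.ljust(width))            # pad to exact width
    return "".join(parts).encode("ascii", "replace")

def from_internal(message: bytes) -> dict:
    text, out, pos = message.decode("ascii"), {}, 0
    for name, width in FIELDS:
        out[name] = text[pos:pos + width].rstrip()
        pos += width
    return out

# A hostile 500-character memo simply gets clipped; nothing "injects" anywhere.
msg = to_internal({"msg_type": "PAY", "account": "12345",
                   "amount": "19.99", "memo": "x" * 500})
```

In a real deployment, `msg` would be placed on the hub’s message queue over the internal network; the hub parses it with the same fixed layout and never sees raw Internet input.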
When you design hub and spoke architecture into your system, you are already set up to
incorporate some form of cold storage. You are also set up to institute a two-phase
commit security if you choose. By two-phase commit, I mean like having to physically
authorize a transaction via a separate channel/input stream like my previously
mentioned credit card.
The first level of tiered storage will be your storage array with power-saving settings
spinning down “idle” drives. Your second level will be part of those long-run
transactions like that annual report. Most banks archive transactions monthly, only
keeping so many months online. The rest can be in cold storage. The job that runs the
report can even issue the mount requests.
You will notice I have yet to cover how tiered storage figures into an off-site backup
strategy. Most companies I encounter these days with a large shared disk subsystem
don’t even have an off-site backup strategy. When the basement containing their
computer room floods, that is it—they lose everything.

I have seen shops diligently make backups then carry them to a different room in the
same basement until it was time to recycle them. I kid you not. MBAs have learned
nothing from The Great Chicago Flood,15 Hurricane Sandy,16 or insert your favorite
natural/manmade disaster here.
To sum up, it doesn’t matter how cheap disk drives get. Every drive takes electricity and
has to have some level of cooling. The more you put on any single drive, the more you
lose when that drive suffers from splattered platter syndrome. With the push to go green,
tiered storage will once again become the viable option it has always been.
Solid State Disks (SSD) don’t have platters, but they still fail for various reasons, and
each storage location has a physical limit to the number of times it can be written.
Besides, I’ve yet to see an SSD that would remain functional under thirty feet of water
for a significant length of time.
Data usage has a natural frequency to it. Learn to make your application design and
physical storage media take advantage of this access frequency. Not everything needs a
millisecond response time.
One final note about cold storage. Doing it correctly really means cold. According to
research done in association with the Library of Congress,17 most low-cost CD-R and
DVD-R media starts failing in the 20–25-year range. It is believed the better quality
disks can achieve a 10–200-year lifespan with data intact. You can improve that lifespan
by up to 25 times if you store them at 41 degrees Fahrenheit.
What we in the IT, and by extension the archival world, need is a DVD jukebox that can
reliably store media at 41 degrees Fahrenheit for seven centuries. (See “How Do You
Backup the Human Race” later in this book.)

How Was Your First Day?

There is nothing like your first day on the job. The excitement. The terror. Few places in
normal office life can contain more of both than your first day in computer operations
for a large company running real computers. We are not talking about some wannabe
company with a bunch of x86-based racks but real computers with scheduled jobs,
backups, and yes, even some report distribution. Despite what you may believe when it
comes to the “paperless office” and massive security risk of passing around PDF files
via email, the law still and always will require many things to be recorded on actual
paper. We can store and preserve paper for hundreds of years. Digital storage, not so
much.

Batch is still the realm of big business. Those pretty Web pages that generate orders and
send you back a real-time invoice still feed hundreds, sometimes thousands of batch
systems. When you log in to your favorite auto parts website and order an oil filter, in
many cases, that inventory transaction doesn’t go back to corporate immediately. The in-
store system accumulates all sales, then at the end of business either uploads a bulk file
of current transactions so corporate computers can run inventory forecasting to calculate
a daily stock order, or the in-store system runs a batch job to calculate the stock order.
Either way, someone at the store (if it is a franchise-owned location; corporate-owned
stores generally do not get a choice) approves the stock order with or without making
changes,
and it gets sent back to corporate to be pulled from distribution centers that evening or
the following day. Warehouse/distribution-center pulls are done in batch as well. Why?
Because of truck loading. You want to schedule a truck so it can deliver the most with
the least amount of both travel and material handling. This means you need to have the
truck loading personnel put the order for the first stop on the truck last, so it is the first
off the truck (LIFO – Last In First Out). Few things cause more damaged product than
having to unload a truck then reload it at each stop.
A truckload is a natural batch. You hand all of the system-generated pull requests to
warehouse pickers and assign a staging location. Pickers pull and stage the orders in
last-to-first order. Most orders require plastic wrap and pallet stacking, so forklifts can
load the truck.
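The loading rule is just a stack. A toy Python sketch with made-up stop names:

```python
# Toy sketch of LIFO truck loading: stage orders last-stop-first so the
# first stop's pallets go on last and come off first.
route = ["Store A", "Store B", "Store C"]  # delivery order, first stop first

truck = []
for stop in reversed(route):   # pickers stage the last stop first
    truck.append(stop)         # load onto the truck (a stack)

unloaded = []
while truck:
    unloaded.append(truck.pop())  # each stop takes what's on top

# Deliveries come off in route order, with no unload-and-reload at any stop.
```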

Each team repeats this batched process for each truckload. Picking and loading have to
be done in a manner that allows the truck to arrive at the first stop as the retail location
opens. It does no good to show up at 2 am when nobody is there to help unload unless
the driver wants to get a few hours’ sleep.
Things as ordinary as payroll are also batch operations. At the end of each pay period,
the payroll system gathers up all of the hours for hourly people and all of the attendance/
sick/vacation days for salaried people. It generates both your payroll check and deposits
for the withholding tax to be paid to various government agencies. Even if you and the
government agencies are all using direct deposit, the batch operation still happens.
Think about the orchestra of activity that occurs to get that oil filter to the store so you
can buy it.
Do you think the warehouse wants to order them one at a time from the supplier? No.
Suppliers give discounts for full truckload orders.
Do you think the manufacturer wants to find out about sales one at a time? No. They are
scheduling production weeks, sometimes months out. They need a batch file and
forecasting to tell them they need to make three million of that particular filter for the
next month or quarter.
Companies supplying the raw materials don’t want to ship enough for one filter at a
time. All of this activity occurs weeks, sometimes months in advance of that filter
arriving at the store.
This is also why one model of oil filter tends to fit quite a few models and years of
vehicles. The last thing any automobile manufacturer wants is a unique oil filter. While
it may look good on paper to be able to charge whatever you want for the filter,
consumers will balk, and vehicle sales will plummet. You can’t get an oil change for
around $20 if the filter costs $45 wholesale and is difficult to obtain.
So it is in the world of real business. Generating product for shelves requires real
computers running real batch jobs. Traditionally, that means you also need a computer
operations staff to schedule, run, and monitor the jobs, ensuring all complete without
error.

It is at this point we join the story of our hapless computer operator on their first day at
work. Typically, someone who goes into computer operations is someone who is
currently enrolled in or has completed an IT degree at some form of a higher-learning
institution. Some people find they like computers but not the long hours of writing code,
so they stay in operations. These people are to be cherished because they know what
each job means to each part of the business. They are also the ones waking support
people at 2 am when something falls over, many times catching grief for things that are
not their fault. Been there. Done that. I’ve even had to wake them up when it was my
fault. Not a good feeling.
While I have never seen the actual IBM mainframe control screen for the 3270-type
terminal, I have seen variations of it at various client sites. It was a classic CICS green
screen with a single-character underlined field running down the left margin and some
job information on the rest of the line. Most importantly, it contains the job name along
with some state information.
The only field you can do entry in is this underline column, usually one character wide.
You can arrow or return key your way down the screen to get to the line you want then
enter one of a list of job commands. (3270 terminals have a separate tiny ENTER key
down at their base to transmit data from the CICS screen back to the controlling
computer.)
One of the things the Blue Warriors always joke about is having to “pee on the job to
kill it” because “P” was what you entered to kill something. S was already used for
Start; P was the only letter in STOP that had not already been used for a single-
character command. Even people who knew all of this had fun with the “pee on it”
joke.
On this particular day, some application was having trouble, and it was blocking access
to the IMS database. This database was hierarchical, not relational. It was one of the first
high-availability, large-scale data-storage mechanisms providing extensive transaction
processing capabilities for its day.

The downside of a hierarchical database is that all information is treed off of a base
record. The base record typically had some often-used fields, and then it had a gaggle of
index fields to the lesser-used information contained in other records. This meant if one
application locked a base record for updating, nothing else could update it or anything
under it until the lock was released. When everything went right with high volume,
high-speed transactions, this delay was not noticeable. When some application locked a
base record then forgot to unlock it due to a programming error, it wouldn’t take long
before other applications were hanging and timing out.
Such was the situation for our computer operator (call him Joe) on the first day. I was
standing in a cubicle with several others while the conference call happened. Developers
identified the application that was causing the problem and told the operator he had to
“pee on it.”
Joe was not alone in the computer center. He asked multiple times, “Is this the right
job?” The only problem was his finger was on the correct line, but his cursor was on a
different line than his finger. He dutifully “peed on it” and hit the funky ENTER key.
Things started crashing at a furious pace. The line he arrowed to was the IMS database
engine. The mainframe had to be IPLed (Initial Program Load—rebooted for you of the
PC-only world) to get it back to a usable state. All of production was down for the rather
lengthy IPL process.
Yes, I should probably explain the IPL (Initial Program Load) concept a bit more. I’m
from a DEC midrange world, but the concept is the same. These classes of machines
don’t boot directly into the operating system. VAX systems booted to a firmware level
with a >>> command line prompt on the system console. There you could type
something like:
>>>B DUA0:

and the machine would look on that device for a bootable operating system. There was a
rotatable key on the front of the 11/750, which could also choose targets like the TU-58
console tape drive (which took close to an hour to load.)

On IBM 360 machines, there were three hexadecimal rotary switches into which the
operator placed the unit address of the device targeted by the IPL. When the Blue Button
was pushed, the computer started the IPL sequence and read the IPL data from the
targeted device and executed the code.
During the period up through much of the 1970s, we (the rest of the world) had to have
knobs, switches, and keys to control boot targets because computers had little to no
built-in firmware. The keys were really rotating switches where you could remove the
key so nobody could change the setting. The entire IBM 360 line of computers was
firmware implemented—all save the model 75, which was a hardware 360. The
firmware was updatable but the cost and difficulty of updating precluded its use for
things that could easily be done by human operators, like IPL device selection. In later
years, four digits were required for selecting the IPL device. This became doable via
firmware.
During the 1980s and 1990s, firmware got dramatically better. There were full-blown
diagnostics and help built into the system console. For those only familiar with PCs, you
can think of the progression: from having to set DIP switches, to a rudimentary BIOS
screen, to a BIOS screen with pretty colors and mouse support, to the UEFI you
have today.
While this concept of selecting a boot target may seem foreign to you, it was created out
of a need that still exists. You could manually choose a new boot device to test out a
different OS or newer version of your current OS without trashing the original boot
media. Besides, DEC machines were designed to stay up and running for decades. The
only time they were supposed to shut down was for hardware maintenance, diagnostics,
or to install a new OS.
A PC was designed for home (non-technical) users. They always arrive from the factory
configured to boot from the first physical media. If you are a technical user, you can
configure them to boot from other devices, but the average home user is never going to
do that. The average home user is never going to “test” a new operating system installed
on a different disk drive. You will use your one computer with its preinstalled OS until it
either no longer works, or you decide it is time for a new computer.

A real business cannot do that. You cannot just connect to the Internet and allow some
vendor or support service to just push out some rolling release to you because there is a
distinct possibility the “update” will either not allow your system to boot or cause some
other catastrophe.
Business-level computer systems require the ability to boot from an optically isolated
boot device. This allows both operations and systems managers to make sure everything
your company needs still works.
On the plus side, Joe did get to learn how to IPL a mainframe! He learned how to rotate
the switches assigning the IPL target device and even got to push the “Blue Button.”
Organic Systems Development

Every bad idea comes around at least twice in your life if you live long enough. AGILE
is on at least its third time around during my lifetime. Organic Systems Development
was the first name I ever knew it by. Later it was Rapid Application Development
followed by X-treme Programming and now AGILE. They are all synonyms for bad,
and now I read people are trying to resurrect Organic Systems Development.
Please allow me to quote the Cornell University Department of Computer Science:
Stepwise refinement refers to the progressive refinement in small steps of a
program specification into a program.

The term stepwise refinement was used first in the paper titled Program
Development by Stepwise Refinement by Niklaus Wirth, the author of the
programming language Pascal and other major contributions to software design
and software engineering, in the Communications of the ACM, Vol. 14 (4),
1971, pp. 221–227.

Wirth said, "It is here considered as a sequence of design decisions concerning

the decomposition of tasks into sub-tasks and of data into data structures."

Note the year, 1971. At the time, all we really had were organic systems. Many
companies with computer systems didn’t even have centralized IT departments. Each
major department got its own little budget, hired its own developers, and created its own
systems, thus organically creating its own little data silos. Everything was running on
the same mainframe, and no system talked to any other system.
Accounting got the computer brought in so it could develop accounting systems and run
more efficiently. It was a big expense, and they handled the money.


When the order processing department wanted to change “please allow 6–8 weeks for
delivery” to “please allow 2–3 weeks for delivery,” management bought into the idea
but wasn’t going to buy another computer. They were granted a budget increase, bought
some more storage and resources for the central computer, then bought terminals, and
hired their own developers. The order processing system didn’t talk to accounting.
The same thing happened with inventory processing, truck loading, warehouse
management, and on and on. Each department got its own IT budget and developed
something for the central computer organically without any grand plan.
Back in the late 1960s, Wirth and many others saw this as a fool's errand. This led to
some deep thought and the 1971 article quoted previously. The ultimate result of this
deep thought was the concept of the Four Holy Documents and what is known as the
Waterfall Method and Software Engineering, the only true form of software development.
Rather than systems developed from a specification written on a napkin at lunch and
hacked out in one cubicle for one department by one developer, the entire biological
entity, known as “the business,” had to be taken into account. It would decide if the
system was to be created and how that system would integrate into all of the other
systems for the betterment of “the business.” We would finally put an end to data silos
and one-off application development for a single user or department.
Later in “We Swung the Hammer Too Much,” you will read about the one-off donor
mailing list system, developed organically in Lotus Approach, that grew beyond
control. Suddenly a system written and maintained by one developer,
existing on one PC, became a revenue engine generating millions of dollars for an
organization with its own building in the Chicago Loop. It didn’t talk to any other
business system, and the developer walked out the door.
RAD was the second iteration of Organic Systems Development. That donor
management system was one of millions of systems developed around the world using
RAD tools that left companies high and dry when it was time to set sail. These systems
weren’t integrated into “the business.” They were hacked out in pursuit of a fast buck
then became a massive liability.

While I don’t have stats, I have to believe a large number of corporations went under
because of these systems. They couldn’t scale because scaling was never a thought or
requirement during development. In fact, there were no requirements. Just one person
saying, “As a user, it would be nice if we could do this,” and a few hours later, a limited
functionality ticking time bomb was delivered.
I had the “honor” of sweeping up behind too many of these catastrophes to count over
my career. Many were hacked out using various Xbase-type RAD tools without any real
design thought put into them. Yeah, for a hundred records, you could make them look
really nice and run reasonably fast. When someone tried to use them, they found out the
data couldn’t possibly fit on the 40 MEG hard drives that had to have two partitions
because FAT16 had a 32MiB size limit.19
The problem was departments could find 10–50K in their budget to bring in a lone-wolf
contractor and one RAD tool to bang out a “little system,” and they didn’t have to wait
for IT to get around to developing their system correctly. By the time it was discovered,
IT was left trying to reverse engineer a system using correct design methodologies when
all the people involved in the creation of said system were gone. This usually resulted in
a reorganization to weed out managers who were allowing these systems to grow like
mold in a flood-damaged house.
With AGILE, it is all happening again. Business systems live a long time, sometimes over 30 years.
They must be designed to last that long and be maintainable over that lifespan but, with
AGILE, they are not. They don’t even get properly completed.
How do you ethically define completion with AGILE? Whatever you delivered at the
end of a sprint? That is sad. Unless you created the Four Holy Documents (discussed
later) before writing the first line of code, you cannot hope to develop a system that will
last five years, let alone 30.


Figure 12: Wood rolltop 3 1/2 inch floppy storage unit

Software as a Competitive Advantage

During the terminal days through the tiered storage days, up until the Enron meltdown,
software was rightfully viewed as a competitive advantage. In the early days, it was
undeniably true. This view was so entrenched that companies were allowed to get
valuations from accounting firms and book that amount as an asset.

People who’ve never been more than arm’s length from their idiot phone will have
trouble comprehending the previous paragraph. To you, software is everywhere, and
much of it is free. You even pirate the non-free stuff. During the 1970s through the
mid-1980s, software was both rare and expensive. Companies spent years developing
business systems that fit their business model. There were no cookie-cutter software
packages because every business had management intelligent enough to understand its
requirements were unique.

Figure 13: ID 17670765 © Juan Moyano |
Let me paint a picture, so you understand. Sears™ started in the 1880s and soon became
the world’s largest store. It even started a radio station in April 1924, in Chicago, with
the call sign WLS for World’s Largest Store. (The Chicago Tribune created WGN radio
and television—World’s Greatest Newspaper—making the Chicago Cubs the most
widely followed baseball team in America. The Chicago Federation of Labor launched
WCFL, and one of the early independent radio stations was WIND.)
While building and running a radio station may sound stupid in today’s Internet-based
world, this also was a competitive advantage. The only electronic entertainment homes
had (if they had any) was a radio. You could broadcast your own advertisements for free
and make money running commercials for businesses that weren’t your competitor.

The competitive advantage radio provided a company didn’t begin to diminish until the
1950s when an improved form of black-and-white television became popular.
While all of this was going on, there remained one constant. At the bottom of every
paper order form for Sears™ and every other catalog vendor of the day was the phrase
Please allow 6–8 weeks for delivery

You wrote down your order and totaled it all up, including tax and shipping. You wrote a
check, then you put your check and order form into an envelope and mailed it to the
vendor of your choice. Most required you to use your own stamp.
Some days later, your envelope arrived at the catalog center. A clerk opened your
envelope, verified your calculations, verified your check matched that amount, then
recorded your order and sent the check to people who made bank deposits. Your order
was stamped PAID and handed off to a sourcing agent.
The sourcing agent then had to bust up your order, spreading it across each warehouse
required for fulfillment. They had to fill out, by hand, individual internal order forms for
each warehouse. Those forms got bundled up at day’s end and were either driven or
physically mailed to each warehouse.
Back then, you typically didn’t stock the same item in more than one warehouse. This
made sourcing much easier. When companies started having regional warehouses with
overlapping inventory, that is when the process got complicated. Warehouse 12 would
have to mail a request to Warehouse 13 to obtain out-of-stock items that had long lead
times. After the invention of the telephone (no, it didn’t always exist), they would call
the other warehouse to verify it still had the item. Some companies would allow a phone
call to be all that was needed to initiate an inventory transfer.
At any rate, paper forms were sent down to the warehouse floor, and they would be
assigned to pickers. The pickers would wander the aisles, climb the ladders, and pick the
merchandise for your order. They would carry it to the front of the warehouse along with
the paper form they used. A packer would then verify all items picked, and if not
available, they would write you a back-order ticket and pack a box, then handwrite a
label, and slide it down to shipping.

I deliberately left out the great big paper ledgers keeping track of where each item was
supposed to be in the warehouse and how many were thought to be on hand. Most
companies would have people come into a warehouse one weekend a month to take
“physical inventory.” People literally walked with pencil and paper down each aisle,
noting what item was in what storage bin and how many there were. Items in the wrong
bin were set out on the floor so they could be properly re-binned later.
Now a company decides to spend around a million dollars on one of those computers
using punched cards and paper tape. The first system most of them wrote was an
accounting system. When it was done, hundreds of accounting clerks could either be
reassigned or let go. With all of that paper, you had to have a lot of green visors walking
the floor.
One of the next systems written was order entry. This eliminated more labor. In
particular, it got rid of the sourcing job. Now the paper forms for each portion of an
order that each warehouse had to fill were printed automatically once the order was entered.
Taking inventory became a much less frequent thing once they expanded the order
processing software to also have an entry screen for physical inventory. As each order
was entered, the inventory would be allocated, then fully reduced once it had been shipped.
It shouldn’t take much imagination to believe each time a new system went online, it
saved the company tons of money in both material and personnel costs. I didn’t even get
into full inventory management where historical sales and order forecasting led to
placing orders with suppliers efficiently.
These systems also helped management calculate the actual cost of inventory, especially
trapped inventory. For those of you not in the order processing world, “trapped
inventory” is a catch-all phrase for inventory that did not sell and probably won’t. Think
of last year’s model or clothing style. If you are selling automotive parts, there could
have been an engineering change forced by the Highway Safety Commission, and the
part you have hundreds of is no longer legal to sell. Let’s not forget about Christmas
candy when Christmas is over.

No matter how the inventory got trapped, management has to identify it, then do
something with it. In the case of last year’s model, that is generally what drove clearance
sales. Yes, they used to happen after the fact when we didn’t have computer systems to
manage the warehouses.
Pretty soon, it became obvious to both business and government that this computer
system plus its software should be booked as an asset well above its acquisition cost.
You who live in a world of free two-day (sometimes same-day) delivery probably find it
impossible to believe, but the first company to computerize its order flow to the point of printing
Please allow up to 3 weeks for delivery

on their order form trumpeted that fact. They had a competitive advantage that would let
them charge just a little bit more. The automation cut down labor costs substantially as
well, so the profit margins got better. Computer software allowed many warehouses to
stock the same items and back-ordered items to be identified at the time of order entry
so notifications could be printed and mailed.
Here is a real-life example for you.
During the early days of my career, I worked at a DEC (Digital Equipment Corporation)
VAR (Value Added Reseller). We had a customizable ERP (Enterprise Resource
Planning) system we sold to various companies. Only once during my time there did a
company buy an off-the-shelf version of the software without any modifications.
Everybody wanted it customized to match their business.
One customer was a pipe supply company. They sold pipe and plumbing supplies to
contractors. Part of their business model was cumulative quantity price breaks. They had
been keeping track of all this stuff by hand. As business started to increase, that became
unmanageable.
You’ve all seen online or paper product listings where 1–5 of something is one price, 6–
10 another, etc. Cumulative quantity pricing worked by item over a calendar year. In
January, everybody paid pretty close to list for everything. As you started buying more
and more stuff throughout the year, the price for each piece got cheaper.
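The mechanism can be sketched roughly like this. The breakpoints and prices below are invented for illustration, and this is only one reading of the model (each piece priced at the tier its position in the year-to-date total falls into); the real system's tiers were negotiated item by item and customer by customer:

```python
# Hypothetical sketch of cumulative quantity pricing. Tier numbers are
# made up; the real ERP kept per-customer, per-item breakpoints.
from bisect import bisect_right

# (minimum cumulative quantity, unit price) -- illustrative only
TIERS = [(0, 10.00), (100, 9.00), (500, 8.25), (1000, 7.50)]

def unit_price(cumulative_qty):
    """Price per piece once the year-to-date total has reached cumulative_qty."""
    thresholds = [minimum for minimum, _ in TIERS]
    return TIERS[bisect_right(thresholds, cumulative_qty) - 1][1]

def price_order(ytd_qty, order_qty):
    """Charge each piece at the tier it lands in as the yearly total climbs.

    Returns (new year-to-date quantity, extended price for this order).
    """
    total = 0.0
    for n in range(ytd_qty, ytd_qty + order_qty):
        total += unit_price(n)
    return ytd_qty + order_qty, round(total, 2)
```

An order that straddles a breakpoint gets a blended price: a customer at 90 pieces year-to-date who buys 20 more pays full price on the first 10 and the discounted price on the rest. In January the counters reset and everybody is back near list.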

Back in the day, this was a unique pricing model for a construction supply house. Word
got around, and business increased to the point they physically couldn’t keep track of it
on an item-by-item basis. This caused tension with some customers.
Eventually, they came to us and said, “This is how we do business.” Certain people went
off into dark little rooms and estimated all of the changes needed to our base software,
haggling occurred, and a deal was struck. Off to the races, we went modifying our
software to implement cumulative quantity discounts on an item-by-item, customer-by-
customer basis.
One of the haggle points was customer-specific discounts vs. customer-specific pricing.
The former could be justified in a court of law, but the latter could get you into a world
of hurt.
The reason I remember this project so clearly wasn’t just the unique requirements but
the fact that the owner of the company was in our office almost every day. When someone
thought they had a piece done, and it had a place on a screen, he sat down in their little
development environment to beat on it. We had test teams that came in when things
were “done” for most companies. We never had the end customer coming in and testing
each little piece as it got done. By the time we delivered, I think he knew more about our
software than the sales team.
Now, this plumbing supply company correctly applied prices through the breakpoints on
each order automatically. It could track inventory and generate reports based on sales
volume when stock orders needed to go out and how much they needed. The plumbing
supply company trumpeted this new system to all of the contractors, and they got a lot
of new business. Contractors would bid their jobs quoting list price for materials and
make a lot more money when the client waited to pull the trigger, because by then the
contractor’s cumulative discounts had deepened while the bid still reflected list price.
Another financial benefit of this system was that it encouraged year-end hoarding.
Construction companies that had a bit of storage space and were bidding on contracts
that would start after the first of the year would buy the supplies in December, leaving
the supply company cash-rich and inventory-shy on December 31.

Digital Equipment Corporation sold midrange computers. They didn’t have the multi-
million dollar price tags of IBM hardware. While I’m not privy to all of the hardware
and software license sales, I never heard of anyone spending over a quarter-million
dollars with us on any one deal.
You must understand the magnitude of my prior sentence. Please read it again.
Businesses of every size were buying these computer systems, getting customized
software written for them, and spending under a million dollars. Sales, cash flow—
whatever you wish to call it—then went up, in many cases by tens of millions. This is
the time and business condition that created the practice of having software packages
receive a valuation by an external entity and booking that as an asset.
Computers were rare. If software let you do something your competition couldn’t, it was
a massive business advantage. It had value; therefore, it should have its own valuation,
distinct from the acquisition cost.
Of course, once companies found out about this, management became relentless. The
dark side of this ability, besides the Enron-style accounting fraud, was management
pushing employees to constantly do more.
Today you think nothing about taking your prescription into a national chain pharmacy
and being able to get a refill no matter where you are on vacation. That feature happened
in my lifetime. It also had a downside, according to urban legend.
Management went to the board who went to the investors and advertised to the public,
“On this date, you will be able to take your prescription into any of our store locations
and get it filled at any other.”
One tiny problem. Nobody talked to IT.
I don’t tell you this story to frighten you away from IT. Yes, there will be projects
(plural) throughout your career where coworkers keel over. In large part, this will be due
to diploma mills handing out MBAs to anyone whose check clears. I tell you this so you
understand just how coveted these systems were.

That pharmacy became a trusted household name. For the first time, families with one or
more members on some kind of prescription could drop a prescription off at their local
pharmacy chain location then take off on summer vacation knowing they could get that
prescription refilled any day of the week at a national chain location near their chosen
vacation spot. It was correct for them to pursue the development of this system. It was
incorrect to announce a live date before the system was even doing well in testing.
People born after the Internet really need to think about this. There was a true quality of
life change that happened when that system went online. To this day, some medications
cannot be dispensed in 7-day or larger quantities. Families with one or more individuals
having a chronic condition like asthma or diabetes could go on two-week vacations for
the first time in years.
Some people have intermittent reoccurring needs. Prescription allergy medicine would
be a good example. Your family doctor back home has all of your records and is familiar
with your child’s allergy issue. You call your doctor’s office while on vacation. They
phone/fax/walk a prescription over to the pharmacy back home, and a couple of hours
later, it is ready to pick up where you are.
Today we have too many MBAs who, quite frankly, are a waste of oxygen. They don’t
see software as an asset because they don’t understand any business. They didn’t start in
the mailroom or a warehouse. They want every business to be like that mythical generic
business they studied in school, and to make it appear that way. They want some off-the-
shelf software package advertising “turn the knob” configuration. They also want it to
have a “dashboard” so they can turn these knobs and see stuff change on the screen.
Nowadays, we don’t have businesses with bold and unique business models. Most run
some kind of canned software, so they are just as worthless as their competitors. Before
you throw Amazon™ and Facebook™ out as examples, consider all of the retail chains
and newspapers that went out of business because management was complacent and did
not bring in bold and unique ideas.
As I write this, we are standing at a threshold where software will become an asset
again. Corporate America has sunk about as low as it can go with off-the-shelf software
and “offshore” labor. The diploma mills have cranked out MBA degrees for far too
many people who have no ability to manage. They know a few buzzwords to say in

meetings and are always chanting “cut costs, cut costs, cut costs,” sounding like either a
henhouse or a flock of peacocks depending on your geographical upbringing.
Cutting is the only way they know to increase profits. Real managers have real vision.
They learn the business. They study the competition and finally come up with an
advantage to exploit. You don’t get management like that from an MBA diploma mill.
They have to come up through the ranks.
All management eventually devolves to the “cut costs” mantra level. It’s the MBA
equivalent to sitting on the couch with the television remote and a cold beer. While the
leisure time passes, the golden parachute gets closer. The vast majority won’t even admit
they are just looking for the retirement package. Instead, they will walk around heaping
praise on one another for being such a phenomenal management team.
Success breeds stupidity. Nothing fills suits with I.Q.s of 40 faster than a bit of success.
Management wants to be told just how great it is, and stupid people are willing to do
that for a decent paycheck.
Don’t believe me? Find some interviews with Chicago-area reporters who were around
during the heyday of Sears, Roebuck and Company, back when the Sears™ catalog
ruled the land, and they built the Sears Tower. You will find a good number of them who
interviewed Sears™ management and asked one simple question: “Who is your
competition?” They were all shocked when management responded, “Nobody!”
Here it is 2019, and Sears™ is selling off everything, possibly even the sinks from the
employee restrooms, trying to avoid one final trip through bankruptcy.
Management cared so much about stroking its own ego and so little about its
customers that it created a subculture when it closed down the catalog division. The
twenty- and thirty-somethings of the day decided one couldn’t be called a “True
Chicagoan” unless they went downtown for a night of drinking then peed on the Sears
Tower before going home.
Laugh all you want; it was real. Probably more real in the suburbs than the city itself,
but it existed. It’s a good thing that culture existed before the Internet; otherwise, we
would have had travel agents marketing Sears Tower peeing vacations.

Once the public turns against you like that, the end is approaching. Keep that in mind.
Sears™ had software and systems that allowed it to completely dominate the catalog
industry for years. They had an insurmountable competitive advantage and management
peed it down the drain, choosing to focus on their egos. This would be where the “I peed
on the Sears Tower club” got its idea. They were “inspired” by upper management.
I wonder what the young ones will come up with when they finally turn on Amazon™.
Don’t think for a minute that Amazon™ won’t eventually go under. Amazon™ is now
where Sears was in the 1960s. Like Sears, management will eventually start hiring
lower-level managers who worship and kiss their ass all day long.
It is a sad reality of human nature. Nobody wants to hear they are the worst manager
ever. People who tell them that get transferred or fired. Eventually, the founder retires,
and the culture changes. New management wants bigger stock options and larger
bonuses. Coming up with viable new ideas is a lot of work. Not everybody is capable of
it. But if your checks clear the bank, you can get an MBA.

Joe’s Pretty Good Consultants

If we don’t know it, you don’t need it.

Not the best, not the worst, we’re pretty good.