
Technical Note-MCWG.2.Oct.1998.

Dr. Faust's Internet Dilemma
By Ed Gerck*
(c) 1998
The many-sided story of Dr. Faust has in Johann Wolfgang von Goethe
one of its several writers, and certainly the most influential [1].
In the story, Dr. Faust wants to gather everything he desires,
essentially looking for what we would today call security -- and the
deal with Mephistopheles gave him access to it, but at the cost of
Faust's soul. Now, given that experience, I recall Dr. Faust so that
he may give us warnings and hints on how to solve Internet security
problems without giving our souls to the Devil (please insert your
favorite Devils here: _____ from the list of TTPs, key-escrow, CAs,
notaries, Big Brother, data mining, data warehouses, spam, useless
costs, renewal costs, hackers, viruses, fraudsters, "innovative"
products, etc.).
This message also summarizes some discussions on this general theme
that I had in e-carm, cert-talk, mcg-talk, sci.crypt, dig-sig,
ssl-talk, spki and other lists. The commentators were too many to
cite and thank by name, but the cited lists' archives are available
for context and references, if needed. The text here is my own,
except as noted.
Certification contains a paradox between privacy and security [2].
Basically, if personal or commercial data are withheld for privacy
reasons, then the other party cannot ascertain the correctness and
the effectiveness of the transaction data. Third-party certification
systems such as PKIX/X.509/CA make the paradox stronger, by
introducing a third element into a binary dialogue.
Of course, we need security now. But, also of course, we need to
protect privacy, because privacy once lost -- is lost for life. This
is usually forgotten not only by CAs that demand your SSN or any
other private data that has nothing to do with a cryptographic
certificate, but also by proponents of "innovative" biometric
products. They all want to offer you a bargain trade: security now
versus your own self forever.
But, as we can read in Dr. Faust, "security now versus your soul
forever" is not a good deal.
Beginning with the issue of biometrics: since biometric data cannot
be revoked in case of theft or loss, and cannot be recalled if the
proverbial truck hits the keyholder, one must be very careful with
"innovative" biometric products that try to sell biometrics as a
"self-secure" private key. One should restrict biometrics much more
than vendors usually concede, in order not to repeat Dr. Faust's
choice. In a recent exchange, Nicholas Bohm also agreed that
biometrics can indeed be useful:
As to biometrics, I wouldn't care to give them away, even if it
isn't quite my immortal soul but just my mortal bank balance that
goes with them. Where they might be useful is as a point of control
for access to my tamper-resistant smartcard which never exports its
key but can import and sign however big a document I want to sign.
Some combination of voice and iris recognition as a requirement
before the card will run would be reassuring without giving too much
away.
but not in a Faustian choice.
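To make Bohm's point concrete, here is a minimal sketch of the
architecture he describes, in Python. It is my own illustration, not
any vendor's product: the private key is created inside the
tamper-resistant card and never exported, and the biometric check
merely unlocks the card's willingness to sign the hash of whatever
document is presented to it. The class name and the biometric_match
routine are assumptions for illustration only; a real card would use
an asymmetric signature scheme rather than the HMAC stand-in below.

import hashlib
import hmac  # stand-in; a real card would use an asymmetric signature scheme

class TamperResistantCard:
    """Toy model of a smartcard: the key is created inside and never leaves."""
    def __init__(self):
        self._key = bytes(32)   # stand-in for an on-card private key; never exported
        self._unlocked = False

    def unlock(self, live_sample, enrolled_template):
        # The biometric only gates access to the card; it is NOT the key itself.
        if biometric_match(live_sample, enrolled_template):
            self._unlocked = True

    def sign(self, document: bytes) -> bytes:
        if not self._unlocked:
            raise PermissionError("card locked: biometric check required")
        digest = hashlib.sha256(document).digest()
        # The signature is computed on-card; only the signature leaves the card.
        return hmac.new(self._key, digest, hashlib.sha256).digest()

def biometric_match(sample, template):
    # Hypothetical matcher (voice plus iris, in Bohm's example); details omitted.
    return sample == template

If the card is lost, the keyholder revokes the card's key; the
biometric itself was never given away and never needs revocation.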
The protection of privacy is thus a MUST, where I make the
distinction between "protection" and "security" [2]. Mainly:
- Protection is objective and depends only on self-trust; you can
have 100% protection; protection can be added on.
- Security is subjective and depends on overly-determined
intersubjective trust relationships; you cannot have 100% security;
security cannot be added on.
For example, you can protect your private data that may reside in your
smart-card in several ways -- including 100% physical control of it
and immediate verification of loss/theft. The same cannot be
guaranteed regarding data that resides in a computer linked to the
Internet, that sits at your table while you are away -- even if the
computer is turned off and kept in a closed room.
Thus, it is interesting to consider how much one can decrease the
amount of private data one needs to carry, or must give away, for
e-commerce or commerce in general. Clearly, the less private data you
need to carry with you or give away, the safer you are.
This has been discussed at length here, including the common
misconception that credit cards do not need user identification when
used in commerce or e-commerce deals. As summarized in [3]:
Regarding cyber-world misconceptions, some think that by escaping
names one can escape reality. Others think that credit-card deals
would not need names or any real-life id, just assets. Surely, the
merchant gets paid regardless, even if you use a false name. But
this is not the end of id fraud. The bank still goes after the
money... and uses the law against fraudulent practices to enforce the
cardholder agreement, or criminal statutes. If Mr. X uses his wife's
credit-card, Mr. X is technically committing id fraud, and
wire-fraud. Of course it works most of the time... But when it does
not, and someone comes enforcing, someone will ask: did you, Mr. X,
use Mrs. X's credit-card, and thereby represent yourself as Mrs. X?
Some claim: Oh, but this is a brave new world! It's cyber-world!
New life! However, history has taught us over and over again that
the new has an uncanny resemblance to the old...
However, as Nicholas Bohm commented in [4]:
In the great majority of private transactions neither party requires
knowledge of the other's identity. The obvious example is a cash
transaction in a shop.
but, also recognizing that even though *identification* is oftentimes
not needed in personal transactions, there are several
*authentication* acts that occur -- even though the majority of
people are unaware of their need, mechanisms and safeguards [1],[4]:
27 A number of issues raised in this paper can be illustrated by
referring again to the first example of the paper, a cash
transaction in a shop. If the shop owner does not want to know the
identity of the customer, then at least he must authenticate the
cash used ("No, sir, it's ten pounds and not five pounds you have to
pay"). He may use a visual authentication of value and banknote, or
he could use a UV light, a metal-strip detector, etc., to
authenticate the bill. Sometimes the state imposes a burden of
authentication on the private parties to a transaction, as in the
case of sales of alcohol, tobacco or firearms. In such cases the
seller may also need to authenticate some attribute of the customer.
This may be his age ("No, sir, this liquor may not be sold to
minors") or indeed his identity ("No sir, I must record the name,
address and social security number of all purchasers of firearms").
It is all a question of his acceptable risk versus his incurred
cost, since he is the party at risk (not the customer).
and it is also a question of the legal requirements (as stated
before) that may derive from third parties' rights (such as
copyrights) or from the lawful need to protect the public at large
(such as with digital certificates).
However, how anonymous can one become and still be able to
participate in e-commerce, or simply in information exchange?
The answer to this question may span a large technical and political
spectrum. However, let me approach it in the general sense -- which
will undoubtedly make the analysis partial to special cases, but I am
following the 80/20 rule (80% of all cases are covered by 20% of the
conditionals), which is also general ;-)
Of course, if I don't reveal who I am to *anyone*, then there is no
way that *anyone* can know that I really am who I claim to be. I am
then incommunicado from your side and my acts can only be verified by
myself -- not a general option for commerce.
When I affirm, in the first place, that "If I don't reveal who I am
to *anyone*", exactly as it is written, then (to put it another way)
no one can track me. This is the essence of anonymity. You are
incommunicado from the side of anyone who wants to contact you --
which is also the essence of anonymity's need. However, that does not
mean you may not have an identity! You can surely have one, but it is
just a "local name". A "local name" is only meaningful to you, but it
can be used, for example, in Usenet discussions or even in this
mailing list.
For example, if you go to hotmail.com and register yourself as "Rene
Descartes" <rdescartes@hotmail.com>, then you can send your
"cartesian" messages as much as you want; people will answer you, but
they can never reach you personally unless hotmail.com cooperates.
You may even be a computer simulation and people would not know, if
it is done well enough. You may be a pool of writers, taking turns at
answers. You may be a group, answering messages by joint decisions.
And yet, I could never track you nor any of your group if it is done
well enough (eg, anonymized mailers).
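As an aside, and purely as my own illustration (not something
hotmail.com or any mailing list offers), a "local name" need not even
be a human-readable handle: it can be, for example, the fingerprint
of a key pair that only you control, so that the name is meaningful
only locally and yet lets your correspondents link your messages to
one another. A minimal sketch, assuming the present-day Python
package "cryptography" is available:

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair; the private half stays with the pseudonymous writer.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Serialize the public key and hash it; the digest serves as a "local name".
pub_bytes = key.public_key().public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
local_name = hashlib.sha256(pub_bytes).hexdigest()[:16]
print("local name:", local_name)  # links messages without revealing a persona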

What I affirmed in second place is that commerce, however, needs to


track you down in order to be fair and useful.
Thus, we see the dialectic tension in Faust's Internet dilemma! You
need to protect your privacy, because privacy once lost is lost for
life. But, you also need your security -- as well as the vendor needs
his. This is the "security now versus your soul forever" dilemma. As
we can read in Dr. Faust's story, we know what we must not do. The
question however is how to trick the Devil to let you do what you
want -- without giving your soul in return.
In other words, you have to reconcile the FACT that preserving your
identity with various degrees of anonymity is a MUST for you --
notwithstanding the FACT that commerce can do little with it in
order to provide you with what you want.
A further need for commerce is non-repudiation -- that your legal
acts are traceable and perfectly formed, thus allowing them to be
enforced if needed, either by you or by the other party. This is the
reverse affirmation of acts purportedly done under an identity: you
see the result of persona --> (identity, authorization) and you want
to reverse it, to define which persona is paired to the (identity,
authorization) that you have. This leads to three forms of
non-repudiation: the syntactic (is the signature yours?), the
semantic (did you understand what you were signing?) and the trust
(did you yourself willfully sign it?) forms. Of course, one cannot
even begin to answer these three questions unless one can reach the
persona -- ie, in its privacy -- where one also needs to define the
persona's legal capacity at the time of signing and the way it had to
be legally expressed (eg, minors).
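To illustrate (with my own example) how narrow the first, syntactic
form is, the sketch below -- written with the present-day Python
package "cryptography", which of course postdates this note -- only
answers the question "does this signature verify under this public
key?". It says nothing about whether the keyholder understood the
document or willfully signed it, which is exactly why the semantic
and trust forms require reaching the persona.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# A key pair standing in for the purported signer.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"I agree to pay one hundred pounds."
signature = key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Syntactic check only: is the signature mathematically valid for this key?
try:
    key.public_key().verify(signature, document,
                            padding.PKCS1v15(), hashes.SHA256())
    print("syntactic: the signature verifies under the given public key")
except InvalidSignature:
    print("syntactic: the signature does NOT verify")
# Whether the signer understood the document (semantic) or signed it
# willfully (trust) cannot be decided by this computation.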
But not all our acts involve buying and selling large amounts, or
include a need for non-repudiation, especially in the information and
easy-travel age.
One possibility discussed last year (in this thread, mainly at
mcg-talk) was also the interplay between accountability, reputation
and a lesser form of identification that could be possible if one
relies on reputation to enforce accountability. One of the earliest
references to that possibility was by Moscaritolo and Hettinga [5];
it has also been recently discussed by Brendan MacMillan [6].
However, reputation also has its limits, as when the other side has
lots of power (eg, Nero, Genghis Khan, Hitler, monopolist companies,
military-force countries, nuclear weapons, etc.) or little or nothing
to lose (eg, crooks, fraudsters, rebels, terrorists, miscreants in
general).
As commented last year by Nicholas Bohm, within a mcg-talk
discussion that also involved 32946 (an anonymous participant in
mcg-talk, and in itself a further example of the privacy concerns we
are discussing), in some cases "reputation is too frail a vessel to
carry the load" [7]:
I think that the case of the deposit of $100 is interesting, but may
not be of wide consequence in principle. I would see it as a case
where an anonymous person appointed a known person as agent so as to
render himself accountable up to the specified limit. The holder of
the deposit must be a known person to enable the victim to enforce
payment of the compensation out of the deposit. I do not think the
fact that an anonymous person can expose himself voluntarily to
liability undermines the consistency or usefulness of the
definitions. Being unaccountable may be part of being anonymous, but
this is not necessary, as 324946 has ingeniously demonstrated.
Turning to the rant, the question relevant to our present purpose is
a social or political one. Will people be willing to trade with
anonymous entities on the basis that they cannot afford to default
because of the effect on their reputation? If so, then knowing an
identity in domain-space is not essential because resort to law
(which depends on that knowledge) is unnecessary. In that case we can
just all live together happily and electronically ever after. As that
slightly sour comment may suggest, I think the answer to the question
in the previous paragraph is "NO". Reputation is too frail a vessel
to carry the load. Look in the newsgroups for the complaints, by no
means always unjustified, against major software providers for
releasing shockingly defective software to gain a market lead over
opponents. Look how successful that tactic has been for some
traders. Also consider the small trader, who may be ruined by an
unjustified attack on his commercial reputation by an unreasonable
but powerful customer. I think that for the protection of the weak
from exploitation or oppression by the strong, resort to independent
dispute resolution methods backed by enforcement at the behest of
society as a whole is an essential foundation for fair trade. If
that is right (and it is essentially a view about human nature),
then the client in the MC system must be able to know from it
whether or not the client can connect the server to a person in
domain-space.
Lack of concern for reputation by major players, which can hurt even
the concept of fair trade, has been vividly demonstrated in recent
weeks, for example in the present Russia debacle or in the sudden
stock-exchange oscillations.
We have to remember also that a hacker's "reputation" is to hack
well, a fraudster's "reputation" is to defraud well -- so that when we
say "reputation" different people understand *different* things!
Different people brag in different ways... and have different
standards as to what a "good reputation" is.
Of course, reputation and performance have a link -- they are joined
in feedback -- but that also did not stop Volkswagen AG when they
decided they could get some good money by tapping into General
Motors Co.'s secrets, for which they paid US$ 1.1 billion in fines
last year. The worst part was that it was worth it, as one of their
directors publicly declared some months before the settlement, when
Volkswagen admitted it had already profited more than US$ 500
million from the situation.
I think all the above indicates that one must consider a
feedback-inhibition effect when one wants to rely on reputation to
influence fair performance and accountability: as the commercial
power of a company increases (eg, by monopoly, oligopoly, raw-resource
availability, patents, money, tradition, import protection,
partnerships, government support, etc.), the company becomes LESS
susceptible to its reputation!

This completely negates the indiscriminate use of reputation


mechanisms that do not depend on some form of stronger identification
or at least asset authentication in e-commerce, even for product
values as low as one pays for a copy of Windows 95. So, this
deleterious effect does not depend on product price either. As recent
e-mail scams have shown, precisely the "US$ 50.00 credibility limit"
is targeted by the miscreants that deny stronger authentication or
identification -- which pays off well quickly in large volumes, of
course.
However, as Brendan MacMillan says in a recent posting in e-carm:
No system is perfect; the question is one of sufficiency. I 100%
agree that a legal system with strong identification is fairer and
more useful; though no legal system is perfect either. Litigation
is too expensive and time consuming for most consumer purchases, etc
etc.
But legal recourse is not necessary for commerce; pre-legal commerce
has worked; and even "ex-legal": the mafia and the black market.
Mechanisms other than legal systems can support commerce. Yes,
there are problems of unequal power, but this does not make it
unworkable. Yes, reputation does not encourage my performance if I
have no reputation to lose - but this is true of all recourse
("future vulnerability" is an essential element of recourse).
So, reading all the above sections, a consensus perhaps emerges:
since neither party in a usual commercial transaction needs to know
all of the other party's private data, we should be able to draw a
general border around Faust's Internet dilemma that will be necessary
and sufficient for some e-commerce activities -- in a broad sense, as
a guideline, without relying too much on the effectiveness of the
reputation/performance feedback. We hope thus to develop some
guidance as to the least amount of mutual private data that is "good
enough" for some e-commerce uses, under best efforts of all sides
involved.
We begin by seeking the lowest possible levels of identification (ie,
by making one's "name" as local and equivocal as possible) that can
be provided under present-day certification methods.
With PKIX/X.509 or PGP, certificates can be either self-signed or
signed by a Trusted-Third-Party (TTP or CA). However:
- self-signed certificates cannot convey trust, neither on the
certificate signature (ie, Is the name valid? Is the signature valid
for that name?) nor on the certificate's declarations (ie, Who
warrants and indemnifies what? Under what context?), since
self-declarations cannot induce trust [3]. Thus, self-signed methods
present no privacy risk to the subject (ie, who signs the certificate)
that the subject could not previously define, but their security risks
cannot be directly evaluated by the recipient.
- certificates that depend on a TTP/CA must include an artifact in
the binary dialogue -- the TTP/CA itself with its own CPS rules,
which needs to know private data from any dialogue party that is to
be certified by the TTP/CA. However, and contrary to common
misconceptions about it [cf. 8], TTP/CA methods have security risks
that cannot be directly evaluated by the recipient (eg, because the
recipient is not privy to the contract between the subject and the
CA, etc. [8]) and they impose an immediate privacy risk to the
subject -- one that the subject cannot previously define, since it
depends on the particular TTP/CA's requirements and actions.
The above considerations lead to the following *best possible* set
of guidelines for PKIX/X.509 or PGP certification, with certificates
issued by a TTP/CA or self-signed (a minimal sketch of the
self-signed case follows these guidelines):
- e-commerce can work with low or even zero levels of identification
between parties, as provided by self-signed or by TTP/CA certificates,
if the risks are affordable to those who bear them in practice and if
so allowed by law when third-parties are also considered.
- the effectiveness of low levels of identification in e-commerce
depends on a feedback system between reputation and performance, but
feedback inhibition can set in between parties as a unilateral
function of power, reward, ethics, reach, assets, sensitivity, etc.
- non-repudiation cannot work with low levels of identification,
which is both a positive quality as regards privacy when using low
levels of identification (since you can enforce repudiation) and a
negative quality if the risks are not affordable or if the law
requires otherwise.
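To make the self-signed option concrete, here is a minimal sketch
using the present-day Python package "cryptography" (an assumption on
my part; the package merely illustrates the mechanics). Note how the
issuer and the subject are the same name: the certificate is purely a
self-declaration, which is why it carries no privacy risk beyond what
the subject chooses to put into it, and also why it cannot, by
itself, convey trust to a recipient.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The subject decides what (little) to declare about itself -- here only
# a "local name"; no SSN, no address, nothing a CA might otherwise demand.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Rene Descartes")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # issuer == subject: a self-declaration, not a TTP's
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())  # signed with the subject's own key
)
# A recipient can check the signature, but has only the subject's word
# for the declarations -- no third party warrants or indemnifies anything.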
Seeking a better solution by moving off in an orthogonal direction,
I point out that Dr. Faust's Internet dilemma applies strongly only
to tertiary security (ie, extrinsic certification) -- while it applies
weakly to binary security (ie, intrinsic certification). All known
security designs, such as PKIX/X.509, PGP, etc., correspond to the
extrinsic model -- which depends on references that are extrinsic to
the current dialogue, with certification relative to past events and
to a third party such as a CA. The intrinsic model is the new security
design which was proven to exist [9] and which MCs strive to make
possible -- it depends on references that are intrinsic to the current
dialogue, with certification obtained by measurements that rely upon
intrinsic proofs [11] and which allow mutual trust [3] to develop
directly in a binary-centered relationship.
To summarize, the strategy to solve Dr. Faust's Internet dilemma is
to force the Devil to remain incommunicado as much as possible -- by
denying one's private information to Mephistopheles, and by resisting
the lure to supply one's own self in exchange for "security" now. As
shown, this denial attitude can be applied only to a limited extent
when using extrinsic certification methods such as PKIX/X.509 and
PGP, which depend on tertiary security, but it should be much more
effective with intrinsic certification methods such as those being
developed for the MCS, which depend on binary security.
As a final thought, we need to realize that the Internet provides raw
power, in Einar Stefferud's words [10]. It works as an amplifier
which can provide more of anything that is fed into it: results and
self-discipline, problems and inefficiency, waste of time, order,
chaos, hackers, frauds, etc. In other words, the Internet presents
us with an ever-changing phantasmagoria -- and we realize that the
Internet is also a Devil, the Net-Mephistopheles! Thus, Dr. Faust
also needs to deal with the Net-Mephistopheles in order to obtain and
defend his Net security. What strategy should he follow? The same
strategy that
was devised above -- he cannot leave his *network's* private data wide
open nor equally available to all. Current firewalls and intrusion
detection agents begin to offer that -- with IP address translation,
operation profiles, etc.
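As a purely illustrative toy (my own, not any product's code), the
following Python sketch shows the idea behind network address
translation: the firewall rewrites internal source addresses so that
the private layout of the network never appears on the outside -- the
network-level analogue of withholding one's private data from
Net-Mephistopheles. The addresses and the table format are
assumptions for illustration only.

import itertools

PUBLIC_ADDRESS = "203.0.113.1"          # the only address the outside ever sees
_port_counter = itertools.count(40000)  # fresh external ports
_translation_table = {}                 # (internal ip, port) -> external port

def outbound(packet):
    """Rewrite an outgoing packet, hiding the internal source address."""
    key = (packet["src"], packet["sport"])
    if key not in _translation_table:
        _translation_table[key] = next(_port_counter)
    return {**packet, "src": PUBLIC_ADDRESS, "sport": _translation_table[key]}

def inbound(packet):
    """Map a reply back to the internal host, or drop it if unsolicited."""
    for (src, sport), ext_port in _translation_table.items():
        if packet["dport"] == ext_port:
            return {**packet, "dst": src, "dport": sport}
    return None  # unsolicited traffic is dropped

pkt = {"src": "192.168.0.7", "sport": 1025, "dst": "198.51.100.9", "dport": 80}
print(outbound(pkt))  # the internal address 192.168.0.7 never leaves the network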
======================================================
References:
[1] Goethe also named and pioneered the science of Morphology, which
predates by 200 years the observations of Maturana and Varela that
the observer is an active part of a measurement process -- called
today second-order cybernetics; a concept also used in the
definitions of Subjective Logic and MCs as discussed in mcg-talk.
Goethe's further observations on how "self knowledge" defines the
acquisition of "world knowledge" contain some elements also present
in Trust Theory [3] and being used in MC development.
[2] "Privacy versus Security: Trust is the bridge", E. Gerck,
http://mcwg.org/mcg-mirror/slides.htm
[3] http://mcwg.org/mcg-mirror/trustdef.htm
[4] http://mcwg.org/mcg-mirror/auth_b1.htm
[5] http://www.shipwright.com/rants/rant_15.html
[6] http://www.cs.monash.edu.au/~bren/thesis.html
[7] http://mcwg.org/mcg-mirror/mcgthreads.htm
[8] http://mcwg.org/mcg-mirror/certover.pdf or
http://mcwg.org/mcg-mirror/cert.htm
[9] http://mcwg.org/mcg-mirror/intrinsic.htm
[10] http://mcwg.org/mcg-mirror/cie.htm
[11] "What is the Internet Paradigm?", Einar Stefferud,
http://mcwg.org/mcg-mirror/slides.htm
______________________________________________________________________
Dr.rer.nat. E. Gerck
ed@gerck.com