
Science in China Series F: Information Sciences

© 2007 Science in China Press

Springer-Verlag

Survey of information security*


SHEN ChangXiang1, ZHANG HuangGuo2†, FENG DengGuo3, CAO ZhenFu4 & HUANG JiWu5
1 Computing Technology Institute of China Navy, Beijing 100841, China;
2 School of Computer, Wuhan University, Wuhan 430072, China;
3 Institute of Software, Chinese Academy of Sciences, Beijing 100080, China;
4 Department of Computer Science and Technology, Shanghai Jiaotong University, Shanghai 200030, China;
5 Information Technology Institute, Zhongshan University, Guangzhou 510275, China

The 21st century is the age of information, in which information has become an important strategic resource. The capabilities of obtaining, processing and securing information play critical roles in comprehensive national power, and information security is related to national security and social stability. Therefore, we should take measures to ensure the information security of our country. In recent years, momentous accomplishments have been achieved along with the rapid development of information security technology. The theory and technology of information security are extensive; owing to limitations of length, this article mainly focuses on the research and development of cryptology, trusted computing, network security, and information hiding.

information security, cryptology, trusted computing, network security, information hiding

1 Introduction
The 21st century is the age of information. On the one hand, information technology and its industry have developed rapidly and present an unprecedentedly thriving scene; on the other hand, information security incidents occur frequently and the security situation is severe. Information security concerns national security and social stability, so we must take measures to ensure the information security of our country[1].
Information security mainly includes the following four aspects: device security, data security, content security and behavior security. The security of the hardware infrastructure and the operating system is the basis of information system security, while cryptography and network security are its key technologies. Only by taking security measures from the underlying layers of hardware and software upward, and by treating the system as a whole, can information system security be ensured effectively[2].

* Chapters 1 and 3 of this paper were written by Zhang Huanguo, chapter 2 by Cao Zhenfu, chapter 4 by Feng Dengguo, and chapter 5 by Huang Jiwu. The full text of the paper was examined and approved by Shen Changxiang.
Received December 19, 2006; accepted January 4, 2007
doi: 10.1007/s11432-007-0037-2
† Corresponding author (email: liss@whu.edu.cn)
Supported in part by the National Natural Science Foundation of China (Grant Nos. 60373087, 60673071 and 60572155) and the National High-Tech Development 863 Program of China (Grant No. 2006AA01Z442)
Why are information security problems so severe? From the technical point of view, the main causes are as follows.
1. The security architecture of the personal computer is too simple. In the 1970s, with the development of integrated circuit technology, the microcomputer, also called the personal computer, appeared. Since the microcomputer was intended for individual rather than public use, many security mechanisms were considered unnecessary, and to reduce cost many mature security mechanisms were abandoned, such as memory isolation protection and security protection for programs. As a result, programs can run on a microcomputer without any authentication and can be tampered with arbitrarily, and the system data area can also be modified arbitrarily. Malicious programs such as viruses, worms and Trojan horses have therefore proliferated[3].
2. The personal computer has become a public computer with the development of information technology. The microcomputer is no longer simply a personal computer; it has become a public computer at the office or at home. Because many mature security mechanisms were abandoned, the microcomputer lacks adequate security functions in today's public environment.
3. The microcomputer has become a part of the network. With the development of networks, the microcomputer has become a part of the network, which removes the geographical limits of computing and extends information interaction to the whole network. But the Internet was designed without sufficient security mechanisms, which endangers the security of microcomputers connected to it. Hence the common observation: if a microcomputer is connected to the network, the security threats it faces increase several times; if it is not connected, the services it can obtain decrease several times. Because of the complexity of network protocols, proving and certifying their security is very difficult; at present only some simple protocols can be proved secure, so it is hard to guarantee that the protocols used in practice are free of defects. Even a protocol without apparent defects cannot ensure unconditional security: it can be exploited to launch an attack by initiating a great many normal accesses that exhaust the resources of a computer or network, reflecting the philosophical principle that quantitative change leads to qualitative change. The well-known DoS attack is a clear example[4].
4. The operating system has security defects. The operating system (OS) is the most fundamental system software of a computer and one of the foundations of information security. But because an operating system is huge, e.g. Windows XP has tens of millions of lines of code, it is almost impossible for an OS to be free of faults. A functional fault can usually be tolerated: for example, when Windows halts or shuts down, we can press the reset button to recover. But once an attacker exploits a defect of the OS, the resulting security loss cannot be ignored[5].

2 Research and development of cryptology


Cryptology is an important tool in the area of information security. It can provide data confidentiality, integrity, availability and non-repudiation. Cryptology consists mainly of two parts, cryptography and cryptanalysis: the chief task of cryptography is to protect data using coding techniques, while cryptanalysis aims to obtain the plaintext from the ciphertext. These two opposing yet coexisting parts push the development of cryptology forward[6,7]. Nowadays, most research on cryptology is based on mathematical theory and technology. Hence, modern cryptology can be divided into three classes: hash functions, symmetric cryptography (private key cryptography) and asymmetric cryptography (public key cryptography)[8,9]. In the following, we introduce the recent status and development trends of these cryptosystems respectively.
2.1 Research on Hash functions
A cryptographic hash function takes a string of arbitrary length as input and outputs a string of fixed length, called the hash value. Writing y = h(x), where h is a hash function, h must meet the following requirements: 1) the length of x is arbitrary, while the length of y is fixed; 2) for a given x it is easy to compute y, while for a given hash value y it is hard to find an x satisfying y = h(x); 3) it is computationally infeasible to find two different inputs x and x′ such that h(x) = h(x′).
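A minimal sketch of these properties using Python's standard hashlib module (the input strings are arbitrary examples): the output length is fixed regardless of input length, and even a one-character change in x yields a completely different hash value, which is what makes finding collisions hard in practice.

```python
import hashlib

def h(x: bytes) -> str:
    """SHA-256 as an example hash function h."""
    return hashlib.sha256(x).hexdigest()

print(h(b"short"))                       # 64 hex characters
print(h(b"a much longer input string"))  # still 64 hex characters
print(h(b"message"))
print(h(b"massage"))                     # tiny input change, unrelated output
```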
The main purposes of hash functions are to verify data integrity and to support the validity of digital signatures. Existing constructions of hash functions fall into three categories: 1) those based on mathematical hard problems, such as integer factoring and discrete logarithm problems; 2) those based on symmetric cryptosystems, such as DES-based designs; 3) those not based on any assumption, but built directly from dedicated cryptographic constructions[8]. The third category includes some famous hash functions, for example SHA-1, SHA-256, SHA-384, SHA-512, MD4, MD5, RIPEMD and HAVAL.
At Crypto 2004, Wang[10] presented landmark work in the development of hash functions, "Collisions for hash functions MD4, MD5, HAVAL-128 and RIPEMD", which gives a feasible way to find collisions in these widely used international hash functions. Afterwards, at Eurocrypt 2005 and Crypto 2005, Wang[11―14] published further results on hash functions. Today, the study and design of more secure hash functions has become a hot topic in cryptology.
2.2 Research on private key cryptosystem
A cryptosystem is called a single-key, symmetric, or private key cryptosystem if its encryption key and decryption key are the same, or if one can easily be deduced from the other even though they differ.
The block cipher is a classic kind of private key cryptosystem; DES, IDEA, Skipjack and Rijndael are examples. The key point in designing a block cipher is to choose an algorithm that, under the control of a key, can easily and quickly select one substitution from a sufficiently large and well-behaved set of substitutions. Since a good block cipher should be simple to implement but hard to break, two requirements should be satisfied: 1) both the encryption function Ek(·) and the decryption function Dk(·) are easy to compute; 2) if y is the ciphertext of plaintext x under key k, i.e. y = Ek(x), it is computationally infeasible to recover k from y = Ek(x) and x = Dk(y).
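A minimal sketch of the Ek(·)/Dk(·) contract, assuming the third-party Python cryptography package and AES (the standardized form of Rijndael mentioned above) in CTR mode; the key, nonce and message are arbitrary example values, not parameters from the paper.

```python
# pip install cryptography  (third-party library, assumed available)
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # k: a random 256-bit AES key
nonce = os.urandom(16)    # CTR-mode nonce (must never be reused with the same key)

def E(k: bytes, x: bytes) -> bytes:
    """Encryption function Ek(.)"""
    enc = Cipher(algorithms.AES(k), modes.CTR(nonce)).encryptor()
    return enc.update(x) + enc.finalize()

def D(k: bytes, y: bytes) -> bytes:
    """Decryption function Dk(.)"""
    dec = Cipher(algorithms.AES(k), modes.CTR(nonce)).decryptor()
    return dec.update(y) + dec.finalize()

x = b"attack at dawn"
y = E(key, x)
assert D(key, y) == x     # requirement 1: both directions are easy given k
# Requirement 2: recovering k from the pair (x, y) is computationally infeasible.
```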
With the development of research on block cipher design, analytic techniques have also improved rapidly. Several kinds of analytic techniques for block ciphers have been proposed and discussed so far, including brute force, differential cryptanalysis and its extensions, linear cryptanalysis and its extensions, and differential-linear cryptanalysis. On November 26, 2001, the United States' National Institute of Standards and Technology (NIST) formally announced a new encryption standard, the Advanced Encryption Standard (AES)[15]; the New European Schemes for Signatures, Integrity and Encryption (NESSIE) project was then launched in Europe, followed by the European Network of Excellence for Cryptology. A series of cryptographic algorithms emerged from these efforts, promoting research on cryptology and its applications. In China, the standardization of cryptology is also being placed on the agenda of the National "863" Program.
Currently, the main research directions for block ciphers are the design of new ciphers, software optimization of cryptosystems, hardware implementation, and the design of dedicated cipher chips.
Zhang and Qin[16―18] borrowed the idea of organic evolution, combined cryptology with evolutionary computation, and proposed the concept of the evolutionary cipher together with methods for designing ciphers by evolutionary computation, achieving significant results in the evolutionary design of block cipher S-boxes, Bent functions, and random sequences.
In addition to the block cipher, the stream cipher is another important kind of private key cryptosystem. The one-time pad is absolutely secure in theory, which suggests that if it could be imitated in some way we would obtain a cipher of high confidentiality; much effort has been devoted to this idea, speeding up the study and development of stream ciphers. Compared with the block cipher, the theory and technology of the stream cipher are more mature, so it is regarded worldwide as a principal cipher and occupies an important place in information security. On the design side, besides shift register sequences, non-linear grouping sequences, non-linear filter sequences and clock-controlled sequences, chaotic sequences have been introduced into stream ciphers in recent years, and gratifying progress has been achieved[19]. Ding and Xiao et al.[20] have made outstanding contributions in this area.
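As a toy illustration of the shift register sequences mentioned above, the sketch below generates a keystream with a linear feedback shift register (LFSR) and XORs it with the message. The seed and tap positions are arbitrary example values; a bare LFSR is linear and cryptographically weak, which is exactly why the non-linear constructions listed above are used in real stream ciphers.

```python
def lfsr_bits(seed, taps, n):
    """Yield n keystream bits from a simple Fibonacci LFSR."""
    state = list(seed)
    for _ in range(n):
        yield state[-1]                   # output bit
        fb = 0
        for t in taps:                    # feedback = XOR of tapped stages
            fb ^= state[t]
        state = [fb] + state[:-1]         # shift, inserting the feedback bit

def xor_keystream(data, seed, taps):
    bits = lfsr_bits(seed, taps, 8 * len(data))
    out = bytearray()
    for byte in data:
        ks = 0
        for _ in range(8):                # assemble one keystream byte
            ks = (ks << 1) | next(bits)
        out.append(byte ^ ks)
    return bytes(out)

seed = [1, 0, 1, 1, 0, 1, 0, 1]           # example 8-bit seed (must not be all zero)
taps = [7, 5, 4, 3]                       # example tap positions
ct = xor_keystream(b"stream cipher demo", seed, taps)
assert xor_keystream(ct, seed, taps) == b"stream cipher demo"  # same keystream decrypts
```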
2.3 Research on public key cryptography
A cryptosystem is called an asymmetric (or public key) cryptosystem if the encryption and decryption capabilities are separated, that is, encryption and decryption use two different keys, and it is computationally infeasible to deduce the decryption key (private key) from the encryption key (public key).
Since the concept of public key cryptography was proposed in 1976[21], many outstanding public key cryptosystems have been designed, such as the RSA public key cryptosystem[22], the MH knapsack cryptosystem[23], the Rabin cryptosystem[24], the ElGamal public key cryptosystem[25], elliptic curve public key cryptosystems[26], the McEliece system[27], public key cryptosystems based on finite automaton theory[28], multi-dimensional RSA[29], improved RSA[30], conic cryptosystems[31,32], the MC public key cryptosystem based on the Chinese remainder theorem[33], and proxy cryptosystems[34]. Besides public key encryption, public key cryptography also includes digital signatures[35]. Famous digital signatures include the RSA signature, the Rabin signature, the ElGamal signature, the Schnorr signature[36] and the US national digital signature standard DSS[37]. Since digital signatures provide authenticity, integrity and non-repudiation, special digital signatures have been widely proposed to meet the requirements of practical applications. These mainly include proxy signatures[38,39], blind signatures[40―42], verifiably encrypted signatures[43], undeniable signatures[44,45], forward-secure signatures[46,47], key-insulated signatures[48], online/offline signatures[49], threshold signatures[41,50―52], aggregate signatures[53], ring signatures[53,54], designated verifier signatures[55,56], confirmer signatures[57], and their variants[58].
Although public key cryptography has many advantages, public key authentication and certificate management are quite complex; for example, the establishment and maintenance of the X.509 certificate framework is complicated and costly[59]. In 1984, to simplify certificate management, Shamir[60] constructively proposed the idea of identity-based public key cryptography. In such a cryptosystem, the public key is the user's identity information, such as a unique ID card number or an email address, so an identity-based cryptosystem naturally solves the problem of binding a public key to its entity. Although Shamir proposed an identity-based signature scheme, finding a practical identity-based encryption scheme remained an open challenge until 2001, when Boneh and Franklin[61] solved it using bilinear maps over algebraic curves. Since then, bilinear mapping has become the main technique for constructing identity-based cryptosystems and identity-based digital signature schemes, and many outstanding results have been obtained[62―64].

Although identity-based cryptography simplifies CA public key certificate management, it requires a trusted private key generator (PKG) to generate keys for all users. Once the security of the PKG is compromised, the whole system is paralyzed. Therefore, the security of the PKG is a key issue, closely tied to the key escrow problem in identity-based cryptography. At present, to ensure the safety of the PKG, distributed key generation by the PKG has been proposed using threshold cryptographic techniques[65]. To address the key escrow problem itself, the certificateless cryptosystem[66] was proposed in 2003 and has been extensively studied in recent years[51,67].
A public key cryptosystem is a complex system operating in a hostile environment, so it is liable to many kinds of external and internal attacks. At the beginning of public key cryptography research, however, there was no rational understanding of these attacks, which led to misconceptions about the security of public key cryptosystems; for example, early schemes were analyzed only against classic "textbook" attacks. People gradually realized the importance of formal methods in the design and analysis of public key cryptosystems, and research on provably secure public key cryptosystems has now become a main direction of modern cryptology[7,9].
2.4 Research on provable security
During the last two decades, provable security (providing proofs within the framework of complexity theory) has been a hot topic in the public key cryptography research community. In essence, provable security is a kind of "reduction": first the security notions required by actual cryptographic schemes are defined, then an attack model is defined according to the adversary's abilities, and finally a reduction is established between breaking the scheme in that model and solving an underlying hard problem. For example, if a cryptographic scheme is based on the RSA problem, we can analyze its security as follows: if some algorithm can break the scheme in probabilistic polynomial time (PPT) with non-negligible probability, then by reduction we can build another algorithm that solves the RSA problem in PPT with non-negligible probability. Since the RSA problem is believed to be intractable when its parameters satisfy certain requirements, we conclude from this contradiction that the scheme is secure. Currently, provable security mainly concerns public key encryption schemes, digital signatures, and key agreement protocols.
For public key encryption schemes, the adversary in the attack model aims to achieve the following goals. The first that comes to mind is breaking one-wayness, i.e., recovering the whole plaintext from a given ciphertext knowing only public information. However, one-wayness is not enough for many applications, which usually need stronger security. In 1982, Goldwasser and Micali[68] introduced probability theory into cryptology and, in their breakthrough work, proposed the definition of "semantic security". Semantic security is also known as indistinguishability of encryptions, which arises from the following scenario. There is a two-stage adversary A=(A1, A2). First, A1 chooses two messages m0 and m1 of equal length from the plaintext space and hands them to the challenger. The challenger picks a bit b∈{0,1} by coin toss, encrypts mb and returns the ciphertext c. Finally, A2, given c, guesses the bit value b (the guess is denoted b′). We define Adv(A)=2Pr[b′=b]−1 as the guessing advantage of a PPT adversary A; if Adv(A) is negligible for every such adversary, the scheme is semantically secure. Besides semantic security, Dolev et al.[69] proposed another security notion, non-malleability, in 1991. It resists the following attack: given a ciphertext, the adversary produces, in PPT with non-negligible probability, a new ciphertext whose plaintext is meaningfully related to the original one. This notion is important, but it is difficult to formalize because of the computational essence of non-malleability. Furthermore, according to the information available to the adversary, attacks can be classified as chosen-plaintext attacks, validity-checking attacks, plaintext-checking attacks, and chosen-ciphertext attacks[69].
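A minimal sketch of the indistinguishability game just described, run against a deliberately weak toy "encryption" (XOR with a single fixed key byte) so that a distinguishing adversary is easy to exhibit; the class and function names are illustrative, not from the paper.

```python
import secrets

KEY = secrets.token_bytes(1)   # toy scheme: XOR every byte with one fixed key byte

def encrypt(m: bytes) -> bytes:
    return bytes(b ^ KEY[0] for b in m)   # deterministic, hence NOT semantically secure

def ind_game(adversary, trials=1000):
    """Run the IND game and estimate Adv(A) = 2*Pr[b' = b] - 1."""
    wins = 0
    for _ in range(trials):
        m0, m1 = adversary.choose_messages()     # stage A1: pick two equal-length messages
        b = secrets.randbelow(2)                 # challenger's coin toss
        c = encrypt([m0, m1][b])                 # challenge ciphertext
        if adversary.guess(m0, m1, c) == b:      # stage A2: guess b from c
            wins += 1
    return 2 * wins / trials - 1

class XorDistinguisher:
    """Wins because equal plaintext bytes always give equal ciphertext bytes."""
    def choose_messages(self):
        return b"\x00\x00", b"\x00\x01"
    def guess(self, m0, m1, c):
        return 0 if c[0] == c[1] else 1

print("estimated advantage:", ind_game(XorDistinguisher()))   # close to 1, non-negligible
```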
For digital signatures, the adversary in the attack model aims to achieve one of the following goals. 1) Total break: revealing the signer's private key; obviously this is the most serious attack. 2) Universal forgery: constructing an efficient algorithm that succeeds in signing arbitrary messages with high probability. 3) Existential forgery: producing a single new message-signature pair; the corresponding security level is called existential unforgeability. In many cases the last forgery is not dangerous, since the forged message is likely to be meaningless; however, an existentially forgeable signature scheme cannot guarantee non-repudiation. In 2002, the stronger concept of strong existential unforgeability was proposed[70]. On the other hand, according to the information available to the adversary, attacks can be classified as known-public-key attacks, known-message attacks, and adaptive chosen-message attacks[71,72].
For key agreement protocols, the adversary in the attack model can control all communication by querying predefined oracles: queries to the Execute oracle model passive attacks, queries to the Send oracle model active attacks, queries to the Reveal oracle model known session key attacks, queries to the Corrupt oracle model forward security and private key exposure attacks, and queries to the Test oracle model the semantic security of the agreed key. The BR93 security model for key agreement protocols was proposed by Bellare and Rogaway in 1993[73]; since then other security models have been proposed, including BR95[74], BPR2000[75] and CK2001[76]. At Asiacrypt 2006, Choo[77] investigated the relations among these security models. For more information, readers may refer to refs. [78―81].
Currently, the most popular way to achieve provable security is the random oracle model, proposed by Bellare and Rogaway[82] in 1993 as an improvement of Fiat and Shamir's idea[83]. This is not a standard model: in it, a hash function is formalized as an oracle that returns a truly random value for each new query. In the proof the random oracle is shared by all parties, and in the implementation it is replaced by a concrete hash function such as SHA-256. Although proofs in the random oracle model are efficient, the security they confer is controversial; for example, in 1998 Canetti et al.[84] exhibited a signature scheme that is provably secure in the random oracle model but insecure under any concrete instantiation of the oracle. Hence provable security currently has two directions: proofs in the random oracle model and proofs in the standard model. In 1998, Cramer and Shoup[85] proposed the first efficient public key encryption scheme provable in the standard model, and since 2004 many public key encryption schemes based on bilinear maps have been proposed that can be proved secure in the standard model[86,87].
Besides cryptography based on mathematics, non-mathematical cryptography, such as quantum cryptography[88] and DNA cryptography[89], has attracted much attention. Currently, the communication distance of quantum key distribution has reached 100 kilometers in experiments.
In 2006, the Chinese government published its own commercial cryptographic algorithms, a milestone in the history of Chinese cryptology that will promote the thriving research and application of commercial cryptography in China.

3 Research and development of trusted computing technology


Through the practice of information security, people have realized that most security problems originate from the microcomputer terminal. To ensure the security of the microcomputer terminal at its source, solutions must be sought synthetically from the chip, the hardware architecture, the operating system and so on; this is the original idea of trusted computing.
3.1 History of trusted computing technology
3.1.1 Concept stage
(1) The Rainbow Series. In 1983, the US Department of Defense issued the Trusted Computer System Evaluation Criteria (TCSEC), the first such criteria in the world[90]. TCSEC first advanced the notions of the trusted computer and the Trusted Computing Base (TCB), holding that the TCB is the basis of system security.
After establishing TCSEC, the US Department of Defense issued the Trusted Database Interpretation (TDI)[91] and the Trusted Network Interpretation (TNI)[92] as supplements in 1984. These documents formed the Rainbow Series guidelines for information system security.
(2) The significance and limitations of the Rainbow Series. The Rainbow Series introduced the concepts of trusted computing and the trusted computing base, and up to the present it has remained a main basis for evaluating computer system security.
But owing to historical limitations, the Rainbow Series has some problems:
(a) It emphasizes confidentiality, but pays less attention to integrity and authenticity.
(b) It emphasizes the evaluation of system security, but does not give the architecture or technical methods needed to reach the corresponding security levels.
3.1.2 Climax stage
(1) Appearance of TCPA and TCG. In 1999, IBM, HP, Intel, Microsoft and other companies founded the TCPA (Trusted Computing Platform Alliance), marking the start of an advanced stage of trusted computing. In 2003, TCPA was reorganized as the TCG (Trusted Computing Group), indicating that trusted computing technology and its application area had expanded further. The appearance of TCPA and TCG forms the latest climax of trusted computing. TCPA and TCG have established a series of technical specifications covering the trusted computing platform, trusted storage, trusted network connection, etc.[93].
(2) Significance of TCG Trusted Computing.
(a) TCG first established the notion of the trusted computing platform, embodied it in servers, microcomputers, PDAs and mobile phones, and gave the architecture and technical methods of the trusted computing platform.
(b) TCG considers not only the confidentiality of information, but also its integrity and authenticity.
(c) TCG has made the trusted computing platform more industrialized and universal. More than 200 well-known international IT companies have now joined TCG. IBM, HP, Dell, NEC, Gateway, Toshiba, Fujitsu, Sony and other companies have each developed their own trusted PCs (desktop or laptop computers), and Atmel, Infineon, Broadcom, National Semiconductor and other companies have researched and developed TPM chips.
(3) Trusted computing in Europe. In January 2006, the Open Trusted Computing research project[94] was launched in Europe to develop open source software for trusted computing. More than 32 research institutes and industrial groups have participated in this project.
(4) Other genres of trusted computing. At present, besides TCG, there are two other genres of trusted computing.
(a) The Microsoft genre. Although Microsoft is one of the founding promoters of TCG, it also built a plan of its own named Palladium[95], using the term "trustworthy computing" rather than "trusted computing". Intel immediately supported Microsoft's Palladium plan, announced the LaGrande hardware technology to support it, and planned to develop a new generation of Pentium processors adopting LaGrande[96] technology. Later, Microsoft renamed the plan NGSCB (Next Generation Secure Computing Base).
Microsoft will release a new-generation operating system, Vista, which supports the Trusted Platform Module (TPM); this will push trusted computing to a new high tide.
(b) The fault tolerance genre. Fault-tolerant computing is an important branch of computer science. In 1995, Jean-Claude Laprie of France and Algirdas Avizienis of the United States advanced the notion of dependable computing. Since 1999, when the Pacific Rim International Symposium on Fault-Tolerant Systems was renamed the Pacific Rim International Symposium on Dependable Computing, fault-tolerant computing experts have devoted their energy to research on dependable computing, which emphasizes the dependability, availability and maintainability of computing systems, as well as their certification[97].
We think it is natural that different researchers study the problem from different points of view during the development of trusted computing; this represents the prosperity of academic research, and with further development the different schools will gradually converge.
3.2 Trusted computing in China
China did not start late in trusted computing research, and its achievements are plentiful and substantial[98].
In June 2000, Jetway Information Security Industry Co. Ltd began cooperating with Wuhan University on the research and development of security computers, the first practice of trusted computing platform technology in China. In October 2004, the Jetway trusted computer passed the technical certification of the State Cipher Code Administration Committee of China, with the conclusion: "the Jetway trusted computer is the first trusted computing platform in China developed independently, with reasonable design and effective security mechanisms". The architecture and main technology of this trusted computer accord with the TCG specifications, while also exhibiting some differences and innovations. The Jetway trusted computer was named a "National Significant New Product" by the State Ministry of Science and Technology and three other ministries of China, and has been applied in government, police, banking, army and other departments[99,100].
The first trusted computing platform forum of China was held in Wuhan in June 2004. Supported by the Cipher Code Administration Committee of the Chinese People's Liberation Army, the first academic trusted computing conference of China was held at Wuhan University in October 2004.
In 2005, Lenovo successfully developed a TPM chip and a trusted computer, and in the same year Sinosun also developed a TPM chip successfully. These products likewise passed examination and were certified by the State Cipher Code Administration Committee of China.
In addition, Tongfang, Founder, Inspur, Topsec and other companies have joined the trusted computing effort. Wuhan University, the Institute of Software of the Chinese Academy of Sciences, Peking University, Tsinghua University, and other universities and institutes have also carried out research on trusted computing.
Wuhan University, in cooperation with Huazhong University of Science and Technology and HP, is researching the improvement of grid security based on trusted computing platform technology[101], and the results have been well received.
Nowadays governments at all levels in China strongly support the research, development and application of trusted computing. Thus trusted computing in China has entered a flourishing stage of development.
3.3 Main ideas and technical method of trusted computing
3.3.1 Goal and main ideas of trusted computing. In the TCG specifications, a trusted computing platform is described as ensuring data integrity, secure storage and platform identity attestation. A trusted platform has four characteristics: secure I/O, memory curtaining, sealed storage and remote attestation. Trusted computing products can be applied in e-business, security risk management, digital rights management, security monitoring and emergency response, etc.
The main technical method of trusted computing is to build a root of trust first and then build a trust chain extending from the root of trust to the hardware platform, the operating system and the applications: each portion of code that executes checks the integrity of the next component to be executed before passing trust to it, so that trust is extended to the whole computer system.
A trusted computer system is composed of the root of trust, a trusted hardware platform, a trusted operating system and trusted applications, as shown in Figure 1.

Figure 1 Trusted computer system.
3.3.2 Definition of trusted entity and properties of trust
(1) Definition of a trusted entity. The first question answered by trusted computing is what a trusted entity is. At present there is no uniform definition; different groups and organizations give different interpretations, mainly the following.
TCG uses a behavioral definition of trust: an entity can be trusted if it always behaves in the expected manner for the intended purpose[93]. This definition captures the behavioral character of an entity and accords with the basic philosophical principle that practice is the sole criterion for judging truth.
ISO/IEC 15408 defines a component, operation or procedure participating in computing as trusted if its behavior is predictable and it can resist viruses and physical disturbance.
The IEEE Computer Society Technical Committee on Dependable Computing holds that a computing system is trusted if the service it provides can be demonstrated to be trustworthy; this definition is from the users' point of view[97].
Here we present our own definition: trust ≈ security + dependability. A trusted computing platform is a computing entity integrating hardware and software which can provide trusted computing services, ensuring the dependability and availability of the system as well as the security of information and behavior[98].
IEEE publishes a journal devoted to this field, IEEE Transactions on Dependable and Secure Computing; our definition evidently accords with the viewpoint of that journal.
(2) Properties of trust. The trust relation is a binary relation. It may hold between one entity and another entity, between one entity and many entities (individual and group), between many entities and one entity (group and individual), or between many entities and many entities (group and group).
● Trust is twofold: it is both subjective and objective.
● Trust is not necessarily symmetric: the fact that A trusts B does not imply that B trusts A.
● Trust can be measured: the degree of trust can be graded into different levels.
● Trust can be transferred, though not absolutely; there is loss during the transfer process.
● Trust is dynamic: it is related to context and time.
(3) Methods of obtaining trust. Trust is obtained mainly in two ways, directly and indirectly. If A has an interaction history with B, then the trust degree of B can be determined by reviewing B's past behavior; we call the trust value obtained from direct interactions the direct trust value. If A has never interacted with B, A may ask an acquaintance C for the trust value of B, provided that C has direct interaction experience with B; we call this trust value the indirect trust value, or the recommendation trust value from C to A. Sometimes recommendation occurs across many levels, which forms a trust chain.
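A toy illustration of the direct and recommendation trust values just described; the averaging and discounting rules below are illustrative assumptions, not the trust model of any particular reference.

```python
def direct_trust(history):
    """Direct trust value: fraction of past interactions rated positive (1) vs negative (0)."""
    return sum(history) / len(history) if history else 0.0

def recommendation_trust(trust_in_recommender, recommender_history_with_target):
    """Indirect (recommendation) trust value: the recommender's direct trust in the target,
    discounted by our trust in the recommender -- trust transfer is lossy."""
    return trust_in_recommender * direct_trust(recommender_history_with_target)

# A has interacted with C in the past, but never with B.
a_history_with_c = [1, 1, 1, 0, 1]    # A's ratings of past interactions with C
c_history_with_b = [1, 0, 1, 1]       # C's ratings of past interactions with B

trust_a_c = direct_trust(a_history_with_c)                       # direct trust value: 0.8
trust_a_b = recommendation_trust(trust_a_c, c_history_with_b)    # recommendation trust: 0.6
print(trust_a_c, trust_a_b)
```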
3.3.3 Trust management and trust models. At present, trust management theory consists mainly of trust models based on probability and statistics[102,103], fuzzy mathematics[104], subjective logic and the theory of evidence[105,106], and software behavior[107]. These models still need to be made more precise and simpler. It is worth mentioning that research on trust measurement is required by trust theory.
3.3.4 Root of trust and trust chain. The trust chain mechanism based on a root of trust is one of the most fundamental key technologies.
The root of trust is the basis of a trusted system. A trusted computing platform must contain three roots of trust: the Root of Trust for Measurement (RTM), the Root of Trust for Storage (RTS) and the Root of Trust for Reporting (RTR). Whether the roots themselves are trusted is guaranteed by physical protection mechanisms and management measures.
The TCPA PC specific implementation specification gives the trust chain scheme for the PC, shown in Figure 2. The TPM is the root of this trust chain, which includes the BIOS, the OS loader and the OS in turn. Each portion of code that executes checks the integrity of the next component to be executed and passes trust along the chain, ensuring the resource integrity of the whole platform.

Figure 2 Trust chain of trusted PC.

3.3.5 Trust measurement, storage and report mechanism. The trust measurement, storage and report mechanism is another key technology of trusted computing. The integrity of the computing platform is measured and stored; when the platform is accessed, the integrity data are reported to the challenger.
We should point out that the trust measurement based on Figure 2 covers only the system resource integrity data collected during the boot process; it is not dynamic trust measurement during system operation and thus cannot ensure dynamic trust after boot-up. Because of the limitations of software trust measurement technology, the dynamic trust measurement, storage and report mechanism has not been completely implemented in any trusted computer, either in China or abroad.
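A minimal sketch of the hash-extend style of measurement used along such a boot-time trust chain; this is a simplification of what a real platform does inside the TPM's platform configuration registers, and the component list is only an example.

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """TPM-style extend: new value = H(old value || H(component))."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Boot-time chain: each stage measures the next component before handing over control.
components = [b"BIOS image", b"OS loader image", b"OS kernel image"]

register = bytes(32)                     # measurement register starts at all zeros
for image in components:
    register = extend(register, image)   # accumulate the measurement

print("final measurement:", register.hex())
# A challenger compares this (signed) value against known-good reference values;
# any tampered or reordered component yields a different final digest.
```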
3.3.6 Trusted computing platforms. TCG not only presents the notions of the trusted server, trusted PC, trusted PDA, trusted mobile phone and other trusted computing platforms, but also gives their respective specifications.
The trusted PC is the trusted computing platform for which products already exist. Its main characteristic is a motherboard embedded with a Trusted Building Block (TBB). The TBB is the root of trust of the trusted PC; it comprises the Core Root of Trust for Measurement (CRTM) used for trust measurement, the Trusted Platform Module (TPM), and their connection to the motherboard.
3.3.7 Trusted platform module (TPM). The TPM is a system-on-chip (SoC) that serves as the root of trust of the trusted computing platform; its architecture is shown in Figure 3. The TPM includes a CPU, memory, I/O resources, a cryptographic coprocessor, a random number generator, an embedded operating system, etc. Its functions include the storage and reporting of trust measurement data, key generation, data encryption and digital signature generation, and secure storage. We must note that the TPM is the root of trust of the trusted computing platform, and this root of trust must remain in Chinese hands: a Chinese trusted computer must use a Chinese root-of-trust chip, and that chip must use Chinese cryptographic algorithms.

Figure 3 TCG TPM component architecture.

3.3.8 TCG software stack. The TSS (TCG Software Stack) is the supporting software for the TPM. Its main function is to provide an interface through which other software can use the TPM conveniently and uniformly.
The TSS is divided into three layers: the kernel layer, the system service layer and the application layer. The core software of the kernel layer is the TPM device driver, the module that operates the TPM directly, determined by the TPM developer and the operating system. The core software of the system service layer consists of the TPM Device Driver Library (TDDL) and the TSS Core Service (TCS); the TDDL provides the user-mode interface, and the TCS provides universal services for all applications. The core software of the application layer is the TSS Service Provider (TSP), which provides the API through which applications can use the trusted computing functions of the TPM.
The workflow is as follows: an application sends data and commands to the TSP API; the TSP analyzes them and sends the data package through the TCS to the TDDL; the TDDL repackages it and passes it to the TPM Device Driver (TDD), which accesses the TPM directly. The TPM's response is transmitted back to the application in the reverse order, through the TDD, TDDL, TCS and TSP. With the support of the TSS, different kinds of applications can easily use the trusted computing functions of the TPM.
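A minimal sketch of the layered call flow just described, with each layer modeled as a thin wrapper around the one below it; the class and method names are illustrative placeholders, not the actual TSS API.

```python
class TDD:                      # kernel layer: TPM device driver
    def transmit(self, packet: bytes) -> bytes:
        # would talk to the TPM hardware; echoed here for illustration
        return b"TPM-RESPONSE:" + packet

class TDDL:                     # system service layer: TPM device driver library
    def __init__(self, tdd: TDD):
        self.tdd = tdd
    def submit(self, packet: bytes) -> bytes:
        return self.tdd.transmit(packet)      # repackage and forward to the TDD

class TCS:                      # system service layer: TSS core service
    def __init__(self, tddl: TDDL):
        self.tddl = tddl
    def dispatch(self, packet: bytes) -> bytes:
        return self.tddl.submit(packet)

class TSP:                      # application layer: TSS service provider (API)
    def __init__(self, tcs: TCS):
        self.tcs = tcs
    def execute(self, command: str, data: bytes) -> bytes:
        packet = command.encode() + b"|" + data   # analyze/marshal the request
        return self.tcs.dispatch(packet)

tsp = TSP(TCS(TDDL(TDD())))
print(tsp.execute("QUOTE", b"nonce123"))  # response flows back TDD -> TDDL -> TCS -> TSP
```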
3.3.9 Trusted network connect (TNC). The purpose of TNC is to ensure the integrity of the network requestor. TNC is divided into three layers. The network access layer contains the components whose main function pertains to traditional network connectivity and security; they may support a variety of networking technologies (e.g. VPN, 802.1X) and include the Network Access Requestor (NAR), the Policy Enforcement Point (PEP) and the Network Access Authority (NAA). The integrity evaluation layer contains the components responsible for evaluating the overall integrity of the access requestor with respect to certain access policies. The integrity measurement layer contains plug-in components that collect and verify integrity-related information for a variety of security applications on the access requestor.
When network access is requested, TNC collects and verifies the integrity of the requestor, evaluates this information against the security policy, and decides whether to grant the request.
Although TNC can ensure the trustworthiness of network access, it is not sufficient to protect the subsequent data exchange and resource sharing.
3.4 Some problems in current trusted computing development
At present, trusted computing has become a new tide in the worldwide information security area, but there are some problems in its development that should be resolved[98].
(1) Theoretical research lags behind. Both in China and internationally, technological development in trusted computing runs ahead of theoretical research. At present there is no generally recognized theoretical model of trusted computing.
Trust measurement is one of the bases of trusted computing, but methods and theory for the dynamic trust measurement of software are still lacking.
The trust chain is one of the key technologies of trusted computing, but trust chain theory and the measurement of the loss incurred when trust is transferred need further research, so that the trust chain can be built on a solid theoretical base.
Theories come from practice and in turn guide practice; practice cannot go far without theoretical guidance. Trusted computing technology is now developing rapidly, so we should enrich and develop its theory in practice.
(2) Some key technologies remain to be developed. Neither overseas nor Chinese trusted computers implement the TCG PC specification completely; dynamic trust measurement, storage and report mechanisms and secure I/O, among others, are still missing.
(3) Lack of trusted operating systems, trusted networks, trusted databases and trusted applications. TCG has issued specifications for trusted computing hardware platforms and for trusted network connection, but there are no specifications for trusted operating systems, trusted databases or trusted application software. Network connection is only the first step of network activity; its main purpose is data exchange and resource sharing, for which corresponding technical specifications are lacking. With only a trusted hardware platform, the whole system remains insecure without trusted operating systems, networks, databases and applications[108,109].
(4) Lack of combination between security and fault tolerance. Security and dependability are both important for users, but few researchers address their combination.
(5) Development of trusted applications. The ultimate purpose of trusted computing is application. Trusted PCs and TPM chips have already been applied, but the scope of application is still inadequate and needs to be extended.
3.5 Research area of trusted computing
The current climax of trusted computing starts from the trusted PC, but its research area and application domain are much broader. The key technologies and theory of trusted computing should be the emphasis of research[98].
(1) Key technologies.
① Trusted computing system architecture: the hardware and software architecture of the trusted computing platform.
② TPM system architecture: the hardware architecture, physical security and embedded software of the TPM.
③ Cryptography for trusted computing: public key cryptography, traditional cryptography, hash functions, random number generation.
④ Trust chain technology: the integrity of the trust chain, the extension of the trust chain.
⑤ Trust measurement: the dynamic measurement, storage and report mechanisms of trust.
⑥ Trusted software: trusted operating systems, trusted compilation, trusted databases, trusted application software.
⑦ Trusted networks: trusted network architecture, trusted network protocols, trusted network devices, trusted grids.
(2) Theoretical basis.
① Trusted computing models: mathematical models and behavior models of trusted computing.
② Measurement theory of trust: the measurement theory and models of dynamic software trust.
③ Trust chain theory: the theory of trust transfer and the measurement of loss during trust transfer.
④ Trusted software theory: the measurement theory of software trust, trusted software engineering methodology, software behavior.
(3) Applications of trusted computing.

4 Research and development of network security


The concept of network security covers two aspects: the security of the network itself and the security of network information. This chapter focuses mainly on the security of network information.
Driven by the security requirements of networks, the academic and industrial communities have conducted substantial research on network content security, network authentication and authorization, firewalls, virtual private networks, network intrusion detection, vulnerability probing, security isolation and exchange, security gateways, security surveillance and management, network security auditing, detection of and protection against malicious code, spam handling, emergency response, etc. Large numbers of network security products have been released, and a rising industry has developed. It is anticipated that network-based security technology will become the major trend in the development of information security technology[111,112]. This chapter summarizes the current research situation and development trends of network security, including public key infrastructure, intrusion detection systems, network emergency response, network survivability and trusted networks.
4.1 Public key infrastructure
As an important solution to the trust and authorization issues in the network environment, such as the validity of identities, the confidentiality of data, the integrity of files and the non-repudiation of behaviors, Public Key Infrastructure (PKI) technology has attracted great attention, especially in research and application, from industry, academia and government. Large international IT companies, such as IBM, Microsoft, Baltimore, CertCo, RSA, Fujitsu and Mitsubishi, provide PKI products. Meanwhile, many companies in China, e.g. Jilin University Information Technologies, Shanghai Wellhope and Jinan Dean Computer Technologies, have their own independent PKI products. The academic community, for example the State Key Laboratory of Information Security, has performed systematic research and made many advanced achievements on PKI standards, PKI architecture, the security technology of PKI itself and cross-certification techniques. The national standardization organization has made great efforts to push forward the research and establishment of PKI standards, and a volume of relevant standards and specifications has been issued.
The academic and industrial communities have achieved many innovative and applicable results in trust and trust verification mechanisms (trust models, trust policies, verification mechanisms, etc.), key management technology (the generation, storage, backup and recovery of the public and private keys of CAs and users), the core components of PKI security (CA, RA and KM systems of different scales), cross-certification techniques, and resilient, intrusion-tolerant CA techniques. In particular, during the 10th Five-Year Plan China made outstanding progress in the research and application of PKI technology, which can meet the practical requirements of PKI in large-scale network environments and has been successfully applied in critical sectors such as government, the military and finance.
As the key solution to the trust and authorization issues in the network environment, PKI has the prospect of widespread application in e-commerce and e-government. Its development shows several trends:
1) Application. With the spread of e-commerce and e-government, and under current technical circumstances, PKI remains the best choice for solving the problems of trust and authorization in public networks, and hence will be more widely used. Nevertheless, some practical problems in application and in the use of verification models should be studied further.
2) Standardization. Widespread application of PKI introduces problems of interconnection, intercommunication and interoperability. In addition to technical solutions, standardization is the critical approach to these problems.
3) Integration, that is, integration with novel technologies and applications such as biometric identification, identity-based public key cryptography, trusted computing platforms, intrusion-tolerant CAs, self-adaptive CAs, etc.
4.2 Network based intrusion detection system
Intrusion detection systems (IDS) can be divided into two categories: network-based IDS and host-based IDS. This section focuses on network-based IDS, which help to detect attacks against network systems rapidly, thereby extending the security management capability of administrators and improving the integrity of the information security architecture. A network-based IDS collects information from key nodes of a network system, analyzes it, and watches for violations of security policy and indications of attack.
The industrial and academic communities have devoted great effort to studying IDS, producing plenty of practical products. Many IT companies in China, such as Lenovo, Venus Info Tech and Nandasoft, have independent IDS products. In addition, many research institutes have developed IDS prototypes and published a large number of papers and books.
In 1990, intrusion detection technology was applied to distributed systems, marking the emergence of network-based intrusion detection. Heberlein of UC Davis developed the Network Security Monitor (NSM), which first introduced network traffic as a data source for security auditing.
In 1992, research on the Distributed Intrusion Detection System (DIDS) made remarkable progress. Funded by the US Air Force, the National Security Agency (NSA) and the Department of Energy (DoE), DIDS integrated two existing intrusion detection systems, Haystack and NSM, combining their functions and improving the detection techniques and system architecture. DIDS was composed of host monitors, a LAN monitor and a controller, and its analysis engine was a rule-based expert system. DIDS employed distributed data collection and analysis, while core data were processed centrally.
In 1994, Crosbie and Spafford proposed the concept of autonomous agents to improve the extensibility, maintainability, efficiency and fault tolerance of IDS.
To improve the extensibility of IDS, Staniford-Chen put forward the Graph-based Intrusion Detection System (GRIDS) in 1996, which was quite effective in detecting large-scale automated or coordinated attacks. GRIDS employed graphs to describe behaviors in a large-scale network and was capable of detecting extensive network attacks. Its disadvantage was that the graph representation of network connections was not analyzed automatically, so final decisions still depended on human effort.
Few novel theories of intrusion detection have been proposed since 1998. During this period, research has focused on improving detection algorithms, especially those for network-based intrusion detection, distributed intrusion detection, and intrusion detection based on intelligent agents, neural networks and genetic algorithms. To enhance the interoperability of IDS products, components and other kinds of security products, great emphasis has also been placed on the standardization of IDS at this stage.
In 1998, the Defense Advanced Research Projects Agency (DARPA) established the Common Intrusion Detection Framework (CIDF), drafted by the UC Davis Security Lab. To divide an IDS logically into task-oriented components, CIDF attempted to specify a common language format and encoding for describing the data transferred between IDS components. CIDF was composed of four parts: the IDS architecture, the communication mechanism, the description language and the Application Programming Interface (API). It defined the Common Intrusion Specification Language (CISL) to represent system events, analysis results, response measures and intrusion behaviors. CIDF was superior in extensibility and normalization.
During the past 20 years, IDS has made great progress. However, with the rapid development of network technology and the increasing complexity of attack behaviors, there remains much room for improvement. The development of IDS shows several trends:
1) Meeting practical requirements. High-speed networks, such as ATM and Gigabit Ethernet, are more and more widely deployed, and realizing real-time intrusion detection in high-speed network environments has become a practical problem[113]. Further research on large-scale distributed detection is therefore urgently needed to meet the requirements of detection in large-scale distributed environments.
2) Standardization. With the widespread application of IDS, it is necessary to improve the interoperability of IDS products, components and other security products, which makes the standardization of IDS one of the development trends of the future.
3) Evolution into the Intrusion Prevention System (IPS). The IPS offers a way of improving IDS. The industrial community has released a large number of IPS products, with IDS as one of their optional operating modes, so the IPS will gradually replace the IDS and become the mainstream product. An IPS contains an IDS but, in addition, provides the capability of a firewall; furthermore, it can effectively solve the problem of linkage latency with firewalls and thus reduce the side effects of linkage.
4.3 Network emergency response
With the development of network and information technology, traditional static methods of security prevention are no longer sufficient to counter hackers' intrusions and organized attacks; new mechanisms have to be established. In 1989, sponsored by the US Department of Defense, Carnegie Mellon University set up the first Computer Emergency Response Team (CERT) and its Coordination Center (CERT/CC). This event signified that information security had evolved from static prevention into dynamic protection.
Soon after the foundation of CERT/CC, the US Army, Navy and Air Force successively set up emergency organizations, followed by the emergency agencies of the FBI, DoE, DoC and NASA. Up to now, more than 50 computer emergency organizations have been set up by the US DoD, the Federal government and large corporations. Under the coordination of the US National Infrastructure Protection Commission and its conciliation commission, a country-wide emergency network has been established.
Many countries and regions in Europe, Oceania, North America and Asia, especially the developed countries, have set up information security emergency organizations. On the proposal of the emergency organizations of the United States and Australia, the international Forum of Incident Response and Security Teams (FIRST) was founded in November 1990. In addition, countries and regions in the Asia-Pacific set up the Asia Pacific Security Incident Response Coordination Conference (APSIRC). Setting up emergency organizations, establishing information assurance architectures and promoting international cooperation have become an international tide in information security.
In China, some emergency response organizations have been set up, e.g., the China Education and Research Network Computer Emergency Response Team (CCERT) founded in 1999, and the National Computer Network Emergency Response Technical Team/Coordination Center (CNCERT/CC) organized in 2000. The latter is an agency directed by the Internet Emergency Response Coordination Office of the Ministry of Information Industry of China. CNCERT/CC is responsible for coordinating the computer emergency response teams in China, as well as for information exchange and joint actions with international organizations. It provides services and technical support, e.g., incident handling, security surveillance, early warning, emergency response and security prevention, for national public telecommunication networks, critical network application systems and key sectors. In addition, it collects, verifies, summarizes and publishes authoritative information on Internet security issues. Currently, many research institutes, colleges, universities, enterprises and public institutions are actively engaged in research on network emergency response. In the aspect of architecture, the classical PDCERF methodology has been established, which comprises the six stages of Preparation, Detection, Containment, Eradication, Recovery and Follow-up. Many practical products and systems have been developed as response tools.
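As a minimal illustration of the six-stage PDCERF methodology just mentioned, the following hypothetical Python enumeration merely fixes the stages and their order; a real incident-handling system would attach concrete procedures and decision points to each stage.

from enum import Enum

class PDCERF(Enum):
    """The six stages of the classical PDCERF incident-handling methodology."""
    PREPARATION = 1   # policies, tools and contact lists readied in advance
    DETECTION = 2     # the incident is discovered, reported and confirmed
    CONTAINMENT = 3   # damage is limited, e.g. by isolating affected hosts
    ERADICATION = 4   # the root cause (malware, account, vulnerability) is removed
    RECOVERY = 5      # services and data are restored from clean sources
    FOLLOW_UP = 6     # lessons learned are fed back into preparation

# An incident-handling workflow simply walks the stages in order.
for stage in PDCERF:
    print(stage.value, stage.name)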
The academic and industrial communities have conducted extensive and thorough research on
emergency response with plenty of outstanding achievements. Nevertheless, many issues still
remain for further research:
1) Research on the architecture of emergency response, including organizational architecture,
technical architecture and supportive architecture of emergency response.
2) Research on and establishment of technical standards. Standardization of emergency response forms the basis of the communication and coordination mechanisms of the emergency response architecture, as well as the basis for keeping emergency response systems in normal operation. It is conducted along the security incident flow and covers the whole process of an incident, including its discovery, reporting, analysis, classification, transfer, decision, response and recording, e.g., standards for the emergency response workflow, security incident classification, security incident report format, and security incident description and exchange format.
3) Construction of the experimental environment. An experimental environment is necessary for application research on the emergency response architecture. A typical, small-scale hardware environment that can simulate the characteristics of a large-scale network, together with the corresponding simulation software, provides both an experimental environment for theoretical research and a verification environment for application research.
4) Development of core tools. The development of core tools for emergency response is re-
garded as the key technical element of constructing the emergency response architecture, includ-
ing Information Sharing and Analysis Center (ISAC), coordinated location and rapid isolation
mechanism in a large scale network, security incident plan system, simulation platform of secu-
rity situation in a large scale network, linkage system, backup and recovery system.
4.4 Network survivability
Network survivability is the ability of a network to carry out its mission in case of attacks, fail-
ures or emergent accidents. The basic idea of network survivability is to carry out its tasks and
resume damaged capability of services under the circumstances of successful intrusion or de-
struction of critical components. Research on theory of network survivability has become the hot
spot in network security. Large numbers of organizations and research institutes have made great
efforts on it. University of Virginia and University of Portland are cooperating with each other in
the project of “Information Survivability of Critical Infrastructure Protection”, including the sur-
vivability evaluation of Critical Infrastructure Protection, infrastructure for army and public, ar-
chitecture engineering of survivability, etc. The theory is far from maturity, leaving much room
for research on theories and implementations.
Network survivability requires four critical capabilities: resistance, recognition, recovery and adaptation. The proposal of network survivability broke through traditional security notions: people have become aware that security does not depend on technology alone. Computer security should be combined with risk management to protect information systems and to reduce the impact of attacks, failures and accidents[114]. Survivability centers on the protection of services; the continuity of critical services should be maintained as far as possible even when the network system has been partly damaged by attacks.
The core of network survivability lies in the survivability theory of unbounded systems. In an unbounded system, no participant can obtain complete and exact information about the whole system. Large-scale computer networks, telecommunication networks and power grids can all be regarded as unbounded systems. The development of the network computing environment implies a trend towards unbounded network architectures, in which no central administrative organization is required. Without comprehensive knowledge of the whole system, every participant has to trust and depend on its neighbors to provide the necessary information, and control can only be exercised within each participant's own scope. We should break through the limitations of traditional methods and establish novel approaches for analyzing the survivability of systems in the absence of complete and exact information, and for cooperation without mutual coordination.
Currently, research on theories of network survivability consists of:
1) Analysis approaches for survivable networks: approaches to analyzing the survivability of large-scale network systems, including the threats to survivability, mechanisms for reducing existing risks (e.g., sensitive services, and elements with resistance, recognition and recovery capabilities), and system architectures with robust survivability.
2) Emergent algorithms. Emergent algorithms are used to analyze the survivability of unbounded systems both in peacetime and under attack. With hierarchical and distributed characteristics different from those of traditional algorithms, emergent algorithms resemble natural processes such as biological and economic systems. They should be adapted to conditions without complete and exact information, without central control, without hierarchical structure and without individually separable vulnerability information. In the meantime, cooperation without coordination should be considered in order to overcome difficult problems that traditional theories can hardly solve.
3) Research on the simulation of survivable systems. Because of the characteristics of unbounded systems, traditional simulation approaches cannot be applied to them directly, and further efforts are needed to develop novel simulation approaches.
4) Development of related software and tools, including agent software and hardware for col-
lecting information from networks, software for early-warning, location and isolation of security
incidents, and simulation tools of system environment.
In general, the government, the academic community and the industrial community have attached great importance to network security; nevertheless, many problems still need in-depth research, especially in the following directions:
1) Network emergency response: improving research on the architecture of emergency response and its technical standards, the construction of experimental environments, and the development of core tools.
2) Network survivability. Although still immature and largely unsettled, the theory of network survivability has become a new focus of the area; in-depth research on its theory and implementation is the most urgent problem.
3) Mobile and wireless security: research on the security architecture, policies, mechanisms, management, monitoring[115] and evaluation of mobile and wireless networks[116].
4.5 Trusted network
With the widespread application of networks and the development of attack techniques, traditional isolated security prevention can hardly deal with new kinds of attacks. Efficiency was regarded as the primary objective of networks in the past, whereas nowadays networks are required to provide highly trusted services, and trustworthiness has become an important criterion for evaluating the Quality of Service of a network. For this reason, the trusted network has become a new research direction in recent years. The characteristics of a trusted network include security, survivability and controllability. TCG has proposed the technical specification of Trusted Network Connect (TNC), which has greatly advanced research on trusted networks. As a challenging research subject, the trusted network will certainly attract more and more attention from researchers.

5 Progress in information hiding


Information hiding is an old practice but a young discipline. Basically, it consists of covert channels and multimedia information hiding.
5.1 Subliminal channel and covert channel
Subliminal channel[117] is constructed on digital signature and authentication based on public key cryptography. Its host is the cryptographic system. At the sender of the subliminal channel, the message is randomized with a private key and then embedded into the parameters of the cryptographic system by an embedding algorithm. At the receiver, after the system performs the authentication of the digital signature, the subliminal message is extracted. Except for the intended receiver, no one can tell whether subliminal information exists within the cryptographic data[118].
Many design methods for subliminal channels have been proposed. Basically, these methods are built on digital signature systems based on the discrete logarithm problem and the elliptic curve discrete logarithm problem. Research on subliminal channels mainly focuses on their capacity and security, and on the design of new subliminal channels.
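As a concrete worked example (the classical broadband construction in the ElGamal signature scheme, given here purely for illustration and not as any particular scheme from the cited literature), let p be a large prime, g a generator, x the signer's private key and y = g^x mod p the public key. To sign an innocuous message m while conveying a subliminal message m' with gcd(m', p-1) = 1, the signer uses m' itself as the signing nonce:

\[
r = g^{m'} \bmod p, \qquad s = (m')^{-1}\bigl(H(m) - x\,r\bigr) \bmod (p-1),
\]

which any verifier accepts through the usual check g^{H(m)} = y^r r^s (mod p). The subliminal receiver, who must share the private key x, recovers

\[
m' = s^{-1}\bigl(H(m) - x\,r\bigr) \bmod (p-1).
\]

The need to share x and the limited size of the recoverable m' illustrate why capacity and security are the central research questions mentioned above.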
The covert channel is built on a public channel for covert communication; it is used to transmit secret information in a manner not intended or authorized by the owner of the public channel. Covert channels can be divided into two kinds, covert storage channels and covert timing channels[119]. In a covert storage channel, one process writes the secret message into a storage location while another process reads it from that location. In a covert timing channel, a process modulates its own usage of system resources, such as CPU time, so that another process can observe the effect through the actual response time, and thus secret communication is achieved. The difference between the two kinds of channels lies in the method of modulation. Research on covert channels mainly focuses on the construction of channels, ways to determine whether a given channel is covert, the estimation of channel bandwidth, and the elimination of covert channels.
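The following deliberately simplified Python sketch illustrates the covert timing channel idea described above: a sending thread encodes each secret bit by either keeping the CPU busy or staying idle during a fixed time slot, while the receiver infers the bit from how long its own fixed probe computation takes. The slot length, probe size and decision threshold are made-up values that would need calibration on a real machine, and slot synchronization is handled only crudely.

import threading
import time

SLOT = 0.05        # seconds per covert bit (illustrative value)
THRESHOLD = 0.02   # probe time above which contention, i.e. a '1', is assumed

def sender(bits):
    """Modulate CPU usage: busy-loop for a '1', stay idle for a '0'."""
    for b in bits:
        end = time.time() + SLOT
        if b:
            while time.time() < end:          # keep the CPU busy for the whole slot
                pass
        else:
            time.sleep(SLOT)                  # leave the CPU idle

def receiver(n_bits):
    """Recover the bits by timing a fixed probe computation in each slot."""
    out = []
    for _ in range(n_bits):
        start = time.time()
        for _ in range(200000):               # fixed probe workload
            pass
        elapsed = time.time() - start
        out.append(1 if elapsed > THRESHOLD else 0)
        time.sleep(max(0.0, SLOT - elapsed))  # crude slot alignment
    return out

secret = [1, 0, 1, 1, 0]
threading.Thread(target=sender, args=(secret,), daemon=True).start()
print(receiver(len(secret)))                  # ideally prints [1, 0, 1, 1, 0]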
5.2 Multimedia information hiding
Multimedia information hiding[120] is the technology of treating a multimedia signal as the host for embedding secret information. It exploits the redundancy of multimedia data and the perceptual redundancy of human visual/auditory characteristics.
From the viewpoint of visual/auditory science and signal processing, information hiding can be regarded as adding a weak signal (the hidden message) to a strong background (the original image/audio/video signal, etc.). Due to the limited resolution of the human auditory system (HAS) and the human visual system (HVS), as long as the amplitude of the added signal stays below the detection threshold of the HAS/HVS, it will not be perceived. It is therefore possible to embed information without changing the perceived auditory/visual quality by performing limited modifications on the original image/audio/video signal. Since any embedded message can be converted into a binary sequence, the embedded secret information can take various formats, including random sequences, data, text, images/graphics, speech/audio, video, etc.
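This principle can be summarized by a generic, scheme-independent additive model (a notational sketch rather than any particular published formulation):

\[
y_i = x_i + \alpha_i w_i, \qquad |\alpha_i w_i| \le T_i(x), \qquad i = 1, \dots, N,
\]

where x is the host signal, w the signal derived from the binary message to be hidden, \alpha_i a local embedding strength, and T_i(x) the just-noticeable-distortion threshold predicted by an HVS/HAS model; keeping every perturbation below T_i(x) keeps the embedding imperceptible.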
Different applications impose different requirements on the hidden information. The technology of multimedia information hiding can be divided into two branches, steganography[121] and digital watermarking[122]. Steganography can be further divided into two kinds, secret steganography and naive steganography: the former focuses on disguising information so that secret information can be transmitted, while the latter addresses transparent labeling. Similarly, digital watermarking can be divided into robust watermarking and fragile watermarking: the former is used for the copyright protection of multimedia, the latter for content authentication. According to the data hiding protocol, information hiding can also be divided into no-key, private-key and public-key information hiding.
The basic requirements for information hiding include robustness, undetectability, payload and computational complexity. Among them, robustness, undetectability (including HAS/HVS imperceptibility and statistical undetectability) and payload are the three most important factors. These three factors conflict with one another in technical realization, and the trade-off among them is normally made according to the application. For secret steganography, undetectability is the most important requirement, while naive steganography requires a large hiding capacity. Robust watermarking is mainly concerned with robustness, while fragile watermarking has a dual requirement: it should be fragile to malicious modifications but robust to common signal processing. Except for visible watermarks, undetectability is a general requirement for all hiding techniques.
Security and robustness are the critical issues that determine whether the technology can move towards applications. For secret steganography, security relies on resisting statistical detection; for steganalysis[122,123], the problem is how to detect secret information when the amount of hidden data is small. In digital watermarking, protecting the watermark from being illegally detected or modified, and achieving robustness against malicious attacks such as geometrical attacks, are the urgent problems to be solved.
The embedding space and the embedding method are two basic issues in hiding algorithms. The embedding space for hiding information is called the working domain. According to the working domain, multimedia information hiding algorithms can be classified into two kinds, those based on the time/space domain[124] and those based on transform domains[125]. The former directly embeds the processed secret information in the time/space domain; because of the HAS/HVS characteristics, the amplitude of the embedded signal must be kept relatively low, so the robustness is usually poor. The latter performs a mathematical transform (usually an orthogonal transform) on the host multimedia signal, embeds the secret information into the transform coefficients, and finally performs the inverse transform. The orthogonal transforms commonly used in information hiding are the discrete wavelet transform (DWT), the discrete cosine transform (DCT), the discrete Fourier transform (DFT), etc. Because the orthogonal transform and its inverse can change the distribution of the signal energy, transform-domain algorithms can spread the energy of the embedded signal over the time/space domain and thus effectively resolve the contradiction between undetectability and robustness. Similarly to other compressed-domain processing technologies, the secret information bits can also be embedded directly in the compressed domain, avoiding the decompression-recompression computation; this is called information hiding in the compressed domain and is a special kind of transform-domain algorithm.
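As a minimal sketch of a transform-domain method (loosely in the spirit of spread-spectrum watermarking [125], but not a reproduction of any cited algorithm), the following Python fragment multiplicatively embeds a +/-1 sequence into the largest-magnitude AC DCT coefficients of an image and detects it, non-blindly, by correlation; the use of numpy/scipy and all parameter values are assumptions made for the sketch.

import numpy as np
from scipy.fftpack import dct, idct

ALPHA = 0.1   # embedding strength (illustrative value)
K = 64        # number of coefficients carrying the watermark

def dct2(x):
    return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(x):
    return idct(idct(x, axis=1, norm='ortho'), axis=0, norm='ortho')

def embed(image, wm):
    """Embed a length-K +/-1 sequence into the K largest-magnitude AC DCT
    coefficients with the multiplicative rule c' = c * (1 + ALPHA * w)."""
    c = dct2(image.astype(float)).ravel()
    idx = np.argsort(np.abs(c))[::-1]
    idx = idx[idx != 0][:K]                   # skip the DC coefficient (index 0)
    c[idx] *= (1.0 + ALPHA * wm)
    return idct2(c.reshape(image.shape)), idx

def detect(received, original, wm, idx):
    """Non-blind correlation detector: the score is close to 1 when the watermark is present."""
    c_r = dct2(received.astype(float)).ravel()[idx]
    c_o = dct2(original.astype(float)).ravel()[idx]
    w_hat = (c_r - c_o) / (ALPHA * c_o)       # estimate of the embedded sequence
    return float(np.dot(w_hat, wm) / K)

host = np.random.rand(64, 64) * 255
wm = np.random.choice([-1.0, 1.0], size=K)
marked, idx = embed(host, wm)
print(detect(marked, host, wm, idx))          # approximately 1.0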
In both time/space-domain and transform-domain algorithms, data embedding is an important step, realized by an embedding formula. There are two conventional embedding methods, additive embedding and mapping embedding. In additive embedding, the secret data is added to the host signal or its transform coefficients as a weak signal; at the receiving end, the detector used for extracting the embedded information is designed by optimal detection and estimation theory. In mapping embedding, the coefficients of the host signal are mapped by a mapping function onto features determined by the bit to be embedded, and extraction is performed through the same mapping. Least significant bit (LSB) substitution and quantization index modulation (QIM)[126] are the typical mapping embedding methods. Mapping embedding has a distinct advantage in realizing blind extraction of the embedded information bits.
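The two typical mapping methods named above can be sketched in a few lines of illustrative Python (not taken from the cited literature): LSB substitution forces the least significant bit of each 8-bit sample to the message bit, while scalar QIM quantizes each host sample with one of two interleaved quantizers selected by the message bit, so extraction is blind in both cases; the step size DELTA is an assumed parameter.

import numpy as np

# --- LSB substitution --------------------------------------------------
def lsb_embed(samples, bits):
    """Force the least significant bit of each 8-bit sample to the message bit."""
    out = np.array(samples, dtype=np.uint8)
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    return out

def lsb_extract(samples, n_bits):
    return (np.asarray(samples[:n_bits], dtype=np.uint8) & 1).tolist()

# --- Scalar QIM (quantization index modulation) ------------------------
DELTA = 8.0   # quantization step: larger means more robust but more distortion

def qim_embed(x, bits):
    """Quantize each host sample with the lattice selected by its bit:
    bit 0 -> {k*DELTA}, bit 1 -> {k*DELTA + DELTA/2}."""
    x = np.asarray(x, dtype=float)[:len(bits)]
    b = np.asarray(bits, dtype=float)
    return np.round((x - b * DELTA / 2) / DELTA) * DELTA + b * DELTA / 2

def qim_extract(y):
    """Blind extraction: decide which of the two lattices each sample is closer to."""
    y = np.asarray(y, dtype=float)
    d0 = np.abs(y - np.round(y / DELTA) * DELTA)
    d1 = np.abs(y - (np.round((y - DELTA / 2) / DELTA) * DELTA + DELTA / 2))
    return (d1 < d0).astype(int).tolist()

bits = [1, 0, 1, 1]
print(lsb_extract(lsb_embed([100, 101, 102, 103], bits), 4))   # -> [1, 0, 1, 1]
print(qim_extract(qim_embed([13.2, 77.9, 5.1, 40.4], bits)))   # -> [1, 0, 1, 1]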

The research work in multimedia information hiding covers the basic theory and methods of information hiding, steganographic and digital watermarking algorithms, steganalysis and watermarking attacks, information hiding protocols, applications of information hiding, etc. Possible applications include covert communication, multimedia copyright protection, multimedia authentication, implicit annotation of information, etc. Due to its wide prospects in the military, security and industrial fields, multimedia information hiding has attracted more and more attention.

1 Shen C X. Thinking on the Enhancement of Information Security Assurance Architecture, Literary of Information Security
(in Chinese). Wuhan: Hubei Science and Technology Press, 2002
2 Zhang H G, Wang L N, Huang C H. Research and practice for information security discipline construction and personnel
training. In: Symposium on Deans of Computer Institute of China. Beijing: Higher Education Press, 2005
3 Pfleeger C P, Pfleeger S L. Security in Computing. 3rd ed. NJ: Prentice Hall, 2003
4 Stallings W. Cryptography and Network Security-Principles and Practices. 4th ed. Pearson Education, 2006
5 Qin S H, Liu W Q. Operating System Security (in Chinese). Beijing: Qinghua University Press, 2004
6 Schneier B. Applied Cryptography, Protocols, Algorithms and Source Code in C. New York: John Wiley & Sons, 1996
7 Mao W. Modern Cryptography: Theory and Practice. HJ: Prentice Hall PTR, 2003
8 Feng D G. Research state and development trend of cryptology in and abroad. J Commun (in Chinese), 2002, 23(5): 18―26
9 Cao Z F, Shui XQ. Development direction and latest progress for cryptology. Comput Edu (in Chinese), 2005, 19―21
10 Wang X Y, Feng D G, Lai X J, et al. Collisions for Hash functions MD4, MD5, HAVAL-128 and RIPEMD, Cryptology
ePrint Archive: Report 2004/199, Aug. 2004
11 Wang X Y, Lai X J, Feng D G, et al. Cryptanalysis of the Hash function MD4 and RIPEMD. In: Advance in Cryptol-
ogy-Eurocrypt’05, LNCS 3494. Berlin: Springer-Verlag, 2005. 1―18
12 Wang X Y, Yu H B. How to break MD5 and other Hash functions. In: Advance in Cryptology – Eurocrypt’05, LNCS 3494.
Berlin: Springer-Verlag, 2005. 19―35
13 Wang X Y, Yu H B, Yin Y L. Efficient collision search attacks on SHA-0. In: Advance in Cryptology – Crypto 05, LNCS
3621. Berlin: Springer-Verlag, 2005. 1―16
14 Wang X Y, Yin Y L, Yu H B. Finding collisions in the full SHA-1. In: Advance in Cryptology –Crypto 05, LNCS 3621.
Berlin: Springer-Verlag, 2005. 17―36
15 Federal Information Processing Standards Publication (FIPS 197) Advanced Encryption Standard (AES), Nov. 26, 2001
16 Zhang H G, Feng X T, Qin Z P, et al. Research on evolutionary cryptosystems and evolutionary DES. Chin J Comput (in
Chinese), 2003, 26(12): 1678―1684
17 Meng Q S, Zhang H G, Qin Z P, et al. Design bent function using evolving method. Acta Elect Sin (in Chinese), 2004,
32(11): 1901―1903
18 Zhang H G, Wang Y H, Wang B J, et al. Evolutionary random number generator based on LFSR. Wuhan Univ J Natur Sci,
2007, 12(1): 179―182
19 Luo Q B, Zhang J Z, Zhou J. Complexity Analysis of the Chaotic Key Sequence, CHINACRYPT’2006 (in Chinese). Bei-
jing: Press of Science and Technology of China, 2006
20 Ding C S, Xiao G Z, Shan W J. The stability theory of stream ciphers. Lecture Notes in Computer Science 561. Berlin:
Springer-Verlag, 1991
21 Diffie W, Hellman M E. New directions in cryptography. IEEE Trans Inform Theory, 1976, IT-22(6): 644―654
22 Rivest R L, Shamir A, Adleman L. A method for obtaining digital signatures and public key cryptosystems. Comm ACM,
1978, 21: 120―126
23 Merkle R C, Hellman M E. Hiding information and signatures in trapdoor knapsacks. IEEE Trans Inform Theory, 1978,
24(5): 525―530
24 Rabin M O. Digitalized signatures and public key functions as intractable as factorization. Technical Report LCS/TR212,
Cambridge MA (1979), MIT
25 ElGamal T. A public key cryptosystem and signature scheme based on discrete logarithms. IEEE Trans Inform Theory,
1985, IT-31(4): 469―472
26 Koblitz N. Elliptic curve cryptosystems. Math Comput, 1987, 48: 203―209
27 McEliece R J. A public key cryptosystem based on algebraic coding theory. DSN Progress Rep. 42-44, Jet Propulsion Lab,
1978, 114―116
28 Tao R J, Chen S H. A finite automaton public key cryptosystem and digital signatures. Chin J Comput (in Chinese),1985,
8(6): 401―409
29 Cao Z F. The multi-dimension RSA and its low exponent security. Sci China Ser E-Tech Sci, 2000, 43(4): 349―354
30 Cao Z F. A threshold key escrow scheme based on public key cryptosystem. Sci China Ser E-Tech Sci, 2001, 44(4):
441―448
31 Cao Z F. A public key cryptosystem based on a conic over finite fields Fp. In: Advances in Cryptology-Chinacrypt’98 (in
Chinese). Beijing: Science Press, 1998. 45―49
32 Cao Z F. Conic analog of RSA cryptosystem and some improved RSA cryptosystems. J Heilongjiang Univ (in Chinese),
1999, 16(4): 15―18
33 Cao Z F, Zhang Biao. MC public key cryptosystem based on Chinese remainder theorem. In: Advances in Cryptol-
ogy-CHINACRYPT'2000 (in Chinese). Beijing: Science Press, 2000. 29―33
34 Zhou Y, Cao Z F, Chai Z C. Construct secure proxy cryptosystem. CISC 2005, Lecture Notes in Computer Science, Vol.
3822. Berlin: Springer-Verlag, 2005. 150―161
35 Cao Z F. Public Key Cryptosystem (in Chinese). Harbin: Heilongjiang Education Press, 1993
36 Schnorr C P. Efficient identification and signature for smart cards. J Cryptology, 1991, 4(3): 161―174
37 NIST. Digital Signature Standard (DSS), Federal Information Processing Standards Publication, 186
38 Mambo M, Usuda K, Okamoto E. Proxy signatures: delegation of the power to sign messages. IEICE Trans Fundam, 1996,
E79-A(9): 1338―1354
39 Shao J, Cao Z F, Lu R X. Improvement of Yang et al.’s threshold proxy signature scheme. J Systems Software, 2007, 80:
172―177
40 Chaum D. Blind signatures for untraceable payments. In: Crypto’82. New York: Plenum Press, 1993. 199―203
41 Cao Z F, Zhu H J, Lu R X. Provably secure robust threshold partial blind signature. Sci China Ser F-Inf Sci, 2006, 49(5):
604―615
42 Liang X H, Cao Z F, Chai Z C, et al. ID-based threshold blind signature scheme from bilinear pair. In: ChinaCrypto’2006,
2006. 244―252
43 Boneh D, Gentry C, Lynn B, et al. Aggregate and verifiably encrypted signatures from bilinear maps. In: Advances in
Cryptography – Eurocrypt 2003, LNCS 2656. Berlin: Springer-Verlag, 2003. 416―432
44 Chaum D, van Antwerpen H. Undeniable signatures. In: CRYPTO’89, LNCS 435. Berlin: Springer-Verlag, 1989.
212―216
45 Lu R X, Cao Z F, Zhou Y. Threshold undeniable signature scheme based on conic. Appl Math Comput, 2005, 162(1):
165―177
46 Bellare M, Miner S. A forward-secure digital signature scheme. In: CRYPTO’99, LNCS 1666. Berlin: Springer-Verlag,
1999. 431―448
47 Chai Z C, Cao Z F. Factoring-based proxy signature schemes with forward-security. In: Zhang J, et al. eds. CIS’2004,
Lecture Notes in Computer Science, Vol. 3314. Berlin, Heidelberg: Springer-Verlag, 2004. 1034―1040
48 Dodis Y, Katz J, Xu S, et al. Strong key-insulated signature schemes. In: Public Key Cryptography - PKC 2003, LNCS 2567.
Berlin: Springer-Verlag, 2003. 130―144
49 Shamir A, Tauman Y. Improved online/offline signature schemes. In: Proceedings of Advances in Cryptology: Crypto’01,
LNCS 2139. Berlin: Springer-Verlag, 2001. 355―367
50 Desmedt Y. Society and group oriented cryptography: a new concept. In: Crypto’87, LNCS 293. Berlin: Springer-Verlag,
1988. 120―127
51 Wang L C, Cao Z F, Li X X et al. Simulatability and security of certificateless threshold signatures. Inform Sci, 2007,
177(6): 1382―1394
52 Shao J, Cao Z F. A traceable threshold signature scheme with multiple signing policies. Comput Secur, 2006, 25(3):
201―206
53 Rivest R, Shamir A, Tauman Y. How to leak a secret. In: ASIACRYPT 2001, LNCS 2248. Berlin: Springer-Verlag, 2001.
552―565
54 Lu R X, Cao Z F, Dong X L. Pairing-based proxy ring signature scheme with proxy signer privacy protection. In: China-
Crypto’2006, 2006. 1―10
55 Jakobsson M, Sako K, Impagliazzo R. Designated verifier proofs and their applications. In: EUROCRYPT'96, LNCS 1070.
Berlin: Springer-Verlag, 1996. 143―154
56 Lu R X, Cao Z F, Dong X L, et al. Designated verifier proxy signature scheme from bilinear pairings. In: The International
Multi-Symposiums in Computer and Computational Sciences (IMSCCS 06), June 20―24. 2006. 40―47
57 Chaum D. Designated confirmer signatures. In: Eurocrypt’94, LNCS 950. Berlin: Springer-Verlag, 1995. 86―91
58 Wang G. Bibliography on signatures. Available at: http://icsd.i2r.a-star.edu.sg/staff/guilin/bible.htm.
59 ITU-T, Rec. X.509 (revised) the Directory – Authentication Framework, 1993, International Telecommunication Union,
Geneva, Switzerland
60 Shamir A. Identity-based cryptosystems and signature schemes. In: Advances in Cryptography – Crypto’84, LNCS 196.
Berlin: Springer-Verlag, 1984. 47―53
61 Boneh D, Franklin M. Identity-based encryption from the Weil pairing. SIAM J Comput, 2003, 32(3): 586―615
62 Barreto P S L M. The Pairing-Based Crypto Lounge, http://paginas.terra.com.br/informatica/paulobarreto/ pblounge.html
63 Lu R X, Cao Z F, Dong X L. Efficient ID-based one-time proxy signature and its application in E-cheque. In: 5th Interna-
tional Conference on Cryptology and Network Security - CANS 2006, Lecture Notes in Computer Science, Vol. 4301. 2006,
153―167
64 Duan S S, Cao Z F. Efficient and provably secure multi-receiver identity-based signcryption. ACISP 2006, Lecture Notes in
Computer Science, Vol. 4058. 2006, 195―206
65 Baek J, Zheng Y. Identity-based threshold decryption. In: Practice and Theory in Public Key Cryptography-PKC'2004,
Singapore (SG), March 2004, LNCS 2947. Berlin: Springer-Verlag, 2004. 262―276
66 Al-Riyami S S, Paterson K G. Certificateless public key cryptography. In: Advances in Cryptology – Asiacrypt’2003,
LNCS 2894. Berlin: Springer-Verlag, 2003. 452―473
67 Nan X H. Identity Authentication Based on CPK (in Chinese). Beijing: National Defense Industry Press, 2006
68 Goldwasser S, Micali S. Probabilistic encryption. J Comput Syst Sci, 1984, 28(2): 270―299
69 Dolev D, Dwork C, Naor M. Non-malleable cryptography. SIAM J Comput, 2000, 30(2): 391―437
70 Goldwasser S, Micali S, Rivest R. A digital signature scheme secure against adaptive chosen-message attacks. SIAM J
Comput, 1988, 17(2): 281―308
71 An J, Dodis Y, Rabin T. On the security of joint signature and encryption. In: Advances in Cryptology - EUROCRYPT'02,
LNCS 2332. Berlin: Springer-Verlag, 2002. 83―107
72 Pointcheval D. Provable Security for Public Key Schemes, http://www.di.ens.fr/~pointche/pub.php?reference=Po04
73 Bellare M, Rogaway P. Entity authentication and key distribution. In: Advances in Cryptology - Crypto 1993, LNCS 773.
Berlin: Springer-Verlag, 1993, 110―125
74 Bellare M, Rogaway P. Provably secure session key distribution: The three party case. In: 27th ACM Symposium on the
Theory of Computing. New York: ACM Press, 1995. 57―66
75 Bellare M, Pointcheval D, Rogaway P. Authenticated key exchange secure against dictionary attacks. In: Advances in
Cryptology – Eurocrypt 2000, LNCS 1807. Berlin: Springer-Verlag, 2000. 139―155
76 Canetti R, Krawczyk H. Analysis of key-exchange protocols and their use for building secure channels. In: Advances in
Cryptology – Eurocrypt 2001 LNCS 2045. Berlin: Springer-Verlag, 2001. 453―474
77 Choo K K R, Boyd C, Hitchcock Y. Examining indistinguishability-based proof models for key establishment protocols. In:
Advances in Cryptology - Asiacrypt 2005, LNCS 3788. Berlin: Springer-Verlag, 2005. 585―604
78 Choo K K R. Provably-Secure Mutual Authentication and Key Establishment Protocols Lounge, http://sky.fit.qut.edu.au
/~choo/lounge.html
79 Lu R X, Cao Z F. Simple three-party key exchange protocol. Comput Secur, 2007, 26(1): 94―97
80 Lu R X, Cao Z F. Efficient remote user authentication scheme using smart card. Comput Networks, 2005, 49(4): 535―540
81 Shao J, Cao Z F, Lu R X. An improved deniable authentication protocol. Networks, 2006, 48(4): 179―181
82 Bellare M, Rogaway P. Random oracles are practical: a paradigm for designing efficient protocols. In: Proc. of the 1st ACM
Conference on Computer and Communication Security. New York: ACM Press, 1993. 62―73
83 Fiat A, Shamir A. How to prove yourself: practical solutions to identification and signature problems. In: Advances in
Cryptology – Crypto’86. Berlin: Springer-Verlag, 1986. 186―194
84 Canetti R, Goldreich O, Halevi S. The random oracle methodology, revisited. In: Proceedings of the 30th Annual Sympo-
sium on the Theory of Computing (STOC’98). New York: ACM Press, 1998. 209―218
85 Cramer R, Shoup V. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In:
Advance in Cryptology-Crypto’98, LNCS 1462. Berlin: Springer-Verlag, 1998. 13―25
86 Waters B. Efficient identity-based encryption without random oracles. In: Advances in Cryptology – CRYPTO 2004, LNCS
3152. Berlin: Springer-Verlag, 2004. 443―459
87 Boneh D, Boyen X. Short signatures without random oracles. In: Advance in Cryptology- Eurocrypt’04, LNCS 3027, 2004.
56―73
88 Zeng G H. Quantum identity authentication without loss of quantum channel. In: CHINACRYPT’2004. Beijing: Sci-
ence Press, 2004
89 Xiao G Z, Lu M X. DNA computing and DNA code. J Engin Math, 2006, 23(1): 1―6
90 Department of Defense Computer Security Center. DoD 5200.28-STD. Department of Defense Trusted Computer System
Evaluation Criteria [S]. USA: DOD, December 1985
91 National Computer Security Center. NCSC-TG-021. Trusted Database Management System Interpretation [S]. USA: DOD,
April 1991
92 National Computer Security Center. NCSC-TG-005. Trusted Network Interpretation of the Trusted Computer System
Evaluation Criteria [S]. USA: DOD, July 1987
93 Trusted Computing Group. TCG Specification Architecture Overview [EB/OL]. [2005-03-01].
https://www.trustedcomputinggroup.org/
94 The Open Trusted Computing (OpenTC) consortium. General activities of OpenTC [EB/OL]. [2006-3-1].
http://www.opentc.net/activities/
95 Microsoft. Trusted Platform Module Services in Windows Longhorn [EB/OL]. [2005-4-25].
http://www.microsoft.com/resources/ngscb/
96 Intel Corporation. LaGrande Technology Architectural Overview [EB/OL]. [2004-5-1]. http://www.intel.com/technology/
security/
97 Avizienis A, Laprie J -C, Randell B, et al. Basic concepts and taxonomy of dependable and secure computing. IEEE Trans
Depend Secure Comput, 2004, 1(1): 11―33
98 Zhang H G, Luo J, Jin G, et al. Development of trusted computing research. J Wuhan Univ (Nat Sci Ed) (in Chinese), 2006,
52(5): 513―518
99 Zhang H G, Wu G Q, Qin Z P, et al. A new type of secure microcomputer. In: Proc. of First Chinese Conference on Trusted
Computing and Information Security. J Wuhan Univ (Nat Sci Ed) (in Chinese), 2004, 50(s1): 1―6
100 Zhan H G, Liu Y Z, Yu F J, et al. A new type of embedded secure module. In: Proc. of First Chinese Conference on Trusted
Computing and Information Security. J Wuhan Univ (Nat Sci Ed) (in Chinese), 2004, 50(s1): 7―11
101 Yan F, Zhang H G, Sun Q, et al. An improved grid security infrastructure by trusted computing. Wuhan Univ J Nat Sci, 2006,
11(6):1805―1808
102 Patel J, Luke T W T, Jennings N R, et al. A probabilistic trust model for handling inaccurate reputation sources. In: Trust
Management, Third International Conference, iTrust 2005. Paris, France, May 23-26, 2005. 193―209
103 Beth T, Borcherding M, Klein B. Valuation of trust in open network. In: Proceeding of the European Symposium on Re-
search in Security (ESORICS). Brighton: Springer-Verlag, 1994. 3―18
104 Tang W, Chen Z. Research of subjective trust management model based on the fuzzy set theory. J Software (in Chinese),
2003, 14(8): 1401―1408
105 Audun J. An algebra for assessing trust in certification chains. In: The Proceedings of NDSS’99, Network and Distributed
System Security Symposium. San Diego: The Internet Society, 1999
106 Yuan L L, Zeng G S, Wang W. Trust evaluation model based on Dempster-Shafer evidence theory. J Wuhan Univ (Nat Sci
Ed) (in Chinese), 2006, 52(5): 627―630
107 Qu Y W. Software Behavior (in Chinese). Beijing: The Electronic Industry Press, 2004
108 Lin C, Peng X H. Research on trustworthy network. Chin J Comput (in Chinese), 2005, 28(5): 751―758
109 Chen H W, Wang J, Dong W. High trusted software engineering. Acta Elect Sin, 2003, 31(12): 1933―1938
110 Feng D G. Network Security-Principle and Technology (in Chinese). Beijing: Science Press, 2003
111 Feng D G. Research State and Development Trend of Information Security Technology in and abroad (in Chinese). 2005,
Qinghua University Press, 2006. 236―256
112 Feng D G, Wang X Y. Progress and Prospect on Information Security Research in China. J Comput Sci Tech, 2006, 21(5):
740―755
113 Shim S S Y, Gong L, Rubin A D, et al. Securing the high-speed internet. IEEE Comput, 2004, 37(6): 33―35
114 Anderson J P. Computer security technology planning study. ESD-TR-73-51, Vol. II, Electronic Systems Division, Air
Force Systems Command, Bedford, MA, USA
115 Carle J, Simplot-Ryl D. Energy-efficient area monitoring for sensor networks. IEEE Comput, 2004, 37(2): 40―46
116 Enz C C, El-Hoiydi A, Decotignie J, et al. WiseNET: an ultralow-power wireless sensor network solution. IEEE Comput,
2004, 37(8): 62―70
117 Simmons G J. The prisoner’s problem and the subliminal channel. In: Advances in Cryptology: Proceedings of
CRYPTO’83. NY: Plenum Press, 1984. 51―67
118 Wang Y M, Zhang T, Huang J W, et al. Information Hiding-Theory and Technology (in Chinese). Beijing: Tsinghua
University Press, 2006
119 A guide to understanding covert channel analysis of trusted systems. National Computer Security Center. NCSC-TG-030
120 Petitcolas F A P, Anderson R J, Kuhn M. G. Information hiding―A survey. Proc. IEEE, 1999, 87(7): 1062―1078
121 Anderson R, Petitcolas F A. P. On the limits of steganography. IEEE J Select Areas Commun, 1998, 16(4): 474―481
122 Swanson M D, Kobayashi M, Tewfik A. H. Multimedia data embedding and watermarking technologies. Proc. IEEE, 1998,
86(6): 1064―1087
123 Johnson N F, Jajodia S. Steganalysis of images created using current steganography software. In: Proc. of 2nd International
Workshop on Information Hiding. LNCS 1525, 1998. 273―289
124 Bender W, Gruhl D, Morimoto N, et al. Techniques for data hiding. IBM System J, 1996, 35(3/4): 313―337
125 Cox I J, Killian J, Leighton F T, et al. Secure spread spectrum watermarking for multimedia. IEEE Trans Image Proc, 1997,
6(12): 1673―1687
126 Chen B, Wornell G W. Quantization index modulation: A class of provably good methods for digital watermarking and
information embedding. IEEE Trans Inform Theory, 2001, 47(4): 1423―1443