
April 2005 ISSN 1353-4858

Featured this month
Vulnerability assessment tools: the end of an era?
Vulnerability assessment tools have traditionally amassed checks for a monumental number of flaws, and the continuously growing number of vulnerabilities means the tools need constant updating.
As a result, the number of reported vulnerabilities appears overwhelming, and not all of these flaws are significant to security. Host-based patch management systems bring coherence to the chaos, and the clear advantages of these tools call into question the value of traditional vulnerability assessment products. Andrew Stewart describes the advantages of using patch management technologies to gather vulnerability data. He proposes a lightweight method for network vulnerability assessment which does not rely on signatures, or suffer from information overload. Turn to page 7....

Tips to defeat DDoS 2
Qualys ticks compliance box 2
Russian hackers are world class 3

Inside out security: de-perimeterisation 4

A contemporary approach to network vulnerability assessment 7

Crypto race for mathematical infinity 10

Tips to defeat DDoS
From the coal face of Bluesquare
Online gambling site Bluesquare has survived brutal distributed denial-of-service attacks, and CTO Peter Pederson presented his survival checklist at a recent London event.
Pederson held his ground by refusing to pay DDoS extortionists who took Bluesquare's website down many times last year. He worked with the National Hi-Tech Crime Unit to combat the attacks and praised the force for its support. Speaking at the E-crime congress, Pederson played a recording of the chilling voice of an extortionist who phoned the company switchboard demanding money. After experiencing traffic at 300 Megabits per second, Pederson said he finds it amusing when vendors phone him with sales pitches boasting that they can stop weaker attacks. He has seen it all before. Story continued on page 2...

Biometrics: the eye of the storm 11

Proactive security
Proactive security: vendors wire the cage but has the budgie flown... 14

Managing aspects of secure messaging between organizations 16

RFID: Misunderstood or untrustworthy 17

Network Security Manager’s preferences for the Snort IDS and GUI add-ons 19

RFID – misunderstood or untrustworthy?
The biggest concern with RFID is the ability to track the location of a person or asset. Some specialized equipment can already pick up a signal from an RFID tag over a considerable distance.
But an RFID tag number is incomprehensible to a potential attacker without access to a backend database. The problem is that an attacker may get access to such a database. Bruce Potter examines if RFID really is a sinister security nightmare. Turn to page 17...

News in brief 3

ISSN 1353-4858/05 © 2005 Elsevier Ltd. All rights reserved This journal and the individual contributions contained in it are protected under copyright by Elsevier Ltd, and the following terms and conditions apply to their use: Photocopying Single photocopies of single articles may be made for personal use as allowed by national copyright laws. Permission of the publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use.

Affected merchants must perform an annual self-assessment and quarterly network scan. "The payment card industry's security requirements (PCI, SDP, Visa CISP) apply to all merchants with an Internet facing IP, not just those doing E-commerce, so the magnitude of retailers this program affects is significant," said Avivah Litan, vice president and research director at Gartner. Qualys says it achieved compliance status by proving its ability to detect, identify and report vulnerabilities common to flawed web site architectures and configurations. These vulnerabilities, if not patched in actual merchant websites, could potentially lead to an unauthorized intrusion. "The payment card industry's security standards are converging, which will simplify the compliance process, but achieving compliance with these standards can still be very costly for both merchants and acquiring banks. The more the process can be streamlined and automated, the easier it will be for everyone," said Litan.

Editorial office: Elsevier Advanced Technology PO Box 150 Kidlington, Oxford OX5 1AS, United Kingdom Tel: +44 (0)1865 843645 Fax: +44 (0)1865 853971 E-mail: Website: Editor: Sarah Hilley Supporting Editor: Ian Grant Senior Editor: Sarah Gordon International Editorial Advisory Board: Dario Forte, Edward Amoroso, AT&T Bell Laboratories; Fred Cohen, Fred Cohen & Associates; Jon David, The Fortress; Bill Hancock, Exodus Communications; Ken Lindup, Consultant at Cylink; Dennis Longley, Queensland University of Technology; Tim Myers, Novell; Tom Mulhall; Padget Petterson, Martin Marietta; Eugene Schultz, California University, Berkeley Lab; Eugene Spafford, Purdue University; Winn Schwartau, Inter.Pact Production/Design Controller: Esther Ibbotson
Permissions may be sought directly from Elsevier Global Rights Department, PO Box 800, Oxford OX5 1DX, UK; phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: permissions@elsevier. com. You may also contact Global Rights directly through Elsevier’s home page (http://, selecting first ‘Support & contact’, then ‘Copyright & permission’. In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) (0) 20 7631 5555; fax: (+44) (0) 20 7631 5500. Other countries may have a local reprographic rights agency for payments. Derivative Works Subscribers may reproduce tables of contents or prepare lists of articles including abstracts for internal circulation within their institutions. Permission of the Publisher is required for resale or distribution outside the institution. Permission of the Publisher is required for all other derivative works, including compilations and translations. Electronic Storage or Usage Permission of the Publisher is required to store or use electronically any material contained in this journal, including any article or part of an article. Except as outlined above, no part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Science Global Rights Department, at the mail, fax and e-mail addresses noted above. 
Notice No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made. Although all advertising material is expected to conform to ethical (medical) standards, inclusion in this publication does not constitute a guarantee or endorsement of the quality or value of such product or of the claims made of it by its manufacturer.
02158 Printed by Mayfield Press (Oxford) Limited

Qualys ticks compliance box


Brian McKenna

Vulnerability management vendor Qualys has added new policy compliance features to its QualysGuard product. This allows security managers to audit and enforce internal and external policies on a 'software as a service' model, the company says.

In a related development, the company is trumpeting MasterCard endorsement for the new feature set. Andreas Wuchner-Bruehl, head of global IT security at Novartis, commented in a statement: "Regulations such as the Sarbanes-Oxley Act and Basel II [mean that] much of the burden now falls on IT professionals to assure the privacy and accuracy of company data. In this environment, security managers must tie their vulnerability management and security auditing practices to broader corporate risk and compliance initiatives." Philippe Courtot, chief executive officer of Qualys, said: "Security is moving more and more to policy compliance. For example: are your digital certificates up to date? We offer quick deployability since we are not selling enterprise software, but providing it as a service. Customers don't have software to deploy, and Qualys scans on a continuous basis. In 2004 Sarbox was all about keeping C-level executives out of jail, but we are moving beyond that now. The opportunity is to streamline the best practices generated out of Sarbox consulting as it relates to the security in your network." The latest version of QualysGuard has been endorsed by MasterCard: the vulnerability management vendor has completed the MasterCard Site Data Protection (SDP) compliance testing process. From 30 June this year, MasterCard will require online merchants processing over $125,000 in monthly MasterCard gross volume to perform an annual self-assessment and quarterly network scan.

Tips to defeat DDoS
(continued from page 1) The DDoS Forum was formed in response to the extortionist threat to online gambling sites. Pederson is adamant about not paying up. Peter Pederson’s survival checklist against DDoS attacks:

• Perform ingress and egress filtering.
• Consolidate logs.
• Perform application level checks.
• Implement IDS.
• Implement IPS.
• Check if 3rd party connections are open.
• Capture current network traffic.
• Monitor current system states.
• Maintain current patches.
• Put procedures and policies in place to handle DDoS attacks.
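Several of the items on the checklist above (application-level checks, monitoring current traffic) come down to deciding, per client, whether a request rate is plausible. As a rough illustration only, and not part of Pederson's checklist, here is a minimal token-bucket rate limiter of the kind an application-level DDoS defence might use; the function names, rates and addresses are hypothetical:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow at most `rate` requests
    per second per client, with bursts of up to `capacity` requests."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or challenge the request

# One bucket per source address, so a flood from one host exhausts
# its own bucket without starving legitimate clients.
buckets = {}

def check_request(src_ip, rate=5, capacity=10):
    bucket = buckets.setdefault(src_ip, TokenBucket(rate, capacity))
    return bucket.allow()
```

In practice such limiting sits alongside, not instead of, the network-level measures in the checklist: a 300 Mbit/s flood of the kind Bluesquare saw must be filtered upstream before it ever reaches an application.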


Network Security, April 2005

NEWS

Russian hackers are world class

Brian McKenna

Russian hackers are "the best in the world", Lt. General Boris Miroshnikov told the eCrimes Congress in London on 5 April. "I will tell them of your applause", he told the clapping audience at the start of a speech reporting on cyber crime developments in the region. Countries that came late to the internet, like Russia, exhibit its problems more dramatically, he said.

Miroshnikov is head of Department K, established within Russian law enforcement to deal with computer crime in 1998. From 2001-3, computer crime in Russia doubled year on year. "Only in 2004 did we hold back the growth", he said. He reported that when Department K was in its infancy "80% of computer crime was out of sight. We are now getting better because the victims know who to come to and we have had no leaks of victim identity".

"It used to be naughty boys who committed these crimes", he said, "but now they have grown up". His department has worked closely with the UK's National Hi-Tech Crime Unit. It now needs the co-operation of telecoms companies, ISPs, the legal profession and law enforcement to tackle the problem. "We must have comparable laws and sanctions. We need to agree what is a computer crime". He concluded that there is a strong need in Russia for state standards that will keep out the "charlatans of computer security".

Alan Jebson, group COO at HSBC Holdings, echoed the Russian's rueful 'boast' at the same event: "Some of these Russian hackers have day jobs designing highly secure encryption technologies. We are up against the best", he said.

In brief

Microsoft talks up security: After 25 years of complaints about the poor security of its products, Microsoft has published a 19-page booklet, The Trustworthy Computing Security Development Lifecycle, that outlines the "cradle to grave" procedures for a mandatory "Security Development Lifecycle" for all its Internet-facing products. The new approach comes from Bill Gates and Steve Ballmer, Microsoft's chairman and chief executive. The new process "significantly reduces" the number and lethality of security vulnerabilities, the firm says. So far software produced using the SDL framework includes Windows Server 2003, SQL Server 2000 Service Pack 3 and Exchange 2000 Server Service Pack 3.

Windows Server gets extra protection: Windows Server 2003's new Service Pack 1 allows Windows servers to turn on their firewalls as soon as they're deployed, and to block inbound Internet traffic until Windows downloads Microsoft's latest security patches. A new security configuration wizard detects a server's role as a file server, Web server, or database host, and then disables the software and ports not associated with that role. It also makes DCOM, Microsoft's technology for distributed objects, less prone to attack.

Warp speed, Mr Plod: The British government has set up six Warps (warning advice and reporting points) to allow businesses to share confidential information about risks, security breaches and successful countermeasures, and to receive tailored security alerts. The government also promised a Warp to show home computer users how to improve PC security and lower the risk of them becoming staging posts for hackers attacking businesses, says the National Infrastructure Security Co-ordination Centre (NISCC), which is co-ordinating the scheme. The US and Holland are considering creating similar programmes.

IM creates instant havoc: Security threats from Instant Messages have increased 250% this year, according to a report from IMlogic Threat Center, which found reported incidents of new IM threats grew 271% so far. The research tracks viruses, worms, spam and phishing attacks sent over public IM networks such as AOL Instant Messenger, MSN Messenger, Windows Messenger, and Yahoo Messenger. More than half the incidents happened at work via free IM services such as AOL Instant Messenger.

Don't trust hardware: Hardware devices are as insecure as any IT system, Joe Grand, CEO of Grand Idea, told delegates at the Amsterdam Black Hat conference. Network appliances, mobile devices, RFID tokens and access control devices are all potentially at risk, he said. Attacks include eavesdropping, disrupting a hardware security product, using undocumented features and invasive tampering. Security through obscurity is still widely practiced in hardware design, but hiding something does not solve the problem. Wireless Access Points based on Vlinux, such as the Dell TrueMobile 1184, can also be hacked, he confirmed, and SSL cryptographic accelerators are also potentially hackable, as demonstrated by a recently documented attack against Intel's NetStructure 7110 devices. Researchers recently showed how to exploit cryptographic weaknesses to attack RFID tags used in vehicle immobilisers and the Mobil SpeedPass payment system. Physical characteristics are often easily stolen or reproduced, and the storage of biometric characteristics on back-end systems also sets up avenues of attack, Blackhat delegates were told.

VoIP vulnerabilities addressed: Security worries are holding up adoption of VoIP. Even so, research from In-Stat/MDR suggests penetration will reach 34% among mid-sized businesses, and 43% in large enterprises. To increase adoption rates, the new Voice over IP Security Alliance (VOIPSA) has created a committee to define security standards for Internet telephony networks. Other topics include security technology components, architecture and network design, network management, end-point access and authentication, and infrastructure weaknesses. In large networks, the bandwidth and time associated with routing traffic and spam creates a latency problem for VoIP traffic through the firewall, it says.

Israel jails colonel for losing PC: The Israeli army jailed the commander of an elite Israel Defense Forces unit for two weeks for losing a laptop computer containing classified military information. The laptop should have been locked away, but was apparently stolen while he was on a field trip with his soldiers.

DE-PERIMETERISATION

Inside out security: de-perimeterisation

Ray Stanton, Director, global head of BT security practice

Gone are the days of fortress security.

If you're into IT security, it's pretty hard to avoid discussions about de-perimeterisation: the loosening of controls at boundary level in favour of pervasive security throughout the network. The idea's not new, but it's certainly a hot topic right now. Everybody seems to be talking about it – and while there are senior IT managers and security experts who are fully and publicly embracing the idea, which is being led by some formidable CSOs in major blue-chips who have come together to create the Jericho Forum to promote it, there are also those who are feeling more than a little apprehensive about this talk of breaking down the barriers at the edge of the network.

Manning the battlements

The fact that de-perimeterisation is causing some worried muttering within the security community is not that surprising. For years we have been working towards attaining the goal of a network boundary that is 100 percent secure, and softer boundaries appear to be contrary to everything that we are working for. Security managers have tended to adopt a siege mentality. After all, it's just not safe out there – and we've all seen the statistics to prove it.

But we need to stop thinking of our network as a medieval citadel under attack. Let's face it, those fortresses, with their thick, high stone walls, were excellent at deflecting an enemy for a fixed period of time. But once that enemy got inside the walls, the fight was over within a matter of hours. The same is true of most IT networks. Once the hard outer shell has been penetrated, it is fairly straightforward to run rampage through IT systems and cause untold amounts of havoc. More than that, barricading yourself behind high walls doesn't let the good guys in, doesn't stop internal attacks from rebellious subjects, and isn't exactly flexible. But flexibility is what the modern business is all about.

And of course, unlike fixed stone walls, the boundaries of the modern business are shifting all the time. Mobile and flexible working have become a normal part of the corporate environment, and new devices, new locations and additional business partners, systems and applications all add to the ever-expanding perimeter, making it increasingly difficult to define, never mind secure.

100% security of the network boundary has always been an almost impossible task. As Gene Spafford, of Computer Operations, Audit, and Security Technology at Purdue University, put it: "The only system which is truly secure is one which is switched off and unplugged, locked in a titanium lined safe, buried in a concrete bunker, and is surrounded by nerve gas and very highly paid armed guards. Even then, I wouldn't stake my life on it…" Nor would you be able to use it.

Seizing opportunities

This is not the time for security experts to revert to their negative, jackbooted stereotype. The 'trespassers will be prosecuted' signs – along with the negative expressions and shaking heads – need to be abandoned. Instead we should see these new developments as an opportunity. It's time to stop looking at security from the outside, and to focus instead on looking at security from the inside out.

De-perimeterisation can, therefore, be seen as a chance to stop going after the impossible, and to focus effort on achieving acceptable levels of risk. No more tilting at windmills. No more running to stand still. This is a real opportunity to align security with overall organisational strategy, and to prove the value that it adds to the organisation. Although we all like to think of ourselves as knights in shining armour, rescuing our organizations from marauding outsiders, it's time to update this self-image. The fact is we need to be modern, twenty-first century intelligence agents, not twelfth century warriors.

Harnessing the drivers

De-perimeterisation is driven by several business needs. Firstly, we need to understand where the call for opening up the networks is coming from. Firms need to expand. They want their salespeople to remain connected through their mobile devices and remote access. They want to collaborate easily with partners and integrate business processes with customers and suppliers. Added to that, of course, is the desire for the 'Martini principle' – anytime, anyplace, anywhere computing.

DE-PERIMETERISATION

This is happening by default in many organisations. Significant numbers of workers are on the road or in remote locations at any given time. Companies tend to make a great deal of use of outsourcers and contractors, and now wish to take control and effectively manage the multitude of vendors, devices and documents that are springing up throughout the company.

If we look at the oil and gas industries, which have been early adopters of de-perimeterisation – or 'radical externalisation' as it is known in BP – we can see clear examples of all of these drivers. As a result they have long recognised the need to let partners have access to one part of the system, while keeping the doors firmly barred on others. In fact around 10% of BP's staff now access the company's business applications through the public Internet. This is the first step in a move towards simplification of the network and enabling access for up to 90,000 of the oil company's third party businesses. And not just to those in hydrocarbons.

The second driver is cost. Accessing applications through a broadband enabled device, through the 'always on' connection, rather than through a secure VPN, reduces the costs associated with connectivity and maintenance of leased lines, private exchanges and even VPNs. At the same time it increases availability.

Finally, in the digital networked economy, collaborative working models with partners, outsourcers or suppliers require secure access to data in real time – which cannot be achieved with a tough impenetrable network boundary. With a more open network, there is a need for approved third parties to gain access, and federated security, using XML or Web services, granulated access and rotating users all demand close control, and so flexibility will be required on a more regular basis than ever before.

This picture of a flexible, cost effective and adaptable business is, not surprisingly, very attractive. But efforts to achieve it can be hampered by current security thinking. As experts, we need to reverse this. Our responsibility is to make sure that everyone is aware of the risks and can make informed decisions. This shift in thinking offers us a real possibility that security, indeed IT as a whole, can be brought in from the cold, get a much-needed voice at board level, and be seen as an enabler once more.

Back to basics

But before we tear down the firewalls and abandon ourselves to every virus infestation out there, let's take a look at what 'inside out' security really involves. We also need to make sure that we still get the basics right. Viruses are not going to go away: there will always be new variants and new vulnerabilities. The 2004 edition of the DTI information breaches survey shows that a massive 74% of all companies suffered a security incident in the previous year, and 63% had a serious incident. Viruses still counted for 70% of these, which seems to indicate that despite their prevalence, there is still a lack of maturity in incident management procedures.

It's not about getting rid of boundaries altogether; it's about putting adequate controls in place. Rather than taking a 'one size fits all' approach, inside out security requires us to look at protecting our information assets from the perspective of what needs to be secured and at what level: at device, data or even application level. The decision should be based upon another fundamental tenet of good security practice: thorough assessment of risk. That customer database from three years ago may be of limited value now, but if the contents are leaked, the consequences could be disastrous. This takes us back to some basic principles of security management: deciding what bits of your systems and accompanying business processes are key and focusing on their security, rather than trying to secure an undefined group of peripheral appliances.

Firewall vendors don't need to panic just yet – there is still going to be a need for their products in a de-perimeterised system. The difference is these will no longer sit at the very edge of the network, but will be strategically placed inside it. Typically the hard controls around the DMZ (demilitarised zone) will move to sit between the red and amber areas, rather than the amber and green. So instead of a single hard shell round a soft centre, an organisation has a more granular approach, with internal partitions and boundaries protecting core functions and processes – hence the inside out approach. Rather than abandoning controls, it's a question of re-aligning and refocusing them.

Some of the companies that are breaking down the barriers as members of the Jericho Forum:
• Boeing
• British Broadcasting Corporation
• Deutsche Bank
• Lockheed Martin
• Pfizer
• Reuters
• Unilever

Although policy control and management has always been a fundamental factor in any security measures, it will take a far more central role than it has enjoyed so far. Updates to policy that reflect both changes within the organisation and to its immediate environment will be required on a more regular basis than ever before.

Identity management

While firewalls may sort the 'good' HTTP traffic from the bad, they cannot discern the difference between authorized and unauthorized traffic.

That means that user authentication and identity management is going to play an increasingly important role – with two factor authentication being the bare minimum. Identity management will ensure that no unauthorized personnel have access to any part of the system. Access policies will become more precise, based on a 'least privilege' model, to ensure that only the parts of the system required for the job will be available. Like all policies this will need to be monitored and updated to match employees moving through the organisation, and to keep up with changing relationships with partners – firms increasingly undertake joint ventures with other firms who are partners in one region but competitors in another. You also need to identify what and who you trust from both internal and external sources: which of your own people should have access to what systems and processes, and where you are going to allow partners, customers and the public to go.

You can never be too thin

It almost goes without saying that identity management is much easier when the identities belong to an organization's own employees. Enforcing policy at a partner organization is that much harder: given that it is hard enough to ensure that your own users have configured their devices properly, it seems unlikely that any of us will be able to guarantee that partners have done so. But this is crucial, since ill-configured laptops and PDAs represent a significant security risk at both the outer edge and in the core of the network.

It seems that inside out security will act as an impetus towards a more thin-client based architecture. Centralised systems are easier to secure than documents, applications, data and network connections spread over different gadgets and different locations. It also eliminates the problems associated with accessing the network with inappropriate devices. In one company that has already adopted de-perimeterisation, employees are responsible for their own laptops, including the latest patches and anti-virus protection. But the laptops are thin clients, which means that IT staff can focus on the security of the central server and the information on it, rather than trying to secure an undefined group of peripheral appliances. Whether there will be a mass migration to thin client models – or even on-demand, utility computing, which seems to be the next logical step – is impossible to predict.

A flexible working model for information security management systems that can match the flexibility of the business as a whole is also going to be vital, and there is still a lot of work to be done on standards and interoperability of systems. With the Data Protection Act, Sarbanes-Oxley, human rights legislation, European accounting standards and a dozen other rules and regulations to navigate, organisations will still have to prove that confidential data on personnel or financial management has not been subject to unauthorized access. Providing accurate audit trails of who has accessed, or attempted to access, critical data will remain a basic legal requirement, and will be a major factor in maintaining compliance.

The debates about de-perimeterisation will doubtless continue. What we do know is that the move to inside out security, radical externalisation, de-perimeterisation or whatever other names it acquires, will depend on architecting the environment correctly – and maintaining the right levels of control. But what we can be pretty sure of is that security experts should prepare themselves for a fundamental change in approach.

More information: De-perimeterisation - the end of fortress mentality: http://www.opengroup.org/jericho

About the author
Ray Stanton is Global Head of Security Services at BT. He has over six years experience in Information Services and 21 years in IT Security. Ray has worked for both government and commercial organizations in a variety of security related roles including project management, security auditing, policy design, and the development of security management strategies.
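The 'least privilege' access policies discussed above can be sketched in a few lines: deny by default, and grant only what a role explicitly needs. The role names and permission strings below are purely hypothetical illustrations, not BT's or any vendor's schema; real deployments would draw these tables from a directory or identity management system:

```python
# Hypothetical role-to-permission tables for illustration only.
ROLE_PERMISSIONS = {
    "payroll-clerk": {"hr-db:read"},
    "hr-manager":    {"hr-db:read", "hr-db:write"},
    "partner":       {"order-api:read"},  # external party: narrowest grant
}

def is_allowed(roles, permission):
    """Least privilege: deny unless some role explicitly grants the permission.
    Unknown roles grant nothing, so the default is always 'deny'."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in roles)
```

For example, `is_allowed(["partner"], "hr-db:read")` is denied: a partner identity never inherits internal permissions merely by being inside the network, which is the essence of the inside out approach.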

A contemporary approach to network vulnerability assessment

Andrew Stewart

Modern network vulnerability assessment tools suffer from an "information overload" problem. In this paper I describe the advantages in using patch management technologies to gather vulnerability data. I also propose a lightweight method for network vulnerability assessment, which does not rely on signatures, and which does not suffer from information overload issues.

The effect of historical market forces

The roots of this problem lie in the fact that the competitive and commercial drivers that shaped the early market for network vulnerability assessment products continue to have influence today. In the formative years of the commercial network vulnerability assessment market, the number of vulnerability "checks" that vulnerability assessment tools employed was seen as a key metric by which competing products could be judged. The thinking was that the more checks that were employed by a tool, the more comprehensive it would be, and thus the more value its use would provide. A common practice was for vendors to create checks for any aspect of a host that can be remotely identified, and this was often done regardless of its utility for security. As an example, it is not unusual for network vulnerability scanning tools to determine the degree of predictability in the IP identification field within network traffic that a target host generates. While this observation may be useful in certain circumstances, the pragmatic view must be that there are far more influential factors that can influence a host's level of vulnerability. Nonetheless, network vulnerability assessment products typically incorporate hundreds of such checks, many with similarly questionable value.

Vendors were also evaluated on how quickly they could respond to newly publicised security vulnerabilities. The quicker a vendor could update their product to incorporate the checks for new vulnerabilities, the better they were perceived to be. In some respects this is similar to the situation today where software vendors are judged by the security community on their timeliness to release patches for security problems that are identified in their products. Some vendors even established research and development teams for the purpose of finding new vulnerabilities. (An R&D team was also an opportunity for vendors to position and publicize themselves within the marketplace.) Vendors were said to have sometimes sought competitive advantage through duplicitous means, such as by slanting their internal taxonomy of vulnerability checks in order to make it appear that they implemented more checks than in reality.

Information overload

The result of these competitive drivers has been that when a network vulnerability scanner is run against any network of reasonable size, the printout of the report is likely to resemble the thickness of a telephone directory. The market's desire for a comprehensive set of vulnerability checks to be delivered in a timely fashion spurred the manufacturers of network vulnerability assessment tools to incorporate ever-larger amounts of checks into their products, and to do so with increasing rapidity. An aggressive approach to information gathering coupled with an ever-increasing set of vulnerabilities results in an enormous amount of information that can be reported. Such a large amount of data is not only intimidating, but it severely limits the ability to make key insights about the security of the network. The question of "where to begin?" is a difficult one to answer when you are told that your network has 10,000 "vulnerabilities".

Vendors of network vulnerability assessment products have tried to address this information overload problem in several ways. One approach has been to attempt to correlate the output of other systems (such as intrusion detection systems) together with vulnerability data to allow results to be prioritised. Another approach has been to try and "fuse" data together on the basis of connectedness, in order to increase the quality of data at a higher layer. These approaches have spawned new categories of security product, such as "Enterprise Security Management" (ESM), "Security Information Management" (SIM), and "Vulnerability Management". But rather than add layers of abstraction (and products to buy), the solution would logically lie in not gathering so much data in the first place. This has now become a viable strategy, because of the capabilities provided by modern patch management technologies. The historical goals of network vulnerability assessment no longer reflect the needs of modern businesses: a shift in requirements has occurred, due to the now widespread use of patch management technologies.

The rise of patch management

The widely felt impact of Internet worms has opened the eyes of businesses to the importance of patching systems. Host-based patch management products such as Microsoft's SMS (Systems Management Server) and SUS (Software Update Services) are now in wide deployment, as are other commercial and freeware tools on a variety of platforms. See, for example, PM (2005) and Chan (2004). In many respects, this increased focus on patch management has diminished the traditional role of network vulnerability assessment tools.

A patch management solution can determine the presence or absence of patches on hosts, and can also identify the current version number of operating systems and installed applications. In most cases, the patch for a known vulnerability already exists, or the vendor affected is in the process of creating the patch. (In that latter scenario, the version numbers of the particular operating systems or applications that are known to be vulnerable are usually known, even if the patch itself is not yet available.) A patch management solution can therefore be used to determine vulnerability status. This is possible because of the advantages inherent in a host-based model. It is a relatively straightforward task for a software agent running on a host to determine the host's patch level, whereas a network vulnerability scanner has to attempt to remotely infer that same information, a task made more difficult if the vulnerability scanner has no credentials for the target host. The depth of reporting that modern patch management tools provide in this area has in many respects already surpassed the capabilities of conventional network vulnerability assessment tools.

The value proposition of network vulnerability assessment tools was, in part, that they did not require a roll-out of host-based agents. Products which require that an agent be installed on hosts have usually been seen as time-consuming to deploy and complex to manage. With the now widespread use of agent-based patch management technologies, however, this barrier has been overcome. There are still disadvantages to employing a host-based model: host-based patch management tools only have visibility into the hosts onto which an agent has been installed.

Another advantage to using a host-based model for gathering patch data is that with an ever-increasing set of vulnerability checks being built into network vulnerability assessment tools, the probability increases that a check might adversely affect a network service on a box. The result might be that the scan causes services to crash, restart, or otherwise misbehave, just as one example. The days when port scanning would crash the simplistic network stack within printers and other such devices are probably behind us, but a business might rightly question the use of increasingly complex vulnerability checks to interrogate production systems. These are not activities that businesses typically wish to perform against every device within their network environment, or on a regular basis. (Scanning a DHCP-allocated network range also provides little value if the DHCP lease time is short.) With an ever-increasing number of checks, the impact on network bandwidth when a network vulnerability assessment tool is run also climbs. (Rate-limited and distributed scanning can help here, but these involve additional complexity.)

Given the advantages in using a host-based model to gather patch status information, do network vulnerability assessment tools still have a role to play? In discovering new vulnerabilities, or for discovering vulnerabilities in bespoke applications (such as Web applications), network vulnerability assessment tools clearly add value. But this is somewhat of a niche market.

A modern approach

It is a widely held belief amongst security practitioners that the majority of security break-ins take advantage of known vulnerabilities. While there is no concrete evidence for this claim, on an intuitive basis it is probably correct. Patch management solutions can be used to address this risk by identifying the delta between the set of patches for known vulnerabilities and the current patch status of hosts within the environment. If that delta is already being directly determined on each individual host, then there is less need to use a network vulnerability assessment tool to attempt to collect that same information (and to do so across the network and en masse).

For network-wide vulnerability assessment, the question that businesses need to ask is: what data is it still valuable to gather across the network? There is little value in employing a noisy, bandwidth-consuming network vulnerability scan to interrogate production systems with an ever-increasing number of vulnerability checks. Well-documented techniques exist for gathering data related to the population of a network, namely the services running on hosts within the network and the identification of operating system type (Fyodor, 1997, 1998). These techniques do not require a constant research effort to develop new vulnerability checks, and do not require a library of vulnerability checks. A port scanner written in 1990 could still be used today, whereas a vulnerability scanner from the same year would be considered woefully inadequate because it has no knowledge of modern vulnerabilities. This is not traditional vulnerability assessment data, but rather foundational data about the network. If a patch management solution is being used to detect weaknesses in the patch status of hosts, then this is the type of data that it is valuable to collect across the network. That is the difference between looking for specific vulnerabilities and gathering general data on the network. By employing more simplistic network information gathering techniques, the run time of a scan can be reduced, as can the impact on network bandwidth, and this allows results to be provided quicker. The duration of the information gathering loop is shortened, which itself reduces risk by allowing remediation activities to be carried out sooner.

The information that can be gathered using these relatively simple techniques has enormous utility for security. Consider Table 1, which displays data gathered on the number of different services running on hosts within a network. This data was collected using simple network information gathering techniques.

Table 1: Display of services running on hosts

  service          count
  telnet              20
  ssh                 79
  rlogin               3
  http                52
  https               26
  ldap                 8
  vnc                  9
  ms-term-serv        30
  pcanywheredata       2
  irc                  1

The policy on this network is to use Microsoft's Terminal Services for remote administration, and therefore the two installations of pcAnywhere and the nine installations of VNC that were detected are policy violations that need to be investigated then corrected. Running pcAnywhere or VNC is not a security "vulnerability" per se, but remote administration software certainly has a security implication. A vulnerability scanner is overkill for detecting this kind of "policy drift". Note how simple it is to perform this analysis, in contrast to having to wade through hundreds of pages of vulnerability assessment report. Similarly, the IRC server that was found on the network would probably raise the eyebrow of most security practitioners.

As a further example, Table 2 shows data on the number of operating system types found within a particular network.

Table 2: Number of operating systems found in a particular network

  os               count
  HP embedded         26
  Cisco embedded      33
  Linux               42
  Windows            553
  OpenBSD              1
  No match             2

This network employs both Linux and Windows machines as its corporate standard. We can therefore say that the detection of a device running OpenBSD warrants investigation. Similarly, it would be valuable from a security perspective to investigate the two devices for which there was no fingerprint match. An all-Linux organization might worry about the presence of a Windows 95 machine on its network (and vice-versa, of course). Most businesses employ a standard build for desktop and server machines to reduce complexity and increase ease of management, but day-to-day administrative activities can negatively impact that base level of security. Temporary administrative accounts are created but then forgotten; services such as file transfer are added for ad hoc purposes but not removed; and so on. This approach is well-suited for detecting the decay in security that computers tend to suffer over time.

Conclusions

Patch management technologies and processes now deliver to businesses the core capability of traditional network vulnerability assessment tools, namely the identification of vulnerabilities that are present due to missing patches. Organizations still need some form of network assessment in order to detect changes that lie outside the visibility of their patch management infrastructure. I suggest that this task can be accomplished using traditional network interrogation techniques.
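The "delta" the conclusions describe, the patches that exist for known vulnerabilities minus the patches a host actually has, is a simple set difference. The sketch below is illustrative only: the patch identifiers, host names and inventory data are invented, and in practice a host-based agent would report the inventory itself.

```python
# A minimal sketch of the patch "delta" idea: a host's missing patches are
# the patches for known vulnerabilities minus those already installed.
# All identifiers below are hypothetical examples.

def patch_delta(required, installed):
    """Return the set of patches a host is missing."""
    return set(required) - set(installed)

# Patches that exist for known vulnerabilities (illustrative IDs only).
required = {"MS04-011", "MS04-028", "MS05-002"}

# Patch inventory as a host-based agent might report it.
hosts = {
    "web01": {"MS04-011", "MS04-028", "MS05-002"},
    "db01": {"MS04-011"},
}

for name in sorted(hosts):
    missing = patch_delta(required, hosts[name])
    if missing:
        print(f"{name}: missing {sorted(missing)}")
    else:
        print(f"{name}: up to date")
```

Because each agent computes its delta locally, the network-wide task reduces to collecting these small per-host results rather than scanning every box en masse.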

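The policy-drift analysis behind Table 1 amounts to a tally plus a set comparison. The sketch below uses the Table 1 counts as input and assumes, as the article states, that Terminal Services is the only sanctioned remote administration tool; the data layout is invented for illustration.

```python
# Sketch of the Table 1 "policy drift" analysis: count services observed
# across hosts, then flag remote-administration services outside policy.
from collections import Counter

# Service counts from Table 1, flattened into one observation list.
table1 = {
    "telnet": 20, "ssh": 79, "rlogin": 3, "http": 52, "https": 26,
    "ldap": 8, "vnc": 9, "ms-term-serv": 30, "pcanywheredata": 2, "irc": 1,
}
observed = [svc for svc, n in table1.items() for _ in range(n)]

counts = Counter(observed)

# Policy: Microsoft Terminal Services is the only approved remote-admin tool.
remote_admin_seen = {"ms-term-serv", "vnc", "pcanywheredata"}
approved = {"ms-term-serv"}

violations = {svc: counts[svc] for svc in sorted(remote_admin_seen - approved)}
print(violations)  # {'pcanywheredata': 2, 'vnc': 9}
```

The whole analysis is a handful of lines, which is the article's point: no library of vulnerability checks is needed to spot the nine VNC and two pcAnywhere installations.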
When patch status data is already being collected through patch management activities, employing simple network information gathering techniques in this supplementary role is easier, takes less time, has less impact on network bandwidth, does not require a constantly updated set of vulnerability "checks", and provides more intuitive results.

References

Chan (2004). "Essentials of Patch Management Policy and Practice". Available: http://www.patchmanagement.org
Fyodor (1997). "The Art of Port Scanning". Phrack Magazine, Volume 7, No. 51, September 01, 1997.
Fyodor (1998). "Remote OS detection via TCP/IP Stack FingerPrinting". Phrack Magazine, Volume 9, No. 54, 25th December, 1998.
PM (2005). Mailing list archive at http://www.patchmanagement.org

About the author

Andrew Stewart is a Senior Consultant with a professional services firm based in Atlanta, Georgia.

Crypto race for mathematical infinity

Sarah Hilley

A newly emergent country has begun to set the pace for cryptographic mathematicians… Chinese infosec research efforts are fixated on cryptography and researchers are already producing breakthroughs.

"The breakage of SHA-1 is one of the most significant results in cryptanalysis in the past decade," says Burt Kaliski, chief scientist at RSA Security. A group of researchers from Shandong University in China stunned the established crypto community at the RSA conference in February by breaking the integral SHA-1 algorithm, which is used widely in digital signatures. This SHA algorithm was conceived deep within the womb of the US National Security Agency's cryptography labs, and it was declared safe until 2010 by the US National Institute of Standards and Technology (NIST). But this illusion was shattered last month. "People didn't think this was possible." The achievement in cracking the SHA-1 hash function is an earthquake of a result.

"The Chinese are determined to get into the subject," says Mike Walker, head of Research & Development at Vodafone, who studied cryptography at Royal Holloway College, London. "If you attract the best people from one fifth of the world's population, you are going to sooner or later make a big impression." Walker would like to see more young people venture into cryptography in the UK, and he believes the general decline in interest in science and maths is to the detriment of the country. But no such lack of interest is evident in China. Even more proof of the hive of crypto activity in China is that 72% of all cryptography papers submitted to the Elsevier journal, Computers & Security, last year hailed from China and Taiwan. And cryptography papers accounted for one third of all the IT security research submitted to the journal.

Shelf-life

"Now there is no doubt that we need a new hash function," says Mette Vesterager, chief executive officer at Cryptico. Vesterager says a competition will probably be launched to get a new replacement for SHA-1. Such a competition generated the Advanced Encryption Standard (AES), which came from two Belgians in 2000 to replace the Data Encryption Standard (DES). DES was published in 1977 and had 72,000,000,000,000,000 possible key variations. NIST have now taken DES off the shelf. No such retirement plan has been concocted for SHA-1 yet, and as of yet the outcome for the broken algorithm is still undecided. But Fred Piper, at Royal Holloway, says that people will migrate away from it in the next year or so if the Chinese research is proven. In addition, the Chinese attack has repercussions on other hash algorithms such as MD5 and MD4, warns Piper.

Big computers

Governments have historically been embroiled in mathematical gymnastics even before cryptography became so widespread. The British famously cracked the German Enigma code in World War II, and American Navy cryptanalysts managed to crack the Japanese code, Purple, in 1940. But if law enforcement can't break keys to fight against terrorism, intelligence is lost. What governments can and can't break these days, however, is very much unknown. People wonder 'what can the NSA do?' and 'how big are their computers?' But the general opinion is that AES was not chosen because it could be broken. "The AES algorithm is unbreakable with today's technology as far as I'm aware," says Royal Holloway's Piper. The AES 128-bit key length gives a total of an astronomical 3.4 x 10^38 possible keys, making it difficult to break. So far NIST hasn't even allocated a 'best before' date for the decease of AES. And with China pouring large amounts of energy into studying the language of codes and ciphers, the NSA may want even bigger computers. However, side channel attacks, which target the implementation of cryptography, must be watched out for, warns Kaliski. Piper recommends that keys have to be managed properly to guard against such loopholes in implementation.

Down to earth

The breakage of SHA-1 is not so dramatic in the humdrum application of real-life security, even though the algorithm is widely used. Kaliski rates it at a two out of 10 for impact. Fortunately the crack of algorithms like SHA-1 doesn't yet affect us mere mortals, who unknowingly avail of crypto to withdraw money from the ATM on a Saturday night. As cryptography is used in one and a half billion GSM phones in the world, this is good news. The dangers are much more distant: cryptographers deal with theoretical danger. They bend and stretch the realms of mathematics and strive to create algorithms that outlive computing power and time. This is thanks to cryptographers thinking in a different time, a time that is set by the power of computation. It means that we don't have to worry about underlying algorithms being attacked routinely like software vulnerabilities. But cryptographers have to think ahead in colossal numbers to keep up with the leaps in computing power. According to Moore's law, computers keep getting faster at a factor of 2 every 18 months. This power isn't here yet to make the crack of SHA-1 realistic outside a research environment. Time will show, she adds. It is a race: a race between mathematicians and computers.

Biometrics: the eye of the storm

By Mike Kemp, technical consultant, NGS Software

Biometrics is often said to be a panacea for physical and network authentication. But there are some considerable problems with the technology.

For the last few years vendors and politicians alike have touted biometrics technology as an invaluable, practical, even preferred approach to secure authentication of identity. However, it presents both the end users of the technology and those responsible for its implementation with a number of challenges, some of which can have a major impact on the security posture of the implementing organization. Ironically some may well expose the systems they are responsible for to an increased level of risk.

At present, the cost of implementation means that relatively few companies are using biometric technologies to authenticate identities. As biometric technologies become less costly, however, many network administrators will find themselves having to deal with a comparatively ill-understood series of authentication technologies. In the rest of this article the range of biometric technologies on the market will be discussed, together with the risks and true costs of implementation that are often ignored by vendors and politicians alike.

The search begins

From the beginning, computer and network security researchers have sought an alternative to the unique identifier, which is currently the most widely used method of authenticating a user to an IT service. Typically this is a password and username combination, and it authenticates countless computer users to applications, transactions, devices, servers and so on. However, experience has shown that this mechanism consistently fails to prevent attacks, as a knowledgeable attacker can employ a range of methods to circumvent this layer of protection. This model of authentication has been supplemented by multi-factor authentication mechanisms.
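Stepping back to the key-space figures quoted in the cryptography piece above, the arithmetic is easy to check: a 128-bit AES key gives 2^128 possibilities (the quoted 3.4 x 10^38), DES's 56-bit key gives roughly 72 quadrillion, and each Moore's-law doubling of computing power effectively removes one bit from a brute-force search. The sketch below is a quick sanity check of those numbers, nothing more.

```python
# Checking the key-space arithmetic quoted in the cryptography article.

aes_keys = 2 ** 128          # AES with a 128-bit key
des_keys = 2 ** 56           # DES with a 56-bit key (published 1977)

print(f"{aes_keys:.1e}")     # 3.4e+38, the article's "3.4 x 10^38"
print(f"{des_keys:,}")       # 72,057,594,037,927,936: about 72 quadrillion

# Moore's law: computing power doubles every 18 months, so brute-force
# effort halves per doubling. Bridging the 72-bit gap between a DES-scale
# and an AES-128-scale search would take 72 doublings:
years = (128 - 56) * 1.5
print(years)                 # 108.0 years of doublings
```

This is why cryptographers "think in a different time": even generous extrapolations of hardware growth leave AES-128 out of brute-force reach for the foreseeable future.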

These are based on something the user knows (e.g. a password), something the user has (e.g. a token), and something the user is (biometrics). However, there is a growing shift towards adopting biometrics as a mechanism to secure authentication across a network.

On a practical level, password authentication is often associated with poor password policies and management strategies that don't work. Many network administrators have wrestled with balancing password authentication and password policies against account user needs or demands. Too many know how far they have had to compromise security in order to service users.

Token of affection?

The use of token-based technologies such as SecureID tokens, smart cards and digital certificates is becoming widely accepted. Token-based authentication is not without its downside, however: a number of attack vectors exist for the use of SecureIDs and the like. Biometrics is the as-yet unfulfilled promise of the third pillar of authentication mechanisms.

PIN-pushers

The push towards biometrics comes from a variety of sources. The financial industry in particular is resolved to reduce fraud based on stolen identities, which, according to Accenture, the management consultancy, now costs consumers and banks $2 trillion a year. Beginning in October 2003 the UK commenced a roll-out of Chip and PIN authentication methods for transactions based on bank and credit cards. The primary aim was to combat the growing rate of card fraud based on the manipulation of magnetic strips or signature fraud. So far over 78 million Chip and PIN cards are in common use in the UK, more than one for every man, woman and child on the island. However, as has been widely discussed, the number and value of card-based frauds appears to have risen since Chip & PIN was introduced, and recent research, still ongoing, is expected to expose a number of flaws within the use of Chip and PIN authentication mechanisms in a variety of common environments. National security agencies in various countries, led by the US immigration authorities, are also seeking reliable unique authentication systems as part of the 'war on terror'.

Methods of biometric access

As has been outlined earlier, biometrics is a means of authenticating an individual's identity using a unique personal identifier. It is a highly sophisticated technology based on scanning, pattern recognition and pattern matching, and at present it remains one of the most costly methods of authentication available. Several different technologies exist based on retinal scans, iris scans, fingerprinting (including hand or finger geometry), facial mapping (face recognition using visible or infrared light, the latter referred to as facial thermography), handwriting (signature recognition), and voice (speaker recognition).

Print sprint

Traditionally biometrics is commonly associated with physical security. Fingerprinting, although popular, has negative connotations because of its use in criminal detection. A number of fingerprint readers are currently available that can be deployed for input to the authentication system. These are now cheap and reliable enough for IBM to include one in some of its latest laptop computers as the primary user authentication device. There is also on-going research to reduce the cost and improve both the accuracy and security of other biometric methods such as facial maps and iris or retinal scans. Judging by the developments in the field of biometrics in the last 15 years, it can only be a matter of time before everyone can afford the hardware for biometric network authentication.

Space invaders

Some forms of biometrics are obviously more invasive of one's personal 'space' than others, not only in the workplace, but outside as well. In 2005, London's Heathrow airport introduced plans to conduct retinal scans in a bid to increase security and increase the efficiency of boarding gates. At present there are no figures on user acceptance of the scheme, which is currently voluntary. However, as retinal scans are among the most invasive of biometric technologies, it would be surprising if the voluntary acceptance rate is high enough to justify either the expense or efficiency improvement of the solution. As such, some biometrics may well meet with user resistance that company security officers will need to both understand and overcome. There is also the sometimes onerous and problematic process of registering users who may not embrace the use of biometrics, and who may start quoting passages from the Human Rights Act.

Accuracy and security?

As has already been discussed, the biometrics approach to network authentication has much promise, and biometrics may well enable network administrators to increase the security of their network environments. However, it is far from a panacea with regards the security of networks; at present, it is an as-yet unrealised potential. One reason is that it is laden with a variety of shortcomings that need to be fixed prior to its widespread adoption as an authentication mechanism, and there are a number of implementation and security issues that are often overlooked in the push towards new methods of authentication.

For biometrics to be effective, the measuring characteristics must be precise, and the false positives and false negatives minimised. When a biometric authentication system rejects an authorised individual this is referred to as a Type 1 error; a Type 2 error occurs when the system accepts an impostor. The effectiveness of a biometric solution can be seen in the Crossover Exchange Rate (CER). This is a percentile figure that represents the point at which the curve for false acceptance rates crosses over the curve for false rejection rates. Depending upon the implementation of the chosen biometric technology, this CER can be so high as to make some forms unusable for an organisation that wishes to adopt or retain an aggressive security posture.

Attack vectors

When evaluating biometrics, network administrators should consider possible attack vectors. These fall into two distinct classes, namely:

• Physical spoofing, which relies on attacks that present the biometric sensor (of whatever type) with an image of a legitimate user.
• Digital spoofing, which transmits data that mimics that of a legitimate user. This approach is similar to the password sniffing and replay attacks that are well known and are incorporated in the repertoire of many network attackers.

In 2002, Japanese researcher Tsutomo Matsumoto was able to fool 11 biometric fingerprint readers 80% of the time using 'gummy fingers'. In 2003, two German hackers, Starbug and Lisa, demonstrated a range of biometric physical spoofing attacks at the Chaos Computer Camp event. Their attacks relied upon the adaptation of a technique that has long been known to many biometrics vendors: in the original attack vector an attacker could dust a fingerprint sensor with graphite powder, lift the fingerprint, and then subsequently use it to gain entry. The 2003 attack showed that it was possible to create a 'gummy finger' using a combination of latex, photo imaging software and graphite powder. Worse news came in 2004, when researchers revealed that some fingerprint readers could be bypassed merely by blowing gently on them, forcing the system to read in an earlier latent print from a genuine user. Although this method may seem somewhat far-fetched, it can be used to bypass a number of available fingerprint biometric devices.

Attacks are not limited only to fingerprint readers (as found in the current range of network access devices): both face and iris scanners can be spoofed successfully. In the case of the former, a substitute photograph or video of a legitimate user may be able to bypass systems. With regards to iris scanners, a photograph of the iris taken under diffused lighting and with a hole cut for the pupil can make for an effective spoofing stratagem.

Attack on all sides

As has been outlined, biometric technologies are far from risk-free. Many (if not all) are susceptible to both physical and logical digital attack vectors. One of the touted benefits of biometrics is that biometric data is unique, and this uniqueness makes it difficult to steal or imitate. One often-overlooked problem with the biometric approach, however, is that unlike the traditional password-based model, or even the token-based approach (e.g. Chip and PIN), no biometric approach relies upon something the user holds as secret. Indeed, in all the biometric technologies currently available potential attackers can see exactly what is going on: they are anything but discreet, and this makes them potentially vulnerable. If compromised biometric devices are a conduit into a network, it may be possible to manipulate stored data, thus effectively bypassing all security policies and procedures that are in place.

Goodbye to passwords?

Biometric technologies have the potential to revolutionise mechanisms of network authentication. They have several advantages, such as users never needing to remember a password, and more resilience against automated attacks and conventional social engineering attacks. Most large companies can probably afford to implement them. But doing so may have the undesirable side effect of actually increasing their exposure to risk. Currently, the market for such devices is so new that there is a lack of quality control, and little or no standardisation of the technologies in use. The reasons for these shortcomings are many, including a potential ignorance about security concerns on the manufacturer's part. At the present level of technical understanding and standardisation, network administrators who voluntarily introduce the technology may find themselves on the bleeding edge, rather than the leading edge. The lack of standardisation and quality control remains a serious and grave concern. Presently there is not enough hard evidence that shows the real levels of failure and risk associated with the use of biometric authentication technologies, and the amount of clear statistical research data as to its cost and benefits is Spartan.

When you think about implementing biometric technologies, remember that they do not yet measure perfectly, and many operational and security challenges can cause them to fail, or be bypassed by attackers. Network administrators need to question closely not only the need for biometrics as a network authentication and access mechanism, but also the levels of risk they currently pose to the enterprise. In the coming years, biometrics may improve as an authentication technology, if only because politicians and fraudsters are currently driving the need for improvements. For most, the answer will be to wait and see. It would be a brave administrator indeed that chose to embrace them blindly and without a degree of external coercion, such as a change in the legislation.

About the author

Michael Kemp is an experienced technical author and consultant specialising in the information security arena. He is a widely published author and has prepared numerous courses, articles and papers for a diverse range of IT related companies and periodicals. He holds a degree in Information and Communications and is currently studying for CISSP certification. Presently, he is employed by NGS Software Ltd where he has been involved in a range of security and documentation projects.
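The crossover point described in the accuracy discussion above can be illustrated numerically: sweep a match threshold and find where the false acceptance rate (FAR) curve meets the false rejection rate (FRR) curve. The threshold and rate values below are invented example data, not measurements from any real device.

```python
# Illustrative sketch of reading off the CER: as the match threshold
# tightens, the false acceptance rate (FAR) falls while the false
# rejection rate (FRR) rises; the CER sits where the curves cross.
# All numbers below are made-up example data.

def crossover_rate(thresholds, far, frr):
    """Return (threshold, rate) at the point where FAR and FRR are closest."""
    best = min(range(len(thresholds)), key=lambda i: abs(far[i] - frr[i]))
    return thresholds[best], (far[best] + frr[best]) / 2

thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
far = [0.30, 0.20, 0.12, 0.06, 0.03, 0.01, 0.005]   # loose: impostors accepted
frr = [0.001, 0.005, 0.02, 0.06, 0.12, 0.20, 0.30]  # strict: users rejected

t, cer = crossover_rate(thresholds, far, frr)
print(t, cer)  # 0.4 0.06
```

A CER of 6% as in this toy data would, per the article, be far too high for an organisation with an aggressive security posture; the sketch only shows where the figure comes from.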

PROACTIVE SECURITY

Proactive security latest: vendors wire the cage but has the budgie flown….
Philip Hunter

Proactive security sounds at first sight like just another marketing gimmick to persuade customers to sign up for yet another false dawn. To some, proactive security is indeed just a rallying call, urging IT managers to protect in advance against threats that are known about, like bolting your back door just in case the burglar comes. After all, proactivity is surely just good practice. The crucial question is whether these initiatives really deliver what enterprises need, which is affordable pre-emptive protection. If the solutions extract too great a toll on internal resources, through the need for continual reconfiguration and endless analysis of reports containing too many false positives, then they are unworkable. Proactive security has to be, as far as possible, automatic.

Vendor bandwagon

Nevertheless, the vendors do seem to have decided that proactive security is one of the big ideas for 2005, and there is some substance behind the hype. Cisco, for example, came out with a product blitz in February 2005 under the banner of Adaptive Threat Defence. IBM meanwhile has been promoting proactive security at the lower level of cryptography and digital signatures, and Symantec has brought to market the so-called digital immune system developed in a joint project with IBM. The dedicated IT security vendors have also been at it: Internet Security Systems has been boasting of how its customers have benefited from its pre-emptive protection anticipating threats before they happen, while Microsoft has been working with a company called PreEmptive Solutions to make its code harder for hackers to reverse engineer.

These various products and strategies might appear disjointed when taken together, but they have in common the necessary objective of moving beyond purely reactive security, which is no longer tenable in the modern security climate. The products can only deliver if they are part of a coherent strategy involving analysis of internal vulnerabilities against external threats. Indeed, this is an important first step towards identifying which products are relevant. On this count some progress has been made, but there is still a heavy onus on enterprises to actually implement proactive security, for no enterprise can make its network secure without implementing some good housekeeping measures. All too often, for example, desktops are not properly monitored, allowing users to unwittingly expose internal networks to threats such as spyware. The decline in perimeter security as provided by firewalls has created new internal targets for hackers: notably PCs, but also servers that can be co-opted as staging posts for attacks. There is also the risk of an enterprise finding its servers or PCs exploited, without its knowledge, for illegal activities such as peer-to-peer transfer of software, music or even video. Identifying such threats and putting appropriate monitoring tools in place is an important first step along the pre-emptive path. Similarly, remote execution can be made the exception rather than the default, making it harder for hackers to co-opt internal servers for their nefarious ends.

Stop the exploitation

However, some of the efforts being made will benefit everybody and come automatically with emerging releases of software. Microsoft's work with PreEmptive Solutions springs to mind here, as the technology concerned is included with Visual Studio 2005. This technology, called Dotfuscator Community Edition, is designed to make the task of reconstituting source code from the compiled object code practically impossible, so that hackers are unlikely to try, and to avoid easily identifiable vulnerabilities. Of course, the risk then becomes one of the source code itself being stolen, but that is another matter.

Sharing private keys

The principle of ducking and weaving to evade hackers can also be extended to cryptography. The public key system is widely used both to encrypt session keys and for digital signatures. The latter has become a target for financial fraudsters because, if they steal someone's private key, they can write that person's digital signature, thereby effecting identity theft. This comes at a time of increasing online fraud and mounting concerns over the security of digital signatures. But here too, risks can be greatly reduced through pro-activity. An idea being developed by IBM involves distributing private keys among a number of computers rather than just one. The secret key can then only be invoked, whether for a digital signature or to decrypt a message, with the participation of a number of computers. This makes it harder to steal the key, because all the computers involved have to be compromised rather than just one. In practice it is likely that at least one of the computers will be secure at any one time; at least, such is the theory.

A good example is the case of two-factor security, in which static passwords are reinforced by tokens generating dynamic keys on the fly. This has been the gold standard for controlling internal access to computer systems within the finance sector for well over a decade, but recently there have been moves to extend it to consumer Internet banking. Some experts reckon this is a waste of money, however, because it fails to address the different threats posed by Internet fraudsters. These include man-in-the-middle attacks, which capture the one-time key as well as the static password and replay both to the online bank. So it may be that while two-factor security will reduce fraud through guessing or stealing static passwords, the cost of implementing it across a customer base will outweigh the benefits.

Buglife

There is also scope for being proactive when it comes to known bugs or vulnerabilities in software. One of the most celebrated examples came in July 2002, when Microsoft reported a vulnerability in its SQL Server 2000 Resolution Service, designed to allow multiple databases to run on a single machine. There was the potential to launch a buffer overflow attack, in which a hacker invokes execution of code such as a worm by overwriting legitimate pointers within an application. This can be prevented by code that prohibits any such overwriting, but Microsoft had neglected to do so within the Resolution Service. Microsoft did, however, spot the vulnerability and report it in July 2002, and one security vendor, Internet Security Systems, was quick off the mark, distributing an update in September 2002 that provided protection. Then in January 2003 came the infamous Slammer worm exploiting this loophole, breaking new ground through its rapid propagation and doubling the infected population every 9 seconds at its height. The case highlighted the potential for pre-emptive action, which did reduce the impact, but also the scale of the task in distributing the protection throughout the Internet.

Patch it

Be that as it may, the greatest challenge for proactive security lies in responding and distributing patches or updates to plug vulnerabilities within ever decreasing time windows. As we just saw, the Slammer worm took six months to arrive. This left plenty of time to create patches and warn the public, and the same was true for Nimda. But the window has since shortened significantly: a study by Qualys, which provides on-demand vulnerability management solutions, reported in July 2004 that 80% of exploits were enacted within 60 days of a vulnerability's announcement. In some cases now it takes just a week or two, making it even harder to be proactive, so the processes of developing and distributing patches need to be speeded up. Ideally, service providers should implement or distribute such protection automatically.

Open disclosure

Another problem is that some software vendors fail to disclose vulnerabilities when they do occur, through fear of adverse publicity. This leads to delay in identifying the risks. There is a counter argument that public dissemination of vulnerabilities actually helps and encourages potential hackers, but the feeling now is that, in general, the benefits of full disclosure outweigh the risks. Many such disclosures can be found on the BUGTRAQ mailing list. It makes sense, therefore, for enterprises to buy software, where possible, only from vendors that practice an open disclosure policy. Yet a number of vendors, and in some cases even suppliers of free software when there would seem nothing to gain by it, hide issues from their users.

Conclusion

Proactive security also needs to be flexible, adapting to the changing threat landscape. But nobody is suggesting that proactive security avoids hard decisions balancing solutions against threats and the cost of implementation, given that vulnerabilities remain.
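The key-distribution idea described above is, in IBM's case, built on threshold cryptography. As a much simpler illustration of the same principle, a key can be split into n shares such that all n are required to reconstruct it; any subset of fewer than n shares reveals nothing. The sketch below is an n-of-n XOR split with invented function names, not IBM's actual protocol:

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; all n are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    # The final share is the key XOR-ed with every random share, so
    # XOR-ing all n shares together recovers the original key.
    final = bytearray(key)
    for share in shares:
        for i, b in enumerate(share):
            final[i] ^= b
    shares.append(bytes(final))
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    """Recover the key by XOR-ing all shares together."""
    key = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            key[i] ^= b
    return bytes(key)
```

Stealing any n-1 of the machines yields only random-looking data; only a compromise of every machine exposes the key, which is exactly the property the article describes.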
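The dynamic keys produced by two-factor tokens are typically one-time passwords derived from a shared secret and a moving counter. A minimal sketch in the style of the HMAC-based one-time password (HOTP) construction that such tokens commonly implement; this is illustrative and not any particular bank's scheme:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password from a shared secret and counter."""
    # Pack the moving counter as an 8-byte big-endian value.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an
    # offset, and four bytes from that offset form the code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Note that a code like this proves possession of the token but, as the article points out, does nothing against a man-in-the-middle who captures and replays it within its validity window.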

PKI

Management aspects of secure messaging between organizations
Roger Dean, Head of Special Projects, EEMA

Electronic messaging is vulnerable to eavesdropping and impersonation, and companies that do not protect sensitive information lay themselves open to significant risk. Here we take a short glimpse at some of the issues associated with Public Key Infrastructure (PKI), and at some less expensive options.

PKI

Secure messaging employing end-to-end architectures and PKIs offers message confidentiality through encryption, and message authentication through digital signatures. However, there are a number of implementation and operational issues associated with them. One of the major criticisms is the overhead involved in certificate and key management. Typically, certificates and keys are assigned a lifetime of one to three years, after which they must be replaced (rekeyed). A current trend is to employ a rigorous semi-manual process to deploy initial certificates and keys, and to automate the ongoing management processes. For the initial issuance, it is vital to confirm the identity of the key and certificate recipients, as defined in the Certificate Policy and Certificate Practice Statement. Business partners must have trust in each others' PKIs to a level commensurate with the value of the information to be communicated; this may be determined by the thoroughness of the processes operated by the Trust Centre that issued the certificates.

The organisation's corporate directory plays a critical role as the mechanism for publishing certificates. However, organizations may publish certificates in different locations. Furthermore, corporate directories contain a significant amount of information which may create data-protection issues if published in full. Also, corporate directories usually allow wildcards in search criteria, but these are unwise for external connection as they could be used to harvest email addresses for virus and spam attacks.

Dedicated line and routing

The underlying idea for this alternative to a fully blown PKI is to transmit messages on a path between the participating organizations that avoids the open Internet. There are two major options:

A dedicated line between the involved companies. With this option all messages are normally transmitted without any protection of content. Depending on bandwidth, network provider and end locations, this option may be expensive, and a dedicated line may have a considerable lead time.

A VPN connection between participating companies. Such a connection normally employs the Internet, but an encrypted, secure tunnel on the network layer is established between the networks of the participants, so all information is protected by encryption. The level of confidentiality for intercompany traffic thus becomes the same as for intracompany traffic, and for many types of information that may be sufficient. Most of the work to implement such solutions lies in establishing the network connection, and an investment to purchase or upgrade the network routers at the endpoints of the secure tunnel might not be insignificant.

Gateway-to-gateway encryption using Transport Layer Security (TLS)

Internet email messages are vulnerable to eavesdropping because the Simple Mail Transfer Protocol (SMTP) does not provide encryption. To protect these messages, servers can use TLS to encrypt the data packets as they pass between the servers: each packet of data is encrypted by the sending server and decrypted by the receiving server. TLS is already built into many messaging servers, including Microsoft Exchange and IBM Lotus Domino, so that implementation may simply involve the installation of an X.509 server certificate and activation of the TLS protocol. The downside is that data is protected only in transit between servers that support TLS; TLS does not protect a message at all stages during transport unless it is implemented as a service in all the involved instances.

Gateway-to-gateway encryption using S/MIME gateways

An obstacle to end-to-end PKI is the burden of managing certificates, especially where messages between organizations are to be digitally signed. A further issue is that, once encrypted, messages cannot be scanned for viruses, spam, or content. Gateways that use the Secure/Multipurpose Internet Mail Extensions (S/MIME) protocol to encrypt and decrypt messages at the organizational boundary can address these issues. S/MIME gateways use public and private keys known as domain certificates to encrypt and sign messages that pass between domains.
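Whether a receiving server supports TLS is discovered during the SMTP handshake: a capable server advertises the STARTTLS keyword in its EHLO response. A minimal sketch of that capability check; the helper name and the sample server response below are invented for illustration:

```python
def supports_starttls(ehlo_response: str) -> bool:
    """Return True if an SMTP EHLO response advertises STARTTLS."""
    # Each capability is announced on its own "250-" or "250 " line;
    # we only care whether one of them is STARTTLS.
    return any(line.strip().upper().endswith("STARTTLS")
               for line in ehlo_response.splitlines())

# Invented example of an EHLO response from a TLS-capable server.
sample = "250-mail.example.com\n250-SIZE 10485760\n250 STARTTLS"
```

In practice the sending gateway (or a library such as Python's smtplib) performs this check and then upgrades the connection; if the capability is absent, the message falls back to cleartext SMTP, which is exactly the gap described above.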

They have the same format as the certificates used in desktop-to-desktop S/MIME message encryption, except that they are issued to domains, not individual users, making large-scale deployments more cost effective. Messages are signed and encrypted only while in transit between the S/MIME gateways. An S/MIME gateway can co-exist with unencrypted SMTP messages and with end-to-end S/MIME encryption: it can send and receive unencrypted and unsigned messages to and from any email domain, and it can receive messages signed or encrypted with conventional, desktop-to-desktop S/MIME, delivering them to the recipient's mailbox with the signature and/or encryption intact (it will not decrypt the message or verify the signature). However, it cannot currently sign or encrypt mail that is sent to a user in a domain that does not have an S/MIME gateway, although that may change.

Pretty Good Privacy (PGP)

The OpenPGP and PGP/MIME protocols are based on PGP and rely on MIME for message structure. PGP has been described as a good example of what PKI is. It comprises a number of products that can be implemented incrementally according to requirement, enabling the user to scale the PKI implementation from individuals up to several thousand users. With PGP there is no reason to hesitate to implement and make use of secure messaging capability because of cost or complexity: it is perfectly possible for the small to medium-sized company to create an environment which is functional, inexpensive and easy to manage. One limitation is interoperability: a specialised S/MIME client can't normally communicate with a PGP client.

Attachment encryption and compression

A number of products for document storage and communication are supplied with different types of confidentiality, such as MS Word, MS Excel and the Adobe family. Another collection is represented by file compression tools, which allocate the smallest possible storage area for any number of files gathered. For example, the latest version of WinZip is supplied with 256-bit AES encryption. There are some limitations with compression tools, however. Key handling is cumbersome, and if used extensively it may cause trouble. Compression tools can't normally protect the actual message, just the attached file(s), and the password must be delivered to the recipient separately, preferably by phone. File compression is therefore a temporary or special solution, to be used with discernment.

More information

More information can be found in the full report available from EEMA, a large multi-national user organization. EEMA is exhibiting at Infosecurity Europe 2005, which is held on 26th to 28th April 2005 in the Grand Hall, Olympia, London.

RFID

RFID: misunderstood or untrustworthy?
Bruce Potter

It seems that everywhere you look, wireless security is in the news. WiFi networks are being deployed in homes and businesses at an astounding rate. Bluetooth is being integrated into all manner of devices, from cell phones to laptops to automobiles. And now RFID tags are starting to show up in some retail stores and gaining acceptance for use in supply chain management. But of these three technologies, RFID is probably the least understood and the most feared by the public at large. Consumers are afraid of their buying habits being tracked. Travellers are concerned about the privacy issues of RFID in passports. And businesses are worried that the current state of the technology is not sufficient to keep hackers at bay. Ultimately, RFID has the capability to change the face of supply chain management and inventory control, and we need to be prepared for that.

RFID basics

RFID (Radio Frequency IDentification) has been around for decades. Initially used for proximity access control, RFID has evolved over the years to be used in supply chain tracking, toll barrier control, and even protecting automobiles. The cost of the chips used for RFID is now as low as US$0.20, with readers costing as little as US$30.

There are several types of RFID tag. The most common and simple is the passive tag. Passive RFID tags receive their energy from a remote RFID reader: the tag focuses the radio frequency energy from the transmitting reader and uses the generated electrical impulse to power the onboard chip.

The chip then responds with a short burst of information (typically an ID unique to the chip), which is transmitted by the antenna on the RFID tag. The reader receives this information and can then act upon it. Passive tags can be manufactured thinner than a piece of paper and have been integrated into everything from shipping labels to clothing.

The other types of RFID tag involve using a battery for some part of the RFID transaction. Semi-passive tags use a small onboard battery to power the chip, but rely on the energy from the reader to power the tag's antenna for transmission. Semi-active tags turn this concept around: they use the battery to power the antenna, while the chip relies on the RF energy from the reader. An active tag uses a battery for both the chip and the transmission of data on the antenna. While the amount of memory in the non-active tags is generally limited to a few hundred bytes (if that), an active tag can have kilobytes (if not megabytes) of memory, and active tags are often equipped with advanced encryption capability. The drawback of any of the powered tags is that eventually the battery dies and the tag becomes useless.

These RFID chips are very simple and may have as few as 400 logic gates. RFID chips are very primitive: unlike contemporary wireless systems, they have more in common with 20-year-old memory card technologies, and they can basically be thought of as a simple memory chip. RFID tags typically contain only a unique number that is useless on its own. The idea is that the reader interfaces with some backend system and database for all transactions. The database stores the information that ties the unique ID to something of interest. For instance, the database knows that ID 1234 is attached to a bar of soap. An attacker reading RFIDs would not know, without access to the database, what ID 1234 is.

Security concerns

There are a wide variety of security concerns with RFID tags. The vast majority of RFID tags on the market require no authentication to read the information on them. This allows anyone, an attacker or even just a competitor, to read the data on an RFID chip, and even to store data on it if they so desired, since many tags also have the capability to write information to the chip without authentication. An attacker could theoretically overwrite values on the RFID tags used by an enterprise with random data. This is especially troubling for enterprises relying on RFID for things like supply chain management: in a short period of time, an attacker can render hundreds of thousands of tags completely useless, and it would be nearly impossible to retag all items in response.

Further, RFID tags are accessible from a great distance given advanced wireless equipment. While the RFID specifications generally deal with short ranges (a few inches to a few feet) between the readers and the tags, specialized equipment can pick up a signal from an RFID tag much farther away. This is a similar problem to that with wireless LANs. Normally a WLAN is only effective for a user within 100m or so, but an attacker with powerful antennas can be more than 10km away and still access the network. RFID tags fall prey to the same problem. For instance, if an RFID tag is designed to be read at 1 foot, an attacker may be able to be 100 feet away, two orders of magnitude farther than intended, and still interact with it.

Killing a tag

One of the primary privacy concerns regarding RFID is the ability for a consumer to be tracked once they have bought an item that contains an RFID tag. To overcome this fear, vendors and enterprises have devised various ways to attempt to terminate the tag. One method of terminating a tag used for retail sales is simply to change the information on the tag to random data when the item is sold. That way a store's security system knows the item has been sold and does not sound an alarm when the item leaves; the idea is that the RFID information can no longer be tied to a value in the database. The problem with this method is that there is still an active RFID chip in the item, and an attacker is still able to physically track the tag, even if the data on the chip is random.

So some tags also have the concept of a KILL command, which actually terminates the RF capability of the chip. When a tag receives a KILL command, it ceases to respond to requests from RFID readers. The KILL command is protected by a password on the chip. Unfortunately, there is no capability to change the KILL password once a chip has been fabricated, so many enterprises have all their RFID chips created with the same KILL password. An attacker with knowledge of an enterprise's KILL password can potentially terminate all the RFIDs they are within range of, thereby wreaking havoc with the enterprise's RFID system.

Further, as the last decades of network security have demonstrated, we cannot always assume that an attacker will not have access to the backend database: backend systems are often all too easy a target for an attacker. Once the database tying the unique IDs to physical items has been compromised, the IDs read off the air are no longer meaningless. One concern of interest is the ability to track the location of a person or asset by an unintended actor.

Parting shot

As RFID tags get cheaper, they will be integrated into more and more systems. While RFID is an incredible tool for supply chain management and asset tracking, it also poses a massive security risk. Attacks against RFID tags are trivial and privacy concerns are everywhere. To date, these concerns have not outweighed the advantages to businesses in need of RFID technology, and the rate of adoption is accelerating. Until new standards and more advanced chips can be made, RFID tags will remain easy targets for attackers determined to cause havoc or commit crimes.
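The split between the meaningless over-the-air ID and the meaningful backend record can be sketched as follows; the inventory table, the item names and the IDs are invented for illustration:

```python
# Hypothetical backend database tying tag IDs to items. Anyone who
# reads "1234" off the air learns nothing without this table.
INVENTORY = {
    "1234": "bar of soap",
    "5678": "crate of razor blades",
}

def resolve_tag(tag_id: str) -> str:
    """What a reader's backend does with the ID a tag transmits."""
    return INVENTORY.get(tag_id, "unknown tag")
```

This is also why a compromise of the backend database is so damaging: once an attacker holds the table, every tag they can read becomes a labelled, trackable item.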
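The danger of a fleet-wide shared KILL password can be illustrated with a toy model; this is not the real EPC air-interface command set, and the class, IDs and password below are invented:

```python
class Tag:
    """Toy model of a killable RFID tag (not a real EPC implementation)."""

    def __init__(self, tag_id: str, kill_password: str):
        self.tag_id = tag_id
        self._kill_password = kill_password  # fixed at fabrication time
        self.alive = True

    def read(self):
        """A killed tag no longer answers reader requests."""
        return self.tag_id if self.alive else None

    def kill(self, password: str) -> bool:
        """Permanently silence the tag if the password matches."""
        if password == self._kill_password:
            self.alive = False
        return not self.alive

# An enterprise that fabricates every tag with one shared password
# lets anyone who learns that password silence the whole fleet.
fleet = [Tag(str(i), "shared-secret") for i in range(1000)]
for tag in fleet:
    tag.kill("shared-secret")
dead = sum(1 for tag in fleet if tag.read() is None)
```

Because the password cannot be changed after fabrication, the only defences are keeping it secret and, where feasible, assigning per-tag passwords at manufacture.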
About the author

Bruce Potter is currently a senior security consultant at Booz Allen Hamilton.

SNORT

Network security managers' preferences for the Snort IDS and GUI add-ons
Galen A. Grimes, Penn State McKeesport, 4000 University Drive, McKeesport, PA 15132, USA

Snort, one of the most widely used Intrusion Detection System (IDS) products on the market, is extremely versatile and configurable, and runs on Linux, most UNIX platforms, and Windows. It is, however, a fairly difficult product to use fully because of its stark command line interface and its un-ordered scan and attack data. That difficulty has spawned a near cottage industry among Snort developers, who have created a myriad of graphical user interfaces (GUIs) in an attempt to provide an easier means for network security managers to fully configure and use Snort. This analysis looks at which Snort add-on products are favoured by network security managers.

Although the security marketplace has no shortage of good, reliable intrusion detection systems, one open source product still manages to hold a very prominent position in the security manager's arsenal: Snort, one of the most widely used IDS products currently on the market (Northcutt & Novak, 2001). According to DataNerds, "Snort is a lightweight network intrusion detection system, capable of performing real-time traffic analysis and packet logging on IP networks. It can perform protocol analysis and content searching/matching, and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, and much more. Snort uses a flexible rules language to describe traffic that it should collect or pass, as well as a detection engine that utilizes a modular plug-in architecture. Snort has a real-time alerting capability as well, incorporating alerting mechanisms for syslog, a user specified file, a UNIX socket, or WinPopup messages to Windows clients using Samba's smbclient." (DataNerds, 2002).

Snort is a command line intrusion detection program based on the libpcap packet capture library. While the program is very robust and versatile in its ability to detect more than 1200 different types of real-time scans and attacks, it is nonetheless somewhat tedious and difficult to use. Snort employs a rather cryptic command-line interface, and all program configurations are done by manually editing the one configuration file, snort.conf. Snort outputs its detected scans and probes into an unordered hierarchical set of directories and text files. Its output can, however, be made more organized and structured by employing a commonly used database plug-in (add-on) and directing the output to one of several supported SQL databases, such as MySQL (http://www.mysql.com/), PostgreSQL (http://www.postgresql.org/), Oracle (http://www.oracle.com/), or MS SQL Server (http://www.microsoft.com/).

Because of the tediousness of working with a command-line version of Snort, the legion of Snort devotees and developers have created a near cottage industry around developing and improving front-end GUI interfaces to complement Snort. Most of the front-end interfaces were originally designed to operate on a Linux/UNIX platform, but many have also been ported to operate in Windows, and there is even a port of Snort to the Mac OSX platform that uses the now familiar Mac OSX GUI interface.[1] These interfaces can mainly be divided into two broad categories: the first comprises those add-ons that organize Snort's output into a structured set of reports and attack trend indicators, and the second comprises those designed to ease the tediousness of configuring Snort and maintaining its vast signature ruleset. This improvement in the user interface has greatly expanded the use of Snort to non-developers, since it not only makes this powerful program more accessible but also makes it easier for non-developers to understand the alerts generated by the IDS (Preece, Rogers & Sharp, 2002).

Who uses Snort?

In this study a population of 195 network security managers from US colleges and universities was surveyed. The survey was an attempt to determine whether network security administrators use Snort and any of the available add-on products, and what factors contributed to their decision to use the particular add-on selected. The colleges and universities were arbitrarily selected from a fairly even distribution of 40 states and the District of Columbia among the 6814 colleges and universities listed in the Yahoo search directory (By Region > U.S. States). The network administrators were first asked "Do you use the Snort Intrusion Detection system?". In the sample, 45.3% of the network security administrators surveyed stated they use Snort. The sample included administrators of small networks, comprised of less than 1000 workstations (45.0% of the sample), as well as administrators of large networks comprised of more than 1000 workstations.
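The rules language and database output plug-in described above are both driven from snort.conf. The excerpts below are illustrative values only, not a recommended configuration; the database credentials and rule are invented for this sketch:

```
# Hypothetical snort.conf excerpts (values are illustrative only).

# Send alert output to a MySQL database via the database output plug-in:
output database: alert, mysql, user=snort password=example dbname=snort host=localhost

# A minimal local rule: flag HTTP requests that probe cgi-bin scripts.
alert tcp any any -> $HOME_NET 80 (msg:"CGI probe (example rule)"; content:"/cgi-bin/"; nocase; sid:1000001; rev:1;)
```

Directing alerts into a database in this way is what the report-oriented front-ends discussed in this article build upon: they query the database rather than parse Snort's raw text output.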

Of those security managers who do not use Snort, the reasons given were:

• Don't use any IDS system (around 44%)
• Snort is not as useful as a commercial IDS product (around 24%)
• Use an IPS instead (around 10%)
• Don't use open source (about 6%)
• Did not have time to install/set up Snort (about 6%)
• Snort installation/setup procedure too complicated (about 6%)
• Snort not robust enough

Interfaces for organizing Snort's output

It is not surprising that the vast majority of the front-end interfaces for Snort are designed to help users organize and display Snort's voluminous output as coherent reports and attack trend indicators. Even on a small to medium-sized network or network segment it is not unusual for Snort to generate between 15 and 20 thousand legitimate alerts each month. The addition of a GUI interface such as ACID has been shown in numerous other studies to improve operator efficiency (Mann & Schnetzler, 1986; Pulat & Nwankwo, 1987), and few will deny that the addition of GUI front-ends and report generators has made Snort a more viable product for a larger target audience, since the interfaces make the product more usable (Redmond-Pyle & Moore, 1995). Examples of interfaces in this category are as follows:

• Analysis Console for Intrusion Databases (ACID): 66.7% of all network security managers who use Snort say they also use ACID.
• PureSecure (http://www.demarc.com), which includes ACID but only operates on the Windows OS: not used by any survey respondents.
• Snortsnarf: not used by any survey respondents.
• Razorback: not used by any survey respondents.
• Hen Wen (Mac OSX): 8.2% of the responding network managers who use Snort both use and have tried Hen Wen.

In this study 79.2% of all network security managers who use Snort use one or more of the report/trend analysis add-ons (category 1), while only 25.0% use one or more of the configuration add-ons (category 2).

Interfaces for configuring Snort

Some Snort developers have concentrated on developing an easier-to-use environment for configuring Snort's network settings, preprocessor controls and output plug-ins, and for updating Snort's rules files. This study seems to suggest that this category of add-ons is not nearly as popular as the first. Examples include:

• IDScenter (available free at http://www.engagesecurity.com): 12.7% of all network security managers who use Snort use and/or have tried IDScenter.
• SnortCenter: 8.3% of the network security managers who use Snort say they also use SnortCenter.
• SnortFE: none of the surveyed network security managers use it.

This could possibly explain the poor showing of IDScenter, SnortCenter and the other configuration add-ons mentioned in this article: the pattern suggests that most security administrators are using Snort more as an attack trend analysis tool than as a real-time intrusion indicator.

Conclusion

In this study it appears that, as network size increases, network security managers are much more likely to include an IDS such as Snort in their security arsenals, as suggested by security best practices (Allen, 2001): of the security managers who reported using Snort, 87.5% administer large networks (>1000 workstations and/or host computers) and 12.5% administer small networks (<1000 workstations and/or host computers). The decision to use Snort as the IDS of choice also includes the choice of which GUI front-end to use, and overwhelmingly the network security managers represented in this study chose ACID; they also strongly favour the Snort/ACID combination in operation on a Linux platform (78.3%). Since an IDS is a passive device with low CPU overhead, security managers are not limited or restricted in the choice and number of front-end products they can deploy, and can place any number of Snort sensors on their network in any combination of the front-end products previously listed. In addition to more user-friendly interfaces, many of the developer sites are now also offering installation assistance for Snort. The development of the variety of GUI front ends described in this article, and the added usability they present, mean security administrators now possess a much wider choice for how they might want to deploy Snort-based sensors on their networks.

Note:
[1] There is also a port of Snort for the Mac OS called Hen Wen.

References

Allen, J. (2001). CERT Guide to System and Network Security Practices. Indianapolis: Addison-Wesley Pearson Education. (http://www.cert.org)
DataNerds (2002). http://www.datanerds.net
Northcutt, S. & Novak, J. (2001). Network Intrusion Detection: An Analyst's Handbook. Indianapolis: New Riders.
Preece, J., Rogers, Y. & Sharp, H. (2002). Interaction Design: Beyond Human-Computer Interaction. Hoboken, N.J.: John Wiley & Sons, Inc.
Redmond-Pyle, D. & Moore, A. (1995). Graphical User Interface Design and Evaluation. London: Prentice Hall.