
PSTP08-0107eSec
Combating Robot Networks and Their Controllers

Final Report 2.0
06 May 2010
Version Control

Version  Date         Notes
0.1      01 Mar 2010  Initial Classified draft for review by
1.0      25 Mar 2010  Final QA
1.1      26 Mar 2010  Final Draft Report Approved
2.0      06 May 2010  Reviewed by DRDC, RCMP, CSEC, 26 Apr - 05 May;
                      Redaction (56 pages) and

Executive Summary

This report is written as a comprehensive "how to" reference for combating Robot Networks and Advanced Persistent Threats on a national scale. It should serve as an excellent resource for anyone involved in cyber security and high-tech crime, and it is of particular relevance to police, intelligence and threat analysts, security architects, and policy makers. The chapters guide the reader through a deep-dive study of:

• advanced Botnet tradecraft and Advanced Persistent Threats (APT);

• quantitative evidence of ongoing attacks, the threat agents, and the prevailing uses of Botnets to support criminal activity against Canadian interests;
• the legal and privacy concerns related to collecting information on Botnet activity, and the issues raised by proactive defence measures against Botnets;
• effective architectural solutions to mitigate the risks posed by Botnets;
• a strategic business transformation roadmap for police, intelligence, defence and public safety agencies; and
• advanced tools and techniques that Law Enforcement Agencies (LEA) can use to monitor Botnet activity, gather evidence, and actively pursue criminal activity conducted using Botnets.

Cyber Crime is big business

“Cyber crime is now the most significant challenge facing law enforcement organizations in Canada” was the headline finding of a nationwide survey commissioned by the Canadian Association of Police Boards (CAPB) in 2008. The mischievous, thrill-seeking hackers of the 1980s have given way to a sophisticated breed of cyber criminal with the resources and technical capability to conduct large-scale criminal activity over the Internet. Today, the tool of choice for these criminals is the robot network, or botnet, in which home and office computers are hijacked, often without the knowledge of their owners, and programmed to serve a botnet controller for illegal purposes such as espionage, fraud, identity theft, bulk email (spam) and distributed denial of service attacks.

Botnets are a global phenomenon and Canada is no exception. Whether the domestic issue is terrorism,
organized crime or integrity of government, botnets play an increasingly important role. This report,
therefore, is intended to inform the overall Public Security Technical Program initiative which focuses on
Critical Infrastructure Protection, Surveillance, Intelligence, and Interdiction, and Emergency Management
and Systems Integration.

The cyberthreat
This report represents a departure from traditional cyber security studies, which have relied on interviews to canvass opinions about the cyberthreat. This report is informed by actual data, in addition to a case study of botnet activity during Vancouver’s 2010 Olympic Games. Cyber intelligence estimates for this study are derived from a network sample of 839 petabytes¹ of communications traffic examined over the period of a year, covering 70% of all Internet communications traffic in Canada. Detailed threat analysis was performed on a malicious traffic sample of 200 petabytes. This represents the largest statistically valid sample set of cyber threat activity in Canada to date, on which police can base evidence-based decision-making to combat botnets and their controllers.
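As a rough illustration of how packet-sampled flow data scales back up to population estimates, the sketch below applies the 10,000:1 sampling ratio mentioned in the study's footnote. The flow counts in the example are invented for illustration; only the sampling ratio comes from this report.

```python
# Illustrative scale-up from packet-sampled NetFlow counts to estimates of
# the underlying raw traffic. SAMPLING_RATE comes from the study's footnote;
# all byte counts below are hypothetical.

SAMPLING_RATE = 10_000  # one packet retained per 10,000 observed


def estimate_total_bytes(sampled_bytes: int, rate: int = SAMPLING_RATE) -> int:
    """Scale bytes seen in the sample up to an estimate of the raw traffic."""
    return sampled_bytes * rate


def estimate_malicious_share(sampled_malicious: int, sampled_total: int) -> float:
    """Proportions are rate-invariant under uniform sampling, so the malicious
    share of the sample estimates the malicious share of the population."""
    return sampled_malicious / sampled_total


# e.g. 1.2 GB of sampled flow records implies roughly 12 TB of raw traffic
print(estimate_total_bytes(1_200_000_000))
# and 15 malicious flows out of 1,000 sampled implies a 1.5% malicious share
print(estimate_malicious_share(15, 1_000))
```

The key caveat, reflected in the comments, is that uniform sampling preserves proportions but not rare-event detail, which is why the study retained a separate, larger malicious-traffic sample for detailed analysis.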

In the course of this study, evidence was found of extremely large distributed denial of service attacks, sophisticated foreign-controlled robot networks, spynets, and high volumes of cybercrime in both the private and public sectors.

Conductance of Risk
The velocity and magnitude of evolving risks propagated through cyberspace is a function of connectivity and conductance through critical infrastructure interdependencies and their risk conductors. Cyberspace is the nervous system that binds all critical operations. Quantitative research shows that telecommunications is a super-critical infrastructure, as important to public safety as electricity. The Internet is an amazing environment, quite unlike anything our civilization has experienced before. It offers a wealth of information and capabilities in various forms, from email to ecommerce, streaming video, file transfers and voice communications, and it has become a nearly indispensable part of our daily lives. Unfortunately, the applications used to access the Internet, namely web browsers, file transfer clients and email applications, contain many exploitable flaws that allow an attacker to assume control of a computer and use it for unauthorized and mostly illicit purposes.

Attack Vectors and Inadequate Safeguards

Cyberspace is expanding beyond billions of computers and other Internet-aware devices, all of which are highly exposed to hijacking malware that can assimilate them into a larger criminally controlled robot network. The Internet is on the brink of exploding in size with the roll-out of wireless 4G devices and later

Unfortunately, there are many different attack vectors that can be used to infect a host. A host can be infected through a malicious link embedded in a web page or email; by downloading and executing a maliciously crafted file; by inserting an infected USB stick, CD or DVD into a computer; or simply by accessing the Internet without anti-virus or firewall protection, or with programs that have not been kept up to date with the latest security patches. Some of these infections can be prevented by adhering to good basic security practices: scanning files with trusted anti-virus software before executing them, maintaining security patch levels in operating systems and applications, and avoiding dangerous web sites, such as those offering salacious or contraband goods and services, which are a haven for malware. But even these safeguards are no match for the more sophisticated botnet threats.

Most of the attack vectors are based on zero-day exploits, which have not yet been identified or remediated by application or security product vendors and can therefore infect even a well-protected host. Furthermore, once infected, the host will download bot agents that resist detection and continually morph to evade updated anti-virus and firewall signatures, or simply disable anti-virus capabilities on the host. These kinds of infections can only be detected by observing the bot agent’s behaviour, which is characterized by mass port scanning across all possible network addresses and by “beaconing”: attempts to communicate with the botnet command and control (C&C) servers. The C&C servers also employ counter-detection capabilities, such as rapidly changing domain names and IP addresses, to conceal their activities, but this behaviour too can be monitored and used to track down and neutralize the servers and, hopefully, bring their human controllers to justice.
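Beaconing of this kind can be flagged by the regularity of a host's outbound connections: a bot phoning home on a timer produces nearly constant intervals, while human-driven traffic is bursty. The following is a minimal sketch of that heuristic, assuming timestamped connection records are available; the jitter threshold and minimum event count are illustrative values, not operational ones from this report.

```python
import statistics


def looks_like_beaconing(timestamps, max_jitter_ratio=0.1, min_events=5):
    """Flag a series of connection times (seconds) as beacon-like when the
    inter-arrival gaps are nearly constant. Thresholds are illustrative."""
    if len(timestamps) < min_events:
        return False
    # Gaps between consecutive connection attempts to the same destination.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return False
    # Coefficient of variation: low values mean machine-like regularity.
    jitter = statistics.pstdev(gaps) / mean_gap
    return jitter < max_jitter_ratio


# A bot phoning home roughly every 300 s is far more regular than browsing.
bot = [0, 300, 601, 899, 1200, 1501]
human = [0, 12, 340, 360, 900, 905]
print(looks_like_beaconing(bot))    # True
print(looks_like_beaconing(human))  # False
```

Real bot agents often add deliberate jitter to their beacon interval, so production detectors combine this timing signal with the other behaviours the text describes, such as mass port scanning and resolution of rapidly changing domains.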

Most organizations use standard security architecture practices to secure their networks. However, as shown conclusively in this study, a considerable amount of botnet traffic can still be seen going to and from corporate networks. The ways and means by which organizations manage the security of infostructures must fundamentally evolve to include the carrier cloud. National carriers and large service providers are in the best position to detect and act to combat botnet traffic. Notwithstanding, police have the mandate to make arrests. National public policies, legislation and regulation appear to have had no measurable effect on the volume of malicious international traffic flooding into Canada.

¹ Represents NetFlow on 839 petabytes of raw traffic with a 10,000:1 packet sampling rate, reduced to 8.3 terabytes for forensic analysis.

Intelligence-led policing

Timely and accurate information is critical in the fight against botnets. There are many excellent sources of botnet intelligence, and these can be used to identify infected hosts and C&C servers. However, the task of gathering intelligence is a difficult one, which must contend with often outdated legislative and economic models.

The botnet interdiction strategy outlined in this report addresses the aforementioned issues. Information sharing is a critical component of botnet defence, as no single agency has the complete picture of botnet activity. Thus, private/public partnerships, within Canada and internationally, are a requisite part of an effective botnet interdiction strategy. In addition, information sharing will assist in identifying potential victims in advance of harm being done.

This report also suggests that proactive cyber defence is the most effective strategy to employ against the botnet threat. Thus, law enforcement agencies must transition from a cyber crime strategy that emphasizes prosecution to one that includes prevention through intelligence-led policing. To combat criminal botnets safely and effectively, a fundamental business transformation is required involving people, processes, technology and culture. Based on these key areas, this study outlines a strategic roadmap to achieve discrete tactical successes in the areas of integrity of systems, support for infrastructure and, ultimately, fighting cybercrime.

Private-public partnerships that use tax credits to accelerate initiatives proactively, together with the efficacious outsourcing of some investigative work, appear to be among the best options for fighting cybercrime, leaving police free to build an indigenous capability to combat the more sophisticated threats as they emerge.

All of these topics are covered in much greater detail in this study, and we hope that readers find this report to be truly “an instrument of learning and an instrument of change”.

Table of Contents
VERSION CONTROL.............................................................................................................................................. I
EXECUTIVE SUMMARY.......................................................................................................................................II
TABLE OF CONTENTS ....................................................................................................................................... VI
SECTION 1 INTRODUCTION .........................................................................................................................1
SECTION 2 BACKGROUND .............................................................................................................................3
SECTION 3 STATEMENT OF LIMITATIONS ...........................................................................................5
3.1 SCOPE..............................................................................................................................................................5
3.2 OUT OF SCOPE................................................................................................................................................5
3.3 INTELLECTUAL PROPERTY ...............................................................................................................................6
3.4 RISKS ..............................................................................................................................................................6
3.5 DATA MINING .................................................................................................................................................6
3.6 FOREIGN INTELLIGENCE .................................................................................................................................6
3.7 MANDATE, LEGAL, POLICY AND PRIVACY .....................................................................................................6
3.8 SENSITIVITY ....................................................................................................................................................6
SECTION 4 BASE CONCEPTS ........................................................................................................................7
4.1 INTRODUCTION TO BOTS AND BOTNETS ......................................................................................................7
4.2 PROMINENT BOTNETS IN 2009 & 2010 .....................................................................................................9
4.2.1 Zeus & Variants .................................................................................................................................9
4.2.2 Torpig ..................................................................................................................................................10
4.2.3 Waledac ..............................................................................................................................................11
4.2.4 Cutwail (aka Pushdo, Pandex) ...................................................................................................11
4.2.5 Conficker / Downup........................................................................................................................11
4.2.6 Rustock ...............................................................................................................................................11
4.2.7 Grum....................................................................................................................................................12
4.2.8 Maazben .............................................................................................................................................12
4.2.9 Festi......................................................................................................................................................12
4.2.10 Mega-D (aka Ozdoc)......................................................................................................................12
4.3 OTHER BOTNETS OF NOTE ..........................................................................................................................12
4.3.1 Storm (aka Nuwar) ........................................................................................................................12
4.3.2 Srizbi....................................................................................................................................................12
4.3.3 Xarvester............................................................................................................................................12

4.3.4 Donbot.................................................................................................................................................13
4.3.5 Bobax / Kraken ................................................................................................................................13
4.3.6 Asprox Botnet...................................................................................................................................13
4.3.7 Coreflood ............................................................................................................................................13
4.3.8 Gheg.....................................................................................................................................................13
4.3.9 Warezov..............................................................................................................................................13
SECTION 5 TECHNOLOGY.............................................................................................................................15
5.1 TECHNOLOGY OVERVIEW .............................................................................................................................15
5.1.1 Lifecycle of a Botnet.......................................................................................................................15
5.1.2 Botnet Families ................................................................................................................................16
5.2 CHARACTERISTICS OF A BOTNET ................................................................................................................16
5.2.1 Propagation Characterization .....................................................................................................18
5.2.2 Command and Control Structure Characterization ............................................................20
5.2.3 Self Defence......................................................................................................................................23
5.2.4 Payload / Payload Objectives .....................................................................................................26
5.2.5 Botnet Parameters..........................................................................................................................28
5.3 SAMPLE IN-DEPTH ANALYSIS......................................................................................................................30
5.3.1 Zeus .....................................................................................................................................................30
5.3.2 ZeuS Detailed Analysis during the Olympics........................................................................37
5.4 EMERGING TRADECRAFT ..............................................................................................................................37
5.4.1 Kneber (Zeus) in Depth................................................................................................................37
SECTION 6 THREAT ASSESSMENT ..........................................................................................................42
6.1 SUMMARY - THREATS, AGENTS AND CRIMINAL USE OF BOTNETS ..........................................................42
6.2 DDOS ATTACKS ...........................................................................................................................................44
6.3 MONEY BOTS ............................................................................................................................................46
6.4 THREAT AGENT ANALYSIS ...................................................................................................................51
6.4.1 Proliferation of Cyber-terrorism. ...............................................................................................55
6.4.2 Cyber criminals ................................................................................................................................55
6.4.3 E-telligence........................................................................................................................................55
6.4.4 Russian Business Network...........................................................................................................56
6.4.5 CHINA..................................................................................................................................................59
6.4.6 AURORA ..............................................................................................................................................62
SECTION 7 BOTNET DETECTION AND MITIGATION STRATEGIES .......................................65
7.1 DETECTION....................................................................................................................................................65
7.1.1 Threat Networks Provided Trusted Channels .......................................................................66
7.1.2 Open Sources Program .................................................................................................................66
7.1.3 Honey Pots, Honey Nets and Tarpits.......................................................................................66
7.1.4 Sensors ...............................................................................................................................................67
7.1.5 DNS ......................................................................................................................................................67
7.1.6 Dark Space ........................................................................................................................................67
7.1.7 P2P (Peer-to-peer) .........................................................................................................................68
7.2 MITIGATION..............................................................................................................................................69
7.2.1 Basic Strategies ...............................................................................................................................69
7.2.2 Workable strategies .......................................................................................................................71
SECTION 8 ISSUES AND CHALLENGES.................................................................................................80
8.1 TACTICAL CHALLENGES...............................................................................................................................80
8.1.1 Extracting C&C Information from Malware ...........................................................................80
8.1.2 Information Exchange...................................................................................................................81

8.1.3 Barriers to sharing..........................................................................................................................81
8.2 LEGAL, PRIVACY AND ETHICAL CONCERNS ................................................................................................82
SECTION 9 RECOMMENDATIONS...........................................................................................................105
9.1 THE CANADIAN GOVERNMENT NCSS ......................................................................................................105
9.1.1 Peace, Order and Good Government.....................................................................................105
9.1.2 Tax Credits for National Cyber Security...............................................................................105
9.1.3 Program Management and Timelines for delivery............................................................106
9.1.4 Cross-border context...................................................................................................................106
9.1.5 Proactive Cyber Defence ............................................................................................................106
9.1.6 Funding of research .....................................................................................................................107
9.1.7 Comments on “Building trusted and sustainable partnerships”..................................107
9.1.8 Trust and Partnership Building ................................................................................................107
9.1.9 Implement an all-hazards risk management approach..................................................107
9.1.10 Risk Management..........................................................................................................................108
9.1.11 Advance the timely sharing and protection of information among partners..........108
9.1.12 Information Sharing.....................................................................................................................109
9.2 TACTICAL SOLUTION SET FOR POLICE COMBATING ROBOT NETWORKS ..............................................109
9.2.1 Funding.............................................................................................................................................109
9.2.2 Awareness and expectations....................................................................................................109
9.2.3 P3........................................................................................................................................................110
9.2.4 Trusted partnerships. ..................................................................................................................110
9.2.5 Training. ...........................................................................................................................................110
9.2.6 Sources.............................................................................................................................................110
9.2.7 Business transformation from prosecution to prevention .............................................112
9.2.8 Corporate architecture................................................................................................................112
9.2.9 Investigative architecture. ........................................................................................................112
9.2.10 Numbered sources and undercover operatives.................................................................113
9.2.11 Operational readiness review...................................................................................................113
9.2.12 Action Summary............................................................................................................................113
9.3 BOTNET DEFENCE ARCHITECTURE WITHIN AN ENTERPRISE ...................................................................115
9.3.1 Traditional Enterprise Security Architecture.......................................................................115
9.3.2 Network Consolidation ................................................................................................................116
9.3.3 DNS ....................................................................................................................................................116
9.3.4 Patch Management.......................................................................................................................117
9.3.5 Firewall Policies..............................................................................................................................118
9.3.6 Intrusion Detection Systems ....................................................................................................118
9.3.7 Anti-Virus .........................................................................................................................................119
9.3.8 Content and Web Filtering.........................................................................................................119
9.3.9 Log Consolidation and Analysis ...............................................................................................120
9.3.10 System Restore .............................................................................................................................120
9.3.11 Security Awareness......................................................................................................................121
9.4 REMEDIATION .............................................................................................................................................121
9.5 SUMMARY OF ENTERPRISE DEFENCE MECHANISMS ................................................................................122
9.6 ENHANCED ENTERPRISE SECURITY ARCHITECTURE ................................................................................123
9.7 INTER-NETWORK SECURITY IN THE CLOUD .............................................................................................126
9.8 UPSTREAM (CLOUD) SECURITY.................................................................................................................127
9.8.1 Collection Elements of Upstream Security ..........................................................................128
9.8.2 Traffic Flow Analysis ....................................................................................................................130
9.8.3 Infrastructure Elements..............................................................................................................132
9.8.4 Upstream Security Use Cases ..................................................................................................134
9.9 MALWARE ANALYSIS TREATMENT AND HANDLING ..................................................................................139

9.9.1 Analysis.............................................................................................................................................140
9.9.2 Treatment ........................................................................................................................................141
9.9.3 Handling ...........................................................................................................................................141
9.9.4 Summary..........................................................................................................................................141
9.10 BUILDING AN ADVANCED INVESTIGATIVE CAPABILITY ...........................................................................142
9.10.1 Situational Understanding and a Common Operating Picture .....................................142
9.10.2 Advanced Investigations Capabilities ....................................................................................143
9.10.3 Evidence collection .......................................................................................................................144
9.10.4 Analysis. ...........................................................................................................................................145
9.10.5 Reporting and Case Management. .........................................................................................145
9.10.6 Summary of an Advanced Investigative Capability .........................................................146
9.10.7 Covert Network Access ...............................................................................................................147
SECTION 10 FUTURE WORK ........................................................................................................................150
A.1 EXTRAORDINARY CIRCUMSTANCES................................................................................................................1
A.2 OUTSTANDING NON-TECHNICAL QUESTIONS ..............................................................................................2
A.3 THE PRAGMATICS OF A NATIONAL CYBER SECURITY STRATEGY ..................................................................5
A.3.1 P3 ............................................................................................................................................................5
A.3.2 Imperatives and Impediments.....................................................................................................6
A.3.3 Cognitive dissonance .......................................................................................................................6
A.3.4 Risk and liability of inaction ........................................................................................................21
A.4 KEY CASE STUDIES ................................................................................................................................24
ANNEX B- REFERENCES .....................................................................................................................................1

Combating Robot Networks

and Their Controllers
Section 1 Introduction
Sophisticated, large-scale Botnet quarantine tradecraft and facilities have existed in
telecommunications networks since the 1990s. The Storm Worm appeared on 19 January 2007,
compromising machines, installing a kernel-level rootkit and merging them into a larger robot network
(Botnet); one that has since grown to 50 million computers globally. This hijacking malware
constantly morphs to evade antivirus and intrusion detection, and the Botnet changes domains
every five minutes (fast flux). Critical sectors currently host a number of these Storm nodes.
According to recent statistics compiled for this PSTP study, 1.5% of the traffic in the sector under
examination was associated with a foreign-controlled Botnet, and 53% of all e-mail leaving
the sector was malicious.

On 27 April 2007, the Russian Business Network (RBN) launched a coordinated cyber attack against
Estonia with the support of the state. The assault began with hacktivist undertones (the non-violent
use of illegal or legally ambiguous digital tools in pursuit of political ends), similar to the current
e-jihadist model (use of the Internet to perform acts of terrorism), intended to mask a more
sophisticated Distributed Denial of Service (DDoS) attack coordinated by the Служба Внешней
Разведки (SVR) through the RBN. Some attacks came directly from PCs belonging to the Russian
presidential administration. These cyber attacks shut down the computing facilities of the
Estonian government, media and financial sector in succession. Georgia was attacked in a
similar manner in July 2008. At the time, Estonia invoked the collective self-defence provisions of
Article V of the North Atlantic Treaty. However, NATO did not, and still does not, define cyber war as
a clear military action, nor did it have the capability to respond. Global Internet Service Providers
(ISPs) and Tier 1 telecommunications firms (Telcos) came to the aid of Estonia and Georgia and
effectively shut down the attacks. Canadian CNO doctrine was under review at the time.

One million Botnet or “zombie” machines were used to conduct that attack against Estonia. Based on
current prices for Botnet “rentals”, a 2,500 machine Botnet has sufficient power to mount a highly
effective DDoS attack against a large sector at an approximate cost of $500. It is clear that the use of
Botnets by cyber criminals and terrorists can significantly enhance destructive effects, radicalization,
influence operations and clandestine communications. Cyber criminals and terrorists have also been
early adopters of web 3.0, a new, more efficient concept for storing and collating information across
the Internet, and they have already used Botnet-infested web 3.0 applications to mount an e-jihad
against countries including Canada. This trend will continue and we must conclude that any
significant terrorist cyber attack in the future will employ Botnet technology. The DDoS attack against
Estonia was measured at 70Mbps, whereas recent attacks against Canada have exceeded

Botnet metrics are collected and consumed in carrier operations at rates exceeding 1 Gbps. These
metrics are used to proactively identify and neutralize Botnet attacks by Internet service providers
across Canada. The exposure to Botnets within critical sectors is not known to any degree of
certainty on a historical basis, and there are currently no security standards or policies that deal
with Botnets.

Reliance on open-source vulnerability advisories affords organizations little choice other than to
stretch security budgets thinly across a broad defensive front, rather than steering resources
efficaciously or interdicting the threat at a safe distance. Subjective assessments are insufficient for
establishing risk at a compelling level of detail or accuracy to predict and prevent acts of cyber crime,
espionage and cyber terrorism.

“You can’t manage what you won’t measure.”

Constant surveillance is required to stay abreast of the stealth technologies that are used by
organized crime, terrorist cells and foreign espionage programs to obfuscate their identities and their
on-line activities. These technologies include sophisticated networking techniques that allow cyber
criminals and terrorists to quickly establish and reconfigure fast flux Botnets for various illegal
purposes. This, plus the fact that Botnets are often hidden deep within Enterprise networks, gives
them a high degree of resiliency and resistance to detection.
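Fast-flux behaviour of the kind described above leaves a measurable signature: the set of IP addresses returned for a domain churns rapidly across successive short-TTL DNS lookups. The sketch below is a minimal, illustrative heuristic only; the function names and thresholds are our own assumptions, not those of any deployed carrier tool.

```python
# Illustrative fast-flux scoring from an observed DNS resolution history.
# Input: list of (timestamp_seconds, set_of_ips) samples for one domain.
# A benign domain resolves to a small, stable IP set; a fast-flux domain
# rotates through many distinct IPs within a short observation window.

def fast_flux_score(samples):
    """Return (distinct_ip_count, churn_rate) for a resolution history.

    churn_rate = average number of never-before-seen IPs per lookup,
    excluding the first lookup (where every IP is new by definition).
    """
    seen = set()
    new_per_sample = []
    for _ts, ips in samples:
        fresh = set(ips) - seen
        if seen:                      # skip the first sample
            new_per_sample.append(len(fresh))
        seen |= fresh
    churn = sum(new_per_sample) / len(new_per_sample) if new_per_sample else 0.0
    return len(seen), churn

def looks_fast_flux(samples, min_ips=10, min_churn=1.0):
    """Flag a domain whose resolutions show many distinct, rapidly rotating IPs."""
    distinct, churn = fast_flux_score(samples)
    return distinct >= min_ips and churn >= min_churn
```

In practice the thresholds would have to be tuned against known-good content delivery networks, which also rotate addresses, though far less aggressively than a five-minute fast-flux cycle.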

A proactive defence matrix enables surveillance, intelligence, and interdiction of an emerging cyber
threat at distance in both time and space. Malicious code analysis and treatment centres can be
used to provide a virtual safe environment to block/scrub the majority of Botnet traffic from
cyberspace. Excellent results are expected with little risk, given that much of this knowledge is
already being collected and analyzed in the carrier ‘Cloud’.

This study represents an essential departure from relying on anecdote, doctrine and security policies
as the common means of managing risk from entities as dangerous as Botnets in the hands of
criminals and terrorists. Instead, as this study will show, science can enhance an organization’s
ability to predict and interdict cyber attacks against Canada. Enhanced cyber security intelligence
collection and analysis can deliver precise, timely and actionable guidance to the “business of
security” and the “security of the business”.

Section 2 Background
The Centre for Security Science (the Centre) is a joint endeavour between Defence Research and
Development Canada (DRDC) and Public Safety Canada. It provides science and technology (S&T)
services and support to address national public safety and security objectives. It is part of the
Government of Canada’s approach to public security science and technology (PSST) and is one of
seven research centres within DRDC, an agency of the Department of National Defence (DND).

The Centre’s mission is:

Through science and technology, to strengthen Canada’s ability to prepare for, prevent,
respond to, and recover from high-consequence public safety and security events.

The Centre's capabilities lie in leading and administering research, development, testing, and
evaluation of technologies, and in identifying future trends and threats. The Centre is also helping to
develop a network of national and international S&T partners within the public safety and security

The Centre works with DRDC, Public Safety Canada, and 19 other science-based federal
departments and agencies involved in protecting the safety and security of Canadians. The Centre’s
goal is to deliver timely and relevant S&T research in support of an all-hazards approach to natural
and accidental disasters, and terrorist and criminal acts.

The work of the Centre is focused on four mission areas.

1. Chemical, Biological, Radiological-Nuclear, and Explosives (CBRNE): Capabilities to

prevent, prepare for, and respond to CBRNE threats to public security, whether they originate
from terrorist or criminal activities, natural causes, or accidents.

2. Critical Infrastructure Protection (CIP): Capabilities to ensure the robustness, reliability,

and protection of physical and information technology (IT) facilities, networks, services, and
assets, which if disrupted or destroyed would have a serious impact on the health, safety,
security, economic well-being, or effective functioning of the nation.

3. Surveillance, Intelligence, and Interdiction (SI2): Capabilities to identify and stop terrorists
or criminals and their activities, including surveillance, monitoring, disruption, interdiction, as
pertaining to border and transportation security.

4. Emergency Management Systems Integration (EMSI): Capabilities to ensure the

performance, integration, and interoperability of national and international public security and
emergency management capabilities and supporting systems, including the enabling
standards, and vulnerability and systems analyses.

The CBRNE mission area is administered through the CBRNE Research and Technology Initiative
(CRTI). The Public Security Technical Program (PSTP) broadens the Centre’s scope with the three
additional mission areas described above, Critical Infrastructure Protection (CIP), Surveillance,
Intelligence, and Interdiction (SI2), and Emergency Management and Systems Integration (EMSI).

The study charter PSTP08-0107eSec-Combating Robot Networks and Their Controllers is part of
the overall PSTP initiative. It will primarily support the CIP mission but will also provide valuable input
to the SI2 and EMSI missions.

This study will:

• identify knowledge and situational awareness gaps and understand the frequency of
occurrence and severity of cyber attacks involving Botnets with particular scrutiny on
advanced and evolving tradecraft amongst dangerous threat agents, particularly cyber
criminals; and

• provide pragmatic and explicit designs to provide an order of magnitude more network
assurance for an organization. It will provide critical information for risk management,
intelligence analysis and computer network operations that will prevent an Estonian-type
attack.
Section 3 Statement of Work
This section provides an overview of the report, including inherent constraints, assumptions, and
issues of scope, risk, security, intellectual property, data classification and liability.

3.1 Scope
The scope of this report is to provide the PSTP with an understanding of robot networks with
particular emphasis on advanced and evolving tradecraft amongst dangerous and sophisticated
threat agents. The report will:

a. provide a detailed description of Botnet tradecraft using information obtained from open
source literature and unpublished information obtained from the Internet core in Canada and
from other subject matter experts;
b. describe the threat agents and prevailing uses of Botnets to support criminal activity;
c. discuss the legal and privacy concerns related to information collection on Botnet activity and
the issues related to proactive defence measures against Botnets;
d. recommend architectural solutions to mitigate the risks posed by Botnets; and
e. recommend tools and techniques that can be used by Law Enforcement Agencies (LEAs),
security, intelligence and defence agencies to monitor Botnet activity and to gather evidence
and actively pursue Botnets.

3.2 Out of Scope

The following items are not within the scope of this effort but they may be the subjects of follow-on
work to this study:

a. a review of the current state of security systems architecture within the GC and their ability to
withstand Botnet attacks;
b. a gap and options analysis of the GC related to the implementation of the recommended
architectural solutions in this study;
c. detailed design of Botnet defence architecture, darkspace analysis, malcode analysis
treatment and handling facilities; and
d. the implementation of the architectural solutions that are described in this report.
e. the government will not provide any information in support of this independent study, nor
will GC systems be accessed.

3.3 Intellectual Property

PSTP has unrestricted, non-exclusive rights to reproduce, publish and distribute this study in whole or
in part. The intellectual property and copyrights are retained by the respective authors referenced
herein.

3.4 Risks
In addition to general project risks (budget, schedule) managed through a Project Manager, the
following non-exhaustive list of risks and mitigating strategies apply to this study.

3.5 Data Mining

There is a vast quantity of transient data that is consumed in raw format. Collecting, visualizing and
mining this data represents the majority of the work in providing quantitative evidence supporting the
research. New systems may need to be built to collect and process this data. If that is the case, the
data will not be available for this study, but the solution will be presented as next steps.

3.6 Foreign Intelligence

Investigating threat actors is problematic and potentially dangerous. Sources, methods and identities
will be protected.

3.7 Mandate, Legal, Policy and Privacy

Legal and policy concerns have proven to be the greatest inhibitor to a Botnet defence capability in
many organizations. These issues will not affect the outcome of the study but the raw data and
means of collection may have to be suppressed.

3.8 Sensitivity
The tradecraft in operation by carriers is considered to be exceptionally sensitive and is handled in a
compartmented fashion. Full disclosure of techniques and proactive defence measures may not be
possible. The main body of this report will be unclassified. Sensitive information has been placed in a
classified annex and handled through the appropriate channels. Notwithstanding, significant findings
remain classified. The full report is classified CONFIDENTIAL / CTI / PROTECTED C CRITICAL.

The Emergency Management Act (EMA) includes a consequential amendment to the Access to
Information Act (ATIA) that allows the Government of Canada to protect from disclosure specific
critical infrastructure / emergency management (CI/EM) information supplied in confidence to the
government by third parties.

Section 4 Base Concepts

4.1 Introduction to Bots and Botnets

A bot (or robot) is a compromised host connected to the Internet that has been infected with malicious
code installed by a hacker or a self-propagating worm [Reference 5].

A botnet is a collection of such bots connected to a command and control (C&C) channel that runs
autonomously. Frequently implicated in data theft (including the use of screen-capture, packet-
sniffing, and key-logging techniques), botnets harvest user accounts and financial information.
Botnets are operated by “bot herders” who seek to ‘enslave’ computing resources. Bot herders use
their “enlisted” army to profit from phishing, relaying spam (some estimate over 70 percent of Internet
spam is due to bots), click fraud, hosting warez sites (used mostly for illegal information/malware
sharing), and distributing malware.

There are many methods of infecting a host. Infections can be inadvertently loaded as part of
software or media files obtained from a file sharing network, from an infected media device such as a
USB key or CD, from a malicious link or maliciously crafted file, e.g. Adobe .pdf, embedded in an
email or through a “drive by download” where a person visits a web site and runs a malicious script or
file that is embedded in a web server page. Infections can occur without user intervention if the host
is susceptible to exploits, e.g. unpatched Windows security flaws, and is accessible from other
infected hosts either from the local (Enterprise) network or the Internet.

The process of compromising a host and adding the host to a botnet is described below and
illustrated in Figure 1:

• Step 1: The bot herder loads remote exploit code on an “attack machine” which might be
dedicated to this purpose or an already compromised host. Many bots use file-sharing and
RPC ports to spread. Initial infection vectors ensure victim machines have sufficient
configuration information to contact the bot controller when compromised.
• Step 2: Attack machines scan for unpatched targets and launch attacks. An unpatched
machine becomes a victim to the exploit.
• Steps 3 and 4: The victim machine is ordered to download binaries from another server
(frequently a compromised web or FTP server).
• Step 5: The binaries are run on the victim machine and convert it to a bot. The victim
connects to the bot controller and “reports for duty.”
• Step 6: The bot controller issues commands to the victim. These instructions may include
commands to download new modules, steal account details, install spyware, attack other
machines, and relay spam.
• Step 7: The bot herder controls all bots by issuing commands via the bot controller(s).

Figure 1: Anatomy of a Typical Host Infestation
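From a defender's viewpoint, Steps 2 through 6 above correspond to observable network events for a single host: an inbound hit on an exploit port, an outbound binary download, and a recurring outbound beacon to the bot controller. The following sketch correlates such events; the event labels and the simple in-order matching are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical correlation of the infection sequence described above.
# Events per host: ("inbound_exploit", detail), ("download", detail),
# ("beacon", detail). A host whose event stream contains all three
# types, in order, matches the Step 2 to Step 6 pattern.

INFECTION_SEQUENCE = ["inbound_exploit", "download", "beacon"]

def matches_infection_pattern(events):
    """True if the host's event types contain the infection sequence in order."""
    want = iter(INFECTION_SEQUENCE)
    target = next(want)
    for etype, _detail in events:
        if etype == target:
            target = next(want, None)
            if target is None:
                return True
    return False
```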

Bot herders (persons running the botnet) use many different communications channels to issue
commands and new malware to bots through bot controllers (host devices used by bot herders).
Popular Internet communication channels include TCP protocols such as HTTP, FTP and IRC that
are allowed through many corporate firewalls. Internal channels include RPC and NetBIOS (for
Windows systems) that allow the bots to compromise additional hosts and multiply. Social
networking and file sharing sites such as Twitter, Skype and BitTorrent have also been used to
distribute C&C commands and new malware to botnet members.
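Because many classic bots speak plain IRC over these channels, C&C traffic can sometimes be spotted by protocol keywords appearing on unexpected ports. The check below is a toy illustration; the keyword list is a simplification of the IRC grammar, not a complete detector.

```python
import re

# Toy check for IRC-style C&C commands in a captured TCP payload.
# Classic IRC bots issue NICK/JOIN/PRIVMSG lines; seeing these verbs
# on a port other than 6667 is a crude but common botnet indicator.

IRC_COMMAND = re.compile(rb"^(NICK|JOIN|PRIVMSG|TOPIC)\s", re.MULTILINE)

def suspicious_irc_payload(payload: bytes, dst_port: int) -> bool:
    """Flag IRC protocol verbs appearing on a non-IRC port."""
    return dst_port != 6667 and bool(IRC_COMMAND.search(payload))
```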

The use of twitter as a distribution point was noted by Arbor Networks when they made a connection
between botnets and Twitter after spotting a strange-looking feed on the social network. As it turns
out, what appeared to be scrambled status updates were in fact a series of obfuscated links to
malicious software updates for a relatively new botnet. Following the links, which redirected through
the URL-shortening service, resulted in users downloading a compressed file. Bot operators
moved away from public command-and-control channels because security researchers have had too
much success analyzing the botnets that use such communications as Internet relay chat (IRC).

The propagation mechanisms used by bots are a leading cause of "background noise" on the Internet
[Reference 4]. Botnet malware scans large network ranges for new vulnerable computers and infects
them in a fashion similar to a worm or virus. An analysis of the traffic captured by the German
Honeynet Project [Reference 4] shows that most malware targets the ports used for resource sharing
on machines running all versions of the Microsoft Windows operating system:

• Port 445/TCP (Microsoft-DS Service) is used for resource sharing on machines running
Windows 2000, XP, or 2003, and other CIFS-based connections, e.g. to connect to file
shares.
• Port 139/TCP (NetBIOS Session Service) is used for resource sharing on machines running
Windows 9x, ME and NT. This port is also used to connect to file shares.
• Port 137/UDP (NetBIOS Name Service) is used by computers running Windows to find out
information concerning the networking features offered by another computer. The information
that can be retrieved this way includes system name, name of file shares, and more.
• Port 135/TCP is used to implement Microsoft Remote Procedure Call (RPC) services. An
RPC service is a protocol that allows a computer program running on one host to cause code
to be executed on another host.
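The four ports above amount to a simple watch-list. Counting how many distinct internal hosts each source touches on those ports will surface the worm-like fan-out behaviour described here; the flow-record layout and threshold below are assumptions for illustration, not a specific product's schema.

```python
from collections import defaultdict

# Sketch: flag sources fanning out to many hosts on the Windows
# sharing/RPC ports listed above, i.e. classic bot propagation.

WATCH_PORTS = {445, 139, 137, 135}

def scanning_sources(flows, fanout_threshold=50):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.

    Returns the set of sources contacting at least `fanout_threshold`
    distinct hosts on the watched ports.
    """
    targets = defaultdict(set)
    for src, dst, port in flows:
        if port in WATCH_PORTS:
            targets[src].add(dst)
    return {src for src, dsts in targets.items() if len(dsts) >= fanout_threshold}
```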

Almost all bots use a collection of exploits to spread further. Since the bots are constantly attempting
to compromise more machines, they generate noticeable traffic within a network. Network attack
monitoring sites such as myNetWatchman, a community-based collaborative firewall log correlation
system, provide up-to-date listings of the most common TCP and UDP attacks (see Figure 2).
DShield is a subsidiary service of the SANS Internet Storm Center. These reports can be used to
identify botnet behaviour on a local Enterprise network or on a service provider infrastructure.

Figure 2: Sample Report of TCP/UDP Attacks from MyNetWatchman

4.2 Prominent Botnets in 2009 & 2010

This section identifies the most prominent botnets of 2009 in terms of the number of bots under their
control and describes their characteristics [Reference 10].

4.2.1 Zeus & Variants

The Zeus botnet emerged in 2009, operated by a fairly sophisticated cyber crime syndicate. While
more information on this particular botnet is provided in Section 5.3.1 below, Zeus is essentially a
crimeware toolkit for creating botnets, available both for sale and for free on underground networks.
Because the Zeus framework is marketed as a commercial toolkit, the number of variants of this
particular botnet is extremely high – currently above 70K [Reference 16]. Given its popularity in the
underground, security experts have gone so far as to create a tracking tool to monitor changes in the
Zeus botnet. Use of the botnet toolkit is relatively inexpensive and provides two options: a web admin
portal, which provides the functionality to customize the payload and set the distribution parameters,
or an exe package builder for customized versions of the botnet.

Kneber

At the time of writing of this report, a new variant of the Zeus botnet emerged, dubbed Kneber by the
security community [Reference 15]. The name Kneber comes from the email address used to register
the initial domain used in the campaign.

Kneber is a sub-variant of the Zeus network, leveraging the Zeus framework for targeted keystroke
logging: in the case of Kneber, grabbing login credentials for email and social networking sites rather
than Zeus' typical focus on banking credentials. As with the other sub-variants of Zeus, Kneber is
operated by sub-factions of the Zeus cyber crime syndicate. According to NetWitness analysis of the
Kneber variant, it primarily affects Windows XP and Vista machines, and its disbursal is international,
with Egypt, Mexico, Saudi Arabia, and Turkey the countries primarily affected. It has managed to
successfully infect 75,000 hosts within over 2,500 organizations internationally.

Also of note is that more than half of the computer systems in the Kneber botnet also have the
Waledac Trojan embedded into them.

4.2.2 Torpig

While not new, the Torpig botnet is notable because of its continued prevalence in detected activity.
Torpig, also known as Sinowal or Anserin (mainly spread together with Mebroot rootkit), is a type of
botnet spread by a variety of trojan horses which can affect computers that use Microsoft Windows.
Torpig circumvents anti-virus applications through the use of rootkit technology and scans the
infected system for credentials, accounts and passwords as well as potentially allowing attackers full
access to the computer. It is also purportedly capable of modifying data on the computer. - Wikipedia

As of November 2008 it had been responsible for stealing the details of about 500,000 online bank
accounts and credit and debit cards, and is described as "one of the most advanced pieces of
crimeware ever created". Torpig is malware that has drawn much attention recently from the security
community. On the surface, it is one of the many Trojan horses infesting today’s Internet that, once
installed on the victim’s machine, steals sensitive information and relays it back to its controllers.
However, the sophisticated techniques it uses to steal data from its victims, the complex network
infrastructure it relies on, and the vast financial damage that it causes set it apart from other threats. -
Your Botnet is My Botnet: Analysis of a Botnet Takeover, Department of Computer Science,
University of California, Santa Barbara

Torpig turns off anti-virus applications, allows others to access the computer, modifies data on the
computer, steals confidential information (such as user passwords) and installs more malware on the
victim's computer. Torpig now uses an MBR rootkit, meaning the malware runs outside of the
Windows file system on unallocated sectors of the hard drive. Thus it is very difficult to detect and
remove.

Case in point: a primary mail server of a sector was compromised in 2009 using Torpig, and
appeared to be controlled by Russian organized crime. The two most dangerous networks on the
Internet are RBN and Nevacon, both of which have been regularly blacklisted.

4.2.3 Waledac
Emerging late in 2008 and growing through 2009, Waledac is primarily an HTTP spam-run botnet
with a peer-to-peer backup command and control infrastructure. That said, the Waledac framework
also includes capabilities to perform data mining and DDoS, act as a network proxy, and serve as a
malware dropper [Reference 18]. It is for the latter capability that many other botnets use the
Waledac network as a parasitic host to drop their own bots and malware, including Kneber
[Reference 16], Peacomm [Reference 18], and Conficker [Reference 18]. Waledac uses both spam
runs and vulnerability exploitation as its primary methods of propagation.

Symantec has an excellent paper on the technical composition of Waledac that can be found at

A Waledac tracking page can be found at

4.2.4 Cutwail (aka Pushdo, Pandex)

Cutwail consisted of 1 million to 1.5 million bots throughout the year, and was responsible for 17% of
all spam. It was responsible for the surge in Bredolab malware, spoofed greetings card emails
containing malicious hyperlinks, phishing activities, pharmaceutical spam and spam peddling
counterfeit watches.

4.2.5 Conficker / Downup

While not currently delivering a known payload, the Conficker worm/botnet has infected an estimated
3 million Windows-based hosts according to the Conficker Working Group (some estimates are as
high as 15 million hosts), which gives it the potential to be the largest botnet of all time. The Conficker
worm spreads itself primarily through a buffer overflow vulnerability in the Server Service on the
Windows 2000, Windows XP, Windows Vista, Windows Server 2003, Windows Server 2008, and
Windows 7 Beta operating systems that have not been patched with MS08-067. More advanced
variants such as Conficker.B and Conficker.C also propagate through autorun Trojans on removable
media such as USB drives. Updates are performed using HTTP pull, P2P push/pull and NetBIOS
push techniques. Conficker survives on host PCs by blocking access to DNS, anti-malware web sites
and the Microsoft update site, and by disabling anti-malware software.
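Conficker's HTTP update pull depends on a date-seeded domain generation algorithm: each day, infected hosts derive a fresh list of pseudo-random domains and poll them for updates, which is why defenders pre-computed the daily lists to register or sinkhole them. The toy generator below illustrates the concept only; it is not Conficker's actual algorithm.

```python
import hashlib

# Toy date-seeded domain generation algorithm (DGA), for illustration
# only -- NOT Conficker's real algorithm. Bot and defender both derive
# the same daily domain list from the date, so defenders can register
# or sinkhole the domains before the bots poll them.

def daily_domains(date_str, count=5, tld=".info"):
    """Deterministically derive `count` domains from a YYYY-MM-DD string."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{date_str}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)   # first 12 hex chars as the label
    return domains
```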

An excellent technical description of Conficker can be found at

4.2.6 Rustock
Rustock, a spam generating botnet, frequently sends spam at full capacity for short periods, and then
ceases its activity, often for days at a time. Between August and September 2009, it controlled
between 1.3 million and 2 million bots. Rustock accounted for approximately 10-20% of all spam for
much of the year, but by the end of 2009 it had increased its dominance and stabilized its output to
approximately 18% of all spam. By the end of 2009, Rustock was mostly sending pharmaceutical and
medical spam.

A good paper on the technical composition of Rustock from Sandia Labs can be found at

4.2.7 Grum
Grum, a spam generating botnet, was busiest between June and September, when it was sending
more spam than any other botnet (20% of all spam). The number of bots it controls ranges from
600,000 to 800,000, and they are charged with sending out mostly pharmaceutical spam.

4.2.8 Maazben
A newcomer among botnets, the spam generating botnet Maazben was first detected in March. By
the end of 2009, it controlled 200,000-300,000 bots. Responsible for 2% of all spam, it focused mainly
on French and German language casino related and gambling spam.

4.2.9 Festi
Another spam generating botnet newcomer, Festi emerged in August 2009 - by the end of the year, it
controlled approximately 100,000-200,000 bots which send out counterfeit watch and fake fashion
accessories spam.

4.2.10 Mega-D (aka Ozdok)

At the beginning of 2009, Mega-D was the main spamming botnet, emerging after the McColo
closure as the most active botnet, comprising an estimated 300,000-500,000 bots. However, as the
year progressed, Mega-D’s estimated size plummeted to less than 100,000 bots. In January, it was
responsible for 58.3% of all spam, but it was almost eradicated on 4 November as the result of
community action to disrupt the botnet, and its output fell drastically. Mega-D returned on 13
November using a different collection of bots, sending between 4-5% of spam, mostly pharmaceutical
and some phishing activity.

4.3 Other Botnets of Note

Although no longer as prevalent as the newer botnets described in Section 4.2, the following botnets
are still active, and worthy of note.

4.3.1 Storm (aka Nuwar)

The Storm botnet, also referred to as Nuwar, Peacomm, and Zhelatin, made its appearance in 2007,
and was the first of its kind in terms of size, capability, and self defence techniques. At its height, it
was estimated that over a million machines were infected with the Storm bot/malware.

4.3.2 Srizbi
Emerging in June of 2007, this botnet superseded the Storm botnet in terms of size and complexity.
At its peak, it was estimated that this botnet was responsible for over 60% of all spam. Because of its
virulence, and the performance impact on the Internet of the vast quantity of spam it generated, the
largest take-down of a hosting provider to date was coordinated. When the McColo web hosting
service was taken down by decoupling it from its two upstream providers, this botnet was effectively
eradicated, with the take-down achieving a more than 75% drop in spam from this botnet.

4.3.3 Xarvester
Believed to be designed and operated by the owners of the defunct Srizbi botnet, Xarvester was
closely watched and there was a lot of activity aimed at suppressing its operation. In January it
controlled 500,000-800,000 bots, but by the end of the year it had only 20,000-36,000 bots; less than
1% of all spam was sent by them (mostly pharmaceutical and medical).

4.3.4 Donbot
Donbot also appeared in the wake of the McColo closure and was quite prominent during the first
quarter of 2009 controlling an estimated 800,000 to 1.2 million bots. By the end of the year the
number of bots fell to 100,000-150,000. Its spam contained links to profiles on social networking and
micro-blogging websites, related to “make-money-working-at-home” type spam messages.

4.3.5 Bobax / Kraken

Bobax has an estimated 80,000 to 120,000 bots at its disposal, and throughout the year it increased
the rate at which each bot was sending spam. By the end of the year Bobax was responsible for 13%
of spam which was mostly related to counterfeit fashion accessories and watches.

A good technical description of this spam botnet can be found at

4.3.6 Asprox Botnet

The Asprox botnet is an HTTP-based botnet, leveraging first-of-its-kind automated SQL injection
attacks with fast-fluxed malicious domains and IFRAME attacks to exploit sites as redistribution points
for the bot malware. This botnet has launched three primary waves of attacks: the first in late 2007,
the second in Q2 of 2008, and the third in mid-2009.

This botnet is known for spam runs, but is also primarily a downloader, which can be leveraged to
download other malware onto infected machines.
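The IFRAMEs planted on compromised sites in attacks of this kind are typically sized to zero or hidden with CSS. The content check below is a crude illustration; real scanners parse the DOM rather than applying a regular expression, and the size/style heuristics here are our own assumptions.

```python
import re

# Crude detector for hidden or zero-sized injected iframes of the kind
# used in Asprox-style IFRAME attacks. Illustrative only; production
# scanners parse the DOM instead of pattern-matching raw HTML.

HIDDEN_IFRAME = re.compile(
    r"""<iframe[^>]*(?:width\s*=\s*["']?0|height\s*=\s*["']?0|display\s*:\s*none)[^>]*>""",
    re.IGNORECASE,
)

def has_hidden_iframe(html: str) -> bool:
    """True if the page source contains an iframe that is zero-sized or hidden."""
    return bool(HIDDEN_IFRAME.search(html))
```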

4.3.7 Coreflood
Although almost seven years old, this botnet made a re-emergence in 2009 thanks to enhancements
that were made to its malware to allow it to propagate like a worm. This functionality, along with its
original password stealing Trojan payload, made Coreflood one of the more deadly botnets in 2009,
as the new capability allowed it to infect whole networks in a matter of hours.

A good technical analysis of the latest version of Coreflood can be located at

4.3.8 Gheg
Gheg was at its peak in January, when it controlled 150,000 to 200,000 bots following the closure of
McColo. At the end of 2009 it had less than 100,000 bots and was linked to approximately 0.5% of all
spam - mostly Russian language dating spam, and medical spam in French, German and English.

4.3.9 Warezov
Although originally released in 2007, Warezov made a resurgence in the beginning of 2009. Warezov
is primarily a fast-flux hosting platform that turns compromised PCs into web servers hosting spoofed
phishing sites. Warezov installs a reverse HTTP proxy that serves content from an obscured master
server, and a DNS server modified from BIND that acts as a slave to receive zone updates.

However, this botnet had largely ceased functioning by early 2009 [Reference 22].

Warezov Rises from the Grave, Dan Goodin:

Section 5 Technology
5.1 Technology Overview
5.1.1 Lifecycle of a Botnet
Ironically, the lifecycle of a botnet is similar to that of other software development lifecycles. While
this was less true in the earlier days of botnets, the development of botnets now includes clear design,
development, QA, and deployment phases, including commercialization. While generalized malware
authors act on a wide range of malicious intents, creators of botnets are primarily motivated by money.
Botnets themselves have become highly valued tools in the execution of that crime, given the relative
ease with which it can be committed, and the increased revenue streams (a larger target base means
a larger number of victims). Thus it behoves botnet creators to ensure the stability of the botnet
platform, and they take great care in ensuring that the development of the bots maximizes that
stability.

Generally, the lifecycle for a botnet includes:

1. Pre-release prototype, to determine evasion capacities;

2. Limited distribution of enhanced prototype, to propagate, prior to payload insertion to
maximize distribution;
3. Initial payload insertion and botnet enhancements;
4. Service offering or toolkit GA, in order to maximize commercialization viability; and
5. Updates, enhancements and re-purposing.

Some development occurs concurrently, particularly where groups of individuals are working together
to develop the botnet. Figure 3 below illustrates.

Figure 3: Botnet Lifecycle

5.1.2 Botnet Families

When botnets move through their lifecycle (or due to polymorphic capabilities built in), bot agents
alter their construction through each generation, thus making it necessary to organize them into
families, which are given unique names to distinguish them from others. According to Microsoft’s
naming convention, the family name is usually not related to anything the malware author has chosen
to call the threat; researchers use a variety of techniques to name new families, such as excerpting
and modifying strings of alphabetic characters found in the malware file. Although often using
different names initially, security vendors usually try to adopt the name used by the first vendor to
positively identify a new family. The MMPC Encyclopaedia lists the names used by other major
security vendors to identify each threat, when known.

5.2 Characteristics of a Botnet

As botnets evolve, and the number of botnets grows exponentially, the only effective means of
developing a taxonomy for botnets is to use attribute-based characterization. In this way, botnets can
be considered to have four primary attributes:
• Propagation/Distribution Methods: How the malware bots that compose the botnet are
propagated and distributed;
• C&C Methods: How the malware bots are controlled by the bot operator, and how they
communicate back with the central botnet;
• Self Defence Methods: How the malware bots and the botnet itself protect themselves, both
from discovery and from eradication; and
• Payload: What damage or attacks can be done by the botnet.
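The four attributes above lend themselves to a simple record type for cataloguing observed botnets. The sketch below is illustrative only — the field names and example values are assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class BotnetProfile:
    """Illustrative attribute-based characterization record for a botnet."""
    name: str
    propagation: set = field(default_factory=set)   # e.g. {"spam", "drive-by"}
    cnc_topology: str = "unknown"                   # "star", "distributed", ...
    self_defence: set = field(default_factory=set)  # e.g. {"domain-flux"}
    payload: set = field(default_factory=set)       # e.g. {"ddos", "spam-run"}

# Example entry (attribute values chosen for illustration)
conficker = BotnetProfile(
    name="Conficker",
    propagation={"vulnerability-exploit", "network-shares", "autorun"},
    cnc_topology="distributed",
    self_defence={"domain-flux", "service-corruption"},
    payload={"malcode-dropping"},
)
```

A catalogue of such records can then be filtered by any attribute, e.g. to list all botnets observed using autorun propagation.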

Each of these attributes can be broken down into specific characteristics, of which each characteristic
can be further refined into elements, which may be a feature of the functionality of the botnet, an
observable fingerprint of the botnet, or a quantification of value for the metric.

Table 1: Botnet Characteristics

Feature Characteristic Element Metric
Propagation: Propagation Techniques Direct exploitation of Yes / No
Characterization vulnerabilities
Drive-by downloads Yes / No
SPAM Yes / No
Network shares Yes / No
Autorun media Yes / No
Parasitic Yes / No
Bot agent Infection Sequence of Infection Observable Fingerprint
Design Complexity Programmatic Design Observable Fingerprint
Detectability H/M/L
Propagation speed H/M/L
Population Current # of bots Value Quantification

Command & Control: C & C Topology Centralized / Star Yes / No

Characterization Distributed Yes / No
Hierarchical Yes / No
Communications Protocol IRC Yes / No
Usage HTTP Yes / No
P2P Yes / No
Custom Type
Communications Efficiency H/M/L
Effectiveness Bandwidth Usage H/M/L
Robustness H/M/L

Self Defence AV Evasion techniques Service Corruption Yes / No

Code Metamorphism Method / None
Noise Insertion (Code) Method / None
Noise Insertion (Binary) Method / None
Communications IP Flux Method / None
Resiliency Domain Flux Method / None
Blind Proxy Redirection Method / None
Encrypted Comms Method
Payload Obfuscation / Compiler Settings Variable
Anti-reversing Payload Encryption Yes / No
Tripwiring Method / None

Payload Objective DDOS Yes / No

Spam Run - Phishing Yes / No
Spam Run – Spear Yes / No
Spam Run - DoS Yes / No
Spam – Reputation Yes / No
Mal Code Dropping Yes / No
Spyware Code Dropping Yes / No
IRC Subversion Yes / No
Online Gaming Yes / No

Feature Characteristic Element Metric
Distributed Processing Yes / No

Botnet Profile Source Information ASN Number

C&C Address Range CIDR
Target Information Geolocation Country
Industry Target Industry Vertical
Business model Private Yes / No
Commercial / For Hire Yes / No
DIY Toolkit Yes / No

The sections below further describe these botnet characteristics and their associated elements.

5.2.1 Propagation Characterization

First and foremost, bots require a means by which to grow and propagate. A botnet is only as good
as the number of infected hosts, and without undetected propagation, a botnet’s effectiveness is
severely limited.
In order to effectively characterize the propagation of a botnet, one must look at a variety of
characteristics, including:

• Propagation techniques,
• Local Infection technique,
• Programmatic design of the bot agent,
• Design complexity of the bot agent, and
• Current population of the bot.

Propagation Techniques

Fundamentally, botnets and their bot agents are simply another type of malware, and the propagation
techniques used to distribute bot agents in order to grow the botnet are similar to those of other
malware. These may include one or more of a variety of techniques, as described below.
• Direct Exploitation of Vulnerabilities: The direct, local exploitation of vulnerabilities in
the local machine is intended to gain access to the target host in order to install the bot
agent onto the target host. One such example is Conficker’s exploitation of MS08-067.
• Drive-by Downloads: Similar to the above, drive-by downloads occur when a victim visits a web
site hosting malcode (either unintentionally, the site having itself been compromised by an SQL
injection attack, or intentionally operated as a malware-serving host). The site then exploits
vulnerabilities in the local target’s browser configuration in order to compromise the local host.
• Social Engineering / Spam / Phishing / Spearphishing: In this propagation technique,
spam email is sent to the target, with embedded links to malware serving hosts (see above),
or with malicious attachments (such as malcrafted PDF files, MS documents with malcrafted
macros, etc.). This propagation technique relies on some form of user interaction – either to
click on the link, or open the attachment, in order to be successful.
• Open Network Share Distribution: In this propagation technique, the underlying malware
of the bot agent attempts to connect to the ADMIN$ share on other computers within the
network to which the local target is connected, first as the local user, and then using known
account names (guest, admin, etc.) with a set of known weak passwords. Should an open
share be found, the share is infected, along with any infectable files within that share and any
shares connected to it.
• Autorun Media: An autorun.inf file on a removable drive or storage device (USB key, flash
card, etc.) is modified or dropped so that it displays a spoofed “View Folder” option, which in fact
installs the malware.
• Parasitic Distribution: In some cases, certain botnets have not included the ability to
propagate, instead “piggybacking” on the propagation abilities of other malware distributors.
In some cases the botnet embeds modified versions of propagating malware, and in other
cases it leverages other existing botnets (see Section 4.2.3, Waldec).

Local Host Infection Technique

In addition to the means by which propagation of bot agents occurs, within families of botnets the
local infection generally occurs in a similar fashion (although with polymorphic self defence
techniques, as described in Section 5.2.3, these might have minor variations), and thus the
specific sequence of infection can be used as a means of fingerprinting a particular botnet.

This infection sequence generally includes one or more of the following actions:

• Modification or insertion of registry keys,

• Insertion of a browser help object,
• Malcode (objects, web controls, executables) insertion and execution,
• Code injection into system service,
• Dropping and executing a hidden service,
• New Active X Class Identifiers (CLSIDs ) created, and
• Malicious dlls associated to new CLSID dropped and executed.
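Because the ordered sequence of these actions is characteristic of a botnet family, it can be reduced to a stable identifier. The sketch below is a minimal illustration of that idea — the action labels and the truncated-hash format are assumptions, not any real tool's scheme:

```python
import hashlib

def infection_fingerprint(actions):
    """Hash an ordered sequence of observed infection actions into a stable
    identifier. A real fingerprinting tool would normalize registry paths,
    file names, etc.; here the raw action labels are simply canonicalized."""
    canonical = "\n".join(a.strip().lower() for a in actions)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical observed sequence for one sample
seq = [
    "registry key insertion",
    "browser helper object insertion",
    "code injection into system service",
]
fp = infection_fingerprint(seq)
```

Two samples that perform the same actions in the same order yield the same fingerprint; a different ordering yields a different one, which is what makes the *sequence* (not just the set) of actions useful.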

For malware that affects Windows-based operating systems, malware authors make use of
HKEY_CLASSES_ROOT Class Identifiers (HKCR\CLSID) in order to register the services of their bot
agent. One means of fingerprinting a particular piece of malware is to identify the program identifiers
and malicious .dlls that have been entered into the Registry. Botnets will have a CLSID and an
associated InProcServer32 entry pointing towards the malicious .dll resource.

Design Complexity

The complexity of the core programming of a bot agent is indicative of the motivation of the botnet
owner (bot herder). In the case of highly sophisticated bots, considerable time and effort was put in,
likely including a lengthy prototyping period (see Section 5.1.1).
• Programmatic Design: Programmatically, bot agents are constructed in the same way as other
forms of malware, but with additional code and hooks to allow for the C&C communications.
As with other forms of malware, programmatic elements that are unique to the design of the
bot agent can be used as unique fingerprinting identifiers for most forms of bots (with the
exception of those that make use of polymorphic self defence techniques; see Section 3.1).
• Detectability: In the design of the botnet, during prototyping the susceptibility to
detection by AV products is generally tested through a short-term “soak” on a limited number of
infected hosts to monitor the number of “detect/kills” the prototype triggers with the local AV.
However, a truly determined botnet developer will also include, as part of prototyping, pre-
testing of their malcode using publicly available AV sandboxes in order to determine its
detectability by a variety of AV products. Thus the level of detectability can be used as an
element of design complexity.

• Propagation speed: The speed at which propagation occurs is an indicator as to the
purpose of the botnet. Fast propagation typically comes at the cost of an increased risk of
detection, and thus bot agents with a particularly fast propagation are typically used for
“smash and grab” type attacks – DDOS, credential grabbing, etc. “Low and slow” type
propagators tend to be those botnets that wish to have a particularly lengthy residency on the
infected host, such as for information exfiltration, keystroke logging, and other highly
sophisticated targeted attacks.

5.2.2 Command and Control Structure Characterization

One of the differentiators between a bot agent and other malware such as viruses [Reference 2] is
the bot’s ability to communicate with a Command-and-Control (C&C) infrastructure. C&C allows a bot
agent to receive new instructions and malicious capabilities, which could include denial of service,
spam generation, or capture of account credentials for banks or other financial institutions. The
compromised host can then be used as a participant in Internet crime as soon as it receives the
appropriate instructions from the botnet controller.

C&C Topology

The criminals who actively control botnets must ensure that their C&C infrastructure is sufficiently
robust to manage tens-of-thousands of globally scattered bot agents, as well as resist attempts to
hijack or shutdown the botnet. Botnet controllers have consequently developed a range of
technologies and tactics to protect their C&C investment. The precise C&C topology selected by a
botnet controller often reflects that controller’s perceived risk to continued command access,
and the financial business model of that botnet. C&C topologies (see Figure 4) can be grouped into
the following categories:

• Centralized / Star Configuration

• Distributed / Multi-server Mesh
• Hierarchical
• Random

Figure 4: C&C Topologies

Centralized / Star Configuration

The Star topology relies upon a single centralized C&C resource to communicate with all bot agents.
Each bot agent is issued new instructions directly from the central C&C point. When a bot agent
successfully breaches a victim computer, it is normally preconfigured to “phone home” to this central
C&C, whereupon it registers itself as a botnet member and awaits new instructions.
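The register-then-await flow of a star topology can be illustrated as an in-memory simulation. This is a conceptual sketch only — the class and method names (`CentralCnC`, `phone_home`, `issue`) are invented for illustration:

```python
class CentralCnC:
    """Toy model of a star-topology C&C: every bot registers with, and
    polls, the single central point."""
    def __init__(self):
        self.registered = {}      # bot_id -> host info collected at check-in
        self.pending = "idle"     # current instruction for all bots

    def phone_home(self, bot_id, host_info):
        # Registration on first breach; every later poll returns orders.
        self.registered[bot_id] = host_info
        return self.pending

    def issue(self, instruction):
        # The controller updates the single central instruction.
        self.pending = instruction

cnc = CentralCnC()
first = cnc.phone_home("bot-001", {"geo": "CA"})   # registers, gets "idle"
cnc.issue("send-spam")
second = cnc.phone_home("bot-001", {"geo": "CA"})  # next poll gets new order
```

The single `cnc` object is also the single point of failure described below: remove it and no bot can receive further instructions.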

The direct communication between the C&C and the bot agent means that instructions (and stolen
data) can be transferred rapidly. However, the botnet can be neutralized if the central C&C is blocked
or otherwise disabled.

Distributed / Multi-Server Mesh

Multi-Server C&C topology is a logical extension of the star topology, in which multiple servers are
used to provide C&C instructions to bot agents. These multiple command systems communicate
amongst each other as they manage the botnet. Should an individual server fail or be permanently
removed, commands from the remaining servers maintain control of the botnet.

Intelligent distribution of the multiple C&C servers amongst different geographical locations can speed
up communications with similarly located bot agents. Also, C&C servers simultaneously hosted in
multiple countries can make the botnet more resistant to legal shutdown requests.

Hierarchical
A hierarchical topology reflects the dynamics of the methods used in the compromise and subsequent
propagation of the bot agents themselves. Bot agents have the ability to proxy new C&C instructions
to previously propagated bot agents. However, updated command instructions typically suffer latency
issues making it difficult for a botnet controller to use the botnet for real-time activities.

In a hierarchical botnet no single bot agent is aware of the location of all the other bot agents. This
topology makes it difficult for botnet investigators to estimate the overall size of the botnet. The
hierarchical structure also facilitates the division of larger botnets into smaller botnets for sale or lease
to other botnet controllers.

Random
Botnets with a random topology (i.e., a dynamic master-slave or peer-to-peer relationship) have no
centralized C&C infrastructure. Instead, commands are injected into the botnet via any bot agent.
These commands are often “signed” as authoritative, which tells the agent to automatically propagate
the commands to all other agents.
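The verify-then-relay behaviour can be sketched as follows. Real botnets of this type use asymmetric signatures; the HMAC with a shared key below is a simplified, self-contained stand-in, and all names are illustrative:

```python
import hashlib
import hmac

SECRET = b"controller-key"  # stand-in for the controller's signing key

def sign(command: bytes) -> bytes:
    return hmac.new(SECRET, command, hashlib.sha256).digest()

def accept_and_relay(command: bytes, signature: bytes, peers: list) -> bool:
    """A bot agent in a random/P2P topology verifies that a command is
    'signed' as authoritative before relaying it to every known peer."""
    if not hmac.compare_digest(sign(command), signature):
        return False            # forged / unsigned commands are dropped
    for peer in peers:
        peer.append(command)    # propagate to all other agents (sketch)
    return True

peer_queues = [[], []]                                    # two known peers
ok = accept_and_relay(b"update", sign(b"update"), peer_queues)
bad = accept_and_relay(b"hijack", b"\x00" * 32, peer_queues)
```

The signature check is precisely what makes hijacking such a botnet difficult: injecting a command requires the controller's key, not merely access to a bot.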

Random botnets are highly resilient to shutdown and hijacking because they lack centralized C&C
and employ multiple communication paths between bot agents. However, it is often easy to identify
members of the botnet by monitoring a single infected host and observing the external hosts it
communicates with. Command latency is somewhere between the star topology (lowest) and the
hierarchical topology (highest).

Communications Protocol Usage

The underlying communications structure used within a botnet is typically unvarying, relying on one
primary method. There are a variety of communications mechanisms used by botnets, including
HTTP, peer-to-peer (P2P), IRC, or custom protocols.

Communications Effectiveness

A Taxonomy of Botnet Structures qualifies the effectiveness of a botnet as an estimate of its overall
ability to accomplish a given purpose. This effectiveness is measured through three elements –
efficiency of communications, bandwidth usage, and robustness of communications [Reference 29].
• Efficiency: Communications efficiency relates to the number of logical degrees of separation
between a bot agent and the C&C server to which it is communicating. The larger the
number of segments, the less efficient the communications, and the more likely the
communications will be detected.
• Bandwidth Usage: By bandwidth, what is meant is not the total bandwidth usage of the botnet,
or even of a single segment within the botnet, but rather the potential bandwidth a botnet
operator could consume under ideal circumstances. Thus the ideal bandwidth usage of a
botnet that primarily targets home users, typically with DSL or dial-up connections, is
going to be lower than that of a botnet targeted at large corporate networks.
• Robustness: The stability of the communications channel and its ability to maintain the
required time-to-live for the communication path is critical to the operation of a botnet.
Without a robust covert communication channel, file transfers for bot agent package updates,
malware downloaders, etc. would become ineffective.
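The efficiency element above — logical degrees of separation between a bot agent and its C&C — is just the hop count in the relay graph, and can be computed with a breadth-first search. The graph and node names below are invented for illustration:

```python
from collections import deque

def degrees_of_separation(links, start, cnc):
    """BFS over a bot-to-bot relay graph (adjacency dict) returning the
    number of hops from a bot agent to its C&C, or None if unreachable.
    Fewer hops = more efficient (and harder-to-observe) communications."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == cnc:
            return hops
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None  # C&C cannot be located from this agent

# Hypothetical relay chain: bot-a -> bot-b -> relay -> cnc
links = {"bot-a": ["bot-b"], "bot-b": ["relay"], "relay": ["cnc"]}
hops = degrees_of_separation(links, "bot-a", "cnc")  # 3 hops
```

A star topology gives every agent a hop count of 1; hierarchical and proxied topologies trade that efficiency for resilience.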

5.2.3 Self Defence

Self Defence is a necessary function of botnet design. The goal of an effective botnet is longevity, in
order to grow to a substantial size; but without embedded self defence techniques, botnets would
quickly collapse as they are detected.

AV Evasion Techniques

The philosophy behind anti-virus evasion is based for the most part on the virus signatures that are
used by most anti-virus and intrusion detection systems. These signatures identify characteristics of
a virus that are specific to the underlying executable. Botnet developers use code metamorphism,
noise insertion (source code and/or binary) and modified compiler settings to change the malware
executable without changing the functionality of the malware. The resulting malware is categorized
as a serial variant modified malware [Reference 1].

Service Corruption

As observed with the Storm botnet, a powerful capability of botnet malware is the disruption or
corruption of native security services on the affected host. Affected services can include AV/IDS/IPS
executables, DLLs, and system files. On an end device, the malware merely corrupts the service to
make it appear that it is operational; typically the subversion of the service is detected when patch
updates are unsuccessful. On enterprise AV servers, the self defence mechanism will continue to
allow the service to run, but will have the AV server push scans that have been corrupted, typically
to detect either no malware, or only malware that is not part of the botnet’s “family”.

In addition to security service disruption, many of the newer forms of botnets will also disrupt the
services of other bots that it detects on the host it is attempting to subvert, as first seen with
Conficker. In this particular botnet, not only would the botnet shut down and clean other pieces of
malware it found on the host it was corrupting, but it would also apply applicable patches to prevent
re-infection by other pieces of malware.

Code Metamorphism

Code metamorphism (including polymorphism) is a common tactic that attempts to bypass pattern
recognition systems employed by antivirus, intrusion prevention systems (IPS) and data leakage
prevention (DLP) technologies. Automated tools manipulate the structures of the source code of the
bot malware – altering their “shape” – by reordering and replacing common programmatic routines,
e.g. use of equivalent code, changing loop types, changing the order of code and anything else that
will change the binary generated by the compiler and linker but not alter its functionality. Two primary
forms of polymorphism exist [Reference 28]:
• Server-side polymorphism, where a slightly different version of the bot agent is delivered
each time by the botnet distribution server, thus resulting in a highly inflated number of
samples, all with different hash values, but identical functionality and code families when
analysed; and
• Malware polymorphism, where the bot agent changes slightly every time during
propagation, through randomization of file names, registry key settings, encryption hashes,
etc.

Noise Insertion (Code)

Noise code that either does nothing or is interpreted as doing nothing is designed to help the malware
bypass signature-based detection systems and may also be used to thwart basic behavioural and
heuristics-based detection systems, e.g. by identifying the construct or operating system within which
the malware is executed, and performing different functions depending upon the specific
environment.

Noise Insertion (Binary)

Malware authors can also insert noise into a compiled malicious binary. This noise is typically applied
to the beginning or end of the binary, rather than the functional malicious code in the middle. This
binary noise (often non-interpretable garbage) is very easy to add and can be used to grow a
malicious binary to any size a botnet controller wants. For example, the author may want to seed
Torrent networks with fake movies that happen to be exactly the same size as the real ones, but
actually contain the malicious bot agent.

Communications Resiliency

While technically a self defence capability, the ability for a bot agent to locate C&C infrastructure is a
critical requirement for maintaining control of botnets that rely upon centralized C&C. If the C&C
server cannot be found, a bot agent will not be able to receive new instructions. While some bot
agents may opt to function in an alternative autonomous “zombie” mode – reverting to embedded
instructions for propagation and infection – most bot agents will continue to harvest local host
information and poll the missing C&C at regularly scheduled times.
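The "poll at regularly scheduled times" behaviour can be illustrated as a small scheduling sketch. The function and parameter names are invented; the jitter term reflects the common practice of slightly randomizing check-in times:

```python
import random

def next_poll_times(base_interval, jitter, count, start=0.0):
    """Sketch of a bot agent's C&C polling schedule: check-ins occur at
    roughly fixed intervals, each perturbed by a small random jitter."""
    t, times = start, []
    for _ in range(count):
        t += base_interval + random.uniform(-jitter, jitter)
        times.append(t)
    return times

random.seed(1)  # deterministic for the example
polls = next_poll_times(base_interval=600, jitter=30, count=5)
```

From a defender's perspective, this near-periodicity is itself an observable: hosts that contact the same external address on an almost fixed cadence are candidates for beaconing analysis.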

Botnet controllers use a number of technologies to increase the probability that bot agents will be able
to locate the central C&C infrastructure. These techniques also make botnets more resilient to
tracking and shut-down either by law enforcement agencies or other botnet controllers.

The technology that currently provides C&C servers with location resolution and failover resilience is
referred to as “fluxing”. There are two main types of fluxing:

• IP fluxing; and
• Domain fluxing

Both techniques are used extensively by botnet controllers.

IP Flux
IP flux refers to the constant changing of IP address information related to a particular
fully-qualified domain name (FQDN). Botnet controllers abuse the ability to change the IP address
information associated with a host name by linking multiple IP addresses with a specific host name
and rapidly changing the linked addresses. This rapid changing aspect is more commonly referred to
as “fast-flux”.

There are two types of fast-flux – “single-flux” and “double-flux”. Single-flux is characterized by
having multiple (hundreds or even thousands of) IP addresses associated with a domain name.
These IP addresses are registered and de-registered rapidly – using a combination of round-robin
allocation and very short Time-to-live (TTL) values – against a particular DNS Resource Record (i.e.,
A records).
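The round-robin rotation of single-flux can be modelled as a toy DNS zone. This is a simulation for illustration only — the class name, the `192.0.2.x` documentation addresses, and the TTL are all illustrative:

```python
import itertools

class SingleFluxZone:
    """Toy model of single-flux DNS: a large pool of compromised-host IPs
    is rotated through a small set of short-TTL A records for one FQDN."""
    def __init__(self, fqdn, ip_pool, ttl=60, records=3):
        self.fqdn, self.ttl, self.records = fqdn, ttl, records
        self._cycle = itertools.cycle(ip_pool)

    def resolve(self):
        # Each lookup after TTL expiry serves the next slice of the pool.
        answers = [next(self._cycle) for _ in range(self.records)]
        return {"fqdn": self.fqdn, "ttl": self.ttl, "a_records": answers}

zone = SingleFluxZone("bad.example", [f"192.0.2.{i}" for i in range(1, 7)])
first = zone.resolve()["a_records"]   # first three pool addresses
second = zone.resolve()["a_records"]  # next three pool addresses
```

The short TTL forces resolvers to re-query constantly, so blocking any one advertised address removes only a sliver of the pool.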

Double-flux is a more advanced evolution of Single-flux. Double-flux not only fluxes the IP addresses
associated with the FQDN, but also fluxes the IP addresses of the DNS servers (e.g., NS records)
that are in turn used to look up the IP addresses of the FQDN.

Domain Flux

Domain flux is the opposite of IP flux and involves the constant changing and allocation of multiple
FQDNs to a single IP address or C&C infrastructure. Techniques applicable to domain flux include
domain wildcarding and newer domain generation algorithms.
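A domain generation algorithm can be illustrated with a toy sketch: a shared seed plus the current date deterministically yields a fresh daily list of candidate FQDNs. This is not any real botnet's DGA — the seed, hash choice, and `.example` TLD are all illustrative:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5):
    """Toy DGA: derive a date-dependent list of FQDNs from a shared seed.
    Bot and controller run the same code, so both know today's list."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        label = hashlib.md5(material).hexdigest()[:12]
        domains.append(label + ".example")
    return domains

today = generate_domains("botfamily-42", date(2010, 5, 6))
tomorrow = generate_domains("botfamily-42", date(2010, 5, 7))
```

Because the controller need only register one of today's domains while defenders must anticipate all of them (and again tomorrow), the economics strongly favour the controller.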

Domain wildcarding causes a DNS server to wildcard (e.g., *) a higher domain such that all FQDNs
point to the same IP address. This technique is most commonly associated with botnets that deliver
spam and phishing content; e.g., the botnet controller creates and uses a random FQDN to uniquely
identify a victim, track success using various delivery techniques, and bypass anti-spam technologies.

Domain generation algorithms are a more recent addition to bot agents. These algorithms create a
dynamic list of multiple FQDNs each day which are then polled by the bot agent as it tries to locate
the C&C infrastructure. Since the created domain names are dynamically generated in volume and
typically have a life of only a single day, the rapid turnover makes it very difficult to investigate or
block every possible domain name.

Blind Proxy Redirection

Both IP flux and domain flux provide advanced levels of redundancy and resilience for the C&C
infrastructure of a botnet. However, botnet controllers often employ a second layer of abstraction to
further increase security and failover – blind proxy redirection.

Redirection helps disrupt attempts to trace or shutdown IP flux service networks. As a result, botnet
controllers often employ bot agents that proxy both IP/domain lookup requests and C&C traffic. These
agents act as redirectors that funnel requests and data to and from other servers, i.e. the servers
providing the malicious content that are under the botnet controller’s control.

Payload Obfuscation / Anti-Reversing

Payload obfuscation is necessary to protect the capabilities of the bot client from being detected and
removed from the infected host. Bot clients use a variety of techniques to prevent detection through
obfuscation of the payload, and to prevent reverse engineering of the bot client that would expose
its C&C methods. The following are the most predominant elements of payload obfuscation.

• Run-Time Packer Type: Originally developed to reduce the size of a compiled executable to
minimize its footprint in hopes of reducing the risk of detection, packers are now primarily
used to introduce obfuscation into the internal coding of malware. The aim of this obfuscation
is to render internal case statements non-malicious in appearance to AV products by clouding
any of the internal algorithms or data sets within the executable. Some packers are polymorphic
themselves, in that they are capable of generating different variants of the same file upon
unpacking. Some of the top packers currently in use include UPX, PE-Compact2,
ASProtect.b, and Upack2.
• Layered Compiler Use: In many cases, the malware designers will use multiple compilers in
order to layer code within code, making reverse engineering problematic, if not impossible.
• Payload Encryption: In addition to utilizing multi-layer compiling, malware authors will now
use asymmetric (public key) encryption for the inner layers of the code framework, only
decrypting the internal payload at execution. In these cases, part of the C&C design provides
instructions on where to go to fetch the public key used to encrypt the payload.
• Trip-wiring: Lastly, in many cases, the malware is embedded with a “kill bit” as a last-ditch
effort to prevent reverse engineering. When triggered, the trip-wired code is set to wipe the
local machine.

Post Execution Behaviour

Post execution behaviour is also a self defence characteristic that can be considered unique to a
particular bot agent, given that many bot agents execute and then self-delete. But how this final
sequence occurs is individualistic, and can be fingerprinted. This is best described in a SANS Diary
entry on Categories of Common Malware Traits [Reference 31], which asks:
• How does that self-deletion happen?
• Does it drop a batch file or execute a shell command?
• Does the file remain memory resident or does it terminate after adding a Scheduled Tasks
.job file, so it "wakes up" periodically to ensure the payload is still installed?

In some cases, post execution behaviour may be the only indicator that any infection had occurred as
it is one of the few processes that occurs outside of obfuscation.

5.2.4 Payload / Payload Objectives

While the other three characteristic areas (propagation, C&C, and self defence) can be considered
the construct of a bot agent, the payload of the bot client is its malware component. It contains
the set of executables which give the bot agent its capabilities, which may include:

• sniffing network traffic;

• capturing keystrokes, data, or screenshots;
• locating and transmitting files matching particular key words;
• deleting, modifying, or otherwise corrupting data;
• executing binaries or opening backdoors;
• acting as a jump point for attacking other systems; amongst others.

The payload of any given bot client is highly variable, as most bot clients are designed to allow for
simple repurposing of the payload as needed, particularly in the case of commercial for-hire botnets
(see Section 5.2).

Targeted Attacks / Advanced Persistent Threat (APT)

Targeted attacks, or Advanced Persistent Threats (APTs) [Reference 11] are the fastest growing
category of botnet attacks at this time. APTs are focused malware attacks that attempt to obtain
sensitive corporate or government information by targeting individuals instead of hosts. Although
these attacks do not compromise as many hosts as other botnets (normally less than 200), the results
can be much more lethal.

APTs use a similar approach to that described in Section 4.1 but with some significant differences.
Instead of using volumes of spam email to infect any host, the APT targets are pre-selected persons
with access to sensitive material. The attackers then attempt to compromise the users through a
directed email (spear phishing) that contains a maliciously crafted file or a link to a malicious web site.
Once infiltrated, the target’s host is loaded with malware disguised as innocuous utilities that are not
usually detected by anti-virus software or intrusion detection systems (rates of detection less than
25%), such as Windows compression (.rar), file transfer and email (MAPI) clients. APTs have also
been known to use extremely small “stubs” that reside in memory and are used to dynamically
download executables from the C&C server for various purposes such as network reconnaissance,
packet sniffing and file exfiltration.

Once installed, the malware moves laterally within the enterprise, attempting to infect domain
controllers or other sources of authentication credentials. The credentials obtained by the malware
are disguised as common files (e.g. compressed .rar files) and passed to the C&C server. The APT
attacker then reverse engineers the login credentials from the encrypted or hashed authentication
files and uses these credentials to access local hosts, file servers and email accounts in the targeted
enterprise. The information obtained is disguised as before and either encrypted or sent in the clear to the C&C server using techniques, e.g. HTTP POST requests or SSL sessions, that evade most anti-malware systems. These methods were used successfully in the recent “Aurora” attacks on Google and Adobe.

This form of botnet attack is very difficult to detect using conventional security safeguards, as it is characterized by outbound-only connections to the C&C (once the host is infected) and uses techniques that cannot be detected with IDS signatures at the network or host level. The malware is sophisticated and uses many of the methods described previously to update itself and evade detection.

Distributed Denial-of-Service

A DDoS attack is an attack on a computer system or network that causes a loss of service to users,
typically the loss of network connectivity and services by consuming the bandwidth of the victim
network or overloading the computational resources of the victim system. The most common attack
vectors are TCP SYN and UDP flood attacks and DNS reflection and amplification attacks [Reference
6]. The DNS attacks rely on cache poisoning techniques and loosely configured “open resolver” DNS
servers [Reference 8].

Higher-level protocols can be used to increase the load even more effectively by using very specific
attacks, such as running exhausting search queries on bulletin boards or recursive HTTP-floods on
the victim's website where the bots start from a given HTTP link and recursively follow all links on the
provided website.
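As a minimal illustration of how such floods can be detected, the sketch below counts requests per source over a sliding time window. The window and threshold values are arbitrary assumptions for the sketch, not figures from this report:

```python
from collections import deque, defaultdict

# Illustrative per-source request-rate detector; thresholds are assumptions.
WINDOW_SECONDS = 10
MAX_REQUESTS = 100


class FloodDetector:
    def __init__(self, window=WINDOW_SECONDS, limit=MAX_REQUESTS):
        self.window = window
        self.limit = limit
        self.hits = defaultdict(deque)  # source -> request timestamps in window

    def observe(self, source, ts):
        """Record one request from `source` at time `ts` (seconds);
        return True if the source now exceeds the allowed rate."""
        q = self.hits[source]
        q.append(ts)
        while q and ts - q[0] > self.window:  # expire timestamps outside window
            q.popleft()
        return len(q) > self.limit
```

A real deployment would track aggregate bandwidth as well, since DDoS traffic from a large botnet is distributed across many sources that individually stay under any per-source limit.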

DDoS attacks can be accompanied by extortion requests, i.e. “pay us to stop the DDoS”, as was the case in the recent attack against massively multiplayer online role-playing game (MMORPG) auction sites.

Spamming
While there are many forms of spam, e.g. instant messaging spam, mobile phone spam and
newsgroup spam, the most common form of spam is email spam. The Spamhaus project defines
email spam as follows:

“Spam as applied to Email means ‘Unsolicited Bulk Email’ (UBE). Unsolicited means that the recipient has not granted verifiable permission for the message to be sent. Bulk email means that the message is sent as part of a larger collection of messages that all have substantively identical content. A message is spam only if it is both Unsolicited and Bulk.”
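The definition above reduces to a conjunction: a message is spam only if it is both bulk and unsolicited. A minimal sketch of that test follows; the bulk threshold and the permission model are illustrative assumptions, not part of the Spamhaus definition:

```python
import hashlib
from collections import Counter

BULK_THRESHOLD = 50  # arbitrary assumption for "a larger collection of messages"


def content_key(body: str) -> str:
    """Hash a whitespace-normalized, lowercased body so substantively
    identical messages group together."""
    return hashlib.sha256(" ".join(body.split()).lower().encode()).hexdigest()


def find_spam(messages, permissions):
    """messages: iterable of (sender, recipient, body);
    permissions: set of (recipient, sender) pairs with verifiable opt-in.
    Returns (sender, recipient) pairs that are both Bulk and Unsolicited."""
    messages = list(messages)
    counts = Counter(content_key(body) for _, _, body in messages)
    spam = []
    for sender, recipient, body in messages:
        bulk = counts[content_key(body)] >= BULK_THRESHOLD
        unsolicited = (recipient, sender) not in permissions
        if bulk and unsolicited:
            spam.append((sender, recipient))
    return spam
```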

A bot agent that has the ability, e.g. through a malware download, to open a SOCKS v4/v5 proxy can
send email anonymously. This effect is multiplied by the number of bots available to a bot controller
and can result in the generation of massive amounts of bulk email. Email spam can be used for
unsolicited advertising or it can contain malicious files or URLs that infect unprotected hosts and allow
the botnet to spread.
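The malicious URLs carried in spam are a useful input to blocklists. A simple extraction pass over a batch of message bodies might look like this (an illustrative sketch):

```python
import re

# Simplistic URL pattern: scheme through the next whitespace or HTML delimiter.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")


def extract_urls(bodies):
    """Return the sorted set of distinct URLs found across message bodies."""
    found = set()
    for body in bodies:
        found.update(URL_RE.findall(body))
    return sorted(found)
```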

The SOCKS proxy can also be used for Instant Messaging, HTTP (web) sessions and file (FTP) transfers.

Phishing / Information Gathering and Identity Theft

Sensitive information typically targeted by botnets includes login credentials to financial institutions or
to local servers containing valuable information, personal identifying information (PII) such as social
insurance numbers, email address lists, and proprietary information and intellectual property from
software and hardware vendors, government contractors and government agencies.

Login credentials and PII can be used to access bank accounts and to obtain credit cards, which can then be used for fraud. Proprietary information and intellectual property can be sold for profit or used as a source of intelligence in business or diplomatic negotiations. Email address lists are used for spamming, phishing and spear phishing attacks.

Bot agents use packet sniffers and keystroke loggers to obtain local authentication credentials, sensitive clear-text information and information regarding the enterprise topology, e.g. the location of domain controllers. Users can be tricked into disclosing authentication credentials through malicious redirection, e.g. through a cross-site scripting exploit, to a bogus but authentic-looking web site.

Installing Advertisement Add-ons and Browser Helper Objects

Botnets can also be used to exploit the common web site practice of embedding advertisements in web pages. In this scenario, a company pays a web service provider to place an advertisement on a web page and then pays the provider for each “click” on the ad. The individuals involved in this scam create a bogus website and negotiate a deal with hosting companies to display their advertisements. The botnet controller then instructs the bots to click on the pop-up advertisements as they appear on the fake web site, fraudulently billing the hosting company for the fictitious user clicks. These clicks can be automated and executed each time the victim uses the browser.

Attacking IRC Chat Networks

Botnets are used for attacks against Internet Relay Chat (IRC) networks. A popular attack is known as the "clone attack", in which the botnet controller orders each bot to connect a large number of clones to the victim IRC network. The victim is flooded by service requests from thousands of bots or thousands of channel-joins by these cloned bots. These actions create a denial-of-service condition that brings the IRC network down. Ironically, these actions could disable other botnets that use IRC networks to communicate.

Manipulating Online Polls and Games

Online polls and games can easily be manipulated with botnets. Assuming that there is no authentication, each bot has a unique IP address, so a vote registered by a bot carries the same authenticity as a vote from a real person. These tactics could be used to manipulate popular TV reality series such as “American Idol”. Online games can be manipulated in a similar way by using botnets to farm resources much more quickly than a human player and exhaust resources for other players.
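One hedge against such manipulation is to look for machine-regular timing in the votes from a single source, since human voters are far more irregular. The heuristic below is an illustrative sketch with an arbitrary threshold, not a method described in this report:

```python
from statistics import mean, pstdev


def looks_automated(timestamps, cv_threshold=0.1):
    """Flag a voter whose inter-vote intervals are suspiciously regular.

    Computes the coefficient of variation (stdev / mean) of the gaps between
    votes; timer-driven bots produce near-zero values. The threshold is an
    arbitrary assumption; real systems combine many signals.
    """
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    if m == 0:
        return True  # multiple votes at the same instant
    return pstdev(gaps) / m < cv_threshold
```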

5.2.5 Botnet Parameters

Botnet profiles are characterized by “who, what, and why”, i.e. the source (who), the target (what), and the business model of the botnet (why).

Source Information

Understanding the source of the botnet – who is running it, and from where – provides the ability to put defensive technologies in place to block inbound attack traffic from the botnet (at the router, firewall, or IDS). This is useful when the internal enterprise AV system has either been overwhelmed by the attack, or no current signature exists to block and prevent the inbound attack.

ASN
During propagation and initial distribution, C&C communications tend to source from clusters of ASNs, which are used either because the ASN has been stood up for that specific purpose, or because the ASN is known to have loose registration and monitoring policies and is thus ideal from a bot owner's perspective. In this stage, identifying the ASN in use not only aids in deciding the approach to take in taking down the C&C infrastructure, it also provides clues as to the individual or groups running the botnet. Additionally, if large numbers of packets from large numbers of hosts are detected, but these originate from many different ASNs globally, this could be an indication that an attack is underway, assuming thresholds for normal variation are taken into consideration.

C&C Address Range (IP or URL)

As with the ASN, the specific address range(s) used by the C&C provide a means by which to respond to a botnet. In many cases, botnet activity is not limited to specific ASNs; it then becomes necessary to identify the specific addresses from which communications are performed or from which malware is being served, a critical characteristic needed in order to perform remediation.

Target Information

Geolocation
Barring any specific geolocation clustering due to language coding within the bot, the geolocation of both the C&C servers and the primary activity of a particular bot can tell a lot about the purpose of a particular botnet. While many are located internationally, certain purpose-driven botnets can be found with particular geo-specific zoning [Reference 28]:
• Online gaming credential-stealing malware tends to occur around the Pacific Rim regions (China and Korea);
• Information- and credential-grabbing trojans and rogue software are predominant in the US and UK;
• While active in all regions, online banking credential grabbing is particularly high in Brazil.

Business Model

Although typically malicious in nature, a botnet is generally developed and deployed for a well-formed and crafted purpose. Understanding the motivation and business model of a botnet is key to understanding its overall characterization and the means by which to combat it.
• Commercial / For Hire: For-hire botnets were the first botnet business model to arise, as the profitability of botnets came to be understood by criminal elements. Commercial botnets are single botnets, available for rent, and repurposed as needed.
• DIY Toolkit: There is a set of botnets, such as Zeus, that have been designed specifically to allow for “wizard”-type development of botnets using a pre-developed framework. DIY botnets are further described below.
• Private: Generally, private botnets are ones that were originally purposed by DIYers who have crafted a botnet using either an underground commercially available DIY toolkit or a self-developed framework. Private botnets are typically used for a single purpose, with specifically targeted intentions. The motivation for their use may be either profit driven or malevolent targeting.

Botnet “Do It Yourself” Toolkits

The botnet development effort has been greatly assisted by the emergence of “Do It Yourself” (DIY)
botnet toolkits that can be used to create variants of existing botnets. For example [Reference 3], the
Zeus malware creator kit ($400 to $700) allows an existing or aspiring botnet controller to build a
custom variant of a Zeus bot agent with customizable parameters such as C&C channel, encryption
keys, administration passwords and methods of communication.

With these kits, malware creators can build multiple variants of an infector agent such as Zeus and release them sequentially at a pace slightly faster than antivirus vendors can release new signature updates. Ironically, malware creators can make use of many publicly available tools to quickly verify whether a particular malware sample can be detected and which vendors have identified its signature. More advanced tools can automatically modify malware samples to evade antivirus detection.

All subsequent botnets share the same malware family name – even if they are operated by different
botnet controllers using different C&C channels. However, the same remediation processes can
normally be used for all malware created by a specific version of the same creator kit – regardless of
who controls the botnet based on that kit.

In order to achieve higher resiliency, a botnet controller may use multiple DIY botnet kits to create bot agent families that are based on different attack vectors. Using this approach, the botnet will survive until all attack vectors have been detected and neutralized.

5.3 Sample In-Depth Analysis

This section will provide an in-depth review of two or three prominent Botnets. The descriptions will include a detailed technical analysis and a presentation of metrics related to the activity of each botnet.

The analysis methodology used for the in-depth analysis can be found in the Appendices.

5.3.1 Zeus
Designed for stealth and targeted information grabbing, the botnet's primary objective is the capture of market share and “commercial” viability. It is designed to allow packages to be defined and redefined depending on the needs of the botnet operators, as well as to allow for the underground commercialization of the botnet. This botnet is rented or repackaged depending on the needs of the cyber crime syndicate's customers.

This botnet also includes a layer of sophistication: unlike traditional keystroke-logging malware, the bots can be given variable instructions on keystroke logging (such as capture only if certain variables exist on the infected machine, and capture only certain types of data on those specifically targeted machines). This ability makes detection much more difficult because the bot attempts to “fly under the radar” to the greatest extent possible.

The crimeware kit contains the following modules:

• A web interface to administer and control the botnet (ZeuS Admin Panel)
• A tool to create the trojan binaries and encrypt the configuration file (called exe builder)

Normally, a ZeuS host consists of three components / URIs:

• a configuration file (mostly with file extension *.bin)

• a binary file which contains the newest version of the ZeuS trojan
• a dropzone (mostly a php file)

Some features of ZeuS are:

• Capture credentials out of HTTP, HTTPS, FTP and POP3 traffic or out of the bot's protected storage (PStore)
• Group the infected clients into different botnets
• Integrated SOCKS proxy
• Web form to search the captured credentials
• Encrypted configuration file
• Function to kill the operating system (see “When a Botmaster Goes REALLY Mad”)

ZeuS Detailed Technical Analysis

The following information was obtained from Fortinet [Reference 66].

ZeuS Distribution and Installation

The ZeuS bot has no built-in capability to spread to other computers. In most cases a spam campaign
is used to distribute it, either as an attached file or a link. Some type of social engineering within the
spam message is used to trick the victims into executing the bot. A wide variety of these tricks have
been seen, often in forms that are persuasive and difficult to detect. The large number of social
engineering tricks is a result of many individuals attempting to seed their own botnet, using the
common ZeuS platform.
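Because the spam campaigns deliver the bot as an attached file, mail gateways commonly screen attachments for executable types, including double-extension disguises. A minimal sketch follows; the extension list is an illustrative assumption, not taken from this report:

```python
# Executable extensions commonly blocked at mail gateways (illustrative list).
EXECUTABLE_EXTS = (".exe", ".scr", ".pif", ".com", ".bat", ".cpl")


def risky_attachment(filename: str) -> bool:
    """Flag executable attachments, including double-extension disguises
    such as 'report.pdf.exe'."""
    return filename.lower().endswith(EXECUTABLE_EXTS)
```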

The lack of worm-like spreading capabilities makes the bot suitable for targeted attacks, since the bot
is less visible and less likely to be detected. In targeted attacks, it can be sent to the intended victim
in various disguises until success is achieved.

When the bot is executed on a victim's computer it goes through a number of steps to install and
configure itself, and to connect to the botnet. The filenames given here are for the tested version and are sometimes changed in new versions. Outlined below are the steps taken upon initial execution:

1. The install function searches for the "winlogon.exe" process, allocates some memory within it
and decrypts itself into the process.

2. The bot executable is written to the hard drive as "C:\WINDOWS\system32\sdra64.exe". The directory "C:\WINDOWS\system32\lowsec\" is created. This directory is not visible in Windows Explorer but can be seen from the command line. Its purpose is to contain the following files:

• local.ds: Contains the most recently downloaded DynamicConfig file.

• user.ds: Contains logged information.
• user.ds.lll: Temporarily created if transmission of logs to the drop server fails.

3. The Winlogon ("HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon") registry key's value is appended with the path of the bot executable, C:\WINDOWS\system32\sdra64.exe. This will cause the bot to execute when the computer starts.
4. The Windows XP firewall is disabled. This causes a Windows Security Center warning icon to
appear in the system tray, the only visible indication that the computer has been infected.

5. The bot broadcasts an "M-SEARCH" command to find UPnP network devices. This may be
an attempt to access and reconfigure local routers.

6. The bot sends an HTTP GET command to the configured botnet server to get the latest
DynamicConfig file.

7. The bot begins capturing and logging information from the infected computer. The
DynamicConfig file largely determines what information is collected.

8. The bot sends two HTTP POST commands to upload log (user.ds) and stat information to the
botnet drop server.

9. Three timers are set to values in the StaticConfig, each executing a function on time-out:
• Get new configuration file (DynamicConfig) from server (default 60 minutes).
• Post harvested data (user.ds) to server (default 1 minute).
• Post statistics to server (default 20 minutes).

10. If a web page that is viewed from the infected computer is on the injection target list in the
DynamicConfig, the additional fields from the list are injected into the page.

11. If the HTTP "200 OK" reply to a POST contains a hidden script command, the bot executes it and returns a success or failure indication along with any data (see Communication section below).

ZeuS Botnet Communications

All Zeus botnet communications pass between the bots and one or more servers. Only one physical
server is needed, but additional ones can be used to distribute bot file updates and fallback
configuration files.

Data sent through the Zeus botnet is encrypted with RC4 encryption. In this implementation a key
stream is generated from the botnet password, and is XORed with the data. The same password is
used to encrypt all data that is passed through the botnet. Changing the botnet password requires
that all of the bot executables be updated to a build that includes the new password. The dynamic
configuration file also must be updated and the server password changed from the Control Panel.
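RC4 itself is a short algorithm: a key-scheduling pass followed by a pseudo-random generation loop whose output bytes are XORed with the data, so a single operation both encrypts and decrypts. The textbook sketch below could be used by an analyst decoding captured traffic once the botnet password is recovered; note that ZeuS's exact key handling may differ by version:

```python
def rc4_keystream(key: bytes):
    """Yield RC4 keystream bytes (key-scheduling + pseudo-random generation)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA: permute S using the key
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:                               # PRGA: emit keystream bytes
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]


def rc4(key: bytes, data: bytes) -> bytes:
    """XOR data with the RC4 keystream; the same call encrypts and decrypts."""
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)
```

Because encryption and decryption are the same XOR, applying `rc4` twice with the same key returns the original data, which is exactly why one shared botnet password suffices for all traffic.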

When a computer is infected with the Zeus bot, its first communication with the server is a request for
the dynamic configuration file. Unlike other data sent through the network, the configuration file has
already been encrypted by the Builder application and can be sent without further processing. Figure 5 shows the configuration file being requested by the bot and returned by the control server.

Figure 5: The ZeuS bot gets the Dynamic Configuration File from the Server

When the configuration file has been received, the bot will retrieve the drop server URL from it. The
bot then HTTP POSTs some basic information about itself to the drop server, to log in and indicate
that it is online. As long as it is running, the bot continues to HTTP POST encrypted logs and
statistics to the server at timed intervals. By default logs are sent at 1 minute intervals and statistics
are sent every 20 minutes.
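These fixed-interval uploads give defenders a detection hook: timer-driven check-ins produce unusually regular gaps between POSTs. The periodicity score below is an illustrative sketch (the tolerance is an arbitrary assumption, not a value from this report):

```python
from statistics import median


def beacon_score(timestamps, tolerance=0.1):
    """Fraction of inter-arrival gaps within ±tolerance of the median gap.

    A score near 1.0 suggests timer-driven check-ins, such as the 1-minute
    log uploads described above; human-driven traffic scores much lower.
    """
    if len(timestamps) < 4:
        return 0.0  # too few observations to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    med = median(gaps)
    if med <= 0:
        return 0.0
    close = sum(1 for g in gaps if abs(g - med) <= tolerance * med)
    return close / len(gaps)
```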

Figure 6: The ZeuS bot POSTs basic information

Once a bot posts data (see Figure 6) to the server, the server replies with an HTTP/1.1 200 OK
response. The Zeus server conceals an encrypted message as data within the response. This data
field is used to send commands (scripts) to the bot. Figure 7 is an example of the default data when
no command is being sent, which is the most common case.

Figure 7: The Server Reply to the POST Contains Some Data

When a command script is being sent by the server, the data size will be considerably larger than the standard response. Figure 8 shows a getfile command being sent, resulting in 82 bytes of encrypted data.

Figure 8: The Server Reply to the POST Contains a Command

Web Page Injection

One important feature of the ZeuS bot is its ability to dynamically inject content into web pages viewed from an infected computer. This is done on the fly, as data passes from the server to the client browser. A snippet of the configuration data for this is shown below. It performs a fairly straightforward search-and-insert operation:

set_url GP
data_before
...
data_inject
<tr><td>PIN:</td><td><input type="text" name="pinnumber" id="pinnumber"

The set_url parameter identifies the page to be attacked, data_before contains the text to search for
before the injection point and data_inject has the text that will be injected. Figure 9 shows a login
page before and after injection.
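The search-and-insert behaviour can be modelled in a few lines. The sketch below is a simplified reimplementation for analysis purposes, not ZeuS's actual code; real ZeuS configurations also support wildcards and additional bounds such as data_after:

```python
def inject(html: str, data_before: str, data_inject: str) -> str:
    """Insert data_inject immediately after the first occurrence of
    data_before, mirroring the search-and-insert operation described above.
    If the search text is absent, the page is returned unmodified."""
    idx = html.find(data_before)
    if idx == -1:
        return html
    cut = idx + len(data_before)
    return html[:cut] + data_inject + html[cut:]
```

Because the modification happens on the infected host after any TLS decryption, the server-side page is untouched and the bank sees nothing anomalous.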

Figure 9: Login Form Before and After Injection

This is just a simple demonstration. In practice more elaborate deceptions can be created; for example, the injected changes could pretend to deny access and ask victims to confirm their identity by filling in additional fields.

Below is the HTML source before injection; the data_before search text matches the password input line.

<TD><INPUT id=username name=username></TD></TR>
<TD><INPUT type=password name=password></TD></TR>
<TD colSpan=2><INPUT type=submit value=Submit></TD></TR>

The following is the HTML source after injection, with the code inserted from the data_inject field
discussed above.

<TD><INPUT id=username name=username></TD></TR>
<TD><INPUT type=password name=password></TD></TR>
<TD><INPUT id=pinnumber name=pinnumber></TD></TR>
<TD colSpan=2><INPUT type=submit value=Submit></TD></TR>

The currently distributed configuration file contains default settings for injection attacks on more than 100 URLs. A well-executed attack can be very difficult for a victim to distinguish from a genuine web page.
5.3.2 ZeuS Detailed Analysis during the Olympics

The following charts represent active Zeus command and control traffic in Canada during the Olympics. One sector in particular contributed 17-50% of the malicious traffic load.
[Chart: ZeuS Summaries (AS577), March 9 - 19, 2010; y-axis in Kbps, scale to 8,000 Kbps.]
Figure 10: ZeuS Activity during Vancouver Olympics

5.4 Emerging Tradecraft

This section will discuss new developments and an assessment of future directions in Botnet tradecraft.
5.4.1 Kneber (Zeus) in Depth

by Nart Villeneuve, Chief Research Officer, SecDev

Targeted attacks, known as “spear phishing,” are increasingly exploiting government and military
themes in order to compromise defense contractors in the United States. In 2009, the Washington
Post reported that unknown attackers were able to break into a defense contractor and steal
documents pertaining to the Joint Strike Fighter being developed by Lockheed Martin Corp. Google
was compromised in January 2010 along with other hi-tech companies and defense contractors. The
problem is becoming increasingly severe. In fact, the Department of Defense recently released a
memo with plans to protect unclassified information passing through the networks of various
contractors. The memo recognizes the severity of the ongoing threat and seeks to: Establish a
comprehensive approach for protecting unclassified DoD information transiting or residing on
unclassified DIB information systems and networks by incorporating the use of intelligence,
operations, policies, standards, information sharing, expert advice and assistance, incident response,
reporting procedures, and cyber intrusion damage assessment solutions to address a cyber
advanced persistent threat.

Netwitness revealed the existence of a Zeus-based botnet that had compromised over 74,000
computers around the world. Zeus is not a single botnet; rather, it is a malware kit that allows anyone
to easily create a botnet. It sells for $400 - $700, although there are older (and pirated) versions that
cost considerably less or are publicly available for download. Typically, the Zeus malware is used to
steal banking credentials. Because of the proliferation of the Zeus kit, there are a wide variety of actors using Zeus; there is no single Zeus botnet and no single group behind the attacks. In fact,
botnet operators will often use multiple types of malware. Netwitness found that the command and
control infrastructure for this botnet was primarily based in China and most of the compromised
computers were in Egypt, Mexico, Saudi Arabia, Turkey and the United States. In addition to stealing banking credentials, attackers are now targeting the social networking credentials of members of the government and military, as well as the employees of Fortune 500 companies. Netwitness revealed
that many of the US compromises included government networks as well as Fortune 500 enterprises.
News reports revealed that ten U.S. government agencies were compromised and several high-profile companies were named, including Merck, Cardinal Health, Paramount Pictures and Juniper Networks.

The use of crimeware infrastructure for spear phishing attacks is certainly not a new development.
Anti-Virus (AV) companies and members of the security community have downplayed the Kneber
botnet, suggesting that there has long been AV protection for this type of attack and that there is
nothing particularly new about this botnet. Furthermore, they argue that Kneber is not a particularly
large Zeus-based botnet either, implying that the Kneber botnet is not deserving of the attention it has
received. While the media attention paid to the Kneber botnet has often been alarmist and sometimes
inaccurate, the anti-virus coverage of the malware used in this attack was low (18/41 on VirusTotal), despite the fact that it was the well-known Zeus malware kit. The way in which some are suggesting
that AV has long protected users from this threat is troubling. Moreover, focusing solely on
Zeus and not additional malware downloaded after Zeus obscures the relationship between generic
and targeted attacks.

These events indicate that attacks that are often considered to be criminal in nature, such as the
targeting of banking credentials of individuals, also pose persistent threats to those in the government
and military sectors. Moreover, it is well understood that these attackers aim to maximize their
financial gain from such attacks. If the data exfiltrated is not simply bank account and credit card numbers but also credentials that can be used to access the internal networks of the victims, why
wouldn't they also sell that information? As Netwitness states:

They are well organized, have demonstrated technical sophistication on par with many intelligence
services and do not forgo the opportunity for financial gain with the information they collect. If they are
collecting network credentials, it means they are using or selling them in an active underground
economy – which may include sponsoring foreign intelligence services.

Moreover, Netwitness suggests that the attackers may have been after data other than simply banking, credit card or social networking credentials. In response to the critique from the security and AV community, Netwitness stated that “trivializing the damage done is simply disingenuous by anyone who has seen the types of data stolen from threats such as these.” [16] This implies that the data exfiltrated by the attackers may have been particularly sensitive. In fact, the Wall Street Journal reported that:

At one company, the hackers gained access to a corporate server used for processing online credit-card payments. At others, stolen passwords provided access to computers used to store and swap proprietary corporate documents, presentations, contracts and even upcoming versions of software.

One can understand the AV and security community’s scepticism. Zeus, after all, is very well known.
However, our investigation found that not only were there high profile compromises, as suggested by
Netwitness, but that the focus of the attack appears to have been the extraction of sensitive
information, not just banking credentials.

Our investigation focused on a spear phishing campaign that is linked with the Kneber botnet and represents only a small portion of the Kneber botnet. We focused on a case in which the attackers took portions of blog posts by authors Brian Krebs and Jeff Carr (two prominent members of the security community) and used them as the content of their malicious emails. Numerous individuals with .gov and .mil email addresses were sent these spoofed emails, which prompted them to download a security fix for Microsoft Windows. Our investigation revealed that Zeus was being used to infect targets within the government and military sectors with a second instance of malware designed to exfiltrate data from the compromised computers.

Instead of simply stealing banking, credit card and social networking credentials, the Zeus malware downloaded an additional piece of malware onto the compromised machines, which focused on exfiltrating sensitive documents. We found that at least 81 compromised computers had uploaded a total of 1533 documents to the drop zone. We found sensitive contracts between defense contractors and the U.S. Military, and documents relating to, among other issues, computer network operations, electronic warfare and defense against biological and chemical terrorism. We found the security plan for an airport in the United States, as well as documents from a foreign embassy and from a large UN-related international organization. In addition, the personal computers of employees with security clearances who work for a variety of companies and government agencies were compromised.

The sensitive data obtained from these attacks will likely be used to exploit these targets further, as well as those within the targets' social network. The contact information and documents obtained by the attacker will likely be used for further “spear phishing” attacks. But these attacks may signify the growing involvement of crimeware in targeted malware attacks for the purposes of extracting sensitive information that can be exploited for intelligence purposes. The profile of the organizations that were compromised and the nature of the exfiltrated data indicate that the goal of these attacks was not simply stolen banking credentials - the typical target of the Zeus malware.

Furthermore, this case poses challenges to methods of attribution that interpret the geopolitical motivation of the attackers and assess the geographic location of the attackers' command and control infrastructure. Were these attacks simply part of an ongoing Zeus crimeware campaign? Or do the composition of the targets and the content of the exfiltrated data indicate that this is less a case of crimeware and more a case of espionage? There is no easy answer.

“Attribution is difficult because there is no agreed upon international legal framework for being able to
pursue investigations down to their logical conclusion, which is highly local.” - Dr. Rafal A. Rohozinski, SecDev

A more detailed examination of our investigation

On February 6, 2010, Brian Krebs reported that attackers using the Zeus trojan targeted a variety of
.gov and .mil email addresses in a spear phishing attack that appeared to be from the National
Security Agency and enticed users to download a report called the “2020 Project.”

Following the publication of the article by Brian Krebs, attackers took portions of his article and used
them as lures in further spear phishing attacks. Sophos Labs analyzed the sample that used Krebs's
post. [20] A post by Jeff Carr regarding the spear phishing attack was also used in
another attack. [21] The attackers used the blog posts of these individuals and spoofed their email
addresses in order to make their malware seem convincing to the recipients of the spear phishing
attacks.
Russian spear phishing attack against .mil and .gov employees

A "relatively large" number of U.S. government and military employees have been taken in by a spear
phishing attack which delivers a variant of the Zeus trojan. The email address is spoofed to appear to
be from the NSA or InteLink, concerning a report by the National Intelligence Council named the
"2020 Project". Its purpose is to collect passwords and obtain remote access to the infected hosts.

After running the executable, attempts are made to connect with a command and control server
located in China over HTTP.

Registration Information (WHOIS)

Name: Sport Co LTD

Organization: Sport Com LTD
Address: Volodarskiy
City: Izjevsk
Province/state: IZJEVSK
Country: CN
Postal Code: 519000
Phone: +84.4562425583

Netname: YYNET
Descr: Beijing qi shang zai xian rate communications
Technology Co., Ltd. Langfang Branch
Descr: West Side to the da guan di ,Langfang Development
Country: CN

The command and control server is a known Zeus C&C server. There are a wide variety of malware
kits and associated domain names hosted on this server, as well as several neighbouring servers.

Dancho Danchev has linked the email address “” to a variety of criminal
enterprises, including “money mule recruitment” operations. Netwitness indicated that there is also a link
to the “Kneber” botnet. The Kneber botnet is named after the email address used to register
the command and control domain names, “”. This email address has been
linked to past crimeware activity as well. The link between the domains registered to
“” and those registered to “” appears to be a
common command and control infrastructure.

There are two domain names and both resolve to which is where portions of the Kneber botnet are hosted. These domain names are also
hosted on which is where is hosted. There are also domain
names registered by both email addresses hosted on the same IP addresses.
netname: VolgaHost
descr: PE Bondarenko Dmitriy Vladimirovich
country: RU
netname: CRGdSzS
country: CN
descr: China Railcom Guangdong Shenzhen Subbranch

There are a variety of other interesting connections between the “stallvars” domain names and other
email addresses, indicating further links between the domain names and IP
infrastructure used by the attackers. [28] This particular botnet extends beyond just the domains
registered by “”.
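The registrant-email and shared-IP pivoting described above can be sketched as a small clustering exercise: domains that share either a registrant email or a hosting IP fall into the same infrastructure cluster. The domains, emails and IPs below are invented placeholders (the real values are redacted), and the union-find grouping is just one common way to implement such a pivot, not the investigators' actual tooling.

```python
# Cluster domains by shared registrant email or hosting IP (union-find).
# All records here are illustrative, not the redacted investigation data.
from collections import defaultdict

records = [  # (domain, registrant_email, hosting_ip)
    ("c2-alpha.example",   "reg-a@example.net", "203.0.113.10"),
    ("c2-bravo.example",   "reg-a@example.net", "203.0.113.11"),
    ("c2-charlie.example", "reg-b@example.net", "203.0.113.10"),  # shares an IP with alpha
    ("shop.example",       "reg-c@example.net", "198.51.100.7"),
]

def cluster_by_pivots(records):
    """Union domains that share a registrant email or a hosting IP."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for domain, email, ip in records:
        union(domain, "email:" + email)  # pivot on registrant
        union(domain, "ip:" + ip)        # pivot on hosting IP

    clusters = defaultdict(set)
    for domain, _, _ in records:
        clusters[find(domain)].add(domain)
    return [sorted(c) for c in clusters.values()]

for cluster in cluster_by_pivots(records):
    print(cluster)
```

Run against the sample records, the three C&C-style domains collapse into one cluster (alpha and bravo share a registrant; charlie shares an IP with alpha) while the unrelated domain stands alone.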

While we did not find any classified data, there was sensitive information regarding contracts with
private firms as well as government/military entities, and project information including budgets and
supplementary documentation from government/military sources. The data includes unclassified, but
sensitive, documents on the latest threats from law enforcement services around the world. There were
also procedural documents, such as an airport's security plan.

Despite the fact that no classified information appears to have been obtained, the data captured is
valuable to the attackers. At a minimum the attackers can use the contacts and information in these
documents to further exploit the targets. Social engineering, rather than technical proficiency, is what
enables attackers to compromise these high value targets. Expect to see these documents used in
malicious exploits targeting those who would be familiar with or interested in them.

The identity of the targets compromised in this attack, the focus on exfiltrating data, and the content
of the documents indicate that crimeware may be moving into the espionage industry or providing
command and control infrastructure for those engaged in such activities. While Zeus is normally
associated with capturing banking and other credentials, it is being used to deliver a payload that
focuses on extracting sensitive data. The use of a well-known malware kit such as Zeus and crime-
focused command and control infrastructure may be obscuring the nature and intent of the attackers.
If this trend is in fact occurring, the use of crimeware infrastructure significantly impacts traditional
methods of determining motivation and attribution in espionage investigations.

Section 6 Threat
6.1 Summary - Threats, Agents and Criminal Use of Botnets

The following is a summary of a national intelligence estimate on the cyber threat, which provides a
common operating picture (COP) for police. These estimates are derived from a network
sample of 300 Gbps over the period of a year. (The Internet in Canada is growing at 30%.)

• 53 Gbps at any given time is malicious/illicit traffic (approximately 17% of traffic load). This
represents $2B in value in bandwidth alone;
• 80,000 zero-day exploits are discovered every day, and traditional AV is only 20%
effective in detecting them;
• 94% of all e-mail is spam or malicious;
• 1.5 million machines (5% of the population) are compromised by botnets at any given time,
although some other studies put this number at 15% of the population (4.5 million machines);
• $100 billion in revenue is at risk in Canada related to cybercrime [4];
• We have seen first hand DDoS attacks upward of 40 Gbps and involving 2 million machines;
• Inbound/outbound analysis shows that, on average, the Internet connections of corporations
and government are only 30% clean and running at 80% availability (from the customer
premises), compared with 99.9995% availability at the provisioning end;
• Organized crime accounts for >99% of all botnet attacks, with state-sponsored attacks attributed to
<1%. The tradecraft and sophistication are the same;
• Primary targets are money, services, and identities of the private sector (>99%); and
• Classified material of governments represents <1% of exploitation attempts.

[4] Proactive Cyber Defence and the Perfect Storm, Dave McMahon, Bell Canada, 2009. The book goes into detailed calculations to estimate
expected losses and revenue at risk. Essentially, the total amounts of malicious traffic and compromised computers have been measured to a high
degree of accuracy. The total loss equation is the sum of the value of the bandwidth losses, the degradation of availability/throughput applied to
EFT/GNP on a national scale, the average cost of reparation per infected machine, and the average value of data loss per incident.

Figure 11: Yuri Namestnikov – Kaspersky Lab

It is our assessment that a skilled resource could amass a country-killing, million-machine botnet by
hijacking criminal bots in less than a week. Harvesting as many zombies independently would take
less time. The total cost would be less than $100K.

A Canadian ISP, VELCOM (ASN 30407), is listed as the worst ISP by HostExploit in their cybercrime
series publication, Top Bad Hosts of 2009. Other Canadian ISPs (Iweb-as, Netelligent Hosting, and
Netnation) are in the top 50. Canada is listed as the #2 worst hosting nation after Russia. Small,
unregulated ISPs offering the cheapest bandwidth are a large source of the botnet problem in
Canada. Internet connectivity is not a commodity to be taken lightly. Enterprises and government
need to conduct business only over a trustworthy network provider, as realized in the US
Comprehensive National Cybersecurity Initiative.

6.2 DDoS Attacks

A large telecommunications carrier observes botnet-enabled DDoS attacks around the clock, so much
so that a 300 Mbps DDoS attack is still classified as a low-level threat for a tier 1 carrier. Now
appreciate that a 10-100 Mbps attack is sufficient to knock most large organizations off the Internet.
The largest attack recorded against Canada is a sustained 40 Gbps. To put this in perspective,
this is 100x greater than the attacks that took down the country of Estonia.

Figure 12: Cyber Attacks on Estonia from Merike Kaeo of doubleshotsecurity

The following examples are presented for illustration purposes only, and do not accurately represent
all of the DDoS attacks currently directed at Canadian CIs. Attacks similar to, and more severe than,
Estonia are happening here in Canada - daily.

In August 2009, three major DDoS attacks were launched directly at the Internet gateways of a
critical sector. The sector’s Internet access was 46 Mbps, and the attacks came in at between 1.7 Gbps
and 2.014 Gbps.
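The oversubscription arithmetic here is worth making explicit: a 46 Mbps access link hit by a 1.7-2.014 Gbps flood is saturated many times over, which is why (as the next paragraphs describe) the traffic could only be dealt with upstream, not at the victim's own perimeter.

```python
# Attack volume versus access-link capacity for the August 2009 incident.
access_link_mbps = 46.0
attack_low_mbps, attack_high_mbps = 1700.0, 2014.0  # 1.7 and 2.014 Gbps

print(f"low:  {attack_low_mbps / access_link_mbps:.0f}x link capacity")
print(f"high: {attack_high_mbps / access_link_mbps:.0f}x link capacity")
```

Even the smaller attack arrives at roughly 37 times what the link can carry, so dropping packets at the customer firewall cannot restore service.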

The malicious traffic could not be identified with the current firewall logs. There were no deep packet
inspection systems or DDoS protection mechanisms in place at that time to monitor or stop the traffic
on the affected network perimeter.

Filters were added upstream, and 18 gigabytes a day of malicious traffic was captured. Further
investigation showed 15 IP addresses responsible for the malicious traffic hitting the Internet border.
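The follow-up analysis described here, narrowing a flood down to a handful of responsible addresses, amounts to aggregating captured flow records by source IP and surfacing the heaviest senders. The sketch below shows the shape of that triage; the flow records are invented for illustration and this is not the carrier's actual tooling.

```python
# Aggregate flow records by source IP and list the top talkers - the kind of
# triage that narrowed this incident to 15 responsible addresses.
# Flow records are illustrative.
from collections import Counter

flows = [  # (source_ip, bytes)
    ("192.0.2.10",   9_000_000),
    ("192.0.2.10",   7_500_000),
    ("198.51.100.4", 12_000_000),
    ("203.0.113.9",  40_000),
]

def top_talkers(flows, n=10):
    """Return the n source IPs sending the most bytes, heaviest first."""
    totals = Counter()
    for src, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(n)

for src, nbytes in top_talkers(flows, n=3):
    print(src, nbytes)
```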

Botnet activities against the Olympic infrastructure were measured at 100 times the volume of
normal malicious activity. During this same period there were a large number of DDoS attacks launched
against the Olympic infrastructure using botnets. No malicious attacks were successful.


This section is an extract of Botnet-related material from the Symantec Global Internet Security
Threat Report, Volume XIV, published April 2009. The full report is available from the Symantec website.

Attackers are concentrating on compromising end users for financial gain. In 2008, 78 percent of
confidential information threats exported user data, and 76 percent used a keystroke-logging
component to steal information such as online banking account credentials. Additionally, 76 percent
of phishing lures targeted brands in the financial services sector and this sector also had the most
identities exposed due to data breaches. Similarly, 12 percent of all data breaches that occurred in
2008 exposed credit card information. In 2008 the average cost per incident of a data breach in the
United States was $6.7 million.

The most popular item for sale on underground economy servers in 2008 was credit card information.
Owing to the large quantity of credit card numbers available, the price for each card can be as low as
6 cents when they are purchased in bulk.

A prime example of this type of underground professional organization is the Russian Business
Network (RBN). The RBN reputedly specializes in the distribution of malicious code, hosting
malicious websites, and other malicious activity. The RBN has been credited with creating
approximately half of the phishing incidents that occurred worldwide last year. It is also thought to be
associated with a significant amount of the malicious activities on the Internet in 2007.

Changes in the current threat landscape—such as the increasing complexity and sophistication of
attacks, the evolution of attackers and attack patterns, and malicious activities being pushed to
emerging countries—show not just the benefits of, but also the need for increased cooperation
among security companies, governments, academics, and other organizations and individuals to
combat these changes.

In 2008, bot networks were responsible for the distribution of approximately 90 percent of all spam.

In 2008, Symantec observed underground economy advertisements for as little as $0.04 per bot. This
is much cheaper than in 2007, when $1 was the cheapest price advertised for bots. Bot-infected
computers with a decentralized bot C&C model are favoured by attackers because they are difficult to
disable, and most importantly, can be lucrative for their controllers.

The activity in 2008 showed that botnets can be crippled by identifying and
shutting down their bot C&C server hosts, but that this strategy is difficult to implement given the
various global hosting options that botnet controllers have at their disposal.

Botnet owners are moving away from traditional IRC-based botnets since they are easier to detect,
track, filter, and block than botnets based on HTTP traffic. HTTP communications can be used to
disguise botnet traffic among other Web traffic in order to make it difficult to distinguish malicious
traffic from legitimate HTTP traffic. To filter the traffic, organizations would have to inspect the
encrypted HTTP traffic and identify and remove bot-related traffic while still allowing legitimate traffic
to pass through. Because of this, it is very difficult to pinpoint and disable a bot C&C structure. It is
also unreasonable to block HTTP traffic since organizations depend on legitimate HTTP traffic to
conduct day-to-day business.
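One common heuristic for separating bot C&C traffic from legitimate web browsing (an assumption on our part, not a method drawn from the Symantec report) is to look for beaconing: bots typically poll their controller at near-constant intervals, whereas human browsing is bursty. The timestamps below are illustrative.

```python
# Flag request series whose inter-arrival times are suspiciously regular,
# a common beaconing heuristic for HTTP-based C&C. Timestamps (seconds)
# are invented for illustration.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """True if gaps between requests vary by less than max_jitter_ratio."""
    if len(timestamps) < 4:
        return False  # too few requests to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg < max_jitter_ratio

bot_like   = [0, 300, 601, 899, 1201, 1500]  # ~5-minute check-ins
human_like = [0, 12, 340, 342, 1900, 4000]   # bursty browsing

print(looks_like_beaconing(bot_like))    # regular intervals
print(looks_like_beaconing(human_like))  # irregular intervals
```

In practice the heuristic must be tuned (bots add deliberate jitter, and some legitimate software polls on a schedule), which is exactly why the report notes that distinguishing bot traffic from legitimate HTTP is difficult.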

Note: Symantec observed an average of 75,158 active bot-infected computers per day worldwide in
2008. Bell Canada reported measuring 1.5 million a day in Canada.

Underground economy servers are black market forums for the promotion and trade of stolen
information and services. This information can include government-issued identification numbers,
credit cards, credit verification values, debit cards, personal identification numbers (PINs), user
accounts, email address lists, and bank accounts. Services include cashiers, scam page hosting, and
job advertisements such as for scam developers or phishing partners. Much of this commerce occurs
within channels on Internet Relay Chat (IRC) servers. For an in-depth analysis of how the
underground Internet economy functions, please see the Symantec Report on the Underground
Economy, published November 2008.

In 2008, the most frequently advertised item observed on underground economy servers was credit
card information, accounting for 32 percent of all goods.

Credit cards may also be popular on underground economy servers because using fraudulent credit
card information for activities such as making online purchases is relatively easy. Online shopping
can be easy and fast, and a final sale often requires only basic credit card information.

Another factor that contributes to the popularity of credit cards is that they are typically sold in bulk
packages on underground economy servers. Not only do advertisers offer discounts for bulk
purchases or include free numbers with larger purchases, but having an extensive list of cards
enables individuals to quickly try a new number if a card number does not work or is suspended.
Also, having a larger number of credit card numbers included should theoretically increase the
likelihood of having active/valid cards in the bulk package.
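The "larger package, better odds" reasoning above can be quantified: if each stolen card is independently valid with probability p, the chance that a bulk package contains at least one working card is 1 - (1 - p)^n. The validity rate used below is an illustrative assumption, not a figure from the report.

```python
# Probability that a bulk package of n stolen cards contains at least one
# valid card, assuming independent per-card validity. p is illustrative.
p_valid = 0.25  # assumed per-card validity rate

def chance_of_at_least_one_valid(n, p=p_valid):
    return 1 - (1 - p) ** n

for n in (1, 5, 20):
    print(n, round(chance_of_at_least_one_valid(n), 3))
```

Even at a modest assumed validity rate, a package of 20 cards is almost certain to contain a usable one, which helps explain why bulk discounting dominates the underground card market.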

The popularity of bank account credentials may be due to a shift toward online banking. Bank account
credentials are attractive to attackers because they offer the opportunity to withdraw currency directly.
Beyond straightforward account cash outs, bank accounts can also be used as intermediary channels
to launder money or to fund other online currency accounts that only accept bank transfers for

Email accounts were the third most common item advertised for sale on underground economy
servers in 2008 according to Symantec, making up 5 percent of all advertised goods. Gaining
possession of email passwords can allow access to email accounts, which can be used for sending
out spam and/or for harvesting additional email addresses from contact lists. Moreover, along with
email, many ISPs include free Web space in their account packages, which many people rarely
access. Once the ISP accounts are compromised, these free spaces can be used to host phishing
sites or malicious code without the knowledge of the victims.

The advertised prices of email accounts depended on the ISP of the account; larger ISPs that offered
large amounts of Web space were advertised at higher prices than ones with smaller space.

By instituting effective multi-factor authentication and multi-level security systems, banks and credit
card companies can make it more difficult for criminals to exploit stolen financial information.


Threat-Risk Convergence applies an inclusive model when classifying threats, rather than one that
labels threat agents with a single purpose yet multifarious motives, means and methods, i.e., “a hacker
hacks, and a spy subverts, in a way that defines them in a threat class.” There has long existed
a one-to-one mapping between mode of attack and mode of defence, and so we have tended to set
defences along expected lines of attack. And since identified classes of threat agents owned a certain
type of attack mode, we could identify the attacker based upon the attack. Organizations have come
to expect and plan a defence against a simplistic threat vector, and this is dangerous.

The traditionalist view holds that threats are most often defined by their actions. They are what they do. It
is a view that is bounded by what we perceive and expect of a threat. This leads to less than helpful
definitions: “Hackers hack and all who hack are hackers,” or the national Anti-terrorism Act, which says
“terrorists are those who belong to terrorist organizations. The definition of a terrorist organization is
a group made up of terrorists.” One would have to discard these political definitions of threat actors as
unworkable in the craft of fighting the real threat.

In the murky netherworld of cyberspace our notions of hackers, cyber-terrorists, and spies… are
deceived by appearances and befuddled by intent.

The reality is that the cosmopolitan threat agents of 2010 have compounded agendas and common
exploitation tools at their disposal. Although this does not necessarily mean that traditional threat
agents have merged organizations or established lines of communication, in cyberspace they
start to look similar, act the same and tread over each other's conventional turf. Cyberspace acts as
a confluence for the threat, and the threat acts like a collective.

The RAND Corporation’s predictions concerning the mapping of evolving social threat networks to
emerging interconnected computer networks were prophetic, because they showed how the threat is
both empowered and bounded by what we now see as convergence. If we can predict, then we can

The protagonists in Cyberspace are likely to amount to a set of diverse, dispersed “nodes” who share
a set of ideas and interests and who often are arrayed to act in a fully connected “all-channel”
manner. The potential effectiveness of the networked design compared to traditional hierarchical
designs attracted the attention of management theorists as early as the 1960s. Today, in the
business world, virtual or networked organizations are heralded as effective alternatives to traditional
bureaucracies because of their inherent flexibility, adaptiveness, and ability to capitalize on the talents
of all of their members. Networked organizations share three basic sets of features.

First, communication and coordination are not formally specified by horizontal and vertical reporting
relationships, but rather emerge and change according to the task at hand. Similarly, relationships are
often informal and marked by varying degrees of intensity, depending on the needs of the

Second, internal networks are usually complemented by linkages to individuals outside the
organization, often spanning national boundaries. Like internal connections, external relationships are
formed and wind down according to the lifecycle of particular joint projects.

Third, both internal and external ties are enabled not by bureaucratic fiat, but rather by shared norms
and values, as well as by reciprocal trust. Internally, the bulk of the work is conducted by self-managing
teams, while external linkages compose “a constellation involving a complex network of
contributing firms or groups.” [RAND Corporation]

Analysis ought to consider one singular class of deliberate threat for the purposes of establishing an
effective integrated risk management framework. Public and private sectors need to become ‘threat
agnostic and risk aware.’ Aligning public agencies to traditional threat actors (terrorists, spies and
criminals) does not work. An Integrated Threat Assessment Centre (ITAC) representing all parties in
government but devoted solely to terrorism could have little to say about the cyber threat, regardless
of the national security implications posed by transnational crime, acts of war or espionage.

In 2010, not only do threat agents pursue multiple agendas and act similarly in cyberspace, but the
tools themselves have converged. The necromancy of cyberspace has adopted common tools for discovery,
manufacturing access and controlling machines. To understand the weaponry, you most certainly need to
understand the technology underpinning robot networks. These botnets are the Swiss-army-knife of
computer network exploitation and attack.

It is worthwhile decomposing conventional perspectives on those behind cyber attacks in the context
of today’s converged world.

Hackers are predisposed to exploiting technology for its own sake. Some have emerged as a
significant threat to public and private sectors. These are the pestilent cyber-vandals who wreak
havoc just for the heck of it. The results of the attacks can be annoying, sometimes embarrassing and
often costly. As a hacker gains power and notoriety, they often take on more sinister motives and are
drawn to the dark side of organized crime, contract espionage work, or are preyed upon by terrorist groups.

The glorification of hackers in the entertainment media has morphed the original meaning of the term
into something significantly different. The definition was further diluted by the usage of the verb
“hacking” to refer to all malicious activity occurring on computer networks. Conversely, anything or
everything that a hacker did was referred to as hacking. Convergence of the telecommunications and
computing worlds has rendered the identities of phreakers (telecommunication hacking) and phrackers
(hacking both telecommunications and computers) largely academic if not extinct. It has been said
that when ‘something means everything, it means nothing.’ The hacker identity is now also stricken
with hypocrisy. In cyberspace, where identity and motive are obscured by anonymity, the agent for all
deliberate threat events is the hacker - or at least this is the tenet generally accepted by most people,
as a matter of convenience. There are, though, attempts to resuscitate the hacker name by
disassociating the white hats (good hackers with traditional values) from the black hats (bad hackers
or cyber-sociopaths) and labelling criminal hackers with the term ‘crackers’.

“The original hackers were characterized as ‘the iconoclastic pioneers of the computer revolution,’ not
just by their talent for expanding the limits of technology, but also by an ethic that held that
information should be freely available and centralized authority mistrusted. [The hacker has since]
lost the war of words.” - Kevin Poulsen

Colloquial usage has all but fixed “hacker” in a pejorative sense, where it is likely to remain.
Notwithstanding, any serious assessment of the threats to information systems needs an
unequivocal definition of the hacker. For this purpose you can think of a hacker as “a person who
exploits information technology for the sake of the technology.” This distinguishes them from
criminals, terrorists or spies who may exploit a computer system using the same means but for
entirely different motives - like money or secrets. Hacking groups have adopted social and political
agendas since the beginning. The Chaos Computer Club has had several agendas in its lifetime, and
across the spectrum. The truth is that most hackers are opportunists with one-day agendas, with a
force of conviction traced along a compass heading that is more often influenced by the magnetic
attraction of the moment. In this sense, many within the community view hackers as rebels without a
cause, or at most with one that is morally flexible.

Hacktivists are electronic guerrillas with political agendas ranging from ending censorship to outright
sabotage. The concept of righteous hacking is in vogue. Participants stem from two sources:
traditional crackers who are looking for altruism from otherwise illicit activities, and politically motivated
activists who see hacking as a tool for advancing "the cause". As far as hacktivists are concerned,
the Internet is not only rule-free and ethically liberal but also serves as a large wall for political graffiti.
There seems to be a general acceptance (at least within these spheres) that the end justifies the
means. Hacktivism had almost died out as a meaningful threat by 2005, and recent hacktivism around the
2010 Olympics failed before it even began.

“Hacktivists are savvy, subversive and seasoned veterans of the Cola War” - Tao Collective.

On the other hand, assuming that hackers are driven to pernicious ends by the dark side of the force,
and labelling them enemies of the state, is overly alarmist and erodes the credibility of the security
community. “We need to seriously question and abandon some of the language that the state uses to
demonize genuine political protest and expressions” - Electronic Disturbance Theatre.

Conveying a message through direct on-line action does not necessarily qualify as cyber-terrorism. It
would not be the first time that the security intelligence apparatus chased a ghost of a cyber-spy ring
only to find a 16 year old public school student playing on his home computer at the other end of the
line, with no real agenda, few skills and empty cans of Jolt Cola.

At the bottom of the food chain there are the script kiddies who have little understanding of computing
and a tenuous grasp of netiquette. They form the basis for the majority of the malicious activities
seen on the Internet today. Their modus operandi consists of downloading some of the many easy to
use hacking programs off the web, filling in a web address and pressing the launch attack button.
Often the script kiddie becomes enamoured with the new tool and, like a kid who gets a new pellet
gun for their birthday, shoots at everything in sight to see how the gun works. The targeting practices
of the kiddie can be characterized as haphazard or opportunistic at best. The kiddies themselves are
often preyed upon by meta-hackers or end up working unwittingly for organized crime syndicates,
intelligence agencies or at worst, terrorist groups. The opportunity to mount a false flag operation is
ripe given the anonymity of cyberspace. A kid can think he is earning his badge towards elitism when
in fact he is manipulated by a state sponsored terrorist group to attack an adversary. Sometimes they
are just used as an unaware ‘cut-out’ to pass information; an information mule trafficking contraband
data. Information can be traded and laundered as a commodity. After all, today, money is just
represented as numbers on a computer. In the end, guess who is going to take the fall?

At the top of the hacking food chain are the “elite.” The elite lead the way in tools, tactics and
trends, which eventually filter down to the average hacking group. Unlike the script kiddies, the elite
usually go unnoticed except by the most discerning eyes; to those eyes, far from operating under the
radar, their activities stand out, although many organizations spend more time “measuring the noise,
rather than the signal.” It is estimated that there are no more than a half dozen “elite” hackers in any
given country. The elite are indistinguishable in their sophistication, signatures and objectives from
those engaged in state-sponsored espionage and international crime syndicates.

Cybotage and its effects cross the boundary between the virtual and real worlds. Sabotage of
telecommunication and information systems is nothing new. It has occurred within Canada and the
US and has included the bombing of microwave towers and the cutting of trans-national fibres. Localized acts
of physical sabotage occur every day against the telecoms sector in Canada, while it is attacked 1
trillion times a year virtually over the networks. [6]

The gloomy report from the Center for Strategic and International Studies (CSIS), which states that
"cyberterrorists are plotting all manner of heinous attacks that if successful could destabilize and
eventually destroy targeted states and societies. Information warfare specialists estimate that a
properly prepared and well coordinated attack by fewer than 30 computer virtuosos or a cyber-terrorist
group strategically located around the world, with a budget of less than $10 million, would shut down
everything from electric power grids to air traffic control centers," is categorically without substance.

[6] National Cyber Intelligence Estimates, Bell Canada

It is our finding that one of the least significant network threats to Canadian interests is posed by
terrorist and extremist groups. The Internet provides the capability to disseminate propaganda and
recruit members, but there is a natural aversion to mingling with the hacker community. Cyber-terrorism
simply has not materialized, and the National Information Infrastructure is all but immune
from netcentric acts of cyber terrorism. Terrorist organizations collectively lack the knowledge,
capacity and will to launch significant attacks over cyberspace. All they could muster would be buried
in the noise of attacks that are repelled on the NII every morning. They also do not appear to be
partnering with entities that do have the means. A terrorist attack against cyberspace will most likely
be delivered with an improvised explosive device, constructed locally and placed near a carrier office.
The alleged terrorist plots in Toronto in 2006 and in England in 2007 support this hypothesis.

Why haven't terrorist groups used bots for fund raising or attack?

SecDev offers three possible answers to this question:

a) DDoS-type attacks most often extort money from questionable payment systems or online
gambling sites. We know from experience that many of the sites that were subject to these kinds of
attacks paid, without notifying authorities. While we have no evidence that militant groups have
collaborated with criminal organizations for this kind of activity, we cannot actually exclude this
possibility. Given the rise of packages such as Zeus or Black Energy, it is quite
possible that militant groups have used these without us being aware of it;

b) Launching DDoS or phishing networks requires a degree of cognizance vis-à-vis how effective
they can be, and this is still part of a fairly niche criminal underworld. There is a good possibility
that a generation of technically competent militants has not risen to a position where they have been able
to leverage these means for fundraising. If they do engage in fundraising of a criminal nature, it is
usually through more standard forms of rackets and extortion. Debriefings of former terrorists asked them why
they did not engage in cyber attacks. The answer was simply that the level of investment necessary
to carry off a significant attack was far in excess of that required for a simple kinetic attack; there
just wasn't enough bang for the buck; or

c) Most militant actors benefit from supportive communities, both religiously and
nationalistically motivated. They may not have a need to raise money through online criminal activities, as
these are not low-hanging fruit for them.

The Internet is an attractive medium for terrorist groups owing to the capabilities it affords them: rapid
and encrypted communications, anonymity, remote operations, direct action, and information
gathering without having a physical presence. Canada is a permissive society, which facilitates the
indoctrination of home-grown terrorists with Internet-borne propaganda and messaging from foreign-based
terrorist organizations.

According to a 1999 RAND Corporation report entitled “Countering the New Terrorism,” we can
expect to see a devolution of state-sponsored terrorism in the post-2000 era. Terrorists, the authors
observed, move in loose groups and constellations with free-flowing structures that are segmented,
polycentric and ideologically integrated. These characteristics map ideally to the technologies and
culture of cyberspace.

The Special Senate Committee on Security and Intelligence outlined its position on cyber-terrorism
in its January 1999 report:

“In 1989, the Internet was not nearly as ubiquitous as it is today and the security threat it presents
was not as widespread. Today, any nation that relies on computer systems is vulnerable to cyber-
terrorism. In Canada, this includes our defence, telecommunications, energy, air traffic and banking
systems; indeed most of the governmental and private systems we rely on daily. Marshall McLuhan
wrote that ‘World War III would be a guerrilla information war with no division between the civilian and
military populations.’”

6.4.1 Proliferation of Cyber-terrorism

We have seen terrorist cells use the Internet to great effect, primarily for their own communications
networks. In this they are exceptionally skilled and practice excellent operational security. The
immediate threat to Canadians comes from psychological operations (PSYOPS), influence
operations, propaganda, radicalization and recruitment. The only counter is a robust, aggressive and
proactive cyber defence program with global reach. Cyber-terrorism will become much more
dangerous should an alliance develop with either state-sponsored information warfare programs or
organized crime. The core concern is that terrorist groups could fund criminal organizations such as
the Russian Business Network (RBN) to conduct cyber attacks on their behalf. A terrorist group would
want the impact of a cyber-attack to be highly visible, like the Estonian campaign, so it would need to
do some planning before exposing its hand or convincing the criminal or state organization to risk a
valuable Botnet. The precursors to such an alliance would be seen in cyber-communications, rhetoric
and scanning for targets. Once the conditions are in place, the scenario could develop overnight with
minimal operational indicators and early warning. Surveillance of domestic crime, foreign terrorist
cells and military programs often spans different mandates, which complicates the collation of
intelligence within a tight decision cycle. An omnipresent proactive cyber defence apparatus must
work to pre-empt, interdict and disrupt an attacking threat network first, and investigate the
personalities and motives behind it more thoroughly afterwards.

6.4.2 Cyber criminals

Criminal enterprises are motivated by the acquisition of wealth. As an illustration, Russian crime
groups have a high level of sophistication, especially in the areas of electronic commerce, money
laundering, extortion, drug trafficking and smuggling. This type of criminal is typically very
sophisticated, well educated and well connected. Many were involved in the security services and
communist government of the former Soviet Union. The Russian mafia has alarming control of
Russian commercial enterprises, particularly in the financial sector. Russian organized crime
groups have been quick to go online. In fact, the European Union Bank, the first e-bank in the
world, was operated out of Toronto as an alleged front for the Russian mafia. The May 2007 cyber-
attacks against Estonian institutions were paralyzing, and the Russian state was fingered as the most
conspicuous attacker. If the current Russian state-sponsored attacks against Canada were re-
vectored from compromising financial information to denial of service, what would the impact be?

In his “Russian Business Network study” (version 1.0.1, 20 November 2007), David Bizeul writes:
“Internet service providers do not want to interfere with their users but if they don’t, these users are at
risk. ISPs will face this kind of dilemma more and more in a close future and that’s why Internet
regulators and countries have to enact rules to promote ISP filtering against dangerous zones such
as RBN. The world would live better without RBN IP range.”

6.4.3 E-telligence

Intelligence is the business of collecting, processing and reporting information. The majority of the
world’s information is stored on or communicated by computers. It stands to reason that if you were a
spy, that is where you would have to look… or have someone look for you. An October 2000 Chicago
Tribune article reported that “Canadian police are investigating whether Israel used a software trap
door to gain access to sensitive files.” Foreign intelligence services continue to conduct information
operations, including electronic warfare, against backbone telecommunications infrastructures in
Canada.

“...a foreign government is believed to have tasked its intelligence service to gather specific
information. The intelligence service in turn contracted computer hackers to help meet the objective,
in the course of which the hackers penetrated databases of two Canadian companies. These
activities resulted in the compromise of numerous computer systems, passwords, personnel and
research files of the two companies.” - [Economic and Information Security Public Report, Canadian
Security Intelligence Service, May 1998]

In March 1995, France announced the creation of the Committee for Economic Competitiveness and
Security, whose purpose is to research, analyze, process and distribute information with the goal of
protecting economic secrets and advising French firms and the government on trade strategy. A
significant source of collection is information operations, including signals intelligence. A German
news article discusses “France’s capacity to eavesdrop on sensitive commercial transactions.”
Furthermore, there are compelling indicators of France’s intent to use its intelligence services to spy
on the telecommunications and aerospace sectors in North America to advantage French companies
competing in those sectors globally. The Comité pour la compétitivité et la sécurité économique
(CCSE) has a mission of “coordinating the government's initiative and action in the intelligence field”
and of sharing intelligence with “civilian partners.” The Association pour la diffusion de l'information
technologique (ADIT) manages information obtained by civil servants, public agencies and attachés,
and passes it along to the private sector. INTELCO, an independent affiliate of the state intelligence
services, has publicly boasted that it focuses on “offensive action.”

Moonlight Maze is a code word used to classify the systematic, coordinated attacks on U.S.
government computer systems starting prior to 1998. The attacks have been attributed to the Russian
Academy of Sciences and various Russian security and intelligence agencies. Then-Deputy
Secretary of Defense John Hamre was quoted by the Washington Post as saying that “we're in the
middle of a cyberwar.” A General Accounting Office report on the Pentagon's computer security
described Moonlight Maze as "a series of recurring, 'stealth-like' attacks . . . that federal incident-
response officials have attributed to foreign entities and are still investigating."

6.4.4 Russian Business Network

The Russian Business Network (RBN) embodies the greatest concentration of evil in cyberspace and
is considered the most significant deliberate threat to Canadian information infrastructures. At any
given moment, tens of thousands of attacks are being inflicted on Canadians by RBN. The
organization itself supports no legitimate purpose; it offers Internet access, computer network
exploitation and attack services to organized crime and state security services alike. The leadership
of RBN has close family and historical connections to the Russian state, in which it operates with
complicity. Its nefarious activities include, but are not limited to: child pornography, phishing,
pervasive botnet control, distributed denial-of-service, espionage, banking fraud, identity theft, money
laundering, extortion, and malware creation and distribution. These activities earn RBN at least
$150M per year. Spamhaus describes RBN as among “the world's worst spammer, child-
pornography, malware, phishing and cybercrime hosting networks,” providing “bulletproof hosting.”

“There is considerable evidence available to substantiate that there is a quid pro quo between
Russian authorities and select hackers. It's clear from the pattern of prosecutions against DDoS
operators over the last 10 years that Russian authorities have a pretty good ability to track and
identify specific individuals. However, in a few very high profile cases, notably an individual involved
with credit card theft, as well as the RBN, these groups have been immune from prosecution.
Attempts at initiating prosecution via MLATs, or even site visits by the FBI and SECO, have come
to naught. The fact that Russian authorities were reluctant or unable to prosecute individuals involved
in the politically motivated DDoS attacks against Estonia and Georgia further substantiates this point.
In general, if we accept that CNA is a capacity currently sought by most states, in places like Russia
and China it is easier to outsource this to those that have the expertise, by essentially granting them a
letter of marque that allows them to pursue criminal activities, so long as they're also available to
work for the state. This is an increasing problem, exacerbated by the fact that there is no
international treaty governing cyberspace, or warfare in cyberspace.” – SecDev

VeriSign iDefense research has attributed phishing, malicious code, botnet command-and-control
(C&C) and denial-of-service (DoS) attacks to every single server owned and operated by RBN, a
finding confirmed by Tier 1 telecom carriers and chartered banks globally. The organization offers a
complete infrastructure for malicious activities and offensive operations on a global scale. RBN is
utterly illegal.

The RBN Autonomous System consists of shadowy networks and subnets that evolve as the
proactive defence systems of major Internet providers and telecom carriers are brought to bear on
the threat. The exact composition of RBN and its affiliations in virtual and real space have been
extensively mapped. David Bizeul publicly outed the Russian Business Network in his detailed
technical study dated 20 November 2007, and RBN operations have been explored at some depth by
Washington Post investigative reporter Brian Krebs and by VeriSign’s iDefense.

The founder and leader of the organization, known as ‘Flyman’, is related to a powerful Russian
politician. David Bizeul explains that although Flyman “is well known by law enforcement because of
child pornography, he has very strong political protection that can offer him to continue to develop its
traffic without being worried by police.”

There is some conjecture that the May 2007 information warfare attacks against Estonia may have
been co-ordinated by Russia and out-sourced to RBN. Nearly every major piece of computer malcode
last year either originated from or sent compromised data back to servers at RBN.

At 7 p.m. PST on 06 Nov 07, “The Russian Business Network, whose client list amounts to a laundry
list of organized cybercrime operations, appears to have closed shop after a number of its main
upstream Internet providers severed ties with the group. The disappearance of RBN comes less than
a month after Brian Krebs of the Washington Post wrote a series of stories detailing the organization
and history of the shadowy ISP.”

The RBN is currently re-establishing connectivity and resuming operations against Canadians. David
Bizeul predicts that “fast flux botnets could be used and mother ship servers hosted in different
netblocks registered by RBN. RBN has succeeded in mastering the whole Internet scheme
complexity: register IP addresses range with RIPE, create a nebulous network to blur the
understanding of their activities, [they can] settle agreements with legitimate ISP and impersonate all
public information about them.”
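The fast-flux behaviour Bizeul predicts is observable in passive DNS data: a fluxed domain rotates through a large pool of bot IP addresses with very short TTLs. The following is a minimal scoring sketch, assuming answer samples are collected elsewhere (e.g. by a passive-DNS sensor); the thresholds are illustrative and untuned:

```python
from dataclasses import dataclass

@dataclass
class DnsSample:
    ts: float            # epoch seconds of the lookup
    ips: frozenset[str]  # A records returned in the answer
    ttl: int             # minimum TTL seen in the answer

def looks_fast_flux(samples: list[DnsSample],
                    min_unique_ips: int = 10,
                    max_ttl: int = 300) -> bool:
    """Heuristic: fast-flux domains churn through many IPs with short TTLs,
    so repeated lookups accumulate a large, rotating address pool."""
    if not samples:
        return False
    unique_ips = set().union(*(s.ips for s in samples))
    low_ttl = all(s.ttl <= max_ttl for s in samples)
    return low_ttl and len(unique_ips) >= min_unique_ips
```

A domain that resolves to three fresh addresses every minute with a 60-second TTL would trip this check after a handful of lookups, while a legitimate site pinned to one address with a day-long TTL would not.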

RBN will continue to proxy cybercrime and espionage activities in 2008, operating as the gateway to

Estimates of the size of the Storm Worm Botnet ranged from 1 to 50 million computers, and its overall
computing power is thought to surpass that of the world's largest supercomputers. Organized crime is
the primary source and beneficiary of Storm Worm, and its controllers are fully exercising the power
of the botnet with a variety of ingenious schemes. In one case, a successful pump-and-dump spam
campaign caused 145,000 shares of a $1 stock to jump to $1.20 in a

“Russian Military Doctrine has always included the notion of Information Weapons; a fusion of
advanced command and control, communications, intelligence systems, psychological and electronic
warfare.” - Dr. Garigue, 1994.

“We use computers to send viruses to the West and then we poach your money.” - Russian ultra
nationalist, Vladimir Zhirinovsky

“Russia, China, India and Cuba have acknowledged preparations for cyberwar and are actively
developing IO capabilities. Government departments and businesses globally have experienced both
intrusions into their networks and the loss of sensitive information…” CSIS Public Backgrounder No.
11: Information Operations 2004

The Russians have taken an aggressive and bold virtual approach to their intelligence gathering
practices. These are seen in direct contrast to traditional human source methods which have been
characterized as cautious. The vast majority of cyber-attacks against Canadian critical infrastructures
are traced to Eastern Europe. The volume, ferocity and sophistication of the attacks are distressing.

“There were deliberate and highly co-ordinated attacks occurring in our defense department that
appear to be coming from one country... they are very real and very alarming.” - Curt Weldon,
Chairman of a congressional committee for research and development.

The American experience is much worse than, or at least better documented than, that of Canada
and the rest of the world. Lengthy investigations (Titan Rain and Moonlight Maze) have uncovered a
tangled web of intrigue and skulduggery involving their former Cold War antagonists. The Deputy
Defense Secretary stated in a congressional hearing, in no uncertain terms, that “we are in the middle
of a cyberwar.” Canada is now engaged in this war, and the communications and financial sectors are
fighting on the front lines.

“It is impossible to overstate the seriousness of the problem. The president is very concerned about
it.” - The White House

There are unsettling links between the evolution of the Russian Security Service (FSB, Federal'naya
Sluzhba Bezopasnosti), the Federal Agency for Government Communications and Information
(FAPSI, Federal'naya Agenstvo Pravitel'stvennoy Svayazi i Informatsii), the association with Relcom,
surveillance programs like SORM, connections to organized crime, and references to the RBN in its
early days. This begs the question of how RBN can exist and thrive in such an environment without
the complicity of the state. The very nature and scope of RBN activities lend credence to the theory
of tacit state support, including collusion on targets of mutual interest and espionage tasking. The
question of association between the state and RBN on matters of crime and espionage is further
strengthened by the similarity of targeting, personal relationships, motives, IP space and tradecraft.

Using computers as weapons has not been overlooked by Canada’s adversaries. The difference
between the war-making of the past and today is that the Internet now enables all players to get into
the game with the big boys on nearly equal footing. Cyberspace technology has invoked the concept
of "asymmetrical" threats, and with it, cyber-terrorism. Asymmetry is a term which refers to
weapons and tactics which empower groups of modest means to successfully engage a much larger
target such as a government. Given today’s infrastructure, a hostile group need not destroy large
portions of the infrastructure to be effective. The irony is that the ability for a traditional superpower to
strike back at a terrorist cell is limited in that this cell has little computing infrastructure which can be
counter-attacked. Conversely, most Western countries have sprawling information infrastructures
that are perforated with exposures.

Attacks using information systems are not always directed at system components, but also seek to
affect and exploit the trust (at the semantic layer) which users have in the integrity and validity of the
information in them.

"Threats to Canadian National Information Infrastructure are genuine". - DND White Paper on
Information Warfare.

6.4.5 CHINA

“Information in nature science may be sought through inductive reasoning; the laws of the universe
can be verified by mathematical calculation, but the disposition of the enemy is ascertainable through
espionage alone.” - Sun Tzu 500BC

In the 2001 U.S. Army War College Strategic Studies Institute study Chinese Information Warfare: A
Phantom Menace or Emerging Threat?, Toshi Yoshihara writes –

“The concept … has emerged as a subject of great interest in Chinese military discourse. The intense
discussions and debates within China’s defense community suggest that Beijing may be harnessing
the political will to devote substantial resources to developing information warfare doctrines and
capabilities. China’s potential ability to leverage the information revolution accompanied by its gradual
rise as a major military power have led many observers to speculate whether China might succeed in
becoming one of the global leaders in IW. In recent years, the Chinese have demonstrated a
voracious appetite for examining IW. Arguably, only the United States and Russia rival China’s
analytical work in IW. The exotic concepts and capabilities of IW have seemingly captivated the
imagination of Chinese futurists and military strategists alike. According to the Liberation Army Daily,
“an attacker can go around the enemy’s solid works he has long laboured for and, by way of ‘surgical
removal’ and ‘digital acupoint pressure’ (selective attacks), launch precision raids to destroy the
enemy’s war resources and shatter his will to resist.” The Chinese have clearly grasped the
significance of the relationship between information and center of gravity. Chinese views of IW and its
convergence with pre-emption also leads to some other unsettling conclusions. Notably, the apparent
belief that information is a panacea in warfare could breed dangerous attitudes. In the tradition of Sun
Tzu, Chinese analysts of IW assert that knowledge can be assembled together in a rational and
coherent manner that would produce inevitable victory.”

The Asian Pacific News Service reported in a 07 August 2003 story that there were 500 Chinese spy
companies in Canada. Cyber-related implications included:

“Canada's Nortel Telecommunications [who] hired Katrina Leung's company of Merry Glory Ltd., to
broker a sale to China. Leung, who was paid $1.2 million in 1995 and 1996 for negotiating the Nortel-
China deal, was later accused by the USA of having slept her way into the good graces of two
FBI agents while stealing secrets for the Chinese government. Around the same time the modern-day
Mata Hari was greasing the way for Nortel, CSIS was conducting an investigation in the offices of
Ontario Hydro regarding the theft of information in the nuclear technology field by 'an individual of
Chinese origin' [from] the State Science and Technology Commission of China. Chinese state-owned
companies… related to military and intelligence activities obviously use Canadian consumer-based
companies as cover.”

Public and private sector executives in Canada have been subject to targeted spear phishing attacks
from China for the last several years, and the attacks continue to this day. The UK National
Infrastructure Security Co-ordination Centre published a warning on 02 Jun 2005 on the subject of
Targeted Trojan Email Attacks (NISCC Briefing 08/2005), stating that:

“parts of the Critical National Infrastructure are being targeted by an ongoing series of email-borne
electronic attacks. While the majority of the observed attacks have been against central Government,
other organisations, companies and individuals are also at risk. The attribution for the originators of
the attacks is the Far-East. These electronic attacks have been underway for a significant period of
time with a recent increase in sophistication. They use unsolicited emails containing either a
trojanised attachment or a link to a website that hosts a trojanised file. When opened, the file installs
a trojan onto the user’s machine.”
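As a first-pass gateway defence against the trojanised-attachment pattern NISCC describes, mail filters commonly flag executable attachment types. The sketch below uses Python's standard email library; the extension list is illustrative and deliberately incomplete (real campaigns also weaponise document formats and links, as the advisory notes):

```python
from email.message import EmailMessage

# File types commonly abused in trojanised-attachment attacks (illustrative).
RISKY_EXTENSIONS = {".exe", ".scr", ".pif", ".bat", ".cmd", ".js", ".vbs"}

def flag_risky_attachments(msg: EmailMessage) -> list[str]:
    """Return filenames of attachments whose extension suggests executable
    content, for quarantine or analyst review at the mail gateway."""
    flagged = []
    for part in msg.iter_attachments():
        name = part.get_filename() or ""
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            flagged.append(name)
    return flagged
```

Such a filter catches only the crudest lures; the targeted attacks described above typically pivot to trojanised Office or PDF documents precisely because executables are routinely blocked.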

A new approach to China

Posted by David Drummond, SVP, Google, Corporate Development and Chief Legal Officer

1/12/2010 03:00:00 PM
Like many other well-known organizations, we face cyber attacks of varying degrees on a regular
basis. In mid-December, we detected a highly sophisticated and targeted attack on our corporate
infrastructure originating from China that resulted in the theft of intellectual property from Google.
However, it soon became clear that what at first appeared to be solely a security incident--albeit a
significant one--was something quite different.

First, this attack was not just on Google. As part of our investigation we have discovered that at least
twenty other large companies from a wide range of businesses--including the Internet, finance,
technology, media and chemical sectors--have been similarly targeted. We are currently in the
process of notifying those companies, and we are also working with the relevant U.S. authorities.

Second, we have evidence to suggest that a primary goal of the attackers was accessing the Gmail
accounts of Chinese human rights activists. Based on our investigation to date we believe their attack
did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that
activity was limited to account information (such as the date the account was created) and subject
line, rather than the content of emails themselves.

Third, as part of this investigation but independent of the attack on Google, we have discovered that
the accounts of dozens of U.S.-, China- and Europe-based Gmail users who are advocates of human
rights in China appear to have been routinely accessed by third parties. These accounts have not
been accessed through any security breach at Google, but most likely via phishing scams or malware
placed on the users' computers.

We have already used information gained from this attack to make infrastructure and architectural
improvements that enhance security for Google and for our users. In terms of individual users, we
would advise people to deploy reputable anti-virus and anti-spyware programs on their computers, to
install patches for their operating systems and to update their web browsers. Always be cautious
when clicking on links appearing in instant messages and emails, or when asked to share personal
information like passwords online. You can read more here about our cyber-security
recommendations. People wanting to learn more about these kinds of attacks can read this U.S.
government report (PDF), Nart Villeneuve's blog and this presentation on the GhostNet spying
operation.

We have taken the unusual step of sharing information about these attacks with a broad audience not
just because of the security and human rights implications of what we have unearthed, but also
because this information goes to the heart of a much bigger global debate about freedom of speech.
In the last two decades, China's economic reform programs and its citizens' entrepreneurial flair have
lifted hundreds of millions of Chinese people out of poverty. Indeed, this great nation is at the heart of
much economic progress and development in the world today.

We launched in January 2006 in the belief that the benefits of increased access to
information for people in China and a more open Internet outweighed our discomfort in agreeing to
censor some results. At the time we made clear that "we will carefully monitor conditions in China,
including new laws and other restrictions on our services. If we determine that we are unable to
achieve the objectives outlined we will not hesitate to reconsider our approach to China."

These attacks and the surveillance they have uncovered--combined with the attempts over the past
year to further limit free speech on the web--have led us to conclude that we should review the
feasibility of our business operations in China. We have decided we are no longer willing to continue
censoring our results on, and so over the next few weeks we will be discussing with the
Chinese government the basis on which we could operate an unfiltered search engine within the law,
if at all. We recognize that this may well mean having to shut down, and potentially
our offices in China.

The decision to review our business operations in China has been incredibly hard, and we know that
it will have potentially far-reaching consequences. We want to make clear that this move was driven
by our executives in the United States, without the knowledge or involvement of our employees in
China who have worked incredibly hard to make the success it is today. We are committed
to working responsibly to resolve the very difficult issues raised.

The People’s Republic of China (PRC) is experiencing unprecedented growth in cyberspace. With a
population of 1.3 billion, China now has more citizens online than any other nation.

The Vancouver National Post reported that Chinese nationals with links to the Triads had broken
into the computers at the Canadian High Commission in Hong Kong. Over 788 files were allegedly
deleted from the Computer Assisted Immigration Processing System. The intrusions may have taken
place as long as ten years ago.

Titan Rain is reportedly a codeword for a series of coordinated attacks on government computer
systems from 2003 to this day. The attacks have been definitively traced to Chinese origin; the
motivations are a mix of state-sponsored espionage, corporate espionage, and criminal and hacker
attacks. As early as December 2005, the director of the SANS Institute suspected that the attacks
were most likely the result of Chinese military information warfare operations.

Gordon Housworth, discussing how informationalization in Chinese military doctrine affects foreign
commercial and military assets (31 May 2007), writes –

“Informationalization, the computerization of business, industry, and military, has entered Chinese
military thinking in earnest, affecting both foreign commercial and military assets. US and EU
commercial assets have already suffered serious predation from Chinese military assets and Chinese
commercial assets operating under military direction. In the absence of a counter-cyber warfare
strategy, Chinese IT technologists enter all but the most secure [government] systems, exceeding the
limits of passive examination and surveillance. Shifting from 'passive' to active cyberwarfare, the PRC
intends to "be able to win an "informationized war" by 2050. [Where technology continues to outstrip
policy, the advantage goes to the agile able to pierce regulatory and technical barriers.] The 2007
Military Power of the People’s Republic of China cites active and passive Chinese cyberwarfare.
[There is] a strong conceptual understanding of its methods and uses. The November 2006
Liberation Army Daily discussed infowar as a “mechanism to get the upper hand of the enemy in a
war under conditions of informatization.” China’s concepts include computer network attack, computer
network defense, and computer network exploitation. The PLA sees this as critical to achieving
"electromagnetic dominance." PLA theorists have coined the term "Integrated Network Electronic
Warfare" to prescribe the use of electronic warfare, CNO, and kinetic strikes to disrupt network
information systems. The PLA has established information warfare units to develop viruses to attack
enemy computer systems and networks. In 2005, the PLA began to incorporate offensive CNO into
its exercises, primarily in first strikes against enemy networks. The Chinese press has discussed the
formation of information warfare units in the militia and reserve since at least the year 2000.
Personnel for such units have expertise in computer technology and [are] to be drawn from
academies, institutes, and information technology industries.”

The PLA considers active offence to be the most important requirement for information warfare to
destroy or disrupt an adversary’s capability. Launched by remote combat and covert methods, the
PLA could employ information warfare pre-emptively to gain the initiative.

Huawei Technologies

The mission of China’s largest telecommunications company, Huawei Technologies, is to dominate
telecommunications markets all over the world. Huawei has a workforce of 70,000 and recently hired
about 10,000 research and development fellows to study the next generation of communications. The
organization is run by ‘retired’ army officer Ren Zhengfei. The RAND Corp has
reported that Huawei has "deep ties" with the Chinese military. There is an ongoing debate whether
Huawei Technologies is effectively controlled, owned and operated by the military. The Chinese
media refers to Huawei as "tulang" or hyenas, denoting tactical aggressiveness. The Chinese Ministry
of Information Industry similarly describes Huawei as a "violent attacker" and competitor winning by
any means necessary. Huawei has been accused of a range of illegal activities, including bribery,
economic espionage, and violating U.N. sanctions by building a fibre-optic network that was used to
link Iraqi air defenses. Huawei has also been linked to industrial espionage against Cisco Systems
and Japan's Fujitsu, and it constructed a telephone system in Kabul for Afghanistan's extremist
Taliban government. India's security and intelligence services are reportedly deeply troubled by
Huawei’s large research and development facility in their country. Huawei's operations to penetrate
the Canadian Information Communications Technology sector have provoked national security
concerns amongst critical information infrastructure owners.

U.S. intelligence agencies informed a Treasury Department-led review committee recently that a
merger between 3Com and a Chinese company would threaten U.S. national security. The proposed
$2.2 billion merger was announced quietly in October. 3Com manufactures computer network
equipment used by the Canadian government and national critical infrastructures. US Intelligence
officials are concerned that the technology China would gain from 3Com will boost the Chinese
military's computer warfare capabilities. Technology that China procured in modernizing its
infrastructure leading up to the Olympics has gone directly into building its information warfare and
internal cyber surveillance capabilities.

2007 Annual Report to Congress on the Military Power of the People’s Republic of China –

“The People’s Liberation Army (PLA) is building capabilities for information warfare, computer
network operations, [which could] be used in pre-emptive attacks. China’s CNO concepts include
computer network attack, computer network defence, and computer network exploitation. The PLA
sees CNO as critical to achieving “electromagnetic dominance” early in a conflict. The PLA has
established information warfare units to develop viruses to attack enemy computer systems and
networks.”

“Sun Tzu is interwoven into this emerging theory [of information warfare].” - Douglas C. Lovelace, Jr.,
Director, Strategic Studies Institute, U.S. Army War College, 2001.

6.4.6 AURORA

The Operation Aurora cyber attack began in mid-2009 and continued through December 2009. The attack
was first publicly disclosed by Google on January 12, 2010, after forensic investigation pointed a finger at
China. McAfee reported that 15% of the world's Internet traffic was hijacked and redirected
through China.

The attack has been aimed at dozens of other organizations including Adobe Systems, Juniper
Networks and Rackspace; all of whom have publicly confirmed that they were targeted. According to
media reports, Yahoo, Symantec, Northrop Grumman and Dow Chemical were also among the
targets. - Wikipedia

Google said that the Chinese had stolen intellectual property and sought access to the Gmail
accounts of human rights activists.

"The encryption was highly successful in obfuscating the attack and avoiding common detection
methods," he said. "We haven't seen encryption at this level. It was highly sophisticated." –
Dmitri Alperovitch, McAfee.

It was a “sophisticated, coordinated attack against corporate network systems” – Adobe

“We have never ever, outside of the defense industry, seen commercial industrial companies come
under that level of sophisticated attack,” says Dmitri Alperovitch, vice president of threat research for
McAfee. “It’s totally changing the threat model.”

Here is an excerpt from an HBGary report discussing the Aurora incident:

The Aurora malware operation was identified recently and made public by Google and McAfee. This
malware operation has been associated with intellectual property theft including source code and
technical diagrams (CAD, oil exploration bid-data, etc.). Companies speculated publicly to have been
hit include Google, Adobe, Yahoo, Symantec, Juniper Networks, Rackspace, Northrop Grumman, and
Dow Chemical. The malware package used with Aurora is mature and has been in development since at
least 2006.

Figure 13: Diagram demonstrating use of the Palantir in tracking actors.

Key actors have been identified in association with malware operations that use Chinese systems
and native-language malware. This has led to a great deal of speculation about Chinese state
involvement. A large and thriving underground economy exists to both build and disseminate malware
worldwide, and most of this malware is capable of intellectual property theft. The malicious
hacking subculture is strong in China, as in Eastern Europe and elsewhere, and is clearly enmeshed
in a global criminal economy of data theft.

Section 7 BOTNET
This chapter discusses unclassified detection and mitigation strategies for robot networks,
based upon close examination of operational programs in Canada.

7.1 Detection

A classic symptom of a computer infected by a botnet used to be slow PC performance, which would
often prompt a client to call an ISP help desk to complain of slow Internet service. Many user
complaints of this nature are owing to malcode infection of the user's PC. Today, however, slow
performance is not always a good indicator of a botnet infection, because botmasters have a vested
interest in having a victim's computer run quickly and avoid detection. Furthermore, we are seeing
criminals actually patch and tune victims' computers after they have assumed control, so that there
is less chance of someone else (another botherder) gaining access to their asset or hijacking their bot. It is
sometimes easier for criminals to 'jack' someone else's bot than to harvest victims and build a botnet
from the ground up. It is important to fully appreciate the extent to which a botmaster will go to hide
and safeguard their botnet. Operational security and tradecraft are very sophisticated, and retribution is
severe. Taking down or jacking a botnet that yields a multi-million-dollar revenue stream for organized
crime is analogous in the real world to stealing the mob's money or burning a warlord's opium fields,
where the police station may be firebombed in revenge. We typically don't see that level of
irreverence by criminals in real-world society, but Canadian police agencies can expect this e-gang
warfare in cyberspace.

"The [NCSS] was quite vague and basic, offering little specifics other than imposing IT system
accreditation processes and Business Continuity Plan (BCP) development. The immaturity of the
document is further revealed by this example of imprecise language. To suggest that network users
can perceive threats by noticing system slowdowns is at best a weak strategy." - Evaluating Canada's
cyber semantic gap, by LCol Francis Castonguay

Proactive Cyber Defence and the Perfect Storm, Dave McMahon, Bell Canada, 2009

Detecting a complete botnet is not trivial. To find one, you first need to know something of the
victim, the control channel, and the threat:

• Victim information can come from a few sources: a citizen's complaint to police, a trouble ticket
opened by a client with their provider, or from security appliances like honeypots that allow
themselves to be infected (the cyber equivalent of a bait car in countering auto theft). The
challenge is that victims rarely know that they have been hacked, and even fewer ever report
the incidents. According to published crime statistics there were few reported cyber-incidents
to police in the last year. Yet cybercrime is becoming the number one crime in Canada today 8,
estimated at $100 billion 9 in Canada alone, and in 2009 there were 1.5 million machine infections every
day 10. This is a profound paradigm shift for law enforcement: if a victim is unaware that they
have been victimized, how do the police solve the crime? Where botnets are concerned, notifying
victims can be an overwhelming part of the police workload, and raises all sorts of privacy concerns.

• Control channels are most often discovered by deep surveillance mechanisms across very
large autonomous address spaces. Ubiquitous network surveillance is only effective if you
own/control or have privileged access to the network, as in the case of Internet Service
Providers. Nearly all of cyberspace is privately owned and operated; neither
law enforcement nor government owns or controls cyberspace, nor do they have significant
insight into its operation to the degree necessary to detect and mitigate bots.

• Threat intelligence is achieved through trusted partnerships and covert investigations (online
and offline). An organization will have difficulty detecting botnet activity without external and
foreign operations with trusted partners in the private and public sectors. The
telecommunications and financial sectors operate the most powerful closed groups useful for
investigating botnets.

7.1.1 Threat Intelligence via Trusted Channels

In some cases, an organization can obtain confidential threat intelligence from a trusted source,
which may include more complete threat robot network topographies and details of the criminal
enterprises. The challenge that many organizations have is operationalizing good threat intelligence.

7.1.2 Open Sources Program

There are a great many accurate open (public) sources of good intelligence on botnets, their tradecraft
and identities. High-risk source and destination addresses can also be used to guide graduated
surveillance, filtering or the application of security policies.

7.1.3 Honey Pots, Honey Nets and Tarpits

A honeypot or honeynet is a trap set for attackers that works on the principle of manipulative
deception. It consists of a computer, data, or a network site that appears to be part of a legitimate
network. In reality it is isolated from the corporate network but left unprotected and monitored, and it
should look like it contains information or a resource of value to attackers. A honeypot is a valuable
surveillance, early-warning and forensic tool for combating botnets. A honeypot is often a computer,
but can take other forms, such as files or data records, or even unused IP address space – called
'darkspace.' A honeypot that masquerades as an open proxy to monitor and record those using the
system is called a sugarcane. A tarpit uses seemingly valuable data to slow the attacker down in
order to record tradecraft and techniques. Honeypots should have no production value, and hence
should not see any legitimate traffic or activity; whatever they capture is always malicious or
unauthorized. A honeypot can be allowed to become infected by and taken over by a botnet. There are
a number of important things that one can determine from controlling a zombie machine. The
authorities can extract a copy of the exploit for forensic analysis and reverse engineering, which can
be used to create signatures, assess tradecraft or identify the perpetrator. One can also identify the
control channels, IP addresses or domain names used, and the larger botnet can be mapped out in this
manner. Sometimes the operator must deliberately answer a phishing attack and visit a threat site to
become infected. A non-virtual environment is often required to recreate a realistic target.

8 Deloitte and Touche study on cybercrime, commissioned by the Canadian Chiefs of Police
9 Proactive Cyber Defence and the Perfect Storm, 2009, Dave McMahon, Bell Canada
10 Bell Canada is the primary source.

7.1.4 Sensors
Deep packet inspection devices, Intrusion Detection Systems (IDS), unified threat management
systems, firewalls, router logs and other mechanisms can be used to provide situational awareness.
A well-designed sensor array can detect botnet activity and map out the source and destination of
attacks. But simply deploying a suite of IDS does not normally lead an organization to
detect anything of significance; the manner in which one engineers the complete architecture and
chooses the components is vital. Key factors that influence the ability to detect sophisticated botnets include:

• Coverage, placement and fidelity – There needs to be adequate sensor coverage on a
significantly large enough autonomous address space to see the botnet. Sensors need to be
placed at central monitoring points within the network, and these sensors must have the fidelity to
inspect at the packet level in real time;
• Monitoring and correlation – Someone has to monitor the sensors using a Security Information
Event Management System (SIEM) and correlate millions of alarms and events; and
• Global core intelligence – Without good intelligence, sensors are dumb machines. Zero-day threat
signatures, detection algorithms, and source and destination addresses are some inputs from trusted
global sources that can enhance the sensor array.

The biggest mistake is that many organizations place too few cheap sensors in the wrong places, and
never look at the results until an incident is realised in real space.

7.1.5 DNS
The Domain Name System (DNS) is a primary control plane for the Internet. Monitoring DNS activity is a
lucrative means of detecting the registration of botnet addresses. The source of most DNS
attacks has been infected computers (botnets), which attempt to overrun the resources of the DNS server
(memory, CPU, network or disk) to temporarily disrupt service, or to poison DNS queries to
defraud customers of their identity or banking information. Fast-flux botnets also re-register domains
and IPs at a rapid rate to avoid trace-back; this act in itself is an indicator of illicit use.
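As a sketch, the fast-flux indicators above (rapid re-registration, short-lived records) can be scored from repeated lookups of a single domain. The TTL and IP-count thresholds here are illustrative assumptions, not operational tuning:

```python
def looks_fast_flux(observations, ttl_threshold=300, min_unique_ips=10):
    """Heuristic fast-flux test over repeated DNS answers for one domain.
    observations: iterable of (ttl_seconds, ip_address) pairs collected
    from successive lookups. Fast-flux domains typically pair very short
    TTLs with a large, constantly changing pool of resolved IPs, while
    legitimate sites resolve to a small stable set with long TTLs."""
    observations = list(observations)
    if not observations:
        return False
    ttls = [ttl for ttl, _ in observations]
    unique_ips = {ip for _, ip in observations}
    avg_ttl = sum(ttls) / len(ttls)
    return avg_ttl <= ttl_threshold and len(unique_ips) >= min_unique_ips
```

In practice this heuristic would be combined with other signals (ASN diversity of the resolved IPs, domain registration age) before a domain is blocked.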

7.1.6 Dark Space

Dark address space, sometimes referred to as "darknet," is the area of the Internet's routable
address space that is currently unused, with no active servers or services. Any activity appearing in
darkspace is either a mistake or malicious, which makes it highly effective for tracing botnets. Blocks of
Internet address space, including darknet space, briefly appear in global routing tables and are used to launch
a cyber-attack or send spam before being withdrawn without a trace. By monitoring all traffic to and
from dark space, it is possible to gain insight into the latest techniques and attacks and to trace
victims and control nodes of a botnet. The U.S.-CERT uses its Einstein program to monitor federal
network "dark address space" on the Internet, and darkspace analysis is routine procedure for all major
telecommunications carriers. Few organizations, however, operate darkspace in a centralized monitoring
and cleaning capability like Einstein III.
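Darkspace monitoring reduces to a simple rule: any packet destined for an unused prefix is suspect. A minimal tally of such traffic might look like the following sketch, where the flow format and the choice of dark prefixes are assumptions for illustration:

```python
import ipaddress
from collections import Counter

def darkspace_hits(flows, dark_prefixes):
    """Tally traffic destined for unused ('dark') address space.
    flows: iterable of (src_ip, dst_ip) strings; dark_prefixes: CIDR
    strings containing no legitimate hosts. Any source appearing in
    the tally is misconfigured or malicious -- often a bot sweeping
    the address space for new victims."""
    nets = [ipaddress.ip_network(p) for p in dark_prefixes]
    hits = Counter()
    for src, dst in flows:
        addr = ipaddress.ip_address(dst)
        if any(addr in net for net in nets):
            hits[src] += 1
    return hits
```

Sorting the counter by hit count surfaces the noisiest scanners first, which is typically where a carrier's darkspace analysis begins.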

7.1.7 P2P (Peer-to-peer)

A peer-to-peer network is a network in which any node can act as both a client and a
server. Traditionally, botnets organize themselves in a hierarchical manner with a central command
and control location. This location can be statically defined in the bot, or it can be dynamically defined
based on a directory server. Today, the most easily detected botnets use IRC as a form of
communication for command and control (C&C). One key property of IRC-based botnets is the use
of IRC as a form of central C&C. However, this property also serves as a major disadvantage to the
attacker: the threat of the botnet can be mitigated, and possibly eliminated, if the central C&C is
incapacitated. In a peer-to-peer architecture, there is no centralized point for C&C. Nodes in a peer-
to-peer network act as both clients and servers, so there is no centralized coordination point
that can be incapacitated. If nodes in the network are taken offline, the gaps in the network are closed
and the network continues to operate under the control of the attacker. Some research suggests that
peer-to-peer protocols came into prominence with the release of Napster. The Napster client was built
as an application that allowed peers to find and share music files with other peers in the network. File
indexing was done on a centralized server, so Napster was not entirely a peer-to-peer service: users
would connect to the centralized server to upload an index of their files and search for files on
other users' computers. If a particular file was found, the user would connect directly to another peer
to retrieve the file. Because many of the music files shared between users were illegally
traded, a court found Napster's service illegal and the service was shut down. Later variants of peer-
to-peer file sharing focused on evading authorities by avoiding centralized control. Botnet owners
have taken advantage of this property to hide control of the botnet and to transport data. Ninety
percent of the bandwidth of Canada's P2P networks is consumed by ten percent of the users, and the
vast majority of this traffic is illicit: primarily illegal file sharing of copyrighted material and botnet
traffic. Monitoring and shaping P2P is a highly effective means of detection and mitigation of criminal botnet activity.
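The P2P traffic pattern described above can be approximated with a simple flow heuristic: a bot participating in a P2P C&C mesh contacts many distinct peers, each exchanging only small amounts of data, unlike a file-sharing user moving large files. This is a sketch; the flow format and thresholds are illustrative assumptions:

```python
from collections import defaultdict

def p2p_suspects(flows, min_peers=50, max_avg_bytes=20000):
    """Flag hosts whose traffic resembles P2P command and control:
    connections to many distinct peers, each carrying little data.
    flows: iterable of (src_ip, dst_ip, bytes) tuples. Returns a map
    of suspect host -> distinct peer count."""
    peers = defaultdict(set)
    volume = defaultdict(int)
    for src, dst, nbytes in flows:
        peers[src].add(dst)
        volume[src] += nbytes
    suspects = {}
    for src, ps in peers.items():
        # High fan-out plus low per-peer volume suggests C&C chatter,
        # not bulk file transfer.
        if len(ps) >= min_peers and volume[src] / len(ps) <= max_avg_bytes:
            suspects[src] = len(ps)
    return suspects
```

A host downloading one large file from a single peer passes this filter untouched; a bot gossiping with dozens of peers does not.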


7.2 Mitigation
7.2.1 Basic Strategies

We have discussed the various ways of detecting a botnet, but how does an organization combat
one? There are basic strategies in combat that we can apply to the botnet problem:

Reactive – Traditionally this has meant waiting for a complaint or incident to review log files and to
deconstruct the events, which lead to the compromise. From a law enforcement perspective, this
often involves gathering evidence over time, and building a case to bring those responsible to justice.
Incident management is ineffective in combating robot networks for a number of reasons:

• Victims do not detect that their machines are compromised;

• Those who do, don't report it;
• Botnet attacks are too sophisticated for traditional means of detection;
• Police do not have technical mechanisms to detect botnet attacks on citizens and businesses;
• Most of the evidence precedes the attack and goes stale rapidly, so logs are of little value;
• Perpetrators typically operate from foreign safe havens outside of police jurisdiction; and
• The consequences and impact of the incident are so expensive that restoration and business
resumption are higher concerns than restitution and prosecution.

Laws have no effect on preventing an attack; they only provide the ability to charge offenders whom
police can catch within their jurisdiction.

Research at the Royal Military College (RMC) has also argued that “a reactive-oriented network
defence policy based solely on perimeter defences is not sufficient to properly safeguard IT
infrastructure.” But this is exactly what most private and public sector organizations are investing in.

Static defences – These are the traditional means of enterprise security architecture that most network
security engineers are aware of. They consist of techniques like IT security zoning and technical
policy enforcement, mixed with appliances such as firewalls, patching, intrusion prevention systems,
authentication schemes, encryption and malcode filters. Static defences, especially patching, limit
the number of vulnerable machines and the access to them. They are asset- and vulnerability-centric
rather than being driven from the threat perspective; in other words, neither reactive nor static
defences affect the threat or the attack directly. Furthermore, static defences are
less than 20 percent effective against zero-day attacks, particularly those used by botnets 11. The
central tenet of any conflict is that those who stay on the defence will ultimately lose.

“To adequately defend against computer network attacks requires more than simply hiding behind a
firewall.” - Evaluating Canada’s cyber semantic gap 2009, by LCol Francis Castonguay

Preventative (active defence) – This begins to decisively engage the threat itself by blocking and black-listing
threat channels and activities, through measures such as inbound/outbound monitoring or
triggering on threat signatures.

Proactive defence or offensive operations – These consist of interdicting and disrupting threat activities to
counter an attack or preparations to attack. A much more exhaustive discussion of proactive cyber
defence is published in the literature. Euphemisms for proactive defence include intelligence-led
policing, undercover work, and computer network exploitation and attack. The goal is to discover, infiltrate
and disrupt criminal robot networks before an attack is launched, or to take down or take over operational
botnets and render them inert. It involves shaping threat actions through deception or force.
Proactivity is the most effective means of combating botnets.

11 Based upon testing 80,000 zero-day exploits a day for a year; only 20% were detected by traditional means.

This diagram from Evaluating Canada’s cyber semantic gap published in 2009, by LCol Francis
Castonguay shows the conceptual overlap of sub-domains within CNO.

If the aim of the Proactive Defence Policy is truly to achieve a level of sophistication that will enable
the GC to deal with issues before they can impact business in Canada, this involves a higher level of
proactive Computer Network Operations (CNO) than we have exercised to date. This will require a
clear division of responsibility and the necessary legal mandate for dealing with cases where CND
graduates from passive to active defence. - Che, Elliot. Securing a Network Society – Carleton

It is our assessment, after examining the evidence, that the policies, laws, legislation, regulation, doctrine,
mandates, committees and other traditional means upon which many organizations operate are not
effective instruments against botnets. This much is measurable.

Reactive, static, active and proactive strategies are often discussed in the literature. In practice, it is
common to see a strategy of inaction/non-action, or even reactionary policy that resists market forces.
Inaction/non-action can take the form of deliberately deciding not to act, or, more often, deferring a
decision or ignoring the responsibility to make an official decision in time; this is an implicit decision of
indecision without an explicit record of decision. Sometimes organizations embark on reactionary
policies, which ignore emerging issues and disruptive technologies and force antiquated measures.
Deploying ineffective solutions that drain precious resources can make the problem even worse.
Many traditional enterprise IT security practices may thus fall foul of provisions in the Management
Accountability Framework (MAF) and the Financial Administration Act (FAA), or similar commercial
standards, in so much as managers persist in expending resources on security that has been
shown to be ineffective.

7.2.2 Workable Strategies

Trusted Internet Connectivity (TIC)

Every organization must control all Internet access points through a trustworthy provider.
Traffic passing through a reduced set of official access points can be monitored and cleaned more
efficiently, and far more cost effectively, than delivering dirty traffic all the way into the enterprise.
Bandwidth ought to be purchased based upon cost per clean megabyte, and the telecommunications circuits
should be certified and accredited. The US has allocated $40B towards its Comprehensive National
Cybersecurity Initiative (CNCI). Trusted Internet Connectivity (TIC) using MTIPS (Managed Trusted
Internet Protocol Service) and TICAP (Trusted Internet Connection Access Providers) is the
cornerstone of the CNCI. There are clear parallels with the Canadian Secure Channel, perimeter
security, SIEM, and upstream security programs. The vector for most successful botnet attacks has
been shown to follow untrusted or rogue connections and paths of critical interdependencies.
Purchasing untrusted bandwidth for the sake of the lowest cost per megabyte, filled with malicious
traffic, is dangerous and far more expensive to an organization than purchasing clean pipes. For this
reason network services cannot be viewed as a commodity.

Letting the cloud handle security by Edward Amoroso, Senior Vice President and Chief Security
Officer AT&T:

"The vast number of users with broadband access have little or no security. That's how thousands of
unsecured PCs are commandeered to send spam e-mail and distribute malware. It is the No. 1
problem on the Internet in my estimation. The potential danger of a volume-based DDoS attack is
still high. Maybe 95 out of 100 (enterprises) probably don't have sufficient protection (against DoS
attacks). Gigabit Ethernet connections among data centres, virtual private networks and the like are
still vulnerable against an attacker who can round up – by organizing or compromising – enough
machines to bombard the network. If you get enough traffic at that gateway – and it's not that much
traffic – it's easy to overwhelm the gateway. The individual enterprise approach of hanging a
technological defence onto a connection won't stand up to a 3Gbps attack."

Dr. Amoroso told the mostly government audience at the cyber security conference in Ottawa in 2009
that "not deploying security into the cloud and having DDoS protection was negligent."

"Carrier companies got into just pushing light – focusing on packet loss and latency – and letting the
intelligent edge worry about everything else. But attacks pass through the carrier infrastructure and
that's where the focus should be. Security is one of those things that's best attended to in a
centralized area. But selling upstream security to enterprise, which has an ownership attitude toward
security regimes, gets pushback. Not a little bit of pushback, a lot of pushback. This message is a
very bitter pill to swallow. But if an enterprise wants to try to stop denial of service attacks without
working with its carrier, I challenge them to explain how they'll do it. How do you keep children
off inappropriate Web sites, when you can't be there all the time and they're often more
technologically sophisticated than you? In partnership with the carrier at the DSLAM or the headend,
and that applies to the enterprise connection, too. Firewalls and intrusion detection systems are
evolving to do tasks they weren't conceived for in the first place. A typical enterprise might have 100
gateways to untrusted connections. Originally, a firewall was designed to act as a choke point for a
single connection. A firewall is no longer a firewall; I don't know what it is." - Edward Amoroso,
Senior Vice-President and Chief Security Officer, AT&T

CNCI, TIC, and Networx Background

After numerous cyber attacks on several federal agency computer systems in the period
following 9/11, the White House determined that the government needed a more comprehensive
strategy to defend government networks and sensitive information from hackers and nation states.
On January 8, 2008, President Bush launched the Comprehensive National Cybersecurity Initiative
(CNCI) by issuing National Security Presidential Directive 54.

The CNCI is the most thorough and far-reaching effort that the government has ever undertaken to
improve the management and security of its IT infrastructure. As one of the 12 components of the
CNCI, the Trusted Internet Connection (TIC) initiative was formalized in November 2007 with the goal
of decreasing the number of connections that agencies had to external computer networks to 100 or
fewer. Officials believe that the fewer connections agencies have to the Internet, the easier it will be for
them to monitor and detect security incidents.

Under the TIC initiative, every agency must either work with an approved MTIPS (Managed Trusted
Internet Protocol Service) provider (AT&T, Sprint, Verizon and Qwest have been approved by GSA thus
far) or be approved by the Department of Homeland Security to provide its own consolidated
service by passing 51 requirements, becoming known as a TICAP (Trusted Internet Connection Access
Provider). Currently, there are 96 agencies 'seeking service' through MTIPS providers and 20
agencies that have registered to become TICAPs.

The program contract vehicle for government agencies to pursue the TIC initiative is called Networx.
Networx is the largest telecommunications program in the history of the federal government. It is the
replacement for the previous contract vehicle known as FTS2001. It is divided into Networx Universal,
with a ceiling of $48.1 billion, and Networx Enterprise, with a $20 billion cap. Both contracts are
indefinite-delivery indefinite-quantity (IDIQ) with four-year base periods and two three-year options.

The Consolidation Imperative:

The most fundamental aspect of the TIC initiative is consolidation. Driven by space, power, budget,
security, and other constraints, consolidation has become both a tactical and strategic imperative for
government IT and network defense professionals at all levels. The benefits of consolidation, whether
physical or virtual, are well known: lower equipment and operations costs, less power consumption,
improved manageability, and a better environmental footprint among them.

Most of the buzz about consolidation concentrates on its application to the data center as a whole, or
to application servers in particular. But this focus overlooks an area where consolidation offers even
more dramatic advantages: network security. The primary focus of consolidation with the TIC initiative
is to increase security. Consider, in the case of application server consolidation, most of the benefits
are in some sense peripheral to the fundamental task at hand: the delivery of application services. By
contrast, consolidating network security with a unified platform delivers profound improvements in its
ability to accomplish its fundamental task: managing the diverse range of threats that confront
government networks.

Historically, there has been something of a rivalry of importance between antivirus and vulnerability
researchers. Yet, as attacks become more complex and multi-modular, they demand a hybrid
approach to threat research that combines these multiple disciplines. Just as enabling the various
countermeasure modules in a consolidated solution to share knowledge makes its response to
threats more effective, so too an integrated program of research and development across all threat
types delivers more accurate countermeasures.

Consolidating network security also delivers notable cost benefits, another primary goal of the
Networx program vehicle. According to Gartner, in 2008, the most important way information security
organizations could save money would be to leverage the convergence of established security
functions into network- or host-based security platforms that provide multiple layers of security in a
single product to protect against an evolving multitude of network and content threats.

Rogue network detection

Illicit connections from inside an organization to/from the Internet can be detected through cloud-
facilitated rogue connection discovery. Internal scanning and COMSEC monitoring can also detect
and eliminate unwanted connections and traffic.

Anti-Spam and malware protection

Filtering spam, phishing e-mails and malicious content is important to reducing the risk of
clients/constituents becoming infected. Over 98% of all e-mail falls within this malicious category. To
make this work, both inbound and outbound messaging needs to be filtered first at the carrier
cloud and then at the enterprise Internet access points; further AV and content scanning can occur
at the host computer. Centralized monitoring and cleaning stations in the cloud, with a single entry point
into the enterprise, are the most effective way to conduct this process. Carrier- and large-enterprise-grade
systems produce better results, are better supported with a dedicated team of experts, and can
leverage critical threat intelligence to protect the whole enterprise.

Web site restrictions

Exposure to infection can be reduced by simply limiting access to high-risk web sites through security
policies in routers, unified threat management systems, firewalls, IPS and other security mechanisms.
Again, these security policies should be enforced in the cloud, at the enterprise gateway and on
hosts. Candidates include pornographic sites, illicit file sharing, known blacklisted sites, foreign
countries of concern, and any site less than six hours old.

White listing

This is an effective means of reducing direct exposure to the threat by limiting communications to
peers within a semi-closed user group. Anyone outside of the white list would be subject to additional
monitoring or more restrictive security policies. Typically this is performed through network zoning,
private VPNs using MPLS, communities of interest, MLS and cross-domain solutions, web certificate
checking, PKI and, to a lesser degree, anti-phishing redirection. The challenge in rolling out white listing
to wider public access is determining whom to trust.

DNS security and registration

DNS is one of the primary control planes of the Internet, upon which all virtual infrastructures
depend. Yet most organizations have no control over the DNS on which they are completely dependent.
A secure DNS infrastructure ought to be established within the enterprise security architecture, and
formal relationships and assurance must be established with higher-level DNS and root. (Network
Time Protocol is another external dependency that administrations take for granted.) By monitoring
DNS, an organization can achieve better situational awareness of botnet activity and is in a much
better position to eradicate it by controlling illicit use of DNS for the purposes of facilitating botnets.

A sensor array

Sensors of sufficient coverage and fidelity are required to detect anomalous behaviour; they are an
organization's eyes and ears. Sensors are best placed at enterprise choke points to the Internet and
linked to cloud-based sensors that are tasked to monitor malicious traffic or connections to and from
an enterprise's autonomous address space. This inbound-outbound analysis often reveals
undesirable activities that have escaped an organization's notice.
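One form the inbound-outbound analysis can take is cross-checking flows against a threat list from trusted intelligence. The sketch below flags internal hosts seen communicating with blacklisted addresses in both directions; the enterprise prefix test and flow format are simplifying assumptions:

```python
def rogue_contacts(flows, blacklist, enterprise_prefix="10."):
    """Cross-check inbound and outbound flows against a threat list.
    flows: iterable of (src_ip, dst_ip) pairs; blacklist: set of
    known-bad external IPs. An internal host seen both sending to and
    receiving from a blacklisted address is a strong candidate for bot
    infection (two-way C&C traffic rather than a stray inbound scan).
    The string-prefix test stands in for a real address-space check."""
    outbound, inbound = set(), set()
    for src, dst in flows:
        if src.startswith(enterprise_prefix) and dst in blacklist:
            outbound.add(src)   # internal host talking out to bad IP
        if dst.startswith(enterprise_prefix) and src in blacklist:
            inbound.add(dst)    # bad IP talking in to internal host
    # Hosts appearing in both directions warrant investigation first.
    return outbound & inbound
```

Requiring contact in both directions filters out the background noise of one-way inbound scanning that every Internet-facing network receives.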

Security Information Event Management (SIEM)

SIEM is needed to make sense of the millions of alarms/events produced by a large sensor array.
Without it, organizations are often overwhelmed. This is why, of the thousands of sensors sold in
Canada every year, the output of the vast majority is sent to ground (ignored).

Deep forensics skills

Forensics and the ability to reverse engineer malicious traffic flows and exploits are required to
make sense of sophisticated botnets. The tradecraft is beyond most network security professionals.
This is why it is important to consolidate these deep-dive skills at a central point within the
enterprise, with the ability to outsource more challenging problems to specialized external teams.

DDoS protection

Criminal botnets are typically used to harvest credentials, personal or financial information, or to
consume processing resources to make money on-line. Occasionally, they are used in anger to
launch Distributed Denial of Service attacks from thousands of zombie machines. Contemporary
examples have included Estonia and Georgia and, more recently, Canada. Note: most major carriers
now offer DDoS monitoring and protection to combat botnet-enabled DDoS attacks.

Ed Amoroso, “IT Conversations” podcast interview with Sandra Schneider, published 29 January
2006. Dr. Amoroso argues that “the only reason that network-based security (network-in-the-cloud
security) provided by the carriers has not occurred is that nobody is willing to pay for it.” He predicts
that the demise of Demilitarized Zones (DMZs) and security at the edge of networks, i.e. predominant
reliance on firewalls, will occur within two years, and that filtering of volume perturbations by the
ISP/carrier will be more effective at preventing Distributed Denial of Service (DDoS) attacks.

“Many parties contribute in some way to the botnet problem. The obvious culprits are the writers of
malware, the botnet controllers, and the parties who rent or pay for attacks by botnets. The criminal
law exists to deter and sanction this behaviour, but is clearly not sufficient. The scale of the problem
and the forensic and jurisdictional challenges of enforcement seem to have greatly reduced the
deterrent impact of the criminal law. These difficulties should not prevent the state from continuing to
pursue cyber-criminals to the extent possible given limited resources, but it cannot be the sole
approach. It is also necessary to ensure that others who are well-positioned to detect and prevent
attacks take reasonable steps to do so… Some have suggested that end-users could be fined or
sued in order to cause them to maintain system security… There appear to be a variety of measures
that ISPs can take that would help to impede the propagation of bot software or throttle botnet activity
on their networks. Some of these methods may constitute an unacceptable exchange of freedom and
privacy. With respect to DDoS attacks, ISPs can enforce “egress filtering”, which monitors IP packets
sent from their subscribers. Other mechanisms based on monitoring for traffic anomalies have also
been proposed. [A victim of a DDoS attack] may soon find it worthwhile to sue [the ISP]. However,
the plaintiff is contributorily negligent for failing to employ anti-DDoS services.” - Jennifer Chandler,
“Liability for Botnet Attacks: Using Tort to Improve Cybersecurity”, Canadian Journal of Law and
Technology, March 2006.

Block or black hole

Active botnet traffic consists of control channels and communications from the zombie machines.
Blocking control channels stops commands from getting to the infected machines and status
messages from returning to the threat agent, whereas blocking the zombie traffic seeks to quell the
attacks directly. Stopping all zombie communications is a much bigger task given the sheer numbers
involved, and has the unwanted (collateral) effect of disconnecting the users, and their legitimate
traffic, altogether. Targeting control channels is a much more surgical procedure but is technically
more difficult to achieve. Regardless of whom one blocks, the botnet remains intact, just paused to
some degree. There is also a good chance that this countermeasure will be detected by the threat
agent, who will take quick action to reconstitute and defend the botnet. One needs to treat the root
cause, not just the symptom.

The man-in-the-middle technique

This is a better tactic to counter a botnet. Once the victim machines, threat servers and control
channels are detected, it is not sufficient to use a blunt instrument such as blocking or black holing –
a more subtle, deceptive approach is required. First, one needs to isolate the zombie communications
and redirect them to a monitoring facility, where the legitimate communications are forwarded and the
malicious traffic is quarantined (blocked). Similarly, the control channels are redirected away from the
victim machines to a honey pot. The authorities proxy (fake) traffic to the control servers to fool the
threat into thinking that they still have control of the zombies. Meanwhile, the zombie machines
silence their malicious activities owing to the lack of control signals ever reaching them. The victim
machines are still compromised but remain inactive so long as the threat does not access them
directly. This tactic is effective but more complicated, because it requires that the authorities break
into a highly protected botnet control grid in order to perpetuate the ruse. This is a non-trivial task in
and of itself, and it also raises a number of legal, privacy, ethical and business questions.

Take down or take over

Once authorities have penetrated the botnet control channels, they have four choices: let the crimes
continue unabated; shunt the activities into a virtual environment (honey pot) where they can monitor
them; take over control of the botnet; or shut down the botnet, either by not commanding zombies to
do anything or by actively commanding them to remove the trojan code. Allowing crime to go
unchecked may be negligent, but taking over or taking down botnets places personal computers in
the control of police, who now have invasive access without consent.

Malcode sample

In order to take over a botnet, authorities need to obtain a sample of the hijacking malware. This
can be done in two ways: establish a honey pot and allow one of the machines in your lab to get
infected, or extract the malcode from a machine that is already infected. The honey pot method of
extraction is preferable because it does not involve the victim machine. Sometimes it is not practical
to wait to provoke an infection using honey pots, and authorities may need to ‘hack’ into a victim's
machine by following the same vulnerability-threat vector. The victim's machine is already
compromised; police would not need to create greater exposures, but would enter through the same
break-and-entry point created by the criminals. The legal predicament is: enter a victim's computer
without consent to extract a code sample, or permit the botnet to grow, possibly impacting millions of
Canadians and critical infrastructures. The real-world analogy would be entering someone's house to
stop a burglary in progress.

Help desk informs victim

One option is for ISP help desks or government authorities to call all the victims personally and talk
them through cleaning their machines. The two difficulties with this are that it is very expensive and
that people are frequently incredulous when cold-called on security matters. Often clients, rather than
thanking the ISP or authorities for the intervention, accuse them of spying.

Isolate and wait until patch cycle

A common approach to botnet mitigation is to block the control channels along with outgoing zombie
traffic, and wait for the automatic updates on most people's machines to eradicate the malcode. This
way, consumers have approved the updates. The disadvantage is that the carriers must handle the
malicious bandwidth in the meantime. If people do not update their machines, or do not have AV,
then they will remain zombies.

Quarantine or remove user

ISPs also have the option of quarantining users and limiting bandwidth or functionality until the user
is cleaned. Some providers revoke all user access, in accordance with (IAW) user agreements, until
the user can prove that their machine is cleaned. This is expedient but not always good customer
service. Blocking repeat offenders, including the worst wholesalers, may be good business for the
whole community.

Popup on client desktop

One option is to push a message to the client desktop (browser) telling users that they are infected.
Again, this may be read with some incredulity.

Eradicate at a push of a button

The fastest but most radical means to clean millions of machines in an instant is to propagate a
stop/erase code command to all the zombie machines through control channels that have been taken
over. For fast-growing destructive botnets this may be the last resort. There are some risks: the
threat may have booby-trapped the coding to trick authorities into issuing a self-destruct action, and
the process itself is somewhat intrusive. Care must be taken to reverse engineer the malcode and
test the action on lab machines before propagating the command to victim machines.

Predator bots

A benevolent bot can be used to hijack or clean a malevolent one. The idea is that a good bot can
spread in a non-deterministic fashion, reaching out across the Internet to supplant bad bots.
Afterwards it deletes itself. The challenge is that this operates in uncharted legal waters. The
authorities may lose control of their bot, or the good bot may actually end up causing more harm to
people's machines or bandwidth use.
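The predator-prey dynamic underlying the benevolent-bot idea is commonly modelled with Lotka-Volterra equations. The following is a minimal Euler-integration sketch; the coefficients and starting populations are purely illustrative and not fitted to any real outbreak.

```python
# Minimal Lotka-Volterra sketch: 'prey' = infected hosts, 'pred' = predator bots.
# Parameters (a, b, c, d) are illustrative only.
def simulate(prey, pred, steps=1000, dt=0.01,
             a=1.0, b=0.1, c=1.5, d=0.075):
    """prey' = a*prey - b*prey*pred ; pred' = d*prey*pred - c*pred
    Integrated with a simple forward-Euler step, clamped at zero."""
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey = max(prey + dprey, 0.0)
        pred = max(pred + dpred, 0.0)
        history.append((prey, pred))
    return history

hist = simulate(prey=40.0, pred=9.0)
print("final populations (infected, predators):", hist[-1])
```

The classic oscillation these equations produce illustrates the control problem noted above: without an explicit kill condition, the predator population does not simply die out once the prey is suppressed.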

"This morning, the SANS Internet Storm Center posted a note about an increase in ICMP
traffic, including a quick initial analysis. As it turns out, yet another worm, this time the
W32/Nachi.worm, is going around taking advantage of the RPC DCOM vulnerability. The
twist this time: the worm will actually clean up machines. It tries to download the correct
patches from Windows Update and remove the Blaster worm."

A paper entitled ‘Predators: Good Will Mobile Codes Combat against Computer Viruses’
argues for the use of ‘Predators’ to combat viruses. Predators are described as “good will
mobile code which like viruses, travel over computer networks, replicate and multiply
themselves” (Toyoizumi & Kara, 2002, p11). The paper shows that ‘Lotka-Volterra’
equations can be used to model the interaction between predators and viruses. ‘Lotka-
Volterra’ equations are widely used to model ‘predator-prey’ relationships in the natural
world. It has been postulated that the use of predators can be used to overcome the
weaknesses of current anti-virus protection. A predator can search for and find code by
looking for telltale signs of viruses. It proceeds to eliminate the virus and attempts to
propagate itself. The replicated predator is sent to all vulnerable machines connected to
the current one and the process begins again. The predator only multiplies if it destroys
the target virus. Theoretically predators can be effective in ridding systems of viruses.
The paper postulates that if a predator had been built for the ‘Code-Red’ virus on July
19th 2001, the total number of machines infected could be limited to 2000, rather than the
359,000 machines that were affected. If predators can be so effective, should they be
utilised to combat viruses? The use of predators to defend systems from malicious
pranks involves an ethical component. This ethical component must be considered
before use is made of a predator. A “Protection Mechanism Grid” (Neubauer & Harris,
2002, p270) can be used. The article states: “The implementation of system protection
mechanisms potentially involves actions by both the system owner/administrator and the
organisation that provides the protection software/service. The behaviour of the
protection provider can be either passive or active” (Neubauer & Harris, 2002, p273). A
good example of a predator is the W32/Nachi.worm. It has been rated by McAfee
Security as a moderate security risk. It tries to delete another worm called
W32/Lovsan.worm and place the relevant Microsoft patches onto a computer system that
it has invaded.

Kill the control channel

Finding and taking down (interfering with, infiltrating or blocking) control and exfiltration channels
often results in quelling malicious traffic across the network. However, the user's machine is still
infected and vulnerable.

Black listing

Providers can black hole the IP addresses and domains of zombie machines, control servers or
end-threat groups. Blocking zombies protects other users but denies service completely to those
users. Black holing malicious IPs/domains can cause collateral damage; when a threat domain/IP
can be isolated, it can be very effective. Black lists must be constantly revised, since threat groups
change addresses often (fast flux). There are liability and QoS issues if friendly addresses are
blocked, but conversely liabilities for not stopping attacks. Surgical blocking, frequent auditing of
blacklist validity, and resolution of collateral-address problems through a help desk are important
steps towards a balanced approach.

Gated communities

White lists, grey nets, dark nets, clean pipes and trusted connectivity are proactive means of
managing the risk from botnets. In this scenario, users would opt into restricting whom they talk to.
Trusted third parties could provide general white lists to supplement user profiles (lists).

Strike back

A very effective option in all means of warfare is to counter-attack. Authorities could isolate threat
infrastructures and either infiltrate them using CNE or use Computer Network Attacks such as DDoS.
Authorities will need to conduct careful target templating and weapon selection before launching a
counter-attack. A counter-attack, if used too often, can be manipulated to provoke a reaction and
redirect it towards a new victim. There is also the risk of collateral damage. Attacks by Canadian
authorities can also be construed as acts of war, since most attacks originate from foreign soil. Do
police carry out offensive CNO under warrant, covertly, or overtly in uniform? Notwithstanding,
offensive (proactive) cyber operations against botnets are the most effective means of dealing with
them and the only manner in which police can win the battle. Not to engage in offensive operations
may be negligent. This is akin to police not arresting dangerous armed criminals owing to a decision
not to arm officers with guns.

Pre-emptive strike

The most effective strategy for combating botnets is to target threat networks before the botnets go
active. This necessarily requires good intelligence and a willingness to conduct pre-emptive
proactive strikes against the threat network or its operators.

Disposal of botnets and Rendition
Do ISPs hand over control of a criminal robot network to authorities? Are they obliged to do so?
What if the botnet is of foreign intelligence potential? If a botnet is being used to rob Canadians, then
police at some level should be made aware. The problem of detecting, isolating and delivering
botnets to police is resource intensive. In a fashion, ISPs would be doing a significant amount of
police work with neither the public budgets nor the legal authorities. ISPs have a great deal of
expertise already, and as is often the case, authorities would come to rely on these capabilities more
and more. A slipperier slope is handing off botnets for foreign intelligence (FI). Clear conditions and
policy need to be in place to permit this to occur, with controls around using carrier infrastructures
and SIGINT platforms. There may need to be a distinction between voluntarily reporting incidental
cases and deliberate targeting. The provider must be able under the law to report or hand over both
criminal and FI bots, but not be obliged or directed to do so – especially without remuneration.
Conversely, providers should be cautious of deliberately hunting down bots on behalf of authorities
for payment per bot. Subcontracting botnet detection, investigation and mitigation to providers must
occur, but the rules of engagement need to be clear.

Physical arrest

At the controlling end of every bot is a person. Let's not forget traditional police enforcement.
Arresting bot herders is hard because they are most often operating abroad. There also needs to be
a discussion of Canada's stance on rendition of foreign military actors. Meanwhile, it is possible to
focus botnet detection on control nodes and networks operating in Canada for the purpose of making
an arrest.

Traffic Shaping

Throttling bandwidth, ports or applications can be considered where blocking an IP address (zombie
or controller) is too harsh, or when there is real risk of collateral damage.
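Throttling of this kind is typically implemented with a token-bucket rate limiter. A simplified sketch follows; the rate and burst figures are arbitrary, and a real shaper would queue or delay packets rather than simply reject them.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a gentler alternative to outright blocking.
    'rate' tokens (here, bytes) accrue per second, up to a 'burst' ceiling."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill tokens for the elapsed interval, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False   # a real shaper would delay the packet, not just refuse it

# ~1 Mbit/s sustained with a 10 kB burst allowance (illustrative figures).
bucket = TokenBucket(rate=125_000, burst=10_000)
print(bucket.allow(1500))   # early packets fit within the burst
```

The appeal for carriers is that throttled traffic degrades gracefully: a zombie's flood is slowed to a trickle while the subscriber's ordinary browsing remains usable.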

On July 11th, Bell submitted a detailed response to the CRTC regarding the Canadian
Association of Internet Providers' (CAIP's) application requesting that Bell be ordered to
cease and desist from traffic-shaping measures in connection with its wholesale ADSL
access services. In mid-May, the CRTC denied a request by CAIP for interim relief and
established a proceeding to look at the broader issue. The final ruling authorizing traffic
shaping was subsequently given. As a result of the explosive growth in bandwidth usage
and the resulting increase in network congestion, carriers have taken objective measures
- first with their retail customers and more recently with their wholesale customers - to
ease congestion by limiting the bandwidth of peer-to-peer (P2P) file-sharing applications
during peak hours. P2P applications use a disproportionate amount of bandwidth compared
to other types of Internet traffic. For the most part this bandwidth is malicious, illegal and
contravenes acceptable use policies. Copyright infringement, piracy, child exploitation
and botnets top the list of offences. The use of Deep Packet Inspection (DPI)
technology to shape Internet traffic is a necessary tool to improve the experience of the
vast majority of customers who are held hostage by the few users who slow everyone
down and propagate toxic content or attacks. Bell estimates that, without network
management solutions, as many as 790,000 customers could be negatively affected by
network congestion by the end of Q1 2009. One third of Bell's bandwidth is consumed in
this fashion, costing $1B to transport questionable content. Traffic shaping is just one
prong of a three-pronged strategy used to manage capacity and security on backbone
networks. The other two are investing in capacity through managed capital spending
and moving towards usage-based billing. None of these initiatives on its own is either
an 'instant fix' or the sole solution to network congestion. By ensuring a robust network,
competition and market forces, carriers continue to encourage
innovation. Wholesale GAS and consumer Internet service share the same access

network. DPI technology and the network management solution are thus applied to
wholesale and retail end-users in the same manner. Internet bandwidth demands on the
core network have faced incredible growth in the last two years, primarily due to the
popularity of peer-to-peer (P2P) applications such as BitTorrent, video streaming
applications such as YouTube, and the propagation of botnets. These demands have
placed significant pressure on the carrier network, creating problems due to capacity
constraints during the peak periods (generally in the evening). Carriers therefore began
managing their networks by limiting the rate speeds of P2P applications of their retail
customers during the peak evening periods. Carriers intentionally slow down, but do not
block, P2P applications. This relieves congestion on the network and allows other
applications (such as video streaming on YouTube) to move faster. The program has
been extended to business retail High Speed customers and to wholesale ISP customers
that purchase Gateway Access Service (GAS), a wholesale DSL product. Carriers have
been granted the legal right to manage their networks through traffic shaping under
their tariffs.

Section 8 Issues and Concerns
This section describes legal, privacy and ethical concerns and the issues surrounding information
exchange and national response.

8.1 Tactical Challenges

8.1.1 Extracting C&C Information from Malware
Modern botnet malware typically contains the full spectrum of malicious capabilities found in top-of-
the-line malware [Reference 9]. As discussed previously, the malware becomes a ‘botnet’ if it
contains features that allow the infected host to be controlled by a C&C infrastructure that is capable
of managing multiple bots. This allows botnet response teams to identify botnets through the
characteristics of their communications with the C&C infrastructure and once the path (or paths) to
the C&C is (are) neutralized, the botnet is effectively taken down.
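One communication characteristic that often betrays C&C traffic is regular 'beaconing': automated check-ins at near-constant intervals, unlike the bursty timing of human browsing. A minimal sketch of a jitter-based heuristic, using invented timestamps; real detection would combine many such features.

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Coefficient of variation of the gaps between connections.
    Low jitter (small score) suggests automated C&C check-ins;
    human-driven traffic produces much more irregular gaps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None   # not enough data to score
    return pstdev(gaps) / mean(gaps)

bot_like = [0, 60, 120, 181, 240, 300]     # ~60-second check-ins (invented)
human_like = [0, 5, 47, 300, 302, 1200]    # irregular browsing (invented)
print("bot-like:", beacon_score(bot_like))
print("human-like:", beacon_score(human_like))
```

A score near zero across many connections to the same external endpoint is exactly the kind of C&C characteristic the takedown process can key on.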

In order to make botnets robust against law enforcement takedown notices or hijack efforts by
security researchers and criminal competitors, botnet controllers need to invest substantially more
time and money on C&C infrastructure than they do on malware construction. Even though there are
a growing number of techniques that criminals can use to make C&C robust against loss,
enumeration and operator disclosure, each of these alternatives also makes the chosen C&C more
difficult to change. Every serially generated malware sample relies upon the same C&C infrastructure
to enable and maintain botnet functionality. Therefore, the C&C becomes the “Achilles heel” for the
detection and mitigation of the botnet.
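Because serially generated samples share the same C&C, grouping observed samples by the C&C endpoints extracted from them is a practical triage step: many distinct hashes converging on one endpoint strongly suggests a single botnet family. An illustrative sketch with entirely fabricated sample identifiers and domains:

```python
from collections import defaultdict

# Hypothetical (sample_id, c2_endpoint) pairs produced during malware triage.
OBSERVATIONS = [
    ("a1f3", "evil-flux.example"),
    ("b72c", "evil-flux.example"),
    ("c9d0", "evil-flux.example"),
    ("d4e8", "other-c2.example"),
]

def cluster_by_c2(observations):
    """Group malware samples by shared C&C endpoint. Large clusters
    point at the 'Achilles heel' infrastructure worth neutralizing."""
    clusters = defaultdict(set)
    for sample, endpoint in observations:
        clusters[endpoint].add(sample)
    return dict(clusters)

for c2, samples in cluster_by_c2(OBSERVATIONS).items():
    print(c2, "<-", len(samples), "samples")
```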

Botnets, such as APTs, that exist within an enterprise present a more difficult problem for the
following reasons.

• If the botnet malware identifies that it is being executed within a sandbox or other
virtualization technology, it may assume that it is being analyzed and become benign and not
exhibit any botnet-like behaviour or it may self-destruct and leave no trace of its existence.

• Most botnet malware will “test” the network, e.g. attempt to access a public web site or send
a spam message. If the malware does not pass the network tests, it may behave differently,
e.g. attempt to compromise local hosts and not communicate with the C&C infrastructure until
Internet access is available.

Recovery of malware samples in these situations may not be feasible and the enterprise will require a
different approach (discussed in Section 5.2.2) in order to identify the method(s) of communicating
with the C&C infrastructure.
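A common way to work around the 'network test' behaviour described above is to give the sandbox a simulated Internet, so the malware believes its tests succeed while it remains contained; tools such as INetSim take this approach. The following is a minimal sketch of a catch-all HTTP responder, not any specific tool; the responses and use of an ephemeral port are arbitrary choices for the example.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class FakeInternet(BaseHTTPRequestHandler):
    """Answers every GET with 200 OK, so sandboxed malware's
    connectivity tests appear to succeed without real Internet access."""
    def do_GET(self):
        body = b"OK"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet; real tools log requests for analysis

# Port 0 asks the OS for any free port; the sandbox's DNS/default route
# would be pointed at this host.
server = HTTPServer(("127.0.0.1", 0), FakeInternet)
threading.Thread(target=server.serve_forever, daemon=True).start()
print("fake internet listening on port", server.server_address[1])
```

With the 'test' passing, the sample is more likely to reveal its C&C communication attempts inside the lab, which is precisely the recovery problem this subsection describes.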

8.1.2 Information Exchange

Information sharing is thriving between private sector entities, usually across similar CIs. Critical
co-dependency forms the impetus for full disclosure. The vast majority of cyber-attacks are directed
at financial institutions over carrier facilities; cyber-threats are fuelled by an exchange of wealth
(illicit financial exchanges).

8.1.3 Barriers to sharing

The greatest barriers to information sharing 12 and data exchange are:

• Trust through mutual verification and compliance. Organizations were understandably reticent
to share sensitive information with other organizations unless they could verify the security
posture of the receiving organization first hand. There must be measurable security
standards (horizontal) established across all sectors. Until organizations concede to mutual
security audits to universally accepted standards, there will be little exchange.
• Uniform data classification. A harmonized data classification scheme regarding the handling
of information, pursuant to its sensitivity, does not exist across sectors. In fact, it is an issue
within every sector including government where there is no reciprocity for facility and
personnel security clearances. We were provided examples by respondents of unauthorized
public disclosure of exceptionally sensitive data. Protected status simply does not carry the
same weight as Classified. This practice should evolve before Private sectors will share with
Public sector.
• Need-to-Share not Need-to-Know. The need-to-know doctrine was touted as the root cause of
the intelligence failure leading up to 9/11. The need-to-know principle puts the entire burden
on the person requesting the information to justify why they need it. But if you don't know
what you don't know, how can you ask for it? So people rarely do. The need-to-share doctrine
puts the obligation on the one with the information to ensure that all who require it in an
official capacity are actively provided it. The security controls are retained, but are now a
shared responsibility.
• Equities. Every security device, means and method has a dual investigative use. Plugging
unpublished vulnerabilities or reacting to threat intelligence in an uninformed manner can, in
some circumstances, compromise investigative capabilities. Conversely, not sharing
investigative tools, techniques and intelligence may leave one's own defences wide open to
attack. A football analogy shows the argument: if you allow your defensive line to practice
counters to your offensive line's secret plays, then eventually the other team is going to
catch on. Meanwhile, the defensive line has to adapt to attacks from a variety of teams that
the offensive line hasn't conceived of, or still thinks are secret. Both are kept in the dark,
but the defensive line is at the greater disadvantage because it must counter the greatest
number of attacks, of which only one need be successful. The solution is simple, assuming
that your defensive line is every bit as intelligent and trustworthy as your offensive line:
gather behind closed doors and share what you know, decide on a common strategy for how
to proceed, and exercise it privately. This can involve advanced detection and a measured
response under a convincing cover story. Historically, the equities argument, albeit false,
has favoured the offensive line, which typically retains a need-to-know

Proactive Cyber Defence and the perfect storm, Dave McMahon, Bell Canada 2009

stance. The equities dilemma is almost exclusively a public sector preoccupation. Who makes
the equitable decision regarding the sharing of information between teams, and ultimately
sectors? There does not appear to be an escalation policy regarding equities, so a billion-
dollar decision could be made by someone who is not invested with corresponding authority.
It was suggested that an executive-level committee be established to arbitrate matters of
equity, with CI sector representatives.
• Ambiguity in mandates. Combating botnets naturally involves crossing numerous public and
private sector mandates. Take, for example, a well-orchestrated broad-based cyber attack
against Canada's financial institutions from an organized criminal enterprise operating from a
foreign country with the complicity of the state. The telecoms sector needs to share information
with other providers globally, and with banks. But whose responsibility is it to investigate,
interdict, disrupt and prosecute the end threat agent? Remember that “blocking is not
stopping.” The procedural minefields are often: the presence of private Canadian information
(even if they are the target), operating in a foreign jurisdiction, interpretation of the lead
security agency role, the CIP portfolio, identification of threat agents (criminal, espionage,
terrorist, commercial) by motive even though these days cyber-threat agents are converged,
resourcing regardless of mandate, and the necessity to use investigative methods which
constitute offensive information operations to resolve the matter. There is a public-private
sector need for cyber security leadership. The lead security agency concept under the
Government Security Policy (GSP) is not understood for something as real as a botnet attack.
• Exchange mechanism. Should all the aforementioned hurdles be cleared, one more remains.
A mechanism needs to be engineered to securely exchange metrics, and this comes at a
cost. What is built, who builds it, who pays and who operates it? Could this be a jointly
funded and operated initiative?

8.2 Legal, Privacy and Ethical Concerns

As stated earlier, botnet attacks are proliferating at an alarming rate. The prevalence of this crime is
exacerbated by the fact that most affected individuals are unaware that they are victims; reporting of
the crime, therefore, is virtually nonexistent. In addition, law enforcement cannot act alone. Tier 1
carrier
networks, as gate keepers of the Internet, must play a role in combating such attacks. As such, it is
imperative that law enforcement employ a multi-faceted interdiction strategy allowing for the gathering
of evidence in advance of harm and for coordinating and centralizing efforts across public and private
sectors. Information on individuals needs to be gathered before they become victimized and, to this
end, Tier 1 carriers need to engage in threat intelligence.

However, before an effective interdiction strategy can be employed, it is critical to understand the
nature of the botnet crime. Technology lies at the heart of the botnet, and understanding the interplay
between the technology and the crime is key. Moreover, the nature of this crime often requires that
law enforcement touches personal information (from intelligence gathering through to seizing of
hardware for evidence). The technology behind the scenes, therefore, leads to a complex paradigm
of legal, privacy and ethical issues. To best understand these issues, one must first begin with asking
a very simple question: how is combating botnet attacks, and, more generally, combating cyber
crime, different than combating other types of crime?

The definition of “crime” is a malleable concept, dependent on context. 13 In this case, the context is
cyberspace. Originally focused on public networks, the definition of “cyberspace” has necessarily

See Peter Eglin, Sociology of Crime (London: Routledge, 1992) at 27 wherein he states:
Without criminal law there would be no “crime”. Virtually every form of human action – from taking others’ property to taking others’
lives – has in some time or place been deemed warranted, if not desirable. For example, until recently, slavery, non-consensual
intercourse within marriage and all form of execution have been widely practiced and have received official approval. This is not to say
that people have not objected to these forms of behaviour, but until proscribed by criminal law they cannot be considered “crimes”.
Crime, then, is a relative or legalistic, rather than an absolute, concept. Criminal law is constructed within society. Hence, crime, too, is
a social construction.

grown to include both public and private networks. This is, first, because such independently
managed networks are all linked together through a common protocol and, second, because the vast
majority of network-space, at some point, through some path, probably has access to the Internet.
Under this definition, cyberspace exists everywhere there is a networked device. The botnet threat
therefore, is potentially everywhere.

The inquiry must then shift to look at whether these are crimes that occur in cyberspace or crimes
that use cyberspace in order to occur. The answer is that both are true. Cyber crimes fall into two
general categories: (1) crimes that actually could not exist save for the advent of the computer i.e.
where the computer is the object of the crime; and (2) traditional crimes that can be carried out using
a computer i.e. where the computer is the tool of the crime. In a botnet attack, both categories are
true: the bot herder uses a computer to take over other computers.

It is important to highlight these approaches in order to determine what utility, if any, there is in
defining something as a cyber crime. There are those who suggest that there is no benefit in
demarcating legal approaches in real space from those in cyberspace. 14 And yet the rate at which
such crimes are increasing, leads one to a different conclusion.

Noted cyberlaw academic, Professor Lawrence Lessig advances the argument that cyberspace
changes the playing field. In justifying why cyberspace requires a rethinking of existing approaches
to law, Lessig said the following:

…there is an important general point that comes from thinking in particular about how the law
and cyberspace connect.
This general point is about the limits on law as a regulator and about the techniques for
escaping those limits. This escape in both real space and in cyberspace comes from
recognizing the collection of tools that society has at hand for affecting constraints upon …

Lessig’s approach makes sense, as the tools at law enforcement’s disposal are different. How the law
and cyberspace currently connect is the critical piece to consider in formulating an effective strategy for
dealing with botnets. The technology, which is so intimately related – as both the tool and the object of
the crime – must be considered.

As stated above, there is a dearth of data on botnet investigations for two reasons: very few cyber
crimes are ever reported and, when reported, they are often accounted for under another crime (such
as fraud). Data from victim reporting is limited to extant Criminal Code provisions. In this respect, it is
useful to consider analogues – crimes that have typically been identified as cyber crimes in the
Criminal Code 16 and to consider how such crimes have been dealt with.
The Criminal Code relies heavily on the term “computer system”. The definition of computer system is
found in the fraud section 17 and is as follows:

… a device that, or a group of interconnected or related devices one or more of which,

(a) contains computer programs or other data, and
(b) pursuant to computer programs,
(i) performs logic and control, and
(ii) may perform any other function;

In the mid 1990s a conference was held at the University of Chicago entitled “Law of Cyberspace”. At this conference, Justice Frank Easterbrook
told the assembled audience that there was no more a “law of cyberspace” than there was a “Law of the Horse”. Easterbrook went on to say:
…the best way to learn the law applicable to specialized endeavors is to study general rules. Lots of cases deal with sales of horses; others
deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or
with prizes at horse shows. Any effort to collect these strands into a course on “The Law of the Horse” is doomed to be shallow and to miss
unifying principles
Lawrence Lessig, “The Law of the Horse: What Cyberlaw Might Teach” (1999) 113 Harvard Law Review 501.
R.S., 1985, c. C-46.
Criminal Code s. 342.1(2).

The term “interconnected” would capture a networked environment (i.e. the Internet), and “computer
programs” could conceivably capture everything from HTML to cookies to TCP/IP. Furthermore, the
definition seems to fit “cyber crimes” as “computer system” appears in the following offences: warrant
of seizure of child pornography 18 and posting of child pornography, 19 luring a child, 20 warrant of
seizure of hate propaganda 21 and posting of hate propaganda, 22 unauthorized use of computer for
committing fraud 23 and Possession of device to obtain computer service. 24 Clearly then, the Internet
is a prominent feature. The presence of “computer system” is sufficient for categorizing something as a
cyber crime, but it is not necessary.

For example, “computer system” does not appear in the following two provisions; they nevertheless
deserve mention because they are overtly cyber crime-related. The first is mischief in relation to data,
s.430(1.1) of the Criminal Code, which states that:

Every one commits mischief who wilfully

(a) destroys or alters data;
(b) renders data meaningless, useless or ineffective;
(c) obstructs, interrupts or interferes with the lawful use of data; or
(d) obstructs, interrupts or interferes with any person in the lawful use of data or denies
access to data to any person who is entitled to access thereto.

This offence is most often charged in concert with fraud-related charges, for example unauthorized use
of a computer; the connection to botnet activity, however, is clear.

The second offence, which should be examined despite the fact that it does not expressly mention
“computer system” or “computer program”, is theft of telecommunication service. Section 326(1) of
the Criminal Code, reads as follows:
Every one commits theft who fraudulently, maliciously, or without colour of right,

(b) uses any telecommunication facility or obtains any telecommunication service.

Here too botnet activity is implicated.

These “cyber crimes” fall into roughly four categories: child exploitation; hate crimes; fraud and
identity theft; and theft of a telecommunication service. Botnets are particularly connected to the
last two categories. Categorization of this sort allows one to begin to address the question as to
whether the current Criminal Code provisions addressing cyber crimes are sufficient in deterring such
crimes, including that of botnet attacks.

As indicated previously in this paper, statistics that are available indicate that Criminal Code
provisions alone appear not to have been sufficient for contending with botnets. However, before it
can be said that it is the inadequacy of such provisions that is to blame, one also must consider
whether there are other enforcement gaps. One should recognize that the law does not function in a
vacuum. There are other enforcement mechanisms and regulatory modalities at play.

First, there are social norms. These are unwritten rules that have the effect of regulating how we
behave by threat of penalty ex post – offenders are likely to receive some degree of social
castigation. Then, there are also the effects of market forces. Under such forces, the tradeoffs
between economic and social utility are weighed and laws are enforced if it is economically efficient to
do so. Lastly, there is architecture – this is the physical element that controls human behaviour.

Criminal Code s.164.1(1)
(164.1(2) and (5))
Criminal Code s.320.1(1).
Criminal Code ss.320.1(2) and (5).
Criminal Code ss.342.1(1)(a)-(d).
Criminal Code s.342.2(1)

These are largely physical constraints intended to affect human behaviour (e.g. a fence to deter
trespassers in real space – or more germane to the topic at hand – a firewall to constrain unwanted
data). Thus, a holistic approach to combating botnets should consider the interplay of all four such
modalities (law, social norms, market forces and architecture).

All of these modalities have counterparts in cyberspace, but they are often overlooked in the
discussion about cyber crime. This is especially so in the architectural modality – probably because
the physicality of cyberspace is less apparent. However, though there are no physical walls in
cyberspace, there is code which can regulate human behaviour. The code constraints include
passwords, permissions and anonymizers – the range is almost limitless – anything that can shape
technological function can potentially control behaviour online.

This means that the police need to look at the end game, i.e. what behaviour is being targeted and
then reflect this outcome in the technology. In order to realize this goal it is imperative to take a closer
look at the architecture of the technology itself.

Richard Whitt, a telecommunications and Internet consultant, put it this way:

Informed by the way that engineers create layered protocol models, and inspired by
the analytical work of noted academics and technology experts, policymakers should
adopt a comprehensive legal and regulatory framework founded on the Internet’s
horizontal network layers. We must build our laws around the Net, rather than the
other way around.

This approach is informative because it validates the notion that an effective approach to botnet
interdiction must consider the underlying structure of the technology.

Professor Yochai Benkler 26 was among the first to think about the interplay of technology and law
using three layers: 1) the physical layer (cable, wireline), 2) the logical layer (software and standards
for interoperability), and 3) the content layer (text, images, information, etc.). Benkler argued that
each depended upon the others; any communication online depends on all three.

Lessig built on this idea of layers. 27 To demonstrate how these layers work, he contrasted two
communication systems: Speaker’s Corner and cable television. Speaker’s Corner, in London’s Hyde
Park, provides a venue where all are free to talk about whatever they choose to whomever will listen.
The physical layer of this communication system (the park) is free; the logical layer (the language
used) is free, and the content layer (what is said) is also free. All three layers in this context are free;
no one can exercise control over the kinds of communications that might happen there. In the cable
TV scenario, the physical layer (the wires that run into the house) is owned; the logical layer (only the
cable companies get to decide what programs are carried) is owned; and the content layer (the
shows that get broadcast are copyrighted shows) is owned. All three layers are within the control of
the cable TV company and no communications layer remains free. 28

This paradigm is useful because the Internet is also a communication system and also has these
three layers. At the bottom, in the physical layer, there are wires and computers and the network
which binds them together. These resources are owned. At the logical layer there is the network
design. It is both free and controlled. While standards and network protocols such as TCP/IP enable
data transfer, they also control how data are sent and received. Lastly, we have the content layer. In
cyberspace, even the content layer is both free and controlled. Some of the content is owned (for
example, copyrighted works) while other content is not (for example, open source applications).

Richard S. Whitt, “A Horizontal Leap Forward: Formulating A New Public Policy Framework Based on the Network Layers Model - An MCI Public
Policy Paper” (March 2004) online: MCI <>.
Yochai Benkler, “Property, Commons, and the First Amendment: Towards a Core Common Infrastructure” (White Paper for the Brennan Center
for Justice, March, 2001) online: Benkler Publications <>.
See The Architecture of Innovation.
Lawrence Lessig, “Lecture: The Architecture of Innovation”, (2002) 51 Duke L.J. 1783.

Once again, it is the interplay between the physical, logical and content layers that is important for
addressing cyber crimes and combating botnets. Effective code, whether law or syntax, cannot work
in a segregated state – all modalities and all layers must coordinate. For example, pernicious
material on the Internet can be filtered out (albeit with varying degrees of success). This can occur at
different stages of the network from firewalls that filter content for ISPs and/or institutions to filtering
applications that sit locally on hard drives that block content based on individual settings. This relies
heavily on the logical layer – the computer code. And yet, what is deemed pernicious is anything but
a function of architecture. Thus, any initiative to deter botnets must be applied within this framework.
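The filtering described above can be sketched in a few lines. This is an illustrative toy only (the blocklist entries are invented domains); real filters at an ISP gateway or on a local hard drive operate on packets and URLs, but the shape is the same: the logical layer – the code – enforces a policy, while the policy itself (what counts as pernicious) is supplied from outside the architecture:

```python
# Hypothetical blocklist: the policy. What belongs on it is a social or
# legal judgment; the code below merely enforces whatever it is given.
BLOCKLIST = {"evil.example.com", "malware.example.net"}

def filter_request(destination: str) -> str:
    """Allow or block a request based on its destination host."""
    return "BLOCK" if destination in BLOCKLIST else "ALLOW"
```

The same function could sit in an ISP firewall or in a local filtering application; only the location in the network, and who sets the blocklist, changes.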

As indicated earlier, botnets are not expressly mentioned in the Criminal Code. Therefore, to better
understand why an approach to law enforcement must consider not only the law but social norms,
market forces and architecture, it is useful to look at other cyber crimes. One cyber crime which
serves as a useful analogue is that of child luring. Although child luring can take place in real space,
its prevalence has increased with the advent of the Internet. It thrives on technology and technology
changes how the law and regulatory modalities fit together. Because technology has brought child
safety initiatives to the fore, they often pave the way to introducing new laws and technologies that
can later be used to fight wider issues of cyber crime. A strategy to fight child luring implicates similar
considerations to fighting botnets.

Section 172.1(1) of the Criminal Code is clear: thou shalt not use the Internet to prey on minors. 29
What is less clear, however, is whether the real-world constraints on predators offline exist online.
There can be little doubt that in real-space, a strange man standing inside a school yard with a
lollipop attempting to strike up a conversation with a child, would be approached with a great deal of
suspicion by any adult present and, hopefully, any child. However, in cyberspace, social norms
appear to change.

For one, there is an element of anonymity. People have a sense (whether valid or not) that they are
anonymous online, and offenders often pose as others. 30 More importantly, there is evidence that, in
fact, social norms online are different than those in real space. For example, youths may be more
willing to talk extensively and about intimate matters with adults online than in real space
environments. Status differences such as age and social background that may pose barriers to
comfortable face-to-face communications between adolescents and adults could be less of an
obstacle online. 31

In addition, most children have been taught not to talk to strangers in real space, and yet many of the
victims of exploitation appear to have been unconcerned with revealing personal information to
complete strangers online. One study reported that youths who send personal information (e.g.,
name, telephone number, pictures) to unknown people or talk online to such people about sex are
more likely to receive aggressive sexual solicitations—those that involve actual or attempted offline
contact. 32

In short, there appears to be a disconnect between how we expect people to behave and how they do
behave online. This indicates that an effective strategy for mitigating botnet attacks must consider
how victim and criminal behaviour changes in the online context.

Every person commits an offence who, by means of a computer system within the meaning of subsection 342.1(2), communicates with…
(a) a person who is, or who the accused believes is, under the age of eighteen years, for the purpose of facilitating the commission of an
offence … with respect to that person;
(b) a person who is, or who the accused believes is, under the age of sixteen years, for the purpose of facilitating the commission of an
offence … with respect to that person; or
(c) a person who is, or who the accused believes is, under the age of fourteen years, for the purpose of facilitating the commission of an
offence … with respect to that person
See e.g. the high-profile convictions of Christopher Neil and of Mark Bedford, who posed as a teen in order to lure his young victims.
A recent study out of the US by Wolak et al. (2004) indicated that in the majority of cases of child exploitation, victims are aware they are
conversing online with adults, finding further that only 5% of offenders pretended to be teens when they met potential victims online. Another study
indicated that offenders rarely deceive victims about their sexual interests. Sex is usually broached online, and most victims who meet offenders
face-to-face go to such meetings expecting to engage in sexual activity.
(Mitchell, Finkelhor, & Wolak, 2007b)

Market forces could also have a profound effect on mitigating criminal behaviour. In the child luring
example, these appear generally absent from the equation. Short of a few companies which may be
capitalizing on portraying themselves as child-friendly sites or emphasizing parental controls, there
seem to be no substantial adverse economic consequences for sites that take no such measures.
In short, industry has little incentive to participate in proactively sanctioning this type of
criminal activity.

Similarly, in the architecture modality, there is not a significant degree of control being afforded. As
discussed, offenders use Internet facilities such as email, instant messages, social networking sites
and, especially, chatrooms to meet and develop relationships with victims. All of these cyberspaces
have architectural components. However, analyzing the layers quickly reveals that these technologies
are generally devoid of constraints that would support the law.

Beginning with the bottom – the physical layer – there are wires, computers and networks, all of
which are owned. However, there is little in the luring context which takes aim at the elements in this
layer. Next, at the logical layer, the syntax and standards employed do not function in a way to
prevent luring. Short of using an IP address to potentially identify individuals in a post hoc fashion, it is
not readily apparent that code is being used to secure children’s safety online. Lastly, on the content
layer, one finds that it too is mostly unconstrained. Take, for example, chatrooms. Research indicates
that most of the online child molesters studied met their victims in chatrooms. 33 Further, these data
also accord with victim reports where about one third of youths who received online sexual
solicitations claimed they had received such solicitations in chatrooms. 34

That being said, users are generally free to say what they want within the “community standards” of
the chatroom – i.e. if one is in a chatroom for teen relationships, chances are there will not be an
application filtering or restricting what one can say to another chatroom user.

It appears, therefore, that a strategy to combat child luring should consider more than the Criminal
Code provision alone. This is informative, as the challenges faced in deterring this crime may be similar
to those faced in deterring botnets. Social norms, market forces and architecture change online, and so a
sound botnet interdiction strategy must consider how they play out in the botnet context.

It would be misleading, however, to say there are no cyber crime provisions in the Criminal Code
which consider the various modalities. Perhaps such provisions can also elucidate a better approach
to combating botnets. For example, when looking at the provisions targeted at reeling in the material
associated with child pornography and hate propaganda, there is a sense that the architectural
modality has been considered because there are constraints aimed at the technologies’ layers.

Specifically, the Criminal Code instructs that judges may issue a warrant to seize the computer (i.e. a
physical constraint) on which the material sits and to order both the deletion of the posted material
(i.e. a content constraint) and the forfeiture of any materials or equipment used in the commission of
the offence (i.e. a constraint that acts at both physical and content layers). Under s.164.1(1), on
warrant of seizure, if a judge “is satisfied by information on oath that there are reasonable
grounds to believe that there is material — namely child pornography … that is stored on and made
available through a computer system”, the judge may obtain the material and, ultimately, order its destruction.

Arguably, at the juncture when the judge intervenes, the harm has already ensued, and these
constraints speak more to limiting future harm than to interdicting it. This type of reactive
strategy means waiting for a complaint or incident to review log files and to deconstruct the events
which led to the compromise. From a law enforcement perspective, this often involves gathering

The National Juvenile Online Victimization Study.
Ybarra & Mitchell (in press).

evidence over time, and building a case to bring those responsible to justice. For the reasons stated
earlier, incident management is an ineffective means of combating botnets.

An effective botnet interdiction strategy requires proactive defence consisting of interdicting and
disrupting threat activities to counter an attack. The goal is to discover, infiltrate and disrupt botnet
activity before an attack is launched or to take down / take over operational botnets and render them
inert. Proactive defence, therefore, involves a multi-pronged approach that leverages technology.
There appears to be some understanding of this approach in the House of Commons as there are
several legislative initiatives (recently passed or soon to be) which will better position law
enforcement to deter cyber crimes – in advance of harm being done.

An Act to amend the Criminal Code (identity theft and related misconduct) 35 came into force on
January 8, 2010. This piece of legislation is particularly useful to examine because the amendments
to the Criminal Code support the notion of the uniqueness of cyber crimes and the reality that dealing
with them has to take into account the technology and the impact of the various regulatory modalities.

As stated earlier, one of the untoward implications of botnets is identity theft. Until now, however, the
Criminal Code did not contain a specific identity theft offence. In fact, most of the offences attempting
to capture identity theft were fraud provisions that predate the advent of the Internet save for offences
dealing with credit and debit cards 36 and “unauthorized use of computer”. 37 The biggest shortcoming
of these provisions is that they make it illegal to fraudulently use personal information, but they do not
cover the unauthorized collection, possession and trafficking of such personal information.
Intervention is difficult.

Seemingly, Parliament caught on (or was impelled to catch on by the media attention dedicated
to identity theft issues) that there was a need to close these legislative gaps. In short, not only was
a clear definition of the crime (i.e. identity theft) lacking, but law enforcement lacked the ability
to intervene until, more often than not, a crime had already occurred.

The Identity Theft Act creates three new offences:
1. obtaining or possessing identity information with the intent to use it to commit certain
crimes; 38
2. trafficking in identity information with knowledge of or recklessness as to its intended use in
the commission of such crimes; 39 and
3. possessing and trafficking in certain government-issued identity documents – expanding the
relevant documents from passports to social insurance numbers, drivers’ licences, birth
certificates, etc. 40

Furthermore, and importantly, it introduces the concept of restitution for the victim.

Before wading into possible areas of concern spawned by the legislation, it is important to recognize
what the Identity Theft Act does well. First and foremost, by criminalizing the foregoing, the Identity
Theft Act gives the police the ability to intervene at the stage of possession and trafficking – before
fraud has actually been committed.

Second, the Identity Theft Act is forward thinking and has tried to anticipate technology and not shy
away from it. For example, the Identity Theft Act does a reasonable job capturing the various
technical manifestations of identity, including biometrics which will undoubtedly be a significant

Bill S-4, An Act to amend the Criminal Code (identity theft and related misconduct), 2nd Sess., 40th Parl., 2009 (assented to 22 October 2009)
[Identity Theft Act].
Criminal Code s.342.
Criminal Code s. 342.1.

source of identity theft in future years. This is a good example of a provision wherein the legal
modality contemplates the technology and, ultimately, the architectural modality:

402.1 For the purposes of sections 402.2 and 403, “identity information” means any
information — including biological or physiological information — of a type that is commonly
used alone or in combination with other information to identify or purport to identify an
individual, such as a fingerprint, voice print, retina image, iris image, DNA profile, name,
address, date of birth, written signature, electronic signature, digital signature, user name,
credit card number, debit card number, financial institution account number, passport
number, Social Insurance Number, health insurance number, driver’s licence number or …

It is also helpful to examine the other modalities to see how well this new legislation matches up.
Seemingly, the social modality has had an effect on how people handle their personal information.
There is increased public consciousness regarding identity theft and recognition that this is no longer
a crime affecting niche groups only. Further, individuals are beginning to think twice before disclosing
“identity information”. The legislation appears to reflect social recognition that Canadians are
concerned about personal information and that custodians of personal information (namely,
companies) should ensure adequate safeguards are in place. This aspect also plays a part in
influencing the market forces modality.

The Identity Theft Act appears to recognize the power of market forces. As mentioned, in addition to
jail time for these fraudulent acts, identity thieves will now be facing the possibility of having to
reimburse their victims for costs incurred as a result of the fraud (e.g. the price of rehabilitating one’s
identity, replacing credit cards and documents and correcting one’s credit history, etc.). 41 It also
raises the possibility that, where the perpetrator is an employee of a company, corporate
pockets may be implicated as well.

Corporate liability can be found in the Criminal Code and is worth briefly mentioning here. Criminal
intent becomes attributable to an organization where:

(a) the organization benefits to some degree from the offence, and
(b) a senior officer is a party, or where a senior officer has knowledge of the commission of the
offence by other members of the organization and fails to take all reasonable steps to prevent
or stop the commission of the offence. 42

At this juncture it is important to expand upon the notion that there is a threshold of reasonableness
by which criminal intent can be imputed. To understand what is reasonable necessitates a look at
what qualifies as identity theft under the legislation. The new s.402.2 states (emphasis added):

(1) Everyone commits an offence who knowingly obtains or possesses another person’s
identity information in circumstances giving rise to a reasonable inference that the
information is intended to be used to commit an indictable offence that includes fraud, deceit
or falsehood as an element of the offence.

(2) Everyone commits an offence who transmits, makes available, distributes, sells or offers
for sale another person’s identity information, or has it in their possession for any of those
purposes, knowing that or being reckless as to whether the information will be used to
commit an indictable offence that includes fraud, deceit or falsehood as an element of the offence.

Act s.11.
See Criminal Code s. 22.

Two issues come to the fore: first, “reasonable inference” that the information is intended for fraud
and second, the person was “reckless” as to whether such information could be used for fraud. So, by
what standard is one to judge reasonableness and recklessness in the realm of identity theft?

The question of standards has a direct impact on the effectiveness of the architectural modality in
constraining human behaviour online. Recall the layers of the architectural modality. The desire to
constrain the ability to traffic and possess stolen identity information addresses the content layer as
data related to personal information is content. The physical layer, however, does not appear to be
touched directly. 43

It can be argued that the logical layer is also contemplated. Recall from above the discussion of
market forces, wherein the terms reasonableness and recklessness were mentioned. Underlying the
use of such terms is an assumption that one can determine what is reasonable in the circumstances
and when someone has been reckless. Perhaps this implies there is an applicable standard of care. It
should also be recalled that the logical layer is all about implementing syntax and standards.

However, it is unclear what standards could be used for ensuring privacy of identity information. This
is where the Identity Theft Act falls somewhat short – there are no clear standards. There are
standards for security only; there is no equivalent for privacy. Yet identity theft is, at bottom, theft of
personal information – a distinctly privacy-related concept. Clarification is therefore needed as to
which standards apply in order to build privacy in at the logical layer.

There are perhaps several areas which can provide guidance in deriving privacy standards. First,
there are industry standards, such as best practices in the field or those of standards-making bodies like
the Canadian Standards Association or the International Organization for Standardization. Second,
there are Privacy Commissioner orders that address standards (e.g. an order requiring encryption
on all devices containing personal information 44). Lastly, legislation may also be a source of
standards. The Supreme Court 45 has acknowledged that the breach of statute may imply a standard
of care. This means we can use privacy legislation such as the Personal Information Protection and
Electronic Documents Act 46, to ascertain such standards by looking at relevant sections, such as
those dealing with safeguards in Schedule 1.

Lastly, the common law can provide some insight. Although there has not yet been a significant
award of damages for identity theft in Canada (part of the reason being, of course, that no tort for
breach of privacy currently exists), it is happening in the United States – a jurisdiction with laws
similar to Canada’s, whose jurisprudence has been seen as persuasive in Canadian courts. 47

The point for law enforcement is this: standards are not always readily available but are, however,
critical – not only for imputing criminal negligence but, more importantly, in providing a framework for
implementation. This is relevant not only for level-setting expectations for organizations supporting
infrastructure but also for law enforcement, which will come into contact with personal information.

The authorities need to be prepared to operationalize privacy at all stages of a botnet
strategy. This is because large amounts of personal information of both attackers and victims may be
collected, used or disclosed in the course of an investigation. Furthermore, law enforcement may find
itself as the steward of large data stores of personal information in the course of collecting evidence.

Compare the proposed “clean internet act”, legislation aimed at the physical layer, which seeks to hold ISPs accountable for harmful material
disseminated using their services.
See IPC/ON order H0-004.
See Canada v. Saskatchewan Wheat Pool, [1983] 1 S.C.R. 205, where the SCC stated that, although there is no nominate tort of “statutory breach”,
the breach of a statute may imply a standard of care.
2000, c. 5 [PIPEDA].
See e.g. Randi A.J. (Anonymous) v. Long Island Surgi-Center, No. 2005-04976 (N.Y. Sup. Ct. App. Div. Sept. 25, 2007). In a three-to-two ruling,
a New York state appellate court ruled that punitive damages can be awarded for the unintentional disclosure of sensitive medical information. The
court concluded that punitive damages are available where the breach of patient confidentiality is the result of “willful or wanton negligence or
recklessness,” and awarding such damages would “deter future reprehensible conduct.”

All such personal information would be subject to the Privacy Act. 48 Such data
stores come in three primary forms.

First, there is victim information, which can come from a few sources: a citizen’s complaint to police, a
trouble ticket opened by a client with their service provider, or security appliances like honeypots 49 that
allow themselves to be infected. As stated previously, the challenge with this type of reporting is that
victims rarely know that they have been hacked. In addition, there is a significant cost associated with
victim notification, and it is unclear who would bear that cost.

The second source of personal information comes from control channels which are most often
discovered by deep surveillance mechanisms across very large autonomous address spaces. The
architecture of the Internet is such that ubiquitous network surveillance is only effective if service
providers have the appropriate infrastructure in place and are willing to assist.
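Control-channel discovery by network surveillance often keys on the regularity of bot check-ins. As a hedged illustration (an assumption of this sketch, not a method specified in the report), the code below flags source/destination pairs whose connection intervals show very low jitter, the periodic beaconing typical of C&C channels, using nothing more than flow timestamps.

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(flows, min_connections=4, max_jitter=0.1):
    """flows: iterable of (timestamp_seconds, src_ip, dst_ip).
    Returns (src, dst) pairs whose inter-connection gaps are suspiciously
    regular -- the periodic 'check-in' pattern of many C&C channels.
    max_jitter is the allowed coefficient of variation of the gaps."""
    by_pair = defaultdict(list)
    for ts, src, dst in flows:
        by_pair[(src, dst)].append(ts)

    suspects = []
    for pair, times in by_pair.items():
        if len(times) < min_connections:
            continue  # too few observations to establish periodicity
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg <= max_jitter:
            suspects.append(pair)
    return suspects
```

Note that this operates on network-centric metadata only (timestamps and endpoints), which matters for the privacy analysis later in this section.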

Third, there is threat intelligence, which is achieved through trusted partnerships and covert
investigations (online and offline). An organization will have difficulty detecting botnet activity without
external/foreign operations with trusted partners in the private and public sectors. 50

The Identity Theft Act is just one of several recent legislative initiatives which would support a botnet
interdiction strategy. There is also proposed legislation which deserves mention, namely lawful
access and the Electronic Commerce Protection Act 51.

ECPA is intended to combat spam and botnets 52 by providing a holistic approach to enforcement.
First, the enforcement provisions are intended to facilitate consultation, referral and information
sharing among relevant enforcement agencies 53 to improve the efficiency and effectiveness of
investigations and enforcement actions. In addition, because the spam problem is global in reach and
regularly implicates multiple jurisdictions, these agencies will also have the authority to share
information under written arrangements with foreign states where the information may be relevant to
an investigation under a foreign law that addresses substantially similar conduct. 54

Second, the bill recognizes that spam and botnets are privacy issues. The bill says that a "computer
program" 55 may not be installed on an individual's computer without the express consent of the
"owner or an authorized user of the computer system" or a court order. 56 This provision is aimed at
spyware and botnets, which are programs that install themselves on a computer without the user's
knowledge and carry out pernicious activities.

Similarly, the bill specifies that, absent the express consent of the sender or a court order, altering
the transmission data in an electronic message is prohibited. 57 This provision is aimed at combating
"man in the middle" attacks and the increasingly prevalent scam known as "phishing", where the
recipient believes the message is coming from a bona fide source and, in responding to the electronic
message, may disclose personal information which can then be used for fraudulent purposes. To
ensure that service providers are able to combat this problem, an exception has been made for such
enterprises where the alteration is effected for the purposes of "network management". 58

Although ECPA is not yet law, there are other reasons to conclude that service provider
intervention is permissible. Without wading into the debate over "net neutrality", it should be noted that

R.S., 1985, c. P-21.
See Section 4 above.
At present, the telecommunications and financial sectors operate the most powerful closed groups useful for investigating botnets.
Bill C-27 the Electronic Commerce Protection Act 2nd Sess., 40th Parl., 2009 (as passed by the House of Commons 30 November 2009) [ECPA].
Filtering spam and phishing e-mails and malicious content is important to reducing the risks of clients/constituents becoming infected. Over 98%
of all e-mails fall within this malicious category.
The Office of the Privacy Commissioner of Canada, the Canadian Radio-television and Telecommunications Commission and the Competition Bureau.
ECPA s.60.
ECPA uses the definition found in s. 342.1(2) of the Criminal Code.
ECPA s.8.
ECPA s. 7.
ECPA s.7(2).

the Canadian Radio-television and Telecommunications Commission has previously issued two decisions 59
permitting service providers to be excluded from the application of s. 36 of the Telecommunications
Act 60 with regard to their Internet services. Thus, one can assume that service provider intervention
should be regarded as permissible where it is related to the delivery of quality service.

There are other bills which also indicate that active reporting by service providers is an integral piece
of law enforcement strategy. For example, Bill C-58, An Act respecting the mandatory reporting of
Internet child pornography by persons who provide an Internet service 61, mandates notification to law
enforcement. This bill is considered complementary to two other bills introduced in spring 2009.

Bill C-46, An Act to amend the Criminal Code, the Competition Act and the Mutual Legal Assistance
in Criminal Matters Act, 62 provides police with the authority to obtain information from Internet
providers related to any criminal investigation. The authority includes preservation orders to freeze
data for up to twenty-one days, production orders compelling a company to provide a customer's e-
mail or IP address, and tracking orders to require mobile phone providers to use their networks to
assist police in finding a particular cell phone or smart phone user.

In addition, Bill C-47, An Act regulating telecommunications facilities to support investigations 63, not
only requires service providers to ensure they have the capability to intercept communications but it
also sets out notification requirements for service providers. That said, it is important to recognize that
implementation is an issue. Neither bill indicates what infrastructure is necessary to allow for such
reporting. Furthermore, it is unclear who would fund such infrastructure.

Law enforcement already has the ability through a warrant to obtain subscriber information (albeit a
slower process than many would like). However, these powers are only applicable if the police
already know the target’s identity. Thus, the laws are only superficially useful in combating botnets
and the majority of cyber crime.

The challenge has been that the decision to provide subscriber information to police without a
warrant has been left up to the provider, thus placing all liability with respect to privacy violations
and due diligence on the provider and none on the requesting police agencies.

Finally, Canada's private sector privacy legislation recognizes that disclosures without consent are
sometimes necessary. For example, PIPEDA (to which tier 1 service providers must adhere) sets out
scenarios which allow for disclosure without knowledge or consent in the context of subscriber
information. Section 7 outlines the exemptions as follows (emphasis added):

(1) … an organization may collect personal information without the knowledge or
consent of the individual only if

(b) … the collection is reasonable for purposes related to investigating a breach of an
agreement or a contravention of the laws of Canada

(2) … an organization may, without the knowledge or consent of the individual, use
personal information only if

(a) in the course of its activities, the organization becomes aware of information that it
has reasonable grounds to believe could be useful in the investigation of a

See Telecom Decision 99-4, and Telecom Decision 98-9 at par. 91.
1993, c. 38. Section 36 of this act prohibits a Canadian carrier controlling the content or influencing the meaning or purpose of
telecommunications carried by it for the public
2nd Sess., 40th Parl., 2009, (as first read in the House of Commons, 24 November 2009).
2nd Sess., 40th Parl., 2009, (as first read in the House of Commons, 18 June 2009).
2nd Sess., 40th Parl., 2009, (as first read in the House of Commons, 18 June 2009), also called “the Technical Assistance for Law Enforcement in
the 21st Century Act”.

contravention of the laws of Canada, a province or a foreign jurisdiction that has been,
is being or is about to be committed, and the information is used for the purpose of
investigating that contravention;

(3) … an organization may disclose personal information without the knowledge or
consent of the individual only if the disclosure is

(c) required to comply with a subpoena or warrant…;

(c.1) made to a government institution or part of a government institution that has made
a request for the information, identified its lawful authority to obtain the information and
indicated that
(i) it suspects that the information relates to national security, the defence of
Canada or the conduct of international affairs,
(ii) the disclosure is requested for the purpose of enforcing any law of Canada, a
province or a foreign jurisdiction, carrying out an investigation relating to the
enforcement of any such law or gathering intelligence for the purpose of enforcing
any such law, or
(iii) the disclosure is requested for the purpose of administering any law of Canada
or a province;

(d) made on the initiative of the organization to an investigative body, a government
institution or a part of a government institution and the organization
(i) has reasonable grounds to believe that the information relates to a breach of an
agreement or a contravention of the laws of Canada, a province or a foreign
jurisdiction that has been, is being or is about to be committed, or
(ii) suspects that the information relates to national security, the defence of Canada
or the conduct of international affairs;

(i) required by law.

The Act also states, in the note to clause 4.3 (Principle 3 – Consent):

In certain circumstances personal information can be collected, used, or disclosed without the
knowledge and consent of the individual. … When information is being collected for the
detection and prevention of fraud or for law enforcement, seeking the consent of the individual
might defeat the purpose of collecting the information.

The theme is a common one: collection, use and disclosure of personal information may take place
for the purposes of lawful investigations and law enforcement. Thus, to varying degrees, all of these
legislative initiatives recognize the role of the private sector in deterring crime. This approach has
merit in relation to the strategy proposed herein, where market forces are to be appropriately
considered. Where there is an immediate benefit to private enterprise (such as clean pipes to
increase bandwidth), the incentive to report malicious behaviour is high. However, where such
reporting requires capital outlay for new technologies, the incentive to implement the systems needed
to provide such reports is decreased.

This is where these legislative initiatives fall short: they do not appear to address funding for providing
reports in the context of a proactive defence strategy. It is reasonable to assume that where reports
stem from existing practices within an organization (i.e. a service provider comes across the
information in the course of looking at trending, network flow, etc.), the organization would be more
likely to provide such information to law enforcement. Where, however, reporting requirements go
beyond what is considered normative practice, organizations cannot be expected to comply without
some compensation.

It should also be remembered that just because the authorities can access information does not
mean that they should. Information may be gathered about a potential threat before it becomes
personal information: the origin of malicious activity can often be identified in general terms. Consider
the following two general types of information. First, there is network-centric information, which
captures data flows, sources and destinations, and connection behaviour. Second, there is content-
based information, which looks at the substance of the message. Neither type need implicate
identifiable information about an individual. However, when the two are linked to each other or to
publicly available information, a bigger picture begins to form. The authorities can use this information
to help obtain a warrant without running afoul of privacy laws.
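The distinction between identifiable and non-identifiable network-centric data can be made concrete. The sketch below is an illustration built on this report's assumptions (the function name, the /24 truncation and the keyed-hash scheme are not prescribed anywhere in the source): it aggregates flow records so analysts see patterns, per-prefix traffic volumes and pseudonymous source tokens, without retaining raw identities. Re-identification requires the secret key, which could be withheld until lawful authority (e.g. a warrant) is obtained.

```python
import hashlib
import hmac
import ipaddress
from collections import Counter

def aggregate_flows(records, secret_key):
    """records: iterable of (src_ip, dst_ip, bytes_sent).
    Produces network-centric statistics without retaining identifiable
    addresses: destinations are truncated to /24 prefixes, and sources
    are replaced by a keyed hash (pseudonym) that can only be matched
    back to an address by someone holding secret_key."""
    per_prefix = Counter()   # bytes sent toward each destination /24
    per_source = Counter()   # flow counts per pseudonymous source token
    for src, dst, nbytes in records:
        prefix = ipaddress.ip_network(dst + "/24", strict=False)
        per_prefix[str(prefix)] += nbytes
        token = hmac.new(secret_key, src.encode(), hashlib.sha256).hexdigest()[:12]
        per_source[token] += 1
    return per_prefix, per_source
```

A plain (unkeyed) hash would not suffice here, since the small IPv4 space makes unkeyed hashes trivially reversible by exhaustive search; the HMAC key is what makes the tokens non-identifying to anyone who lacks it.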

The following table describes the activities that take place during a botnet attack. The activities are
organized by incident management category (botnet attack phase) and, within each phase, by the
key actors involved. Legal and privacy implications for each activity are provided where applicable.

Note: This table is not a legal opinion and is not intended to provide legal advice of any kind in regard to whether any of the activities described herein
are compliant with legislative requirements. Legal counsel should be consulted in advance of carrying out any of the activities described herein.

Prevention Phase

IT Security Staff (includes system and network administrators)
1) Activity: Monitor logs and alerts from botnet safeguards such as firewalls, proxy servers, DLPs, IDSs, AV and URL/content filters. Monitor application logs and transactions.
   Implication: Provided this monitoring activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Information gathered could include usernames, timestamps, transaction details, personal information, email addresses, email contents, file attachments, IP addresses and destination URLs.
   Implication: Provided this information gathering activity is conducted in the course of network management, there do not appear to be any legal/privacy issues associated with this activity. Where possible, information gathering should be done in the aggregate without identifying the individual.
3) Activity: Monitor threat advisory services and keep safeguard configurations up to date.
   Implication: Provided there is no personal information involved in this activity, there do not appear to be any legal/privacy issues associated with this activity.

Carriers (Telcos)
1) Activity: Monitor IP traffic (packets) across the network.
   Implication: Provided this monitoring activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Monitor threat advisory services and keep safeguard configurations up to date.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Internet Service Providers
1) Activity: Monitor IP traffic and application-level traffic, e.g. email and web traffic, within the ISP's jurisdiction.
   Implication: Provided this monitoring activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Monitor threat advisory services and keep safeguard configurations up to date.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Users
1) Activity: Perform daily functions.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Back up working files and data to a secure server or removable media.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
3) Activity: (Home users) Maintain local safeguards such as AV, firewalls and application/OS-level patches.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Bot Herders
1) Activity: Attempt to release new botnet malware into the Internet through email, previously infected hosts and infected web sites.
   Implication: There are a number of Criminal Code provisions which could become relevant depending on the bot herder's activity, such as unauthorized use of a computer for committing fraud (s. 342.1(1)(a)-(d)), mischief in relation to data (s. 430(1.1)), etc.
2) Activity: Operate undetected botnets for the purposes of carrying out DDOS attacks, spamming and identity fraud.
   Implication: Same as (1) above.

Law Enforcement Agencies
1) Activity: Monitor threat advisory services.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Monitor cyber criminal activity either through local botnet monitoring facilities or in conjunction with ISPs and carriers.
   Implication: If monitoring facilities require disclosure of personal information from a third party, then that third party must have the legal authority to disclose such information. Any disclosure of personal information without the knowledge of the individual must be in accordance with a legislative exemption or the result of a warrant.
Detection Phase

IT Security Staff
1) Activity: Assess the security incident and determine whether it is real or a false positive.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: If real, commence incident response procedures. Identify the nature of the attack and the infected hosts.
   Implication: Provided this identification and response activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
3) Activity: Gather information (logs, real-time IP traffic and bot agent sessions) for analysis.
   Implication: Provided this analysis activity does not involve unauthorized disclosure, there do not appear to be any legal/privacy issues associated with this activity.
4) Activity: Perform initial containment if necessary, e.g. shut down email servers, restrict incoming/outgoing Internet access.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Carriers (Telcos)
1) Activity: Assess the security incident and determine whether it is real or a false positive.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: If real, commence incident response procedures. Identify the nature of the attack and the infected hosts.
   Implication: Provided this identification and response activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
3) Activity: Gather information (logs, real-time IP traffic and bot agent sessions) for analysis.
   Implication: Provided this analysis activity does not involve unauthorized disclosure, there do not appear to be any legal/privacy issues associated with this activity.

Internet Service Providers
1) Activity: Assess the security incident and determine whether it is real or a false positive.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: If real, commence incident response procedures. Identify the nature of the attack and the infected hosts.
   Implication: Provided this identification and response activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
3) Activity: Gather information (logs, real-time IP traffic and bot agent sessions) for analysis.
   Implication: Provided this analysis activity does not involve unauthorized disclosure, there do not appear to be any legal/privacy issues associated with this activity.

Users
1) Activity: Advise IT security staff or the ISP.
   Implication: Provided this advisement does not implicate another individual's personal information, there do not appear to be any legal/privacy issues associated with this activity.

Bot Herders
1) Activity: Monitor bot agents and C&C servers for signs that the botnet has been identified.
   Implication: There are a number of Criminal Code provisions which could become relevant depending on the bot herder's activity.

Law Enforcement Agencies
1) Activity: Determine whether the incident requires LEA action. If so, gather evidence from carrier, ISP and local facilities, including source and destination IP addresses and information exchanged (pictures, login credentials, sensitive data).
   Implication: Gathering of any evidence at this juncture would most likely require a warrant or a production order.

Analysis Phase

IT Security Staff
1) Activity: Analyse the collected information.
   Implication: Provided this analysis activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Identify the infected hosts.
   Implication: Provided this identification activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity. Where possible, IP addresses should not be linked with other identifiable information.
3) Activity: Identify the nature of the infection and the source. This may involve engaging third-party incident analysis teams from the carrier, ISP or independent labs.
   Implication: Depending on how the identification takes place, legal/privacy issues could be associated with this activity. Legal counsel should be consulted in advance of carrying out this activity.
4) Activity: Collect malware samples.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
5) Activity: Send malware samples to the AV vendor.
   Implication: Provided such samples do not include personal information, there do not appear to be any legal/privacy issues associated with this activity.

Carriers (Telcos)
1) Activity: Analyse the IP traffic exchanged between the source and destination.
   Implication: Provided the analysis involves IP addresses only and these are not linked to any other identifiable information, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Commence detailed traffic and message analysis through deep packet inspection and application-level monitoring tools.
   Implication: Provided this analysis activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
3) Activity: Identify the nature of the infection and the source. Under special circumstances, e.g. at the request of an LEA, insert "man in the middle" or session decryption technology to recover plain text transactions from an encrypted (e.g. HTTPS) session.
   Implication: Depending on how the identification takes place, legal/privacy issues could be associated with this activity. Legal counsel should be consulted in advance of carrying out this activity.
4) Activity: Notify the affected ISPs and government agencies.
   Implication: Legal counsel should be consulted in advance of carrying out this activity to ensure there are lawful grounds for such notice.
5) Activity: Collect malware samples.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
6) Activity: Send malware samples to the AV vendor and/or commence reverse engineering.
   Implication: Provided such samples do not include personal information, there do not appear to be any legal/privacy issues associated with this activity.

Internet Service Providers
1) Activity: Analyse application-level traffic, such as email or web transactions, exchanged between the source (bot herder) and destination (infected host).
   Implication: Provided the analysis involves IP addresses only and these are not linked to any other identifiable information, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Identify the nature of the infection and the source.
   Implication: Provided this analysis activity is conducted internally in the course of network management, there do not appear to be any legal/privacy issues associated with this activity.
3) Activity: Collect malware samples.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
4) Activity: Send malware samples to the AV vendor.
   Implication: Provided such samples do not include personal information, there do not appear to be any legal/privacy issues associated with this activity.

Users
1) Activity: Await direction from the ISP or IT security staff.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Do not continue to use the infected host.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Bot Herders
1) Activity: Determine the extent to which the botnet has been identified and by which ISP.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Law Enforcement Agencies
1) Activity: Analyse the collected information. This could include sensitive personal information such as bank account numbers and login credentials. It could also include criminal transactions such as unauthorized withdrawals from bank accounts or unauthorized purchases.
   Implication: Determine in advance of carrying out this analysis whether there is lawful authority to do so. Legal counsel should be consulted in advance of carrying out this activity.
2) Activity: Identify the perpetrators and the victims.
   Implication: Determine in advance of identifying individuals whether there is lawful authority to do so. Legal counsel should be consulted in advance of carrying out this activity.

Response Phase

IT Security Staff
1) Activity: Move infected hosts to a treatment segment.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Move operations to a DR site if necessary.
   Implication: There do not appear to be any legal/privacy issues associated with this activity, provided that the DR site is in Canada.
3) Activity: Apply OS and other security updates to uninfected hosts.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
4) Activity: Update security safeguards in the operational network, e.g. URL filters, AV, IDS signatures.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
5) Activity: Apply OS updates, AV and IDS signatures and other local safeguards to the infected hosts. Verify that the updates remediate the botnet infestation.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Carriers (Telcos)
1) Activity: Apply drop lists and other safeguards to defeat DDOS and spam attacks.
   Implication: Provided that the IP addresses used for drop lists are not associated with an individual, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Provide information to LEA investigative teams when necessary, e.g. under warrant.
   Implication: Legal counsel should be consulted in advance of carrying out this activity to ensure there are lawful grounds for providing such information.
3) Activity: Advise clients and the ISP once the attack has been remediated.
   Implication: Provided no personal information is disclosed, there do not appear to be any legal/privacy issues associated with this activity.

Internet Service Providers
1) Activity: Update safeguards such as spam filters to redirect spam email.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Apply OS updates, AV and IDS signatures and other safeguards where necessary to identify and remediate the botnet infestation.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Users
1) Activity: Switch to an uninfected host.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Bot Herders
1) Activity: If the bot herder determines that one or more bot agents have been detected:
   • re-vector the infected host to a new C&C server
   • download new malware to the infected host
   • re-vector the C&C server
   • shut down the bot agent on the host
   • shut down the host
   Implication: There are a number of Criminal Code provisions which could become relevant depending on the bot herder's activity.

Law Enforcement Agencies
1) Activity: Obtain search warrants and any other legal documents required to seize computers and apprehend cyber criminals. This may include PCs from victims of cyber crimes as well as those that belong to cyber criminals.
   Implication: Provided that the search is conducted in accordance with the approved warrant, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Liaise with other LEAs if the botnet is run from other jurisdictions, either within Canada or from another country.
   Implication: Whether there is a legal/privacy issue depends on where the other LEAs are located and what information is exchanged. Information sharing should take place only once data sharing agreements are in place (especially with foreign states).
Recovery Phase

IT Security Staff
1) Activity: Restore base operating system functions to infected hosts using a trusted image.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Re-install applications on hosts from trusted media.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
3) Activity: Restore data and files from trusted backup media.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
4) Activity: Install OS and safeguard updates to defend against future attacks.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
5) Activity: Move clean hosts back into the operational network.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Carriers (Telcos)
1) Activity: Update messaging analysis, traffic analysis and drop lists to detect and defeat future attacks.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Internet Service Providers
1) Activity: Update internal safeguards, e.g. email and URL filters, to detect and defeat future attacks.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Users
1) Activity: (Home users) Restore from a clean image or a trusted system restore tool. Apply OS and AV updates to the infected host.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Recover files from trusted backup media.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

Bot Herders
1) Activity: Download new malware to hosts using the same or different attack vectors.
   Implication: There are a number of Criminal Code provisions which could become relevant depending on the bot herder's activity.
2) Activity: Re-establish communications with bot agents.
   Implication: Same as (1) above.
3) Activity: Resume criminal activity.
   Implication: Same as (1) above.

Law Enforcement Agencies
1) Activity: Arrest and prosecute cyber criminals.
   Implication: Provided criminal procedure has been followed, there do not appear to be any legal/privacy issues associated with this activity.
2) Activity: Update analytical tools to incorporate the characteristics of the recent attack.
   Implication: There do not appear to be any legal/privacy issues associated with this activity.

In sum, cyber crimes are substantively different from real-space crimes and require a re-thinking of
traditional means for dealing with crime. Law enforcement must take a multi-pronged approach to
combating botnet threats, one that considers the modalities described above. The regulatory
modalities described earlier – law, social norms, market forces and architecture – which are relied
upon in real space, do not function the same way in cyberspace, and the authorities must be sensitive
to this fact. A well-crafted proactive defence against botnets will see a centralized, coordinated effort
among a variety of players, necessitating a heavy reliance on public-private partnerships.
Enforcement efforts must focus on interdiction. The next section examines enforcement mechanisms
in detail.

Section 9 Recommendations
Solutions can be inferred from previous chapters. This section 66 makes more explicit
recommendations for combating robot networks.

9.1 The Canadian Government NCSS

The following is a summary of the Industry response to the National Cyber Security Strategy placed in the
context of combating Botnets.

9.1.1 Peace, Order and Good Government

The responsibility for the security of Canadians in cyberspace rests with the Federal Government as the
national guarantor of Peace, Order and Good Government (POGG).

The private sector is driven by regulation and business efficiency to manage security through basic risk
transfer techniques – insurance and contracted limits on liability. While some bilateral interdependency
risk management occurs between sector sub-groups (e.g. telecoms, electric grid operators, chemicals),
no company or sub-sector practices interdependency risk management with all other sectors. [Addendum
– Ensuring Resiliency: Macaulay, Conference Board of Canada Proceedings, May 3 2008]

The resilience and assurance of critical cyber infrastructure is a matter of Canadian security and
prosperity, to be protected by the Federal government.

9.1.2 Tax Credits for National Cyber Security

At this time, the government does not possess the necessary tools to manage cyber security for Canada.
Thus, national security of Canadian cyberspace is currently undertaken in a parochial manner by sector
players; the investment required to support coordinated security initiatives will not be made by the private
sector without financial support.

The Canadian Government should consider a tax credit system linked to information sharing. This credit
system could be similar in nature to the successful Scientific Research and Experimental Development
(SR&ED) credit, which the Finance Department has found to be more effective than directed program
funding or subsidy. Investment incentives for non-tax-paying government entities can be devised to align
with private sector incentives. [Addendum – Science and Technology: Perspectives for Public Policy,
Industry Canada, 1995; Why and How Governments Support R&D, Canadian Department of Finance,
2008; Proactive Cyber Defence and the Perfect Storm, Dave McMahon, Bell Canada, 2009; How
Effective are Fiscal Incentives for R&D? A Review of the Evidence, Hall and Van Reenen, 2000]

Investment incentives will enable government to coordinate cyber risk management, and enhance
Canadian security and prosperity. The cyber threat represents significant Operational Risks to Canadian
business. Organizations and jurisdictions with better Operational Risk management are more resilient to
shocks and eligible for better cost-of-capital, making them more competitive and attractive for investment.
[Addendum – Standard & Poor’s Viewpoint, Dreyer and Ingram, 2007 also Operational Risk
Management: Paper I, Macaulay 2008]

9.1.3 Program Management and Timelines for delivery

The NCSS contains no provision for program management, and establishes milestones too far out, in
yearly increments. The strategy should be properly and formally managed, and progress increments
should be expressed in weeks or months, not years. The projected rate of progress is simply too slow.
While this rate may represent historical precedent, it is unsatisfactory going forward. A major incident
can happen at any time, yet there appears to be no sense of urgency in the approach to the program.

9.1.4 Cross-border context

A strategy must clearly address a global environment, with a strong cross-border context. The strategy
needs to consider globalization and cross-border interdependencies, IT and communications
convergence, and criminalization and foreign threats to domestic CI assets. For instance, Energy,
Telecoms and Finance sectors do not see physical borders and possess very significant cross-border
dependencies. Cyber security issues in these sectors are resolved globally and already support active
dialog among “alien” enterprises, agencies and governments.

Private critical information infrastructure owners have aligned themselves with the US CNCI in the
absence of a Canadian initiative.

9.1.5 Proactive Cyber Defence

Any sophisticated strategy must possess a high degree of proactivity, in addition to preparedness,
response and recovery. Preparedness, response and recovery are well understood within traditional
emergency management; however, a strategy limited to them reflects a willingness to wait for disaster to
strike. A strategy with a proactive element will include prediction and pre-emption of threats through
intelligence gathering and metrics. The benefit of a proactive component for government is both a
reduction in risk and significant value-add related to information sharing. (See Section 6.0.)

A proactive element is a value-add almost uniquely within the abilities of Government and its relationships
with the Safety sector (Law Enforcement). Proactive activities and information sharing would be
considered a valuable contribution by other sector players.

“Proactively survivable systems differ from reactively survivable systems in that proactive systems may
act (i) to increase resistance, (ii) to initiate recovery, or (iii) to adapt: before or concurrently with the
recognition of a problem in the system.” [Meredith]. The absence of proactivity in the current strategy is
a significant gap and a misalignment with major allies such as the United States and Britain. [Addendum
– Metrics for the Evaluation of Proactive and Reactive Survivability, Michael Meredith, Carnegie Mellon
University for US DOD 2003 also UK Civil Contingencies Secretariat]

9.1.6 Funding of research

Cyber security coordination and risk management will require the management of large volumes of
information, the sources of which are not even defined at this time. Efforts will be aided by timely,
published academic research in the area such as:
• Threat and dependency metrics;
• Visualization and simulation; and
• Business case and investment analysis.

We support the NCSS’s abbreviated messaging on this topic on page 8 of the Strategy.

9.1.7 Comments on “Building trusted and sustainable partnerships”

Where it is deemed a rational business and security practice, most sectors have already established
sector forums for information sharing and exchange. Sector associations often perform this task.

“Sector networks” composed of individuals from sector organizations are useful and will support the
growth and development of trusted and sustainable partnerships; however, expectations and ambitions
associated with these sector networks should be tempered. Most sectors consist of competing players,
and information sharing will be constrained by competitive instincts – regardless of the trust among the
individuals at the table.

Information sharing through existing sector forums is already widely practiced within CI sectors and
occasionally between CI sectors on a bi-lateral basis; while this sharing may not fulfill the vision within the
National Strategy, it is worth taking account of.

Information sharing has to be as near real-time as possible. A Sector Network based on face-to-face
meetings will provide access to information whose value has substantially decayed – even a few hours’
delay makes it almost irrelevant to CI protection. Conversely, encouraging Sector Network participants to
share information bilaterally encourages ad hoc (or no) security applied to potentially sensitive
information. See Section 6.0 for discussion on Information Sharing requirements.

Assigning liaison staff to Sector Networks is beneficial if these staff are present to contribute
organizational intelligence and other sector-relevant information to the group.

9.1.8 Trust and Partnership Building

Sector Associations should be encouraged to lead and manage any new Sector Networks or adopt
existing forums. Sector Associations should be supported with funding if new or existing forums and
programs are to be enhanced to support the strategy. The public sector should focus on means and
methods for providing threat intelligence to CI Sector Networks managed and run by existing Sector
Associations.

An inventory of existing information sharing forums within the CI sectors should be undertaken before any
attempt is made to establish sector networks.

9.1.9 Implement an all-hazards risk management approach

Risk assessments play a critical role in managing the security of specific cyber assets, but are of
diminishing utility the larger the number of assets under management. For instance, a risk assessment
might be meaningful for a building, but less so to the operator considering the range of assets under
management; it may be mildly meaningful to the sector as a whole; and the same risk assessment will be
barely meaningful (unintelligible) to other sectors with interdependency risks associated with the sector
under consideration.

Risk assessments are of no value in addressing CI interdependency risks without establishing a baseline
of interdependency under normal operating conditions. This baseline must be established using
repeatable metrics and measures, utilizing inclusive and valid CI sector definitions. [Addendum – The
Interdependencies of Payment and Settlement Systems, Bank of International Settlements, June 2008
also Assessing the Risks of our Interdependent Critical Infrastructures, Front Line Security Magazine, July
2008, Tyson Macaulay]

Risk assessments are point-in-time artefacts, which are often obsolete before they can be distributed.
They are appropriate for annualized audit and compliance purposes but less so for real-time risk
management against constantly evolving threats.

The goal of conducting and sponsoring increased exercises is appropriate and provides substantial
benefits to participants.

9.1.10 Risk Management

Government can usefully support cyber risk management through the quantification of CI
interdependency – an area no single sector can assess alone – at various levels of detail and granularity.
For instance, it could provide inclusive, 360° (full-view) tools, methodologies and metrics at regional and
local/municipal levels of granularity.

Government should provide threat information and intelligence rather than building and operating risk
management tools. Government can most usefully provide threat intelligence for sector entities to use in
their own risk assessment processes, rather than trying to define risk assessment processes and tools,
and especially rather than performing risk assessments itself.

All exercises should be inclusive of all sectors, not just some sectors, to help better understand CI
interdependencies.

9.1.11 Advance the timely sharing and protection of information among partners

The objective of improved information sharing has the potential to offer substantial gains to CI assurance
and risk management.

As expressed in the Strategy, the need for a clear information sharing protocol is an absolute
precondition of information sharing.

Information sharing is a short-term Policy issue and a long-term Operational responsibility; mandates and
tasks should be divided accordingly between Policy and Operational shops. Policy shops should not
attempt to develop or undertake Operational tasks associated with gathering, managing and reporting
information and metrics gathered from Sector Networks and CI sector players.

Information sharing and intelligence from government focusing on “threat-to” and “threat-from” metrics
would be considered valuable by the private sector; re-packaged public-domain vulnerability and patch
information is of limited value no matter how cleverly tailored. Intelligence from government sources may
be considered analogous to the proprietary threat information from CI sector players that the government
seeks, resulting in a certain amount of “quid pro quo” and the perception of reciprocity. [Addendum –
Operational Risk Management: Paper III, Macaulay 2008 also Critical Infrastructure Information Sharing,
David McMahon 2007]

9.1.12 Information Sharing

Information sharing should be a precondition of CIP tax credit eligibility.

Pending the development of a Concept of Operations, government entities dealing with intelligence,
threat assessment or law enforcement should be considered first and foremost to support operational
information sharing.

An Information sharing protocol must be defined with full sector consultation, probably managed by sector
Associations. The protocol must include:

• technical information related to meta-data, chain-of-evidence, formatting and privacy-compliance;
• a demonstrably secure information management system; and
• anonymization capabilities in place of lengthy, mutual recognition processes. Information should be
collected such that the ability to determine the identity of the originating sector-member is by default
precluded, unless volunteered.

[Addendum – Information Sharing: Practices that can Benefit CIP, US GAO, Oct 2001]
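The anonymization requirement above can be sketched with a keyed hash: the submitting organization replaces its name with a pseudonym that stays stable across submissions (so a sharing hub can correlate reports from the same unnamed source) but cannot be reversed without the key the submitter holds. The field names and `SECTOR_KEY` below are illustrative assumptions, not part of any defined protocol.

```python
import hmac
import hashlib
import json

# Illustrative secret held only by the submitting organization.
SECTOR_KEY = b"per-submitter secret, never shared with the hub"

def anonymize_report(report: dict) -> dict:
    """Strip direct identifiers and replace the submitter name with a keyed
    pseudonym, so repeat submissions correlate without revealing identity."""
    pseudonym = hmac.new(SECTOR_KEY, report["submitter"].encode(),
                         hashlib.sha256).hexdigest()[:16]
    return {
        "submitter_pseudonym": pseudonym,   # stable, but unlinkable to the real name
        "sector": report["sector"],         # coarse attribution only
        "indicator": report["indicator"],   # the shared threat datum
        # deliberately omitted: submitter, contact details, internal host names
    }

sanitized = anonymize_report({
    "submitter": "ExampleTelecom Inc.",
    "sector": "telecom",
    "indicator": "c2.example.net",
    "contact": "soc@example.net",
})
print(json.dumps(sanitized))
```

The HMAC keeps the pseudonym stable for one submitter while precluding identification by the hub, matching the "by default precluded, unless volunteered" requirement.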

Information sharing has to be as near real-time as possible.

Information sharing tools must include sophisticated visualization and simulation capabilities, due to the
potentially large volumes of data that might be submitted.

9.2 Tactical Solution Set for Police Combating Robot Networks

The following represents a tactical solution set for law enforcement agencies in Canada to combat robot
networks and cyber-crime:

9.2.1 Funding.
Sufficient funding must be allocated to support internal and external initiatives. The quality and targets of
investment are more important than the quantity of funding. Monies cannot be consumed internally when
this is ostensibly a global problem requiring a community-wide solution. The tax base should be a
catalyst for a solution that will help Canadians directly and measurably. Public institutions spend over
$2B on e-security every year, but it is difficult to measure the effect this spending is having on the
magnitude of malicious activity in cyberspace impacting Canadians. Organized crime is paying top dollar
for top talent. Contrast this with contracting goods and services to the lowest bidder – and lowest quality
– in the war against cyber crime.

9.2.2 Awareness and expectations.

There needs to be a pragmatic legal interpretation of existing laws, policies and mandates within the
context of an open public debate of privacy issues and public expectations. It is envisioned that an
awareness campaign would precede the initiative so that society may have ‘informed’ discourse.

9.2.3 P3.
Establish public-private partnerships with industry leaders (internet service providers and vendors) to
address cyber security issues on a consolidated front. Effective partnerships require a trusted
relationship, reciprocal sharing and an equitable distribution of work and wealth. It is important to
consider that the private sector is driven by market forces, self-protection and corporate responsibility;
but at the end of the quarter, companies must remain financially viable. There must be incentives to work
across sectors, and it cannot be for free. In the past, P3 initiatives have been inequitable. We could find
no evidence that, of the millions spent on critical information infrastructure protection (CIIP), any reached
CIIP owner-operators to clean Internet pipes or build more resilient systems.

Consider outsourcing LEA operations where capabilities already exist in the private sector or where it is
more cost effective to develop and host those activities externally. Prospective business segments
include: all research and development, upstream (cloud) security services, in particular core intelligence,
targeted investigations, open source and grey source feeds, advanced cyber forensics, lawful access,
clean pipes, trusted network access and connectivity.

Tax credits should be provided to all enterprises investing in cyber security, CIIP and national security
initiatives, much like R&D credits are provided today. The concept was submitted by industry
associations and approved in principle by a past public safety minister.

9.2.4 Trusted partnerships.

Special strategic partnerships need to be established with critical infrastructures and significant players in
cyber security, such as Tier 1 telecommunications carriers and chartered banks. The first step is to
embed security liaison officers (SLO) into these institutions and reciprocate by hiring an SLO from these
communities of interest to work operational cases on site with the police. The next step is to allow SLOs
direct or indirect access to each other’s databases and other electronic systems. The final step is to
integrate police systems with those of the financial and telecommunications sectors in a trusted manner,
with SLOs and appropriate controls and safeguards in place to ensure security and privacy.

9.2.5 Training.
Advanced cyber security and enforcement training should be injected into general police college training.
Talent scouting may identify candidates for a specialist career path in fighting cyber crime. A robust
professional development program will help to create a cadre of skilled cyber-crime fighters. This team
will need to work closely with industry experts. Perhaps, like the US model, half the cyber policing staff
will be formed from contractors. The police cannot hope to retain top talent on police salaries; an elite
practitioner will make $250k/yr consulting. Some talent may choose to remain in uniform for altruistic
reasons, but most expertise in cyber policing has already transferred to the private sector. On-the-job
training is the most pragmatic means of sharpening skills. Before police try a botnet takedown
themselves, it is highly recommended that they shadow existing operations.

9.2.6 Sources.
Most of the intelligence required in a cyber investigation will not come from the crime scene or from within
the police station. It is available on the global market. There are a number of well-established and
exceptionally accurate commercial sources of cyber security intelligence. Other organizations have been
in the business of providing ongoing assessment and consolidation of these primary sources. The
security industry has further assessed over 100 primary and meta-sources of cyber security intelligence
(not in scope of this report). It is not cost effective, practical or possible for an organization to recreate
these services. There simply is not the trusted access or economies of scale for the public sector to
sustain a competitive business model. It is expected that over 80% of the intelligence required by
authorities can be derived from open sources, grey sources, strategic partnerships with trusted ISPs, and
commercial subscription services. It may be simpler to leave network collection to the owners of those
networks. Authorities could pay to have a trusted partner consolidate and triage commercial sources and
inject Canadian-relevant metrics. External organizations can be tasked to update, enhance and review
the product, delivering it in a form that LEAs can consume. An added benefit of subscription-based
source consolidation services is that they can collect data for police work by proxy while maintaining
operational security. Otherwise, LEAs would need to manage overt business relationships with hundreds
of sources. This exposes LEAs on a number of levels: providing operational feedback to so many
sources in collection management can betray investigations, and many more (untrusted, insecure)
network connections necessarily increase the risk for LEAs. Important note: police need trusted, secure
and non-attributable network access to search, receive and retrieve data from external sources.

There are a number of categories of cyber intelligence sources:

Open – A great deal has been written on the subject of botnets and the criminal enterprises that run them.
The body of published literature and cyber security science forms the foundation of any investigation. A
central database (expert knowledge portal) or file share would be indispensable to analysts. Open
source analysis also includes media and social network monitoring for news, blogs, postings and public
chat. A great deal is posted about real, ongoing attacks, which can steer an investigation in the right
direction.

Grey – There is an investigative requirement to visit publicly accessible threat web sites or postings by
persons of interest. This is the equivalent of physical surveillance of suspects in public places and
overhearing conversations spoken in public.

Subscription – These are formal (pay-per-view) cyber security publications, feeds and subscription
services. Together they constitute an essential reference database for cyber-security intelligence analysis.

Trusted – Statistically valid quantitative metrics from trusted Canadian infrastructure owner-operators
provide primary data about botnets. Instrumentation on core telecommunications and financial
infrastructures – such as deep packet inspection, netflow, DNS, and network and content anomaly
detection systems – can give real-time situational awareness and a common operating picture to LEAs.
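As a sketch of the kind of anomaly detection such flow instrumentation enables, the following flags hosts whose outbound connections to a single destination recur at suspiciously regular intervals ("beaconing", a common C&C signature). It assumes flow records have already been parsed into (timestamp, source, destination) tuples; the thresholds and addresses are illustrative, not operational values.

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(flows, min_events=6, max_jitter=2.0):
    """flows: iterable of (timestamp_seconds, src_ip, dst_ip).
    Returns (src, dst) pairs whose inter-connection intervals are nearly
    constant -- regular 'phone home' behaviour typical of bot C&C check-ins."""
    by_pair = defaultdict(list)
    for ts, src, dst in flows:
        by_pair[(src, dst)].append(ts)
    beacons = []
    for pair, times in by_pair.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Low spread in the gaps relative to the jitter bound => regular beacon.
        if pstdev(gaps) <= max_jitter and mean(gaps) > 0:
            beacons.append(pair)
    return beacons

# A host phoning home every ~60 s stands out against ad hoc browsing traffic.
flows = [(i * 60 + 0.3, "10.0.0.5", "203.0.113.9") for i in range(10)]
flows += [(t, "10.0.0.7", "198.51.100.2") for t in (3, 40, 41, 200, 460, 470)]
print(find_beacons(flows))  # → [('10.0.0.5', '203.0.113.9')]
```

Real C&C traffic adds deliberate jitter and sleeps, so production systems score interval entropy over long windows rather than using a fixed cut-off, but the principle is the same.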

Covert – Exfiltrating information from within the target (threat) environment using computer network
exploitation and attack techniques, or covert human or technical sources. This can also involve botnet
takeovers and man-in-the-middle attacks.

Targeted Intercept – Lawful access or communications intercept under warrant is an effective means of
precisely targeting persons of interest, provided: you know a priori the identity of the target; you have
gathered sufficient intelligence through non-warranted means to get the warranted targeting authority; the
provider has the technical means and access to the subscriber; the chain of evidence can be preserved
incontrovertibly; the target’s communications can be intercepted, reconstructed and analysed; and police
have the means to forensically process malicious traffic of high bandwidth and sophistication in real
time.

Dark Matter – Dark Matter consists of a correlation of the Dark Web, Dark Space and Dark Universes in
the context of counter-semantic warfare, cyber-terrorism, cyber-crime and cyber espionage.

Dark Web content analysis would be conducted from a number of perspectives: technical sophistication,
content richness, web interactivity, and weaponization of cyberspace. Dark Web analysis is a means to
assess the technical sophistication of threat groups’ Internet usage, and contributes to an empirical,
evidence-based understanding of the applications of Web technologies in the global terrorism
phenomenon. [See Analyzing Terror Campaigns on the Internet: Technical Sophistication, Content
Richness, and Web Interactivity.] Terrorists and extremists are increasingly using Internet technology to
enhance their ability to influence the outside world.

Dark Space – Cyber-threat activity that is undetectable by traditional means can be discovered by
seeding dark space and using ‘null-steerage’ techniques. Synaptic (signal-related information) and
semantic (content) filters can be applied nationally to gain a precise picture of threat information
operations. Malcode treatment, analysis and handling at the global carrier level currently uses
sophisticated honey pots and tar pits – decoy systems used to lure or trap irregular traffic on the network.
Such traffic has no legitimate reason for traversing the net and can easily be monitored. Similar systems
can be set up at an enterprise level.
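A minimal illustration of the dark-space idea: because no legitimate host lives in an unused (advertised but unallocated) prefix, every packet addressed into it is by definition scanning, backscatter or misconfiguration. The sketch below classifies pre-parsed packet records against a hypothetical dark prefix; the prefix and record format are assumptions for illustration only.

```python
import ipaddress
from collections import Counter

# Hypothetical unused prefix advertised purely as a sensor ("dark space").
DARK_NET = ipaddress.ip_network("192.0.2.0/24")

def dark_space_report(packets):
    """packets: iterable of (src_ip, dst_ip, dst_port) records.
    Returns per-source counts of packets sent into dark space.  Since no
    host answers there, every hit is worth flagging for threat analysis."""
    hits = Counter()
    for src, dst, port in packets:
        if ipaddress.ip_address(dst) in DARK_NET:
            hits[src] += 1
    return dict(hits)

packets = [
    ("198.51.100.7", "192.0.2.10", 445),   # scanner probing dark space
    ("198.51.100.7", "192.0.2.200", 445),
    ("10.0.0.3", "93.184.216.34", 443),    # ordinary traffic, ignored
]
print(dark_space_report(packets))  # → {'198.51.100.7': 2}
```

Carrier-scale darknet telescopes apply the same membership test across far larger prefixes and feed the hits into the honeypot and tar-pit systems described above.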

Dark Universes - Targets of interest are migrating to operations within virtual environments from
traditional web and e-mail conduits. Massively Multiplayer Online Role-playing Games (MMORPG) like
the “World of Warcraft” and persistent virtual worlds (PVW) like Linden Labs’ Second Life are simulated
online environments in which users are able to create anonymous personas and parallel activities
normally associated with the real world. As in The Matrix, players are bound only by their own
imagination. Big business has immediately appreciated the profitability of this unique environment, as
have various threat actors. Virtual embassies have been established in Second Life, and foreign security,
intelligence and militaries are now developing exploitation capabilities. These environments have not
escaped the notice of terrorists, spies, and criminals. Virtual world terrorism facilitates real world
terrorism: recruitment, training, communication, radicalization, propagation of toxic content, fund raising
and money laundering, and influence operations. For example, terrorists have modified existing
sophisticated commercial electronic games so that Allied troops are the default enemy force, and have
created entire games for would be terrorists. Access to these games is neither illegal nor monitored in
Canada. Terrorists have recruited experts in media production, image marketing and perception
management. Hyper-realistic training and scenario run-throughs are entirely consistent with the evolution
of terrorism in MMOGs. State-sponsored espionage is seen to be using PVWs for covert agent
communications and egressing data through cut-outs and virtual electronic dead letter boxes. Player-to-
player in-game text and VOIP chat permits covert communication between cells, illegal agent networks,
or the ephemeral ‘clans’ or ‘guilds’ in MMOGs. These environments allow players to conduct real-money
transactions (RMTs) in virtual worlds and permit the unregulated currency exchange of virtual credits for
real funds. ‘Gold farming’ and ‘power levelling’ operations of criminal organizations are some of the novel
means of exploiting the medium. The encrypted exchange of zero-day network exploits is present, as is
the out-of-band control and tasking of botnets. We propose to build a program to specifically watch
terrorism, crime and espionage in persistent virtual worlds – similar to the “Reynard” program in the USA
or the PVW monitoring solutions proposed in the UK – but to a much greater depth.

9.2.7 Business transformation from prosecution to prevention

The full dynamic of organizational re-engineering requires study. Combating robot networks, and
therefore cyber crime, will thrust a police agency into a new realm. Technical solutions cannot simply be
bolted prosthetically onto existing procedures, culture and infrastructures. Changes will ripple through the
organization and can have profound secondary effects. Examples of secondary impacts include: new
training requirements, recruitment, strategic planning, crime statistics and performance indicators,
network load and topology, transport toll rates, outsourcing, O&M, capital budgets, operational security,
legal, privacy, policy, communications planning and campaigns, re-organization of operational elements,
P3, source management, etc.

9.2.8 Corporate architecture.

Before a LEA becomes decisively engaged in combating robot networks powerful enough to level an
entire country in cyberspace, the LEA’s network infrastructure needs to be hardened against attack.
There exists a real and present threat of counter-attack by those whom the police are investigating. Is
your corporate network ready to go to war? Can it sustain a 40 Gbps DDoS attack or a sophisticated
penetration? Is the culture ready to conduct investigations while under deliberate attack? An enhanced
threat risk assessment and an enterprise security architecture review are highly recommended. [See
detailed description of enhanced enterprise security architecture in subsequent chapters.]

9.2.9 Investigative architecture.

A capability to combat robot networks and sophisticated cyber crime can be constructed upon the
foundation of a hardened corporate network infrastructure. This capability would require a number of
elements: a malcode treatment and handling facility; dark space analysis; a collection/collation/
processing system for multifarious sources; a security event information monitoring (SEIM) collation
system; trusted (non-attributable) and covert access to the Internet (globally); analyst workstations with
advanced investigative tools; secure communications; trusted guards and systems to ‘back stop’ attacks;
self-monitoring and audit controls; and skilled resources. A fully balanced computer network exploitation
(defensive) and computer network attack (offensive) capability needs to be established. Police need to
be prepared to go on the offensive with proactive operations to interdict and disrupt criminal robot
networks before they attack Canadians. There needs to be an equivalent of a SWAT team for
cyberspace – and this team needs to operate globally. This special facility will need tight integration with:
special investigations (communications intercept); technical operations (intrusion); tech crime; national
security and CIP; covert operations; forensics; and network operations. [See detailed description of
advanced investigative architecture in subsequent chapters.]
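The collation element named above can be sketched as a provenance-preserving merge of per-source event streams into one investigative timeline. The source names and event shapes are illustrative; a real SEIM normalizes far richer records, but the core operation is this ordered merge with provenance kept for chain-of-evidence purposes.

```python
import heapq

def _tag(source_name, events):
    """Attach provenance to each event so it survives the merge."""
    for ts, detail in events:
        yield (ts, source_name, detail)

def collate(*sources):
    """Merge per-source, time-ordered event streams into a single timeline.
    sources: (name, events) pairs, where events is an iterable of
    (timestamp, detail) already sorted by timestamp.  heapq.merge keeps
    the combined stream ordered in O(n log k) for k streams."""
    return heapq.merge(*(_tag(name, ev) for name, ev in sources))

ids = ("ids", [(1, "IRC connect to 203.0.113.9"), (5, "payload download")])
netflow = ("netflow", [(2, "beacon interval ~60s"), (4, "40 MB egress burst")])
for ts, src, detail in collate(ids, netflow):
    # Prints events in time order, each labelled with its originating sensor.
    print(ts, src, detail)
```

Keeping the source label on every merged record is what lets an analyst later attribute each timeline entry back to a specific sensor, which matters when the timeline becomes evidence.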

9.2.10 Numbered sources and undercover operatives.

Recruit the right people, in and out of uniform. This cadre can consist of police officers, contractors,
advisors/influencers, informants or operatives. Talent scouting, recruiting and undercover operations
require a great deal of street smarts. In the underworld of cyberspace this often means elite hacking
skills or the ability to launder money electronically. Establishing a legend (pedigree) of elite skills and
fitting into this counter-culture in a convincing manner is a non-trivial matter. Assessing the true skills and
motives of potential informants is often problematic, as is controlling their extracurricular activities to stay
within the law. Police may want to hire trusted white-hat hackers (professional security consultants) who
can approach grey-hats and eventually black-hats to do some of the preliminary talent scouting. This
talent scouting network needs to be established before direct recruiting begins. Next, a HUMINT/SIGINT
counter-surveillance network needs to be built to watch and continually assess your own sources for their
safety and trustworthiness. The police need to run specific honey nets with feed material that their
operatives can ‘break into’ to build credibility with the threat group. Eventually, key individuals who deal in
criminal botnets need to be recruited, and police operatives invited by them to join. Infiltration of the
criminal networks should be done both electronically and physically. Covert sources (including technical
ones) need to be placed throughout the criminal ‘network’, including the money laundering operation, the
bot herder, controllers, and the intermediary points. Police need to be able to demonstrate control of their
own botnet of significant size (perhaps virtually). In all cases, covert operations will most likely come
under intense scrutiny and counter-surveillance by the threat group. A clandestine means of
communicating with agents and sources needs to be established before operations begin. This system
must withstand the most sophisticated tradecraft. Note: some of these undercover operations should be
outsourced to private companies where they possess unique access, skills, bona fides or experience.
Contractors can also provide arm’s-length non-attribution from police. Ample precedent has been
established using security guards, military contractors and private investigators. Contractors cannot,
however, be asked to break Canadian law.

9.2.11 Operational readiness review.

It is highly recommended that the full operational capability be exercised and run in a realistic, controlled
environment before going live. This may initially consist of a police lab, but testing should migrate to a
carrier central office and test IP space that mimics the real Internet. Full penetration and vulnerability
testing by skilled internal and external teams is necessary. Operational security testing should examine
technical and non-technical exposures including: in-bound/out-bound and interdependency analysis for
malicious activity and competitive intelligence; independent investigation (discovery); contracting; supply
chain; personnel; electronic signature; brand/data leakage protection; anonymity/attribution; etc. The
operational security plan needs to start before a botnet initiative is begun. Note: although most of the
operations can be conducted remotely from police HQ, there will have to be some police facilities and
presence abroad. This foreign infrastructure needs to be run over or by trusted providers.

9.2.12 Action Summary

To summarize, in chronological order:

• The first step in combating botnets is to understudy operational programs and develop a cadre of
skilled practitioners;
• Fortify the operational infrastructure;
• Build a centre of excellence, provide tools and trusted network access;
• Cultivate technical and human sources, establish SLOs and P3s;
• Monitor live traffic, starting with corporate nets, then Canada;
• Geolocate Canadian targets and high-priority crime clusters/malcode within the LEA’s jurisdiction;
• Supplement with traditional police work to build a case.

The following chapter discusses the necessary technical steps to establishing the defence-in-depth and
operational security required to combat robot networks.

9.3 Botnet Defence Architecture within an Enterprise

This section focuses on the architectural components that can help detect and mitigate botnet attacks within an organization. Detection is a critical first step in:

• identifying the malware;
• reverse engineering the malware to determine its method of communicating with the C&C server; and
• determining the host name and/or IP address of the C&C server.

This information will point law enforcement agencies towards the criminals behind the botnet, and government agencies can use it to identify infected hosts within their enterprise networks and commence clean-up operations.

The architectural discussion will focus on the characteristics of each safeguard and how effectively it remediates known and unknown, or "zero-day", attacks. The reason for this distinction is that most safeguards are designed to detect and/or prevent known exploits and attacks. A zero-day vulnerability is an exploitable weakness in a program that has not been identified by the vendor, or has been identified but not yet fixed. Zero-day vulnerabilities are the most attractive exploits to botnet developers: they will not be detected by signature-based anti-virus (AV) or intrusion detection systems (IDS), and the vulnerability in the host application or operating system has not been remediated or patched.

9.3.1 Traditional Enterprise Security Architecture

An example of a “defence in depth” security architecture for a medium to large organization is illustrated
in Figure 14. While a complete description of generic security architecture standards is not within the
scope of this study (see [Reference 67] for further information), we felt that the botnet-specific safeguards
should be highlighted within a generally accepted architectural framework. In addition, the safeguards will
be characterized with respect to their function in IT security incident management, i.e. prevention,
detection, analysis, response and recovery.

For those readers who are familiar with Government of Canada zoning concepts, the corporate LAN in
Figure 14 is analogous to an operations zone and, depending on the organization, there may be security
or high security zones that extend from the operations zone.

Figure 14: Sample Defence in Depth Architecture for a Control System

9.3.2 Network Consolidation

Consolidating Internet access points is the first step organizations should undertake to combat the introduction and spread of botnet malware. Unlike Figure 14, which shows a clean, single point of access to the enterprise, many organizations have multiple points of ingress/egress to the Internet, each with varying degrees of security and control. Multiple entry points are harder to secure, require additional safeguards and provide multiple paths for botnet controllers to obfuscate their activities. The concepts of consolidated Internet access, trusted pipes and upstream security in the carrier zone are discussed later in this section.

9.3.3 DNS
The domain name service (DNS) is a critical part of any TCP/IP based infrastructure. DNS is not a security safeguard per se, but it can provide valuable information that can be used to track botnet activity and identify infected hosts in the enterprise and their C&C servers. Through cache poisoning and other techniques, botnet controllers have used DNS servers as unwilling participants in the distribution of malware.

The recent cache poisoning attack [Reference 8] against authoritative servers represents a serious threat to the DNS and Internet community. It was agreed that adding UDP Source Port Randomization (UDP SPR) to DNS queries would be an improvement over the existing practice, which relies on a single default value. As the name implies, UDP SPR randomizes the UDP port used in a DNS query, which makes it harder for an attacker to guess all the correct values in a query response. It is not impossible, however: the more packets an attacker can send, the greater the chance of guessing the correct port. It has already been demonstrated that freeware implementations relying solely on UDP SPR can be successfully compromised in less than ten hours. The commonly deployed UDP SPR fix is therefore insufficient to prevent cache poisoning attacks and, as a result, DNS vendors have identified the following additional security layers to significantly reduce this attack [Reference 13].

• deterrence;
• defence;
• resistance; and
• remediation.

These solutions also help resolve problems with UDP SPR in enterprise networks where NATs (Network Address Translation), load balancers and firewalls undermine UDP SPR defences by de-randomizing the UDP ports.

Deterrence

Randomizing both the UDP source port and the transaction ID (TXID) greatly improves resilience and security compared to implementations that rely on either mechanism alone. Query IDs generated with an entropy pool algorithm are as random as possible. Use of a high-performance caching server with very low latency severely limits the time available to an attacker.

Defence

The defence layer looks for incorrect incoming DNS query responses (e.g. the TXID or UDP source port does not match the original query), sets up a TCP connection to the authoritative server that sent the response and then sends another query. This catches spoofers in the act and takes them out of the loop, because TCP connections are virtually impossible to spoof.

Resistance

Glue segregation separates records obtained from servers that are not authoritative for a domain. These records are not used in outgoing query responses. This eliminates opportunities to insert a false record that the attacker could use to gain control of a valuable domain. Query response screening intelligently screens DNS answers to ensure malicious data in DNS responses is not used to answer valid user queries. This should be done in a manner that does not have a significant impact on performance.

Remediation

Alerts are generated when a mismatched TXID or UDP source port is seen by the server. Reporting information, including the IP address and information contained in the query response, is stored. These records can be consolidated and analyzed with other sources of information and used to identify botnet activity.
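The defence and remediation layers described above can be sketched as a small validation routine. This is an illustrative, simplified model, not a real resolver API: the query/response records and field names are hypothetical, and it assumes the resolver tracks the TXID and source port of each outstanding query.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.WARNING)

@dataclass
class DnsQuery:
    name: str        # queried domain
    txid: int        # 16-bit transaction ID
    src_port: int    # randomized UDP source port

@dataclass
class DnsResponse:
    name: str
    txid: int
    dst_port: int    # should equal the query's source port
    server_ip: str

def validate_response(query: DnsQuery, resp: DnsResponse) -> bool:
    """Reject responses whose TXID or UDP port do not match the query."""
    if resp.txid != query.txid or resp.dst_port != query.src_port:
        # Remediation layer: record the mismatch for later correlation.
        logging.warning(
            "possible spoofed answer for %s from %s (txid %#x != %#x, port %d != %d)",
            query.name, resp.server_ip,
            resp.txid, query.txid, resp.dst_port, query.src_port)
        # The defence layer would now re-query the authoritative server
        # over TCP, which is virtually impossible to spoof (not shown here).
        return False
    return True
```

A real resolver would perform these checks on raw packets; the point of the sketch is that the mismatch test itself is cheap, and the alert record it produces feeds the remediation reporting described above.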

9.3.4 Patch Management

Any program that can be touched by malware from the Internet or local area network, e.g. workstation
operating systems, word processors, databases, web servers and web browsers, may be vulnerable to
exploitation. There are many different vulnerabilities but they can all be categorized as software flaws
that, when exploited, cause program instability or privilege escalation that allows an attacker to assume
control of the system and introduce malware.

The patching process involves applying a software modification (patch) to an application that corrects the
software problem and closes the door to the attacker. In order to ensure that enterprise systems are
properly patched, an effective patch management system is essential. Patch management is usually
automated within an enterprise and controlled through a centralized system such as Tivoli or SMS.
Patching can introduce instabilities in custom applications and this can delay the implementation of
patches until they have been tested. Enterprises may require other solutions such as filtering incoming
documents that could contain maliciously crafted content until the applications can be patched.

Patch management is purely a preventive safeguard and does not help to identify known or zero-day malware. Patch management systems are typically deployed on the corporate LAN.

System patching is an effective means of resisting known malware attacks if the systems have not
already been infected. Patching helps “close the door” but, once inside, sophisticated botnet malware will
quickly open other doors and obviate the original exploit, i.e. the patch will have no effect.

9.3.5 Firewall Policies

A firewall is a part of a computer system or network that is designed to block unauthorized access while
permitting authorized communications. It is a device or set of devices configured to permit, deny, encrypt,
decrypt, or proxy all (in and out) computer traffic between different security domains based upon a set of
rules and other criteria.

Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are
frequently used to prevent unauthorized Internet users from accessing private networks connected to the
Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall,
which examines each message and blocks those that do not meet the specified security criteria.

There are several types of firewall techniques:

• Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects
it based on user-defined rules.
• Application Level Gateway (ALG): ALGs augment firewalls and are used to transparently mediate
remote sessions to specific applications such as FTP and Telnet servers.
• Circuit-level gateway: Applies session level security to TCP and UDP connections and ensures that
only legitimate packets are exchanged between the participants.
• Proxy servers: More generic than an ALG, a proxy server mediates access to numerous applications;
usually web based, and hides internal network information from the requestor. Proxy servers have
good visibility into application level transactions and their log information can be used to detect and
monitor botnet activity. Proxy servers can be implemented on the corporate LAN and used to mediate
user access to the Internet and they can also be implemented in the external DMZ (e.g. behind the CS
firewall in Figure 14) to mediate user access from the Internet. Proxy servers that implement source
IP spoofing sometimes suffer from asymmetric routing where an enterprise uses more than one
Internet service provider (ISP) for diversity. In these situations, and to better balance traffic loads, a
clustering proxy solution may be used [Reference 14].
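The packet-filter technique above can be illustrated with a first-match rule list and an implicit default deny. The rule fields and the sample policy are hypothetical, chosen to show a minimum-services posture.

```python
import ipaddress
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    action: str           # "permit" or "deny"
    proto: str            # "tcp", "udp" or "any"
    src: str              # source CIDR block
    dst: str              # destination CIDR block
    dport: Optional[int]  # destination port; None matches any port

def evaluate(rules, proto, src_ip, dst_ip, dport):
    """Return the action of the first matching rule; deny by default."""
    for r in rules:
        if ((r.proto in ("any", proto))
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(r.src)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(r.dst)
                and r.dport in (None, dport)):
            return r.action
    return "deny"  # traffic not explicitly permitted is dropped

# Hypothetical minimum-services policy: only web and mail leave the LAN.
policy = [
    Rule("permit", "tcp", "10.0.0.0/8", "0.0.0.0/0", 80),
    Rule("permit", "tcp", "10.0.0.0/8", "0.0.0.0/0", 443),
    Rule("permit", "tcp", "10.0.0.0/8", "0.0.0.0/0", 25),
]
```

Under this policy an outbound IRC connection on port 6667, a common bot C&C channel, falls through to the default deny, which is exactly the "minimum number of services" posture recommended below.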

Firewall policies that restrict access to the minimum number of services required for enterprise
applications are an effective defence against malware that uses generic port scanning to identify potential
victims. The firewalls themselves will come under attack so it is important to ensure that the firewall host
or device is hardened against malware attacks. Firewall logs are another information source that can be
used to detect and monitor botnet activity.

Firewalls will not usually protect an enterprise from known or zero day malware attacks that use common
TCP or UDP communications channels, e.g. port 80 (web), port 25 (mail) or port 443 (secure web).
However, proxy servers can be used as filters, either content or URL based, to restrict access to
dangerous web sites or content.

Firewalls are typically located at the ingress/egress points to all DMZs and all external access points as
illustrated in Figure 14.

9.3.6 Intrusion Detection Systems

Network and host intrusion detection systems (NIDS and HIDS) are used to identify and, if necessary, to
drop any malicious traffic that enters or circulates within an enterprise network, either from the Internet or
from hosts that have been compromised locally. As such they provide malware prevention and detection
capabilities. Signature-based IDSs evaluate network traffic based on characteristics of the malware such
as a code fragment in a document or some other file. State of the art malware, as discussed earlier, has
the ability to quickly change or “morph” into a different executable with the same functionality and evade
static signature detection systems that are based on older malware samples.

Maintaining current signatures is critical if the enterprise wishes to detect and prevent the latest known
malware attacks. Some intrusion detection systems use behavioural analysis and heuristics to identify
malware through generic signatures or modus operandi, e.g. attempt to open a command shell on a host.
Again, these rules are usually developed from existing malware samples but should be kept up to date.

A typical deployment of NIDS sensors is shown as blue dots in Figure 14. HIDS sensors are deployed on
all external and internal hosts (servers and workstations).

As discussed in the malware analysis, treatment and handling (MATH) section of this report, IDSs can be used to detect botnet activity and identify the bots by monitoring and reporting any activity in unallocated or "dark" IP addresses. Hosts that scan dark IP addresses are invariably infected with some form of malware that is attempting to propagate by scanning all IP addresses in the enterprise in order to locate and exploit vulnerable hosts.
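A minimal sketch of this dark-address monitoring, assuming flow records are available as (source, destination) pairs, e.g. from NetFlow export, and that the listed dark subnets are unallocated in this hypothetical enterprise:

```python
import ipaddress
from collections import Counter

# Hypothetical unallocated ("dark") subnets inside a 10.0.0.0/8 enterprise.
DARK_SPACE = [ipaddress.ip_network("10.200.0.0/16"),
              ipaddress.ip_network("10.250.0.0/16")]

def is_dark(ip: str) -> bool:
    """True if the address falls inside unallocated enterprise space."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DARK_SPACE)

def suspect_hosts(flows, threshold=5):
    """Flag internal hosts that touch dark addresses at least `threshold` times.

    Legitimate hosts have no reason to contact unallocated space, so
    repeated hits are a strong indicator of scanning malware.
    """
    hits = Counter(src for src, dst in flows if is_dark(dst))
    return sorted(h for h, n in hits.items() if n >= threshold)
```

The threshold guards against the odd mistyped address; a host that touches dark space dozens of times is almost certainly sweeping the address range.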

9.3.7 Anti-Virus
Anti-virus (AV) safeguards are similar to intrusion detection systems in that they are generally signature based, but they operate on a host, i.e. a workstation or server. They have the ability to scan incoming and outgoing email attachments for known malware and can scan local storage media for infected files. As such, they both detect and prevent malware infestations.

More advanced anti-virus products can scan web pages for malware and also protect against spyware
and adware which track Internet activity and display unsolicited ads in web browsers. As with IDSs, it is
important to keep the AV signatures as current as possible or develop other methods of restricting access
to malicious web sites and files, e.g. through content and web (URL) filtering.

AV safeguards are usually deployed on all hosts in the internal and external networks. A centralized AV
server with access to the AV vendor signature files may be located on the corporate LAN and used as a
central control point for accessing and distributing AV signatures.

AV solutions can be augmented with integrity checking tools that scan critical operating system files and identify any that have been modified, e.g. bot agents that are disguised as Windows system files. These tools are not signature based and have the potential to detect zero-day attacks. SecCheck from myNetWatchman is a free, web-based integrity checking tool that is highly effective.
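The integrity-checking approach can be sketched with nothing more than cryptographic hashes. This is a simplified illustration, not how SecCheck works internally, and it assumes a trusted baseline can be captured while the system is known to be clean.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record trusted digests for a set of critical files."""
    return {str(p): hash_file(Path(p)) for p in paths}

def modified_files(base):
    """Return files whose current digest differs from the trusted baseline."""
    return [p for p, digest in base.items()
            if not Path(p).exists() or hash_file(Path(p)) != digest]
```

Because the comparison is against a snapshot rather than a malware signature, a bot agent that overwrites a system file is flagged even if no AV vendor has seen it yet, which is what gives integrity checkers their zero-day potential.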

9.3.8 Content and Web Filtering

Content filtering can perform two functions: filtering email or filtering web content. Email filtering is most commonly used to filter spam. These filters act either on the content (the information contained in the mail body) or on the mail headers (like "Subject:") to classify, accept or reject a message. AV solutions can also be classified as content filters, since they scan simplified versions of either the binary attachments of mail or the HTML contents. Content filters can also enforce parental controls by analyzing data and either restricting or altering it, e.g. with chat filtering.
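A toy header-and-attachment filter illustrates the classify/accept/reject decision. The subject patterns and blocked extensions are hypothetical examples; production filters use much richer, scored rule sets.

```python
import re
from email.message import EmailMessage

# Hypothetical rule sets, for illustration only.
SUBJECT_PATTERNS = [re.compile(p, re.I) for p in
                    (r"urgent.*invoice", r"account.*suspended", r"you have won")]
BLOCKED_ATTACHMENTS = {".exe", ".scr", ".pif", ".js"}

def classify(msg: EmailMessage) -> str:
    """Return 'reject', 'quarantine' or 'accept' based on headers and attachments."""
    subject = msg.get("Subject", "")
    if any(p.search(subject) for p in SUBJECT_PATTERNS):
        return "quarantine"
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in BLOCKED_ATTACHMENTS):
            return "reject"  # executable attachments are a common bot vector
    return "accept"
```

Header matches are only quarantined (they may be false positives), while executable attachments are rejected outright, mirroring the accept/reject/classify options described above.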

Web content filtering is commonly used by organizations such as businesses and schools to prevent computer users from viewing inappropriate web sites or content, or as a pre-emptive security measure to prevent access to known malware hosts or hosts with a high risk of containing malware, e.g. social networking sites. Filtering rules are typically set by a central IT department and may be implemented via software on individual computers or at a central point on the network such as the proxy server or Internet router. Depending on the sophistication of the system used, it may be possible for different computer users to have different levels of Internet access.

Web and email content filters are usually deployed either as standalone devices in the corporate LAN or
co-hosted with the web or email servers. In terms of security incident management practices these
safeguards are mainly preventative in nature.

9.3.9 Log Consolidation and Analysis
Logs are a very important source of information for detecting and monitoring botnet activity. Firewall, IDS, DNS and proxy logs allow a botnet analyst to analyze incoming and outgoing network traffic and to identify botnet patterns. This, in turn, could lead to the discovery of the C&C servers and the infected hosts. As one can imagine, each of these platforms can generate an incredible amount of log data that is impossible to analyze independently.

The solution to this problem is a security information and event management (SIEM) system that can collect, aggregate and correlate log information from numerous sources and identify botnet behaviour. The
reference security architecture has been modified in Figure 15 to highlight the implementation of a SIEM
and the various hosts that provide information to the SIEM. As with most system components, SIEMs can
be clustered, distributed and interconnected over multiple segments and intermediate devices such as
aggregators can be inserted in the network to reduce bandwidth requirements. SIEMs support the
detection and analysis aspects of security incident management.

Figure 15: Log Collection Data Flow to SIEM
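As a sketch of the kind of correlation a SIEM rule performs, the following flags hosts that contact the same external address at suspiciously regular intervals ("beaconing"), a common bot check-in pattern. The event format and thresholds are illustrative assumptions.

```python
from collections import defaultdict
from statistics import pstdev

def beacon_candidates(events, min_hits=4, max_jitter=5.0):
    """Find (host, dest) pairs that connect at near-constant intervals.

    `events` is an iterable of (timestamp_seconds, src_host, dst_ip) taken
    from consolidated proxy/firewall logs. Bots often "phone home" on a
    fixed timer, so a low standard deviation of inter-connection gaps is
    suspicious; human browsing produces highly irregular gaps.
    """
    times = defaultdict(list)
    for ts, src, dst in events:
        times[(src, dst)].append(ts)
    suspects = []
    for key, ts in times.items():
        if len(ts) < min_hits:
            continue
        ts.sort()
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        if pstdev(gaps) <= max_jitter:
            suspects.append(key)
    return suspects
```

No single firewall or proxy entry looks malicious on its own; only after aggregation across sources does the fixed five-minute cadence stand out, which is precisely the argument for a SIEM.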

9.3.10 System Restore

Despite the best defensive measures, we must assume that internal systems and individual computers will become infected with malware at some point. One effective method of recovering from a botnet attack is to implement a robust system restore tool on enterprise hosts. This application takes a trusted snapshot of a clean workstation or server image, which is then used to restore the host to a trusted state after it has been infected with malware. The tool could be administered centrally or deployed locally, and offers a time-efficient alternative to wiping and re-building each infected host, which is always an option for system restoration.

9.3.11 Security Awareness
One of the main contributors to a successful IT security program is security awareness and training. If
employees are not informed of applicable organizational policies and procedures, they cannot be
expected to act effectively to secure information technology resources. The purpose of IT security
awareness training is to enhance IT security by:

• improving awareness of the need to protect IT system resources;
• developing skills and knowledge so IT users can perform their jobs more securely; and
• building in-depth knowledge, as needed, to design, implement or operate IT security programs for organizations and IT systems.

IT security awareness programs set the stage for IT security training by changing organizational attitudes, instilling an appreciation of the importance of IT security and the adverse consequences of its failure, and reminding users of the procedures to be followed. Organizations are encouraged to create acceptable use policies for IT systems and ensure that each user reads, acknowledges and understands the policy.

To prevent the infestation and spread of botnet malware, security awareness training should include current bot herder tactics such as "spear phishing", as described earlier, and other methods of propagation such as malicious web links embedded in emails and access to potentially dangerous web sites. Employees should also be instructed in the proper handling and importing of information from removable media such as USB sticks and CDs.

9.4 Remediation
Once a bot has infected an organization the effectiveness of the response becomes critical. Typically a
system becomes unusable and untrusted after it has become infected with a bot agent and the best
remediation is to return it to a trustworthy state. For workstations this is normally done by re-imaging the
system with a trusted or “gold” image. Servers often require a complete rebuild, including the operating
system, data restore, and reinstallation of all applications.

An IRC-based bot will attempt to maintain continuous communication with the C&C server. This can be taken advantage of in an organizational network by redirecting the IRC traffic to a server under the control of the response team. This is described in more depth in the section on Malware Treatment, Analysis and Handling.
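This redirection is often implemented at the internal DNS: known C&C host names resolve to a sinkhole server run by the response team, so infected hosts keep "talking" while analysts watch the channel. A minimal sketch, in which all names and addresses are hypothetical:

```python
# Hypothetical sinkhole address and C&C block list, for illustration only.
SINKHOLE_IP = "10.99.0.10"          # response-team IRC server
CNC_DOMAINS = {"irc.badbotnet.example", "cc.updater.example"}

def resolve(name: str, real_lookup) -> str:
    """Internal resolver hook: answer known C&C names with the sinkhole.

    `real_lookup` is the normal resolution function; anything on the block
    list is answered with the response team's address instead, diverting
    the bot's C&C channel without tipping off the infected host.
    """
    if name.lower().rstrip(".") in CNC_DOMAINS:
        return SINKHOLE_IP
    return real_lookup(name)
```

Every host that subsequently connects to the sinkhole self-identifies as infected, turning the bot's own persistence into an enumeration tool for the containment step described below.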

Once a bot infection has been identified the following activities will typically be performed:

• Containment: Containment attempts to limit the further spread of a bot infection. Single systems can
be isolated, and network subnets can be segmented. Choke points are established where the bot
traffic is limited to specific portions of a network.

• Eradication: Analyzing the bot, and removing it from infected systems. This can be a manual process,
or automated in coordination with anti-malware products.

• Recovery: Returning the network and systems to operational status. Ensuring that the root cause of
the compromise is also dealt with, such as patching vulnerable systems.

• Lessons learned: Following up on the incident with a post-mortem meeting and incident report.

Remediation efforts that are implemented in haste and without proper planning often end in failure. Attempting to block the enterprise perimeter or remove affected hosts from the network without identifying the cause of the attack will only cause the attacker to increase the complexity and sophistication of the attack, or to remove all traces of it until a later time.

Incident response must be planned, co-ordinated and executed quickly after all infected hosts have been identified. If the enterprise does not have people trained in this level of remediation, then third-party incident response teams should be brought in.

9.5 Summary of Enterprise Defence Mechanisms
In addition to implementing best standards and practices for IT security, enterprises should ensure that a
robust system of log capture and analysis is deployed, in particular:

• Implement logging at DNS servers, DHCP servers, VPN concentrators, Windows application, system and security event logs, proxy and file servers, anti-virus and intrusion detection systems.

• Ensure that key information such as host name and IP address pairings, audit success/fail indications and URL browsing information is included.

• Store logs in a central place or, ideally, in a Security Information Event Management (SIEM) system for aggregation and correlation.

Other best practices include:

• providing security awareness training to executives and other likely targets of spear phishing attacks;
• deploying "defence in-depth" infrastructures to limit the ability of malware to move throughout the network;
• filtering emails that contain URL links or file attachments;
• monitoring outbound connections for suspicious activity;
• deploying file integrity and system restore solutions that are triggered whenever critical files are modified;
• deploying malware analysis, treatment and handling capabilities within the network; and
• placing sensitive information on highly secure servers with strong authentication.

9.6 Enhanced Enterprise Security Architecture

In the most basic constructs, trusted security engineering must consider the security attributes and properties at every layer: from the physical to the synaptic, semantic and abstract.

The traditional model of enterprise security architecture is based upon a simple (special) closed-system model. External interdependencies are not considered and the cloud (upstream zone) does not exist. In reality, the public Internet closes in around the enterprise and malicious traffic permeates this zoning model in both directions. Quantitative analytics confirm this through inbound-outbound monitoring of autonomous address spaces. The reasons are that all systems are open, there is no defined perimeter, and traditional security mechanisms can neither detect nor prevent the majority of cyber attacks today.

Now consider a new general, or enhanced, model for enterprise security architecture. This one accepts that infostructures are open systems that exist within the cloud zone. We begin by managing control planes on the network, such as DNS and trusted routing upstream. By layering on upstream security services at all layers, from the cable plant to the knowledge-sphere, the architecture is provided true defence-in-depth. Proactive defence operations that detect botnets using darkspace, and that interdict and disrupt emerging threats like DDoS, only become possible in the cloud. Organizations under this model achieve close to 99.99995% security when measured as the percentage of malicious traffic.

[Proactive Cyber Defence and the Perfect Storm, Dave McMahon, Bell Canada, 2009]
The manner in which one migrates from a traditional (special) or limited protection model to a general (enhanced) security architecture can make the difference between success and failure. Traditionally, managers have deployed point solutions until they exhaust their security budgets, with no regard to security ROI. The order in which you secure your organization is important. We propose first doing the things that will eliminate the greatest amount of malicious traffic; call this the must-do list. With the remaining budget, choose solutions that generally provide good security value for dollar spent. Should you still have money to burn, or be sufficiently paranoid, then consider some high-grade safeguards. There is a law of diminishing returns whereby absolute security becomes unaffordable and rapidly increases business risk. Currently, most organizations have this backwards, spending an inordinate amount of funds on things like Type 1 cryptography, multilevel security and TEMPEST cell phones, when their networks cannot withstand the most rudimentary DDoS attack or botnet penetration.

A parallel model to consider is one of proactivity. Money spent on preventing an attack is far more effective than money spent recovering from an incident. Ironically, nearly all public security initiatives to date focus on response and disaster recovery.

The major difference between the models is the design of a protection path from the cloud, to the enterprise ingress/egress point, to the end user. Emphasize proactivity and implement according to security ROI at all layers.

9.7 Inter-network Security in the Cloud

The Internet Service Provider (ISP) cloud forms the first and most important security zone of any organization. From a national perspective, trusted Internet connectivity, core intelligence, centralized monitoring and cleaning, DDoS protection, DNS security and supply chain/operational security form the foundation of a national cyber security strategy, as published in the US Comprehensive National Cybersecurity Initiative (CNCI).

Managed Trusted Internet Protocol Services (MTIPS), Trusted Internet Connection Access Providers
(TICAP) and upstream security services provide Enterprise networks the effective defence-in-depth that
they require to fight cyber crime.

Cyberspace is composed of a number of large peering "tier-1" carrier networks. These large carriers provide wholesale services to smaller "tier-2" networks such as Internet service providers or enterprise wide area networks. At the edge are millions of users and devices. A large carrier like Bell Canada, for example, maintains over 27 million connections and an IP space of around 50 million, about 46% of market share in Canada and 80% of government.

The aggregation of services and control planes to large national carriers as the centre of cyberspace
brings unique capabilities for cyber defence.

Traditional network perimeter and host-based defences have reached their theoretical limitations. Quantitative metrics at a national level have demonstrated that 80 percent of zero-day and 99 percent of sub-zero-day malicious exploits go undetected by traditional means such as anti-virus, firewalls, unified threat management systems or intrusion detection systems. Furthermore, the current mean volume of malicious traffic in public and private sector bandwidth is around 50 percent, with a host infection rate of 5 percent.

Most net-centric attacks are invisible because the malicious payload is embedded in legitimate-looking content over high-velocity services such as web or mail. Neither heuristic nor signature-based detection systems can keep up with the 80,000 new zero-day exploits every 24 hours. The end user has very little visibility into the scope of the attack and does not have enough data points to triangulate or identify the

Zero-day attacks are measurable at the network centre to within 99% accuracy. A major carrier can filter approximately 94% of malicious traffic and attacks without unnecessary intrusiveness. To clean the pipe to 99.99995% confidentiality and integrity would require the explicit permission of clients, for it would involve filtering based on content and invoking highly restrictive security policies. This level of assurance has been achieved on some corporate networks. Internet safety is ironically limited by net neutrality and

Fortunately, malicious activity can be identified at "upstream" confluence points in the core, or choke points where all traffic must pass. The full extent of an attack can be determined by identifying patterns in a much larger dataset. Once detected, sub-zero-day attacks can be managed through a variety of advanced techniques in the cloud at much greater efficiencies than traditional perimeter defences, and at a fraction of the cost.

9.8 Upstream (Cloud) Security

Upstream security will become a de facto element of enterprise networks in the coming years, as standard in security architecture as the firewall. Upstream security is a layer of information and communications controls and safeguards available beyond the enterprise perimeter, up in the carrier network in a place typically considered a no-man's land, the "open Internet". In fact, this is a highly engineered environment that can be deliberately re-purposed to form a security layer that benefits the efficiency and effectiveness of any enterprise security program.

Carrier networks have typical capabilities that can be effectively re-deployed to create additional layers of security to compensate for the ever-increasing variety and effectiveness of Internet-based threats. We seek to establish a framework and naming convention for carrier network elements that can be mixed, matched and deployed as upstream security, even if branded in other ways by individual providers. The utility of this effort lies in the resulting ability to recognize and categorize an upstream security element decisively for important business functions like audit and compliance, return-on-investment calculations, service architecture and development, and risk management.

Upstream security is an additional layer of security that can be deployed in the top-level carrier networks to enhance the security of enterprise (business) and domestic users alike. Figure 16 shows an upstream security layer in relation to a traditional security architecture, which begins at the network perimeter operated by the enterprise.

Figure 16: Upstream security architecture

9.8.1 Collection Elements of Upstream Security

At its highest levels, upstream security can be considered as two distinct service-classes: a proactive
service and a response service. The proactive service is defensive and applied by enterprises as a new
security, intelligence and surveillance layer in advance of the discovery of threats, risks or impacts.
Investment in this type of upstream security is akin to investments in secure perimeter technologies like
firewalls and mail filters – but within the carrier network as opposed to on or within the enterprise
perimeter. The second variety of upstream security is a reactive, response-driven security layer
associated with interdiction and mitigation of ongoing attacks, reverse engineering threats and
performing forensics and post-mortems. This type of upstream security is related to the Computer
Incident Response Team (CIRT) capabilities that most sophisticated enterprises will possess to some
extent (though less sophisticated enterprises will leave this capability to ad hoc remedies at the time of
impact). Figure 17 depicts the different information collection elements that compose the ecosystem of
upstream security capabilities.

Figure 17: Anatomy of Upstream Security

Table 2 is a summary of the different information elements that comprise upstream security and how they
can be deployed: either as proactive security control or mitigating controls to be applied for incident
response and remediation.

Table 2: Upstream Security Elements

Source | Summary | Proactive carrier-intelligence | Carrier-grade response
Traffic flow analysis | National-level traffic patterns reveal suspicious communications paths and data flows to malicious or compromised IPs and Autonomous System Numbers (ASNs), or rogue connections | Yes | Yes
Infrastructure elements (drop list) | Router drop lists applied to diverse ingress/egress points from alternate service | |
DNS analysis | Domain name lookup statistics and logs reveal incongruous matches between IP addresses and domain names (pharming) and command and control communications paths | Yes | Yes
Messaging analysis | Spam and phishing attacks crossing or leaving the carrier network reveal IPs of compromised devices acting as spam relays and engines | Yes | Yes
P2P analysis | File sharing traffic indicates violations of enterprise acceptable use, data leakage, command and control | Yes |

9.8.2 Traffic Flow Analysis

Traffic flow analysis (also known as core traffic analysis) is the observation of network patterns at the
carrier level to identify illicit activities. When observed at the network edge (the enterprise outer
perimeter) this behaviour appears random. Viewing this traffic from the carrier level is like viewing
events on earth from orbit. Zero-day attacks, illicit activities, penetration attempts and compromises
which are largely undetectable by security products at the edge reveal themselves through
communications patterns. Unlike the "infrastructure elements" discussed below, traffic flow analysis
must be explicitly established and supported through carriers by provisioning special skills,
architectures and equipment.

Traffic flow analysis at the core is the observation of network traffic patterns over a carrier network to
identify illicit activities, for instance, end points which scan horizontally through IP ranges or vertically
through sequential IP addresses. When observed at the network edges (by individual organizations) this
behaviour is so widespread to the point of appearing random.

When observed from the network centre, concerted illicit behaviour seldom appears random. Patterns
become visible related to:

• Bogon traffic: attempts to send packets from unassigned IP space or to unallocated IP space betray illicit activity such as spoofing and scanning
• C&C pathways: traffic related to compromised devices displays heuristics distinct from typical, benign traffic

Once patterns are detected in the carrier network centre, further analysis reveals:

• C&C end-points: the "bot herders" and the IPs they use to control the "zombies", or the proxy devices
they use to relay commands to the zombie end-points. The identification of C&C resources allows
carriers to compile dynamic lists of both known bad IP addresses and bad ASNs. Once the bad sources
are revealed through network centre analysis, a further layer of intelligence related to zero-day
exploits starts to become visible:
• Targets: services and/or assets subjected to probes and data flow from bad sources, i.e. organized
crime, spammers, hostile state-sponsored entities attempting to compromise specific organizations
• Compromised assets: compromised users are identified by intermittent and/or continuous bi-
directional communication with known, bad sources.
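The scanning patterns described above can be sketched in code. This is a minimal illustration, not a production flow-analysis system: the flow-record format and the `min_targets` threshold are assumptions for the example.

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, dst_port) tuples, as might be
# exported by a carrier's flow collectors. Names and format are illustrative.
def find_horizontal_scanners(flows, min_targets=10):
    """Flag sources contacting many distinct hosts on the same port --
    the horizontal scanning pattern visible from the network centre."""
    targets = defaultdict(set)  # (src, port) -> set of destination IPs
    for src, dst, port in flows:
        targets[(src, port)].add(dst)
    return sorted({src for (src, port), dsts in targets.items()
                   if len(dsts) >= min_targets})

# Simulated flows: 198.51.100.7 probes 12 hosts on port 445 (scanner-like),
# 203.0.113.5 contacts only 2 hosts (benign-looking).
flows = [("198.51.100.7", f"192.0.2.{i}", 445) for i in range(12)]
flows += [("203.0.113.5", "192.0.2.1", 443), ("203.0.113.5", "192.0.2.2", 443)]
print(find_horizontal_scanners(flows))  # -> ['198.51.100.7']
```

At carrier scale the same grouping would run over streaming flow exports rather than an in-memory list, but the detection logic is the same.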

A further element of traffic flow analysis is the sharing of threat information among carriers and other
large networks operators. Information about known bad IPs and ASNs is important to all major network
operators as a matter of managing the assurance of their networks. Since the network is, by definition,
the cyber-attack route, the more attacks the greater the “noise” and waste in the network. The less noise
the more effective the network’s capital utilization. For this reason large carriers share information about
bad IPs and ASNs on a confidential and collegial basis in order to apply varying approaches to mitigation.
Therefore the sum capabilities associated with flow analysis and carrier grade intelligence depends also
on “softer” business and professional relationships among approximately 2500 large network managers
(telco’s, ISPs, cable companies, enterprises) world wide.

These IPs and ASNs are either controlled by, or sometimes operated by, organized crime and malicious
state-sponsored entities. The distinction between "control" and "operation" is that IPs may belong to
legitimate organizations, but have become so badly compromised by malicious entities that they are in
effect "bad". This capability to perceive the network sources of the most lethal threats engenders the
ability to proactively address, with high degrees of confidence and specificity, the latest (zero-day)
threats and compromises propagated by high-resource threat agents (organized crime and malicious
state entities). White-listing trusted networks is just as important.

Other sources of closed source information used in traffic flow analysis are derived from law
enforcement and customer complaints.

Supporting law enforcement efforts related to data interception is a mandated requirement for most
regulated carriers; in the course of supporting lawful access requirements, information from police
intelligence is cross-pollinated with other closed source information within the carrier for a richer, final
product. Customer complaints are also a source of closed source information, as customers attempting to
cope with a compromised device will often contact the ISP or carrier first because they assume (wrongly)
that the degradation of service they are experiencing is related to a network problem. Such support calls
frequently reveal severely compromised machines and accelerate their identification as either "zombies"
in botnets and/or command and control servers.

“Open source” information will also be integrated as an adjunct to closed source information. Open
source information is freely available or available through groups with open memberships; lists of bad IPs
can be found on various security vendors' websites as well as sites dedicated to security such as the Spam
and Open Relay Blocking System (SORBS)xi or SpamHaus. Open source information also includes the
signatures and profiles of malware distributed by vendors as part of a product-support service like anti-
virus or intrusion detection.

Legal Opinion: Upstream security services mitigating threats such as botnets and DDOS are derived from
operational infrastructure and processes in the network core. These security services are not tied to any
tariffed products, and are therefore forborne. The service elements relate to suspect devices on the
Internet behaving in a manner that indicates possible compromise. Service elements include:

• IP Address
• Port
• Protocol

Business clients use this information to manage network traffic outbound (from) their network(s) to these
suspect-devices, as appropriate, under their individual security policies and programs. No identifying
information about the suspect-device-owner is provided.

The prevailing legal stance is that there are no privacy issues and no regulatory issues as the provision
by the carrier of the limited information does not include customer information nor does it constitute a
telecommunications service and so no tariff is required (nor a forbearance order).

9.8.3 Infrastructure Elements

Network infrastructure elements from the carrier core can be extended through configuration and human
analytics to provide a range of upstream security services, including both proactive carrier-defences and
carrier-grade intelligence. In all cases, this extension of infrastructure capabilities depends on the
availability of traffic flow analysis, which serves as the key that unlocks the upstream security value of the
infrastructure elements and associated services. Network infrastructure elements include security routers
employing drop lists, Domain Name Services (DNS), Messaging analysis and Peer-to-peer (P2P) traffic
shaping analysis. Infrastructure elements in the cloud are supported by special purpose Threat
Management Solutions (TMSs) and Deep Packet Inspection (DPI) systems that are specially designed to
detect DDOS attacks, spamming and identity theft.

Drop Lists

A security drop list is a common instruction set across routers on the carrier-border (peering points) and
routers in the carrier core. If a packet from or to an IP on the drop list arrives at one of the routers’
interfaces, it is sent to a null interface or “blackhole” where the packet is essentially deleted from memory
and not passed any further. Great care must be taken when applying drop lists because a single IP
address may easily host both legitimate and illicit sites. An IP address may also be legitimate in that it is
performing important “innocent” functions for legitimate users – but also hosting parasitic services like
‘bots’, spam relays, IRC or malware servers. Implementing a drop list must occur across all peering
interfaces simultaneously, because malicious entities can change upstream providers rapidly and
therefore the inbound / outbound route to these IPs and ASNs can also change rapidly.

While managing and implementing drop lists at the enterprise perimeter is possible, it is not a high-
efficiency activity for deterring or preventing inbound attacks such as spamming, DDOS, Phishing or
botnets. The problem is that the attacks will have progressed right to the enterprise border before being
addressed. In the case of DDOS attacks, this is already too late: the largest DDOS attacks have been
observed surpassing 40 Gbps – enough to overwhelm any non-carrier enterprise at this time. Therefore
the relatively small number of external interfaces being managed by a typical enterprise (relative to a
carrier) means that the advantages of drop lists are limited to outbound traffic – preventing internal
devices from (unwittingly) reaching malicious entities. However, without the technical and industry
resources available to carriers, any enterprise drop-list will likely be based on open source information
and possess gaps. Given the effort associated with maintaining and managing drop lists, enterprises
need to assess if it is even worth supporting such a safeguard locally.
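The drop-list check itself can be illustrated with a short sketch. This is an assumption-laden toy: the CIDR blocks are placeholder documentation ranges, not real threat intelligence, and a carrier router would do this lookup in hardware, not Python.

```python
import ipaddress

# Illustrative drop list of blackholed prefixes (placeholder CIDR blocks).
DROP_LIST = [ipaddress.ip_network(cidr)
             for cidr in ("203.0.113.0/24", "198.51.100.64/26")]

def should_drop(packet_ip: str) -> bool:
    """Return True if the address falls inside a blackholed prefix --
    the packet would be routed to a null interface and discarded."""
    addr = ipaddress.ip_address(packet_ip)
    return any(addr in net for net in DROP_LIST)

print(should_drop("203.0.113.44"))  # True: inside a dropped /24
print(should_drop("192.0.2.10"))    # False: not on the list
```

The same membership test must be applied consistently at every peering interface, which is why the text stresses simultaneous deployment across the carrier border.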

DNS Analysis
DNS infrastructure in a carrier network is often far more substantial than that of a normal enterprise, and
can provide very valuable upstream security services. Because carrier DNS services will often support
millions of direct, domestic users and thousands of businesses, they see a lot of traffic. DNS is also known
to be a critical asset worthy of direct attack, because compromise of DNS services can result in wholesale
fraud of dependent users.xv For these reasons, DNS services in the carrier generate at least two useful
pieces of information: who has been compromised by malware, and who is launching attacks against
specific assets.

It is very typical of the worst forms of malware to encode a DNS name as the "call-home" command-and-
control address once a device has been compromised. Using a DNS name rather than an IP address gives
the bot-master the advantage of being able to change C&C servers to avoid detection and for
redundancy. Awareness of the DNS names being used for C&C operations allows DNS operators to set
alerts whenever the malware domain is queried, and commence response operations, since the device is
very likely compromised or at best fatally curious. DNS records may reveal useful information such as the
address of the request, time, machine operating system of the victim machine, and the variant of malware
which has been installed. The variant of the malware is revealed by using traffic analysis to observe the
traffic patterns going to the C&C location for their heuristics (port, protocol, payload sizes,
communications frequency and timing). Because successful malware is so often “zero-day” and no
signatures exist from the IDS/IPS/AV vendors, these heuristics provide enough information to trace
compromised machines on the internal enterprise network without the benefit of the binary signatures.
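The alerting step described above reduces to matching query logs against a watch list. A minimal sketch follows; the domain names and the log-record layout are hypothetical examples, not real indicators.

```python
# Hypothetical watch list of known C&C domains (placeholder names).
MALWARE_DOMAINS = {"update-check.example.net", "cdn-sync.example.org"}

def dns_alerts(query_log):
    """query_log: iterable of (timestamp, client_ip, queried_name).
    Returns an alert per lookup of a watched domain, carrying the fields
    noted in the text: requesting address, time and queried name."""
    return [{"time": ts, "client": ip, "domain": name}
            for ts, ip, name in query_log
            if name.lower().rstrip(".") in MALWARE_DOMAINS]

log = [
    ("2010-05-06T10:00:01", "10.1.2.3", "www.example.com"),
    ("2010-05-06T10:00:02", "10.1.2.9", "UPDATE-CHECK.example.net."),
]
print(dns_alerts(log))  # one alert, for client 10.1.2.9
```

Normalizing case and the trailing dot matters in practice, since DNS names are case-insensitive and logs may record the fully qualified form.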

As DNS services become the target of attack and compromise, they also possess the potential to become
a source of information about which assets are being targeted. DNS logs can reveal attempts to subvert
DNS records associated with any domain not managed from the DNS service in question. For instance,
repeated attempts to hijack the domain of a banking entity are a strong warning that the specific service
has been targeted. While the attack may fail on the carrier DNS, it may succeed on less well defended
DNS servers run by smaller ISPs or enterprises. The owner of the asset may have a variety of response
options that would be activated based on the threat intelligence coming from the carrier DNS records.

Messaging Analysis

Messaging analysis and infrastructure is de rigueur for carriers coping with the fact that 95%+ of email on
the internet is illicit. The ability to manage email security is paramount for large organizations and
enterprises in the position of applying more and more resources to protect a rapidly shrinking proportion
of legitimate versus illicit messaging arriving at their perimeter. This represents an opportunity for
carriers to offer upstream message filtering, which is in effect the outsourcing of message cleaning. A
significant by-product of large-scale message cleaning is the insight available from messaging patterns
(both malicious, and innocent but unauthorized).

A variety of content filters will typically be applied at major mail aggregation points. It is useful to
understand the nature of these filters because the reports they generate can be applied to proactive
threat and risk management, if the sample-set is of sufficient size and diversity. The first distinction to be
made is between inbound and outbound message filtering.

Inbound filtering is related to messages arriving at the carrier aggregation point from domains external to
the destination domain and / or from peering networks. These filtering metrics indicate threats to the
organization, enterprise or user base. Outbound filtering is related to messages leaving an organization for
external domains. Reports from outbound filters are therefore of particular interest because they can
indicate security issues related to data leakage, inappropriate usage, flawed configurations and especially
devices compromised by known threats/malware. Therefore outbound filters can be used to alert
proactively when undue and suspicious amounts of outbound messages are being filtered. Similarly,
outbound messaging analysis can be applied reactively to trace compromises and malware which have
breached or circumvented internal security controls, and have started the next phase of their malicious
lifecycle: to propagate.
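The proactive alerting described above amounts to thresholding outbound-filter counts per internal host. The sketch below assumes a simple per-host count of blocked outbound messages; the baseline and multiplier are illustrative tuning knobs, not values from the report.

```python
# Sketch of proactive outbound-filter alerting: flag internal hosts whose
# count of filtered (blocked) outbound messages exceeds a baseline multiple,
# a rough indicator of a compromised device acting as a spam engine.
def outbound_alerts(filtered_counts, baseline=20, multiplier=5):
    """filtered_counts: {host_ip: messages blocked by the outbound filter}.
    Returns hosts whose blocked volume crosses baseline * multiplier."""
    threshold = baseline * multiplier
    return sorted(h for h, n in filtered_counts.items() if n >= threshold)

counts = {"10.0.0.5": 12, "10.0.0.9": 4500, "10.0.0.12": 80}
print(outbound_alerts(counts))  # -> ['10.0.0.9']
```

In a real deployment the baseline would be learned per host or per domain from historical filter reports rather than fixed.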

Peer to Peer Traffic Shaping Analysis
Peer to peer (P2P) analysis AKA “traffic shaping” is another key upstream element of a modern carrier
network. P2P analysis is deployed in order to manage bandwidth where a small number of users
consume a hugely disproportionate amount of resources. P2P analysis involves real-time inspection of
traffic streams from IP addresses, looking for tell-tale signs of file sharing applications such as Kazaa,
eMule, BitTorrent and a range of other similar tools. These applications distinguish themselves not
just by large bandwidth consumption but also by the ports and protocols they use, the nature of the
payload and the destinations they may be communicating with in order to coordinate file sharing. P2P
analysis infrastructure has become critical to Tier-1 carriers, credited with “reclaiming” a substantial part
of the network from mostly illegal activities which threaten the assurance of the whole network – not just
copyrights on music and movies.

In using P2P infrastructure for upstream security applications, both proactive and reactive capabilities
become apparent. From the proactive perspective, P2P infrastructure, like DNS infrastructure, can be
configured to monitor and issue alerts when new P2P sessions are initiated from domains or IPs which
should absolutely not have P2P traffic. For instance, P2P traffic exiting from most enterprises is an
indication of misuse of corporate network assets at best, and malware compromise at worst. It is also a
well established fact that many P2P applications will be dual-purpose: they will support file sharing
according to user expectations, but will also index and surreptitiously expose everything on the computer.
In this way, any personal or corporate information which is resident on the system will become exposed to
the P2P network. Analysis of P2P search strings cascading through the file sharing networks shows
plenty of evidence of queries related more to espionage and identity theft than copyright infringement.
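The port-and-behaviour heuristics mentioned above can be sketched crudely. The port numbers and thresholds below are illustrative assumptions; real traffic-shaping systems rely on deep packet inspection of payloads, not port numbers alone.

```python
# Well-known default ports historically associated with P2P clients
# (BitTorrent, eMule, Kazaa); defaults only -- clients can and do move ports.
P2P_PORT_HINTS = {6881, 6882, 6889, 4662, 1214}

def looks_like_p2p(flow):
    """flow: dict with 'dst_port', 'bytes', 'distinct_peers'.
    Flags flows on known P2P ports, or high-volume flows fanning out to
    many peers (the swarm behaviour typical of file sharing)."""
    return (flow["dst_port"] in P2P_PORT_HINTS
            or (flow["bytes"] > 50_000_000 and flow["distinct_peers"] > 30))

print(looks_like_p2p({"dst_port": 6881, "bytes": 10_000, "distinct_peers": 1}))  # True
print(looks_like_p2p({"dst_port": 443, "bytes": 5_000, "distinct_peers": 2}))    # False
```

The second clause matters because modern P2P clients randomize ports; the fan-out pattern survives port changes.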

P2P infrastructure also possesses value as a responsive tool to address incidents within the enterprise
perimeter from the carrier core. Consider an enterprise that has a requirement to shut down all P2P
traffic without necessarily interfering with other outbound, corporate traffic. In this case a punitive
management rule can be put in place for the enterprise domain, throttling all P2P traffic while allowing
business to continue otherwise unaffected. While this sort of control is in place, the security and IT staff
within the enterprise can undertake the process of enforcing (anti) P2P policy, with increased confidence
that they are not merely relying on voluntary compliance. While controlling P2P communications at the
enterprise perimeter is also viable, this returns to the issue of more tools for internal staff to buy, monitor
and refresh; meanwhile the carrier already supports the same capability as a core competency.

9.8.4 Upstream Security Use Cases

The capabilities we have described so far in this document are not proposed as standalone solutions. The
traffic flow intelligence and the infrastructure elements are all components to be mixed and matched
according to the security requirement. Potentially, dozens of security and operational risk management
bundles could result. What follows are sample use-cases of upstream security solutions composed of
different capabilities derived from carrier network capabilities.

The following use cases describe the major scenarios that can be mitigated by upstream security and
how organizations are involved.

Use-case 1 - Botnet detection with Traffic Flow Intelligence

Malware and the ensuing botnet compromises are distressingly frequent because the binaries and
communications paths can remain undetected for extended periods. The traffic flow analysis use case
in Figure 18 can be applied to substantial effect as a mitigating control for zero-day malware
compromise. Devices which have been infected with new or unknown forms of malware can be
observed from the carrier network. Alerts can be issued to enterprise response teams for follow-up on
the internal network. The same intelligence can be replicated inside the enterprise network for
deployment inside IDS and IPS services as blacklists, alerting administrators if any of the internal
devices attempt to communicate with the known bad or suspected IP addresses from the traffic flow
intelligence.
Figure 18: Botnet detection use case

Use-case 2 - DDOS Mitigation with Traffic Flow Intelligence

DDOS attacks pit the combined force of potentially millions of compromised devices against an
enterprise. Figure 19 is a simple depiction of DDOS mitigation using traffic flow intelligence, where
the attack is detected in its early stages. Before the attack can grow to the point of impacting the
targeted service, the traffic is routed within the carrier network to scrubbing centres to remove the
attack traffic and allow legitimate traffic to be routed to its destination (on the targeted host or
domain). Because the carrier will typically possess dozens of peering points where the attack traffic
enters its autonomous network, the attack does not have the opportunity to sufficiently converge on its
target or pose an imminent threat to the carrier's network.

Figure 19: DDOS mitigation use case

Use-case 3 - Rogue Connection Detection with DNS and Messaging

Large organizations with many geographic locations can be faced with the challenge of detecting and
preventing unauthorized and uncontrolled network connections. In the case of internet connections,
these network access points can present lethal backdoor threats to the enterprise as a whole, as
illustrated in Figure 20. Messaging analysis in the carrier core can monitor for messages originating
from or destined to domains associated with the enterprise but using IP addresses not part of the official
list of ingress or egress points. DNS analysis can similarly reveal domain names associated with an
enterprise that are resolving to an IP not part of the official list of ingress and egress points.

Figure 20: Rogue connection detection

Use-case 4 - Fourth Factor Authentication Combining Drop List, Messaging, Traffic Flow and DNS Analysis

On-line businesses are escalating their authentication requirements and technology, but continue to
be thwarted by sophisticated threats. These systems increasingly employ additional factors: what you
know (i.e. a password), what you have (i.e. a random number token) and what you are (a biometric). The
newest threats, which include zero-day malware and the resulting botnets, employ techniques which
capture multiple factors of authentication information to circumvent these "adaptive authentication"
systems. A fourth factor of authentication can be considered: what a device has been observed to be
doing (i.e. communicating with known bad IPs and ASNs). The aggregated information from drop lists,
messaging analysis, traffic flow analysis and DNS logging can be applied to form an additional piece of
authentication criteria upon which on-line businesses can base authentication escalation decisions.
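One way to aggregate the four signals is a weighted risk score that drives the escalation decision. The signal names, weights and threshold below are entirely illustrative assumptions for the sketch, not values from the report.

```python
# Illustrative weights for the four upstream signals (assumed values).
WEIGHTS = {"drop_list_hit": 0.4, "spam_relay": 0.25,
           "flow_to_bad_asn": 0.25, "malware_dns_query": 0.1}

def device_risk(signals):
    """signals: {signal_name: bool} observed for the connecting device.
    Returns a 0..1 score summing the weights of the signals present."""
    return round(sum(w for s, w in WEIGHTS.items() if signals.get(s)), 2)

def requires_stepup(signals, threshold=0.3):
    """Escalate authentication when the device's risk crosses the threshold."""
    return device_risk(signals) >= threshold

obs = {"drop_list_hit": False, "spam_relay": True,
       "flow_to_bad_asn": False, "malware_dns_query": True}
print(device_risk(obs), requires_stepup(obs))  # 0.35 True
```

A linear score keeps the decision auditable: each escalation can be traced back to the specific upstream observations that triggered it.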

Figure 21: Fourth factor authentication

It is important to understand upstream security in a manner that allows enterprises to distinguish
among carrier-based security products and services in a more informed manner. Malware and the
resulting botnets will continue to present the most significant threats to on-line services in the coming
years. This includes threats to financial transactions and profits, threats to privacy and compliance,
threats to industrial systems and safety, threats to confidentiality, integrity and availability.

These threats cannot be sufficiently addressed by the legacy, perimeter-based security designs,
which start at the enterprise perimeter and face inwards; they require treatment starting upstream at
the carrier level.

This is not a new concept, but it is newly relevant. What is novel today is the realization that
continued investment in legacy security designs has passed the point of diminishing returns. Security
designs inclusive of carrier-based upstream security represent an efficient, effective and needed
alternative.
9.9 Malware Analysis Treatment and Handling

Malware Analysis Treatment and Handling (MATH) labs are used within the cloud to isolate botnets
and identify the victim hosts and the C&C servers. Since MATH depends on port scanning in dark IP
space it is best suited to the cloud where service providers see a broader swath of illicit activity and
have the resources to re-direct botnet traffic to the analysis, treatment and handling segments of a
MATH lab. The components of a MATH lab are illustrated in Figure 22. MATH labs are operational
today. Darkspace analysis was listed as the highest e-security R&D priority for the GC in 2010. A
detailed design of MATH is out of scope for this study. A complete design was proposed in Public
Security Technical Program (PSTP) Call No. 2, e-Security Community of Practice Study No. 2, Study
on the Analysis of Darknet Space for Predictive Indicators of Cyber Threat Activity. We have tabled
this topic under future work.

Figure 22: Malware Analysis, Treatment and Handling (MATH) Lab

The operating characteristics of the analysis, treatment and handling segments are described below.

9.9.1 Analysis
IP address allocations for corporate use are usually done using contiguous blocks of space. Small
portions (4-8 contiguous IP Addresses initially located at the beginning of the address block) are
skipped in the assignment process. These small portions of IP Space are all statically routed towards
a centralized sinkhole environment and are declared “Dark” – meaning no IP activity should take
place in these address spaces. Analysis of any IP traffic detected in these dark IP address spaces,
e.g. in a 131.101.x.x range, will easily pinpoint any source IP address that is "scanning", which
usually indicates bot compromise of the host.

A network intrusion detection system can be used to report suspicious port scanning activities in dark
IP space. Once a dark space endpoint IP address has been identified, a sample of the initial
communication from the infected host can be obtained by installing a host at the endpoint and by
configuring the host to respond to any TCP/UDP connection. Preliminary analysis of this data can
yield the source address of the host within the network and the address of the C&C server.

The traffic from the host under investigation can be placed under closer traffic analysis to reveal more
information about the C&C or the malware can be identified and manually extracted from the host in
the “Handling” segment.
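The dark-space detection step above can be sketched directly: any source that sends packets into the reserved dark sub-blocks is flagged as a likely scanner. The dark prefixes below are illustrative 8-address (/29) blocks modelled on the 131.101.x.x example; real deployments would route these blocks to the sinkhole.

```python
import ipaddress

# Illustrative dark sub-blocks (4-8 contiguous addresses skipped during
# assignment, per the allocation scheme described in the text).
DARK_BLOCKS = [ipaddress.ip_network("131.101.0.0/29"),
               ipaddress.ip_network("131.101.1.0/29")]

def dark_space_hits(packets):
    """packets: iterable of (src_ip, dst_ip) pairs seen at the sinkhole.
    Any source addressing a dark block is a suspected scanner/bot."""
    suspects = set()
    for src, dst in packets:
        if any(ipaddress.ip_address(dst) in block for block in DARK_BLOCKS):
            suspects.add(src)
    return sorted(suspects)

pkts = [("10.9.9.9", "131.101.0.3"),   # hits a dark address -> suspect
        ("10.1.1.1", "131.101.4.7")]   # normal, assigned space
print(dark_space_hits(pkts))  # -> ['10.9.9.9']
```

Because no legitimate traffic should ever address the dark blocks, any hit is a high-confidence signal, which is what makes this technique attractive despite its simplicity.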

9.9.2 Treatment
The purpose of the treatment segment is to act as a re-direction point for infected hosts. The first
step is to identify the process behind the port scanning on a single host which should then lead to the
underlying malware. Once the method of communicating with the C&C server has been established,
a duplicate environment is configured on the treatment segment and the infected hosts are re-
directed to the fake C&C server in the treatment segment. This effectively stops the bot agent from
communicating with the real C&C server and, once the C&C commands are understood, the infected
hosts can be instructed to cease port scanning or other malicious operations on the network.

9.9.3 Handling
The handling segment consists of a series of hosts that represent standard configurations for the
enterprise environment under attack. Each operating system and operating system version should be
implemented in the handling segment.

The handling segment is most useful for large service provider or carrier environments when time is
of the essence and determining exact malware behaviour is of the utmost importance. Although AV
vendors are generally proficient in providing proper signatures for a given botnet infection, they do not
always address all host configurations in an enterprise and the signatures sometimes do not
completely describe the malware in question, e.g. how it behaves in a virtual environment. Highly
skilled IT security technicians who are knowledgeable in malware research and reverse engineering
techniques can analyze the malware sample in the handling segment and develop a mitigation
strategy before the AV or product vendor responds with an updated signature or patch.

A significant proportion of malware is packed (encrypted and/or compressed) in order to produce
smaller code and, mainly, to avoid signature-based antivirus detection. Packer identifiers such as PEiD
can automatically determine the specific packers, encryptors and compilers used for PE files.

Another method of determining the packer used for a malware specimen is to analyze it in a
debugger. OllyDbg is one example of such a debugger: "OllyDbg is a 32-bit assembler level analysing
debugger for Microsoft® Windows®. Emphasis on binary code analysis makes it particularly useful in
cases where source is unavailable." A third option is to disassemble the malware into readable
machine-level instructions that can be analyzed to determine the logic behind the malware. IDA Pro is
such a tool and is available as a demo.
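Alongside signature-based identifiers like PEiD, a crude complementary heuristic is byte entropy: packed or encrypted sections approach a uniform byte distribution, so Shannon entropy nears 8 bits per byte. The threshold below is an illustrative assumption, and this check is a rough screen, not a substitute for a packer identifier.

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for empty input, max 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def probably_packed(section: bytes, threshold: float = 7.2) -> bool:
    """Flag a file section as likely packed/encrypted (assumed threshold)."""
    return shannon_entropy(section) >= threshold

plain = b"MOV EAX, EBX " * 200                       # repetitive, low entropy
noisy = bytes((i * 131 + 17) % 256 for i in range(4096))  # near-uniform bytes
print(probably_packed(plain), probably_packed(noisy))  # False True
```

High entropy also occurs in legitimately compressed resources, so in practice the heuristic is applied per PE section and combined with other indicators.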

9.9.4 Summary
MATH labs are most effectively deployed in a cloud environment and offer the best defence against
zero day botnet attacks. They may also be used by LEAs under special circumstances to investigate
criminal botnet attacks and to pursue cyber criminals. Alternatively, by establishing liaison positions
with the carriers, LEAs can use the information gathered by the carriers' MATH labs to carry out their
investigations without having to make a large investment in highly specialized technology and
personnel.
9.10 Building an Advanced Investigative Capability

The principal challenge facing law enforcement and national security institutions addressing
cyberspace is an absence of appropriate tools, tradecraft, and permissions enabling a clear and
credible path from detection through to prosecution. The staggering growth of the Internet over the
past 20 years has outpaced the capacity for law-enforcement and security institutions to adapt their
procedures, obtain sufficient political guidance, and build the necessary capabilities to sustain
effective policing within this domain.

In part, this challenge is inherent to the architecture of cyberspace, which emphasizes resilience over
security, and where relatively frictionless global connectivity allows perpetrators to act locally, while
hiding globally. The misuse of cyberspace is evident to telecommunications operators tasked with
managing network flows. However, at present, institutional, legal, cultural and regulatory boundaries
make it difficult for law enforcement and security agencies to gain a Common Operating Picture
(COP) of national cyberspace, and gather sufficient pre-warrant awareness to rapidly and effectively
respond to criminal and national security incidents occurring in this domain. Consequently,
cyberspace represents a global platform for criminal and espionage activities, whereas law-
enforcement and security actors are presently bound by institutional and national jurisdictions and
hampered by limited global reach.

Cyberspace also represents a unique challenge in that threats can occur to the network, and through
the network, requiring unique skills and permissions:

• Threats to the network - which includes bot nets, malware, and other means to exploit or
disable computer systems - require technical skill sets not commonly available within law
enforcement and security community. Moreover, standards of evidence are not fully
developed or consistent across national boundaries. Collection can be problematic as often
information gathered in the pre-warrent phase may be critical, but inadmissible as evidence in
a criminal case.

• Threats through the network - such as the exploitation of women and minors, illicit value
transfer, wire fraud, radicalization and information sharing among militant communities, and
other activities that leverage cyberspace - require capabilities to conduct covert and
undercover work in cyberspace.

They also require skills and capabilities to conduct rapid entity resolution, pinpoint attribution, and
collect evidence according to standards that are admissible, resilient to Charter challenges, and
likely to lead to successful prosecutions.

Addressing these challenges will take time. It requires changes at the policy and institutional level - to
establish new standards of evidence, investigatory powers, and legal permission to allow law
enforcement and security actors to monitor national cyberspace to a level sufficient to enable
effective policing. It will require the adoption of new tradecraft and techniques relevant to conducting
investigations in cyberspace, and the training of personnel tasked with these duties. Finally, it will
require law enforcement and security actors to enter into collaborative relationships with private
sector actors to source evidence, provide ongoing situational awareness, and to support and conduct
investigations and related activities.

9.10.1 Situational Understanding and a Common Operating Picture

Creating an effective policing mechanism for cyberspace, such as that required to counter the threat
of botnets, is dependent on attaining situational understanding and a Common Operating Picture. A
rough analogy would be the establishment of a national air traffic control centre for cyberspace, but
one tasked with monitoring rather than directing traffic. Establishing such a capacity requires access
to relevant data sources, and an ability to 'fuse' and analyze these in a meaningful way, and in real
time. Situational understanding - the collection, processing and presentation of data in a way that
makes it meaningful to decision-makers - is required at two levels: the national jurisdiction, that is,
the jurisdiction over which national law enforcement and security actors have direct enforcement
authority; and the global level (necessitated by the common protocols and routing infrastructure of
the Internet). At present, attaining situational understanding is difficult.
Ownership over data is fragmented, and most of it resides with the private sector, including
telecommunications operators and firms specializing in gathering network metrics and intelligence.

A Common Operating Picture, by contrast, would encompass multiple views onto the status of the
national infrastructure, segmented by the specific needs of intelligence, law enforcement, defense
and telecommunications actors. In the UK, one of the early adopters of such an approach at the all-
of-government level, this function is handled by the Cyber Security Operations Centre (CSOC), a
multi-agency fusion centre operating out of the GCHQ facilities in Cheltenham, which
incorporates feeds from both private sector actors and government sources. At the time of writing,
the Canadian government does not possess any shared capacity for situational understanding, nor a
Common Operating Picture.

9.10.2 Advanced Investigative Capabilities

Research into botnets has expanded in recent years, from a relatively small cottage industry involving
primarily technical experts to a budding cyber security industry which now includes academia,
defense, intelligence, law enforcement, and private sector actors. The rapid rise of this industry is in
part a recognition of the significant threat that these criminal ecosystems represent to critical
infrastructure, government systems, personal privacy, and defense. Several high profile cases and
events - including the release of the Ghostnet study detailing alleged Chinese cyber espionage, the
breaches at Google, as well as the way in which DDoS attacks were successfully leveraged during
the Russia-Georgia war of 2008 - have underscored the growing threat environment.

Countering advanced persistent threats, criminal botnets, and DDoS events is a time-consuming
and labor-intensive process. The challenges facing law enforcement are numerous. They include:
access to appropriate tools with which to conduct investigations; access to relevant data and
collection methods; clearly defined standards of evidence that will withstand Charter and court
challenges; and a properly trained workforce capable of undertaking these investigations in a cost-
effective manner that is likely to lead to successful prosecutions. They also encompass methodological
approaches, techniques and tradecraft appropriate to gathering evidence, analyzing data, and
building cases involving complex cybercrime where multiple jurisdictions may be involved.

At an operational level, an investigation of Advanced Persistent Threats (APT) and Botnets can be
broken down into four phases, each requiring specific tools, tradecraft, and process. (The table below
provides a summary of an APT investigation process and associated requirements.)

Detection and triage of cyberthreats (APT, Botnets)

APTs and botnets are part of a broader global ecosystem of crimeware whose size and scope are
impressive. For example, the cloud of computers currently infected by the Conficker virus exceeds
the largest commercial cloud-based providers, such as Google and Amazon, by a significant margin.
Consequently, early detection of threats depends on constant monitoring of the ecosystem, which
includes monitoring for the existence or emergence of crimeware kits within national cyberspace,
and awareness of overall global trends. It also depends on an ability to conduct open source
monitoring of known hacker forums and other online sites used by cyber criminals and others
distributing malware code. Finally, it depends on maintaining up-to-date awareness of specific
signatures and other characteristics of targeted malware (APT). Often, this information resides in
private sector or other difficult-to-access sources. Law enforcement actors need to seek appropriate
permissions for access to this data, and the process required to do so is often difficult and costly.
Detection also requires a trained workforce capable of rapidly processing open source intelligence
and making it available to investigators and other agencies.
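The signature-awareness step described above can be illustrated with a minimal triage sketch: observed network events are matched against a watchlist of indicators of compromise. The feed contents, field names and indicators below are hypothetical, not drawn from any real intelligence source:

```python
# Sketch: triage of observed network events against a malware IOC
# watchlist. Indicators and event fields are invented for illustration;
# a real deployment would ingest commercial or SOC-provided feeds.

KNOWN_IOCS = {
    "domains": {"c2.example-bad.net", "update.fakecdn.org"},
    "md5": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def triage(events):
    """Return (host, indicator_type, value) for events matching the watchlist."""
    hits = []
    for ev in events:
        if ev.get("dst_domain") in KNOWN_IOCS["domains"]:
            hits.append((ev["host"], "domain", ev["dst_domain"]))
        if ev.get("payload_md5") in KNOWN_IOCS["md5"]:
            hits.append((ev["host"], "md5", ev["payload_md5"]))
    return hits

observed = [
    {"host": "10.0.0.5", "dst_domain": "c2.example-bad.net"},
    {"host": "10.0.0.9", "dst_domain": "www.example.com"},
]
print(triage(observed))  # flags 10.0.0.5 contacting a known C2 domain
```

The hard part in practice is not the matching but keeping the watchlist current, which is precisely the private-sector access problem the text identifies.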

9.10.3 Evidence collection

Evidence collection is critical to combating APTs and botnets, but can be difficult both in terms of the
means required to harvest relevant data, and the resulting size and quantity of the data sets that
must be processed and analyzed. Often, investigators must capture very large data sets because it is
difficult to anticipate ahead of time which streams of packets will be important to a particular
investigation. Commonly used packet capture programs such as Wireshark are problematic because
they capture the entirety of the data stream and may inadvertently record traffic and data belonging
to third parties. Collection may also have to occur against a variety of different devices, including
personal computers, cell phones, and embedded devices such as smart meters. The skills and tools
required to recover data from these systems are quite varied, and challenge law enforcement
agencies with limited resources. Finally, evidence collection presents investigators with the problem
of determining the status of evidence: whether it is captured during an intelligence phase, or for
criminal prosecution purposes. In general terms, evidence collection in cyberspace investigations
can be divided into three broad streams:

• In-stream technical data collection - typically obtained by capturing data streams between
targeted systems and/or communications pairs, usually by means of a packet capture
program or devices such as Niksun, CloudShield, Arbor or Nominum. Data can be collected
at the carrier level, targeting a particular communication pair. It can also be collected from a
direct connection to the targeted system or systems (either through the installation of a
warranted surveillance device or through covert Sensitive Site Exploitation). Collecting
technical data as close as possible to the intended victim or perpetrator is a particularly
effective means of identifying specific APTs, as they often use undocumented techniques or
zero-day exploits that lack a conventional signature or fingerprint. Data obtained at the
source can be rapidly parsed, analyzed and imported into advanced link analysis and
visualization software for further analysis, provided the right tools and skills are available.

• Forensic extraction - obtained from seized systems and/or devices, including cellular phones.
The data can be used to reconstruct communications, establish geo-temporal timelines,
support social and link analysis, and recover other critical data relevant to the investigation.
Advanced forensic extraction devices allow for rapid exploitation of data in a format that can
be readily analyzed and exploited by advanced link analysis and visualization software.

• Undercover or covert investigations - carried out through the use of specially constructed
online personas consistent with the legends used for conventional undercover work. Online
personas need to possess identity congruence, ensuring that the characteristics of the
persona are consistent, including such things as the geolocation of the IP address from
which the persona accesses the Internet, the characteristics of the computer system, and
remotely searchable information such as cookies, browser type, etc. The working
assumption should be that a clever target will attempt to reverse-interrogate and verify the
identity of any online persona attempting to enter the circle of trust. Consequently, online
personas require as much attention to detail as the construction of conventional undercover
and/or covert identities for criminal and security investigations. Online personas can be
manually managed in order to collect evidence, or they can be programmed to automatically
harvest data, for example from message boards and other communication streams. Some
systems, such as Cloakroom, allow for the tagging and structuring of data directly within the
investigation environment, which facilitates its incorporation into an investigation being
carried out within an advanced link analysis and visualization platform (such as Palantir).
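The minimization problem noted for in-stream collection (retaining a warranted communication pair without sweeping in third-party traffic) can be sketched as a simple post-capture filter. Addresses and record fields below are invented; real systems would filter at capture time on warranted selectors rather than after the fact:

```python
# Sketch: minimized in-stream collection. Rather than retaining a full
# capture (which sweeps in third-party traffic), keep only packets whose
# endpoints match a warranted communication pair. All values illustrative.

WARRANTED_PAIR = frozenset({"192.0.2.10", "198.51.100.7"})

def minimize(packets):
    """Retain only packets whose two endpoints are exactly the warranted pair."""
    return [p for p in packets
            if frozenset({p["src"], p["dst"]}) == WARRANTED_PAIR]

capture = [
    {"src": "192.0.2.10", "dst": "198.51.100.7", "len": 1400},
    {"src": "192.0.2.10", "dst": "203.0.113.99", "len": 60},   # third party
    {"src": "198.51.100.7", "dst": "192.0.2.10", "len": 800},  # reply direction
]
kept = minimize(capture)
print(len(kept))  # 2 packets retained; the third-party packet is discarded
```

Using an unordered pair means both directions of the conversation are kept, which matters when reconstructing sessions for evidentiary purposes.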

9.10.4 Analysis

Analysis is the most crucial, and perhaps most challenging, component of an APT investigation.
Often it requires access to, and the ability to manipulate and correlate, disparate data sets. Some of
these would be obtained from existing criminal intelligence and information systems. Others require
access to private sector data sources, such as the ability to geolocate IP addresses and obtain other
technical data. Establishing identity in cyberspace is particularly challenging because a single entity
can take on multiple personas. The ability to perform entity extraction and entity resolution is
daunting, and difficult without the necessary tools and methods. Investigations are also iterative;
information sources often need to be added and/or eliminated throughout the lifecycle of an
investigation.
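The entity resolution problem described above can be sketched as transitive grouping by shared identifiers: personas that reuse an email address, IP or handle collapse into one candidate entity. The names and identifiers below are invented; operational systems would use far richer matching (fuzzy names, temporal overlap, behavioural features):

```python
# Sketch: entity resolution by transitive identifier overlap. Personas
# sharing any identifier are merged into one candidate entity. All
# persona names and identifiers are fabricated for illustration.

def resolve(personas):
    """Group persona names whose identifier sets overlap, transitively."""
    groups = []  # list of (set_of_names, set_of_identifiers)
    for name, idents in personas.items():
        merged_names, merged_ids = {name}, set(idents)
        remaining = []
        for names, ids in groups:
            if ids & merged_ids:          # overlap: absorb this group
                merged_names |= names
                merged_ids |= ids
            else:
                remaining.append((names, ids))
        remaining.append((merged_names, merged_ids))
        groups = remaining
    return [sorted(names) for names, _ in groups]

personas = {
    "crow77":   ["crow@mail.example", "203.0.113.5"],
    "nightowl": ["203.0.113.5"],          # shares an IP with crow77
    "vendor_x": ["sales@shop.example"],
}
print(resolve(personas))  # [['crow77', 'nightowl'], ['vendor_x']]
```

Even this toy version shows why the problem is daunting: every merge decision rests on the assumption that a shared identifier really denotes the same actor, which an investigator must be able to defend in court.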

The evidence and data used for APT investigations are heterogeneous. They will include technical
data obtained through network and technical evidence-gathering techniques (as described above).
Conventional policing techniques and investigation methods (including DNR and Part VI records),
GPS tracking evidence, investigator notes, and possibly evidence gathered through MLATs will also
form part of the evidence base. This evidence will need to be correlated and analyzed using a
number of different techniques depending on the nature of the case: geo-temporal analysis, to
establish timelines and locations; social network analysis (often incorporating wiretap and data
stream information) to establish attribution; and complex analysis to demonstrate the operation,
function, and victims of particular APTs. This latter form of analysis can be quite daunting, as it
requires both a good understanding of the fusion and visualization environment and a substantive
understanding of how APTs function at a technical level.
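As a minimal illustration of the social network analysis mentioned above, the sketch below computes degree centrality over fabricated call records. A real investigation would use a full link-analysis platform; this stands in only for the underlying idea of finding hubs in communication metadata:

```python
# Sketch: degree centrality over communication records (e.g. DNR or
# intercept metadata). Entities with many distinct contacts are
# candidate hubs. The call records are fabricated for illustration.

calls = [
    ("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("D", "E"),
]

def degree_centrality(edges):
    """Count distinct contacts per entity; high counts suggest hubs."""
    contacts = {}
    for a, b in edges:
        contacts.setdefault(a, set()).add(b)
        contacts.setdefault(b, set()).add(a)
    return {n: len(c) for n, c in contacts.items()}

centrality = degree_centrality(calls)
print(max(centrality, key=centrality.get))  # "A" is the most connected node
```

Centrality alone does not establish attribution; it only prioritizes which entities merit the geo-temporal and content analysis the text describes.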

The field of data fusion analysis is rapidly evolving, but few of these systems have been adapted to
work with the very large and complex data sets required for conducting investigations in the cyber
environment. A notable exception is the Palantir intelligence analysis and visualization platform,
which has been used for several high-profile APT investigations, including the 2009 Ghostnet study.

9.10.5 Reporting and Case Management

Communicating the results of complex investigations is critical. Unless the presentation of evidence
is clear, compelling, and understandable, legal challenges can tie up prosecutions, or lead to the
dismissal of cases (or pleas to lesser charges). The ability to render the salient and relevant
components of an investigation in a manner that is easy for a prosecutor to communicate is key to
successful prosecutions. The form in which analysis takes place, as well as the ways in which it can
be represented - as a timeline, as a chain of evidence leading to a perpetrator, and as a means to
describe the significance of an act or event - all form critical components of building a successful
case. The majority of existing analytical packages, such as i2's Analyst's Notebook, Visual Analytics
or Palantir, offer the ability to export snapshots of the investigation as PowerPoint slides or other
formats that simplify and highlight aspects of the investigation.

Archiving investigations as complete data sets is also an important component of building a
knowledge base that can be applied to future investigations. Often the modus operandi, technical
characteristics, or attack vectors will fall into specific patterns. The ability to mine and apply these
patterns as a whole, rather than as their technical components, can often shorten and simplify the
investigation process in future iterations. Consequently, analytical environments that permit the
storage and archiving of investigations as persistent search modules are preferred over those which
only store a representation of the outcome of an investigation, rather than the underlying data and
relationships. At present, i2 iBase and the Palantir intelligence platform offer this capability.

9.10.6 Summary of an Advanced Investigative Capability

Table X: Summary of APT investigation phases and associated requirements: Tools, Tradecraft
and Process

Detection and triage of cyberthreats (APT, Botnets) - pre-warrant monitoring and open/grey
source and commercial intelligence

• Tools: Honeypots and honeynets to detect targeted attacks.
• Tradecraft: Training in open source methods and methodologies; a trained workforce of
analysts; training and expertise in operating honeypots and honeynets.
• Process: Appropriate legal permissions for access to carrier-grade intelligence;
subscriptions to commercially available data sources; subscriptions and agreements with
private providers of network monitoring services (SOCs).

Evidence collection - in-stream technical data collection

• Tools: Packet capture and analysis systems (Niksun, CloudShield, Arbor, Wireshark,
Cisco DPI, ArcSight).
• Tradecraft: Training in technical SSE.
• Process: Warranted data capture and collection (criminal); technical SSE (intelligence).

Evidence collection - forensic extraction from digital devices

• Tools: Forensic extraction tools (UFED, EnCase).
• Tradecraft: Training in advanced digital forensics.

Evidence collection - undercover or covert data collection (surveillance of online targets;
automated retrieval of target data from public sites)

• Tools: Attribution-free, identity-congruent network access (e.g., Avatar, Anonymizer, TOR,
Cloakroom); automated data collection robots such as IRC bots and automated site
scrapers and parsers (e.g., Kapow); data and log file analysis software (e.g., NetWitness,
ArcSight).
• Tradecraft: Appropriately trained law enforcement professionals (including civilians) for the
conduct of online investigations using adopted personas; appropriately trained staff
capable of deploying and programming automated tools while maintaining low attribution;
a trained workforce capable of computer log file analysis.
• Process: Warranted surveillance (criminal); mandated intelligence gathering (intelligence);
admissibility of data obtained through automated data retrieval tools in criminal cases.

Analysis

• Tools: Link analysis, geo-temporal, social network and relationship analysis systems
capable of fusion between structured and unstructured data sources (Palantir, i2 Analyst's
Notebook, Visual Analytics, etc.).
• Tradecraft: Training in specialized analytical environments (visual analytics); a trained
workforce of analysts.
• Process: Appropriate permissions and practices permitting cross-database access within
criminal justice and intelligence institutions.

Presentation and case management

• Tools: Intelligence and evidence archiving, data search and fusion systems; investigation
archiving systems; advanced geo-temporal visualization systems (e.g., Palantir, i2 iBase).
• Tradecraft: Training in specialized analytical environments (visual analytics, data retrieval
and representation); a trained workforce of analysts.
• Process: Appropriate systems, permissions and procedures for electronic case file
archiving and retrieval.

9.10.7 Covert Network Access

The network connection to the Internet that police use to conduct investigations or undercover
operations needs to be trusted, secure and non-attributable. It has to be robust enough to withstand
the most sophisticated attack, and invisible so that it never has to - for much the same reasons that
the US Comprehensive National Cybersecurity Initiative uses Trusted Internet Connections.

The Trusted Internet Connection (TIC) initiative was formalized in November 2007 by the USA with
the goal of drastically decreasing the number of connections to the Internet. The fewer connections to
the Internet, the easier it is to monitor and clean traffic. Under the TIC initiative, all agencies must
either work with an approved MTIPS (Managed Trusted Internet Protocol Service) provider (AT&T,
Sprint, Verizon, and Qwest have been approved by GSA thus far) or be approved by the Department
of Homeland Security to provide their own consolidated service by passing 51 requirements, as a
TICAP (Trusted Internet Connection Access Provider).

The provider that LEAs use for cyber investigations should be trustworthy. The company should hold
a Top Secret facilities security clearance, be registered for controlled goods and COMSEC, and be
FOCI compliant. The network itself should be certified and accredited.

Most LEAs use standard (untrusted) Internet connections for both administrative services and covert
operations. Using unverified paths to the Internet has a number of potential drawbacks for police:

• Most web browsing sessions are visible to the owners of the web sites that investigators visit.
The identity of police can be easily ascertained and a case burnt. Even innocuous browsing
to a university, political, media or activist site can have a chilling effect and raise unwarranted
allegations of investigation.

• If security has not been implemented at the cabling and network connectivity layers, chances
are that all communications circuits into an organization are recorded by telecommunications
providers, sub-contractors, and help desks. These network configuration and contract
databases are accessible globally. The telephone numbers, circuits, domains and IPs are
attributable to the client. All global and domestic points of your organization are likely already
mapped, as are interdependencies to other agencies, partners and clients. This information is
available electronically in real time for traffic flow analysis, and configuration management or
maintenance is often outsourced to third parties.

• Traffic flow and content is measurable by ISPs and carriers and third parties. No special
provisions to protect this information are usually made unless specifically contracted in SLAs.

• Traffic flows through routers that are built by foreign countries, and billing is sometimes
managed by foreign applications. Rarely are there stated requirements by the client for
supply chain security or operational security.

• Without a trusted path, an organization's Internet traffic will have passed through known
compromised hosts and servers.

• Public sector contracting information is publicly available. Aggregate data can provide an
accurate view into LEA operations, requirements, capabilities and gaps.

• ISP staff who handle LEA circuits and contracts are generally not required to undergo
security vetting. Until recently, this has included personnel performing lawful intercept
operations. Still, most ISPs have no cleared staff and no criminal background checks.

• All communications, including network activity, coming out of every facility operated by an
organization are subject to security monitoring. Red teaming and investigative activities, such
as proactive cyber operations, may be registered as malicious by the carrier and action taken
to mitigate them.

• A substantial volume of traffic destined for LEAs is known to be malicious, because only
rudimentary upstream cleaning is required under the SLA.

The solution requires a certified and accredited trusted network path. Special non-attribution and
protection of the physical connections, circuits, billing, contracts, help desk, and Internet Protocol
(IP) addresses are essential for LEA operations:

• LEA investigations need operational security and 'private' billing for special circuits at the
physical (cable plant) and network levels;

• All contracts should be sealed and guarded by cleared staff;

• Anonymous IP addresses should be provided, and changed regularly, to provide non-
traceable, non-attributable Internet use. The link between the IP and the organization should
be protected by cleared personnel;

• Surveillance of procedural and technical controls;

• The special circuits should be cleaned of malicious traffic;

• Surveillance at the core level to block attacks and the egress of sensitive information
(upstream data leakage protection);

• Security monitoring and blocks can be lifted on circuits where proactive cyber operations are
likely to occur; and

• IP attribution needs to be backstopped to Canada, or provide foreign attribution through
trusted partners.

Transparent Internet Anonymity for Investigations

As the global information network, the Internet is becoming an increasingly important research target
for intelligence analysts. It offers unprecedented access to distributed sources that can act as key
constituents of analysis at both the macro and micro levels. As the intelligence community turns to
the Internet as a primary data source, however, the issue of trace identification becomes increasingly
critical. Without proper protection, all Internet requests to target research sites leave a broad range of
potential trace identifiers - such as IP addresses - which can be used by research targets to directly
identify the analyst and/or supporting organization, or to develop profiles that serve as indirect
identifiers. Trace information can be further exploited to prevent access to target sites or, in more
sophisticated implementations, actually allow those sites to deliberately mislead the analyst.

Investigators need complete transparent anonymity services to prevent trace data from being
exploited by research targets.

Multi-tier obfuscation: transparently passes all research requests made by the analyst through
special intervening servers (or "nodes"), preventing disclosure of the analyst's own IP address to the
target servers. The only IP address revealed is that of the selected node, which has no association
with the analyst or sponsoring organization. The analyst should be able to choose a specific node for
the duration of the research session. Since nodes would be geographically distributed, the analyst
can choose a specific node (or "exit" location) to match specific research objectives.

Personas: To further "cloak" the analyst's identity, the analyst needs to be able to construct, modify
and select specific personas. Each persona can have specific trace attributes assigned to it,
presenting a very specific profile to target research sites. These include identifiers such as the type
and geographic location of the node, the type of browser being used, and specific settings such as
connection speed, window resolution and size, language preferences, etc. Additionally, the analyst
should be able to choose whether settings persist or are randomized at the start of each session.

Section 10 Future Work
This section describes important topics that have not been covered in this report and items that are
candidates for future investigation. One conundrum in the field of cyber security is that organizations
often find themselves planning for yesterday's problem. To stay ahead of the threat, applied research
in 2010 should address:

• an advanced investigative network;
• 4G Botnets;
• dark space analysis on core networks;
• targeted threat investigations against Spynets;
• proactive policing using computer network attack; and
• information peacekeeping / cyber crime prevention initiatives.

Convergence, globalization, and virtualization represent powerful forces having sweeping effects
across the business of security, intelligence, law enforcement and defence. The community is at the
cusp of radical changes involving: the emergence of disruptive technologies, dual-use opportunities,
and the commercialization and criminalization of information operations. There is also a confluence of
technological convergence and social networking of post 9/11 threat groups.

Regulatory compliance has accelerated the commercialization of communications intercept,
processing, analytics and storage technologies. A typical chartered bank requires its provider to
intercept, process and store all forms of communications (voice, data, keystrokes, wireless). In real
terms, this translates to 4,000-10,000 simultaneous VoIP intercept operations, terminal entries and
thousands of mobile devices. There exists commercial capacity to process millions of calls an hour
and identify callers instantly with 98% accuracy using a voice-signature database of millions of
individuals. It is possible to build a forever machine, capable of collecting and storing all forms of
electronic communications from everyone, with today's technology.

The world is rapidly approaching an event horizon where traditionalist views of telephony, data and
knowledge management will become untenable. There will be heightened national exposures and an
embarrassment of riches - all enabled by technology.

Advanced Investigative Network

One of the top tactical initiatives should be to build your own MATH lab outfitted with the best
investigative tools. Start analysis on your own corporate traffic, establish source feeds, including
those from core intelligence (upstream), and build tradecraft to detect, penetrate and take down
botnets. Eventually this capability could be moved from a laboratory setting to an operational one,
where it would address the needs of all Canadians.

Analysis of Carrier Darknet Space for Predictive Indicators of Cyber Threat Activity Nationally

Canada’s critical infrastructure consists of the physical and information technology (IT) facilities,
networks, services, and assets essential to the health, safety, security, or economic well-being of
Canadians, and the effective functioning of government. The real value is that of the transportation
and transformation of information.

Dark space analysis has been shown to be the single most effective means of detecting zero-day
attacks and mounting an effective proactive defence against modern cyber attacks. Understandably,
it is one of the highest priorities of the e-security cluster for R&D and acutely relevant to all

The size and scope of carrier autonomous dark address space managed in this program, and the
bandwidth of malicious traffic detected, exceed the legitimate Internet services of most enterprises.
The outcome of a study must be grounded in evidence-based engineering science that relies on
primary and statistically relevant (factual) sources of dark space data.

Broadband wireless, Web 3.0 and botnets cannot be overlooked. The security of 4G and a rapidly
expanding IP universe represents an unprecedented challenge. Wireless connectivity has already
perforated any notion of a closed network perimeter, as earlier darknet reports have shown. This
work could also address wireless darkspace and critical dependencies with important control planes
such as DNS security and Trusted Internet Connectivity (TIC), as recognized by the close integration
of Einstein in the US cyber security strategy.

Scale is important in darkspace management to correlate attack vectors, and identify control channels
of fast flux robot networks in particular. It is also imperative that darkspace be harvested upstream (in
the cloud) before significant cleaning/filtering has occurred. Success will be predicated on a deep
understanding of proactive cyber defence mechanisms in the core, global and enterprise
architectures, and engineering the interfaces between critical systems of systems.
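The principle behind darkspace harvesting can be illustrated with a minimal network-telescope filter: any packet addressed into routed-but-unused space has no legitimate destination, so it is by definition scanning, backscatter, or misconfiguration. The dark prefix and traffic records below are illustrative only:

```python
# Sketch: a network-telescope filter. Packets addressed into a routed
# but unused ("dark") prefix have no legitimate destination, so their
# sources are presumptively scanners or backscatter victims. The dark
# prefix and packet records are invented for illustration.

import ipaddress

DARK_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # hypothetical dark block

def darkspace_hits(packets):
    """Return sorted source addresses observed sending into the dark prefix."""
    return sorted({p["src"] for p in packets
                   if ipaddress.ip_address(p["dst"]) in DARK_PREFIX})

traffic = [
    {"src": "198.51.100.9", "dst": "203.0.113.77"},  # probe into dark space
    {"src": "192.0.2.20",   "dst": "192.0.2.30"},    # normal traffic, ignored
    {"src": "198.51.100.9", "dst": "203.0.113.80"},  # sequential scan pattern
]
print(darkspace_hits(traffic))  # ['198.51.100.9'] - a likely scanner
```

Scale is what makes this useful in practice, as the text notes: a /24 telescope sees little, while carrier-scale dark space makes fast-flux control channels and scan campaigns statistically visible.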

Infrastructure Warfare and Target Templating

Knowledge of Proactive Cyber Defence and Critical Infrastructure Interdependencies can be used
effectively to more precisely template and target foreign infrastructures whilst defending our own, in
times of war and pseudo-peace. We now understand that information warfare necessarily invokes
the broader sphere of infrastructure warfare. Conversely, physical attacks will have an echo within
cyberspace (the infostructure). Militaries have been listening to signals attentively since WWII,
conducting critical node analysis, and deriving intelligence as a way and means for targeting. They
have been successful in first-order target development and physical strikes against foreign
infrastructure components. What has been less effective is predicting the secondary and multi-order
cascading effects of an attack. To evolve IW tradecraft further will require taking a page from the
critical infrastructure protection and interdependency research book. Planners should study the tools
and models that have been created for the purpose of CIP. It is possible to analyse national
infrastructures at a number of layers of depth by compositing network maps, payload information and
business metrics. Providers have excellent visibility into the business processes of key sectors, their
criticalities, interdependencies and vulnerabilities. One could conceivably map the national
information infrastructure with astonishing speed and accuracy. Theoretically this can be extended to
global foreign networks. There is evidence and reason to believe it is happening to us.

WEB3.0 Security

The next generation of the Internet, or Web 3.0, is envisioned to provide a 3D interface combining
virtual reality elements with a persistent virtual world. If such a version of Web 3.0 is realized, it is
likely that the current barrier between online gaming and the rest of the Internet will disappear,
allowing gamers to access all Internet applications through a game-like console. This could
revolutionize the way people will experience the Internet and would likely transform activities such as
research into more interactive experiences. The work could perform accurate technological
forecasting of Web3.0 and assess the potential security concerns, emerging threat activity and
exploitation opportunities.

Information Peace Keeping (IPK) and Cyberterrorism 3.0

This topic concerns understanding wetware attacks and countermeasures to the hacking of belief
systems. The challenge of Information Peacekeeping is how to use the power of cyberspace and
leverage the global
information grid and its content, to act with effect within new social networks, political spaces and to
safeguard truth systems. If information warfare is about destroying knowledge and truth, or hacking belief systems, then the prime objective of Information Peacekeeping is to help us understand the processes that validate what is ultimately trustworthy knowledge. Psychological operations thus need to include a counter-radicalization strategy that intelligently crafts the counteracting message and engineers an effective electronic delivery system. One must reverse engineer the propaganda model
(black, white or grey) of your adversary to understand how ‘consent is being manufactured’ using this
new media. Analysis should identify: the source, ownership, funding, risk conductors, and ideological
subtext to the toxic messaging. How does one measure toxicity of content on the Internet? In the
interests of net-neutrality do we shape this traffic? E-guerrilla marketing is textbook asymmetric
warfare committed within the knowledge sphere. Cyberspace is the weapon. Knowledge operations in
the context of IPK would implicate tactics like viral marketing. Film and television, print media, music,
gaming and guerrilla marketing instruments such as, podcasting, video clips, rich content over mobile
phone networks, blogs, social networking, and a presence with massively multiplayer online gaming
environments. We need to re-shape influence operations by administering the antidote to toxic
ideological messaging, revisionism, and counter radicalization within the knowledge sphere and the
info-structure. Consider further that hostile actions often use shock doctrine as a force-multiplier for
influence operations in war-torn states. Messaging for the purposes of IPK would use social networks
to benefit from the self-replicating viral processes which are enhanced by the network effects of the
Internet. Those who own the channel (the public information infrastructure) have a distinct advantage in
potentially controlling its content. An IPK campaign in the knowledge sphere would need to identify
alpha sources with high social networking potential either within the threat space or where the threat
usurps a measure of control. It is now technically possible to isolate the focal point members of any
viral campaign and identify the hubs that are most influential. Harder still to detect and counter are the
stealth influence campaigns of threat groups who use varied kinds of tradecraft, like astroturfing, to fake grassroots urban support for a cause and launder this disinformation through legitimate sources.
Messaging is much more successful when introduced into viral expansion loops such as the social networking engine Ning and Web 2.0 icons PayPal, YouTube, Facebook, MySpace, Digg, eBay, LinkedIn, Twitter and Flickr. Viral loops have emerged as perhaps the most significant business
accelerant since the search engine. A viral loop must be replicated whereas basic viral advertising
cannot be. Finally, IPK must be creative, and devise unconventional methods of effectively mixing the
message, media and medium. This is absolutely key to Computer Network and Influence Operations
Strategy in theatres like Afghanistan, in the current cyber war on terror, and in proactive preventative defence.
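The hub-identification step described above can be sketched quantitatively. The following is an illustrative example only — the graph, node names and ranking parameters are invented, not drawn from any operational data: a plain power-iteration PageRank over a small "who amplifies whom" reshare graph surfaces the focal accounts of a viral campaign.

```python
# Illustrative sketch (not from this report): ranking the most influential
# hubs of a sharing network with power-iteration PageRank. All node names,
# the edge list and the damping factor are hypothetical examples.

def pagerank(edges, damping=0.85, iters=50):
    """Rank nodes of a directed 'who-amplifies-whom' graph by influence."""
    nodes = {n for e in edges for n in e}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or list(nodes)   # dangling nodes spread evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return sorted(rank, key=rank.get, reverse=True)

# Toy reshare graph: an edge (a, b) means account a amplifies content from b.
edges = [("u1", "hub"), ("u2", "hub"), ("u3", "hub"),
         ("u4", "u2"), ("hub", "seed"), ("u2", "seed")]
print(pagerank(edges)[0])  # → seed  (the focal source account ranks first)
```

The same idea scales to real social graphs, where the top-ranked nodes are the "alpha sources with high social networking potential" the passage refers to.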

Semantic Weapons Research

The weapons of Information Warfare (IW) are Physical, Syntactical, or Semantic instruments of force.
The use of a physical weapon will result in the permanent destruction of physical components and
potentially a denial of service. A Syntactical weapon will focus on attacking the operating logic of a
system and introduce delays, permit unauthorized control or create unpredictable behaviours. Finally,
a Semantic weapon would focus its effects on destroying the trust and truth maintenance components of the system. As an example, semantic tracing could involve seeding the knowledge sphere with
unique messaging that could be used to map otherwise invisible threat networks.
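A toy sketch of the semantic tracing idea follows. All names, tokens and the watermarking scheme are hypothetical: each initial recipient receives a copy of a message carrying a unique token, so that a copy later observed elsewhere can be mapped back to the branch of the otherwise invisible network that carried it.

```python
# Hypothetical sketch of "semantic tracing": watermark each seeded copy of
# a message with a unique token, then map an intercepted copy back to the
# recipient it was seeded to. Names and the scheme are invented examples.
import hashlib

def seed_messages(base_text, recipients, secret="tracing-key"):
    """Watermark one copy of the message per initial recipient."""
    seeded = {}
    for r in recipients:
        token = hashlib.sha256((secret + r).encode()).hexdigest()[:8]
        seeded[r] = f"{base_text} [ref:{token}]"
    return seeded

def trace(observed_message, seeded):
    """Map an intercepted copy back to the branch it was seeded into."""
    for recipient, msg in seeded.items():
        token = msg.split("[ref:")[1]          # token plus closing bracket
        if token in observed_message:
            return recipient
    return None

seeded = seed_messages("Quarterly advisory", ["alpha", "bravo", "charlie"])
leaked = seeded["bravo"]                       # copy later seen in the open
print(trace(leaked, seeded))  # → bravo
```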

Quantitative Cyber-Crime Study

A study by the Canadian Association of Police Boards (CAPB) determined the magnitude and impact
of cyber crime on Canadians is likely greater than all other crime. Cyber crime is “now considered the
most significant challenge facing law enforcement in Canada.” A top priority recommendation is the
establishment of a dedicated centre where law enforcement, government, the private sector and
academia can co-ordinate the fight against cyber crime. The CAPB study was a qualitative study and
thus is subject to limitations. Reliance on open-source vulnerability advisories affords organizations little choice but to stretch security budgets thinly across a broad defensive front, rather than steering
resources efficaciously, or interdicting the threat at a safe distance. Subjective assessments are
insufficient for establishing risk at a compelling level of detail or accuracy to predict and prevent cyber
crime. According to a recent survey conducted by the Federal Government, “large enterprises, on
average, report 10 cyber incidents per year.” The average cost of a breach was $423K. In contrast, an exorbitant quantity of sophisticated attacks (in the billions), targeted at and originating from compromised machines deep within these same organizations, has been detected. There exists a
clear perceptual gap. This work would formally document a methodology for the generation of valid,
malicious cyber-crime metrics based upon the analysis and correlation of multiple, carrier-grade
sources of threat information. The work would generate a strategic cyber-crime study based upon
type 1 evidence.
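The correlation step such a methodology implies might look, in miniature, like the following. The feeds, field names and records are invented placeholders, and real carrier-grade sources would require far more normalization; the point is that duplicate sightings across feeds are collapsed before counting, so the resulting metrics are not inflated by overlap.

```python
# Illustrative only: correlating overlapping threat feeds into de-duplicated
# incident metrics. Feed names, fields and records are invented examples.
from collections import Counter

# Two toy feeds with an overlapping sighting of the same C2 host.
feed_a = [{"ip": "203.0.113.5", "type": "botnet_c2"},
          {"ip": "198.51.100.9", "type": "phishing"}]
feed_b = [{"ip": "203.0.113.5", "type": "botnet_c2"},   # duplicate of feed_a
          {"ip": "192.0.2.44", "type": "botnet_c2"}]

def correlate(*feeds):
    """De-duplicate (ip, type) sightings across feeds, then count by type."""
    unique = {(r["ip"], r["type"]) for feed in feeds for r in feed}
    return Counter(t for _, t in unique)

metrics = correlate(feed_a, feed_b)
print(metrics["botnet_c2"], metrics["phishing"])  # → 2 1
```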

Cyber Intelligence Sharing Analysis Centre

Each one of the critical sectors has either developing or mature mechanisms for Information Sharing
and Analysis. The capability that this country is missing is a trusted means and mechanisms to share
crucial strategic, tactical and operational information between sectors. ISPs support a private-sector initiative to establish a cross-sector ISAC administered by trusted 3rd parties: a ‘cognoscenti’ that would engage in a comprehensive dialogue with key CI stakeholders, working together in closed-door workshops to share tradecraft and sensitive operational security concerns, specifically addressing the building of a national strategy.

Semantic management
There is a nascent requirement for the development of a Semantic Management practice in many organizations, to address the rising economic inefficiencies of the present methods, techniques and governance of data and information management. Information management challenges are owing to the explosion in the quantity and variety of data and information across an organization. With the emergence of the first generation of semantic technologies, semantic management offers new economic value creation opportunities, as well as new orders of increased productivity, to organizations that adopt Semantic Management practices. Semantic technologies are now available in the marketplace, and their adoption and integration into organizations as business value multipliers are now being seen. More than 180 different companies offer semantic technologies. A review of their offerings shows the wide range of capabilities that are now available. They fall into the following areas: deployment and use of intelligent agents; Semantic Web and service orchestration; automated knowledge acquisition; actionable content; concept retrieval; interoperability and composite applications; and smart interfaces. Semantic technologies organize semi-structured and unstructured data and information based on common concepts and definitions; they can organize and link data, information, resources and models together; they can discover, reason and interpret over large data and information assets; finally, they can present, provision and communicate findings at a higher level of abstraction.

Advanced Risk Methodology

Modern quantum physics can make predictions equivalent in accuracy to measuring North America to
within the width of a hair. However, a current risk methodology assessment of the security of a single
PC is still just an educated guess. We propose to create a risk methodology for critical infrastructures

based upon: a solid and incontrovertible theoretical foundation, notably the synthesis of Critical
Infrastructure Protection (CIP) and sophisticated risk analytics with the Universal Systems Theory that
most correctly addresses the chaotic behaviour of complex dynamic and open systems like
Cyberspace. The pragmatics of real infrastructures is that they are influenced by advanced research
in contagion-borne interdependencies, technological and threat convergence, globalization, risk
conductance, and critical node analysis. A new methodology would be validated by qualitative
statistical findings from thorough consultation with CI owners, and comprehensive quantitative
(empirical) metrics from synaptic and semantic coverage of the cyber infrastructure. The outcome is
to design an adaptive model of high-fidelity and capable of predictive accuracy. What is proposed
represents an essential departure from relying on doctrine and security policy as the common means
of managing risk - from faith to fact.

Anonymous & Clandestine Network access

Many private and public sector enterprises or citizens use anonymizers, or less-attributable
mechanisms for web browsing. Organized crime, terrorist cells and foreign espionage programs go
to extraordinary means to obfuscate their identity and activities on-line by constructing covert
communications access points. Most of the non-attributable systems used in public and private
sectors in Canada today remain attributable and insecure. The TOR Onion Routing software
provides a pseudo-anonymity layer for Internet users. PSIPHON is social-network-based software that allows users to circumvent Internet censorship in countries that impose filtering. A basic version is distributed free, in keeping with the civil liberties environment from which these products come. Although both TOR and PSIPHON are popular with civil libertarians, privacy
advocates and criminals, neither is suitable for public safety or highly sensitive commercial
operations. The project would build high-assurance clandestine non-attributable network access. It is
imperative that an organization look after network access at all network layers, including obfuscating
the financial trail, people, processes, gateway technologies, host machines, software configurations
and data leakage. Some innovative means already deployed by threat agents to avoid capture include: virtual machines and spiders acting as transient instances of software implants, leaving no physical real estate within target environments; fast-flux nets using encrypted peer-to-peer communications; persistent virtual environments used to establish electronic dead letter boxes, enable brush meets, broker information and launder money; and compromised launch points buried deep in the host infrastructure. It would also be necessary to engineer a broadband secure wireless
roaming capability established to provide covert communications globally.
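The layered principle behind onion routing, mentioned above in connection with TOR, can be illustrated with a deliberately simplified sketch. This is NOT real cryptography — a SHA-256-derived XOR keystream stands in for proper per-hop encryption, and the hop keys are invented. The point is only that each relay peels exactly one layer, so no single hop sees both the sender and the final plaintext.

```python
# Toy illustration of onion-style layering. NOT secure: a hash-derived XOR
# keystream is a stand-in for real per-hop encryption; hop keys are invented.
import hashlib

def xor_layer(data: bytes, key: str) -> bytes:
    # Toy keystream: repeat a SHA-256 digest of the hop key.
    stream = hashlib.sha256(key.encode()).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

hop_keys = ["entry-key", "middle-key", "exit-key"]   # hypothetical hop keys
message = b"request: http://example.org/"

# The sender wraps one layer per hop, innermost layer first.
packet = message
for key in reversed(hop_keys):
    packet = xor_layer(packet, key)

# Each relay along the path strips only the layer it holds the key for.
for key in hop_keys:
    packet = xor_layer(packet, key)

print(packet == message)  # → True once every layer has been peeled
```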

Next Generation Lawful Access

Legislation will place the onus, responsibility and capability mostly in the hands of the private sector.
Widespread convergence and globalization are fundamentally changing the ways and means by which future communications intercept must mature. Commercialization of this technology for audit and
compliance has already eclipsed law enforcement use in quantity and sophistication, and the pace is
accelerating. Carrier networking is talking in terms of handling exabytes of data. Switches now operate at speeds up to 92 Tbps. Optical devices have arrived and quantum computing is on the horizon.
Deep packet inspection devices can now intercept and rewrite packets at carrier line speeds. Voice
analytics and accurate speech-to-text translation systems are deployed which handle tens of thousands of simultaneous lines and over one million calls an hour. Almost a million Canadians are
accurately identified by their voice alone by automated systems. Next generation lawful access
capabilities need to be re-examined in the context of cyber crime.

Comprehensive Red Teaming and operational security methodology

Any system comprises people, processes and technology, which have both a physical and a logical
dimension. The most advantageous attack point lies at the demarcation of environments,
responsibilities and dimensions. Similarly, the most disastrous ramifications of an attack are the
cascading effects, which cross over critical interdependencies, such as from physical to cyber.
Effective red teaming must necessarily combine both physical and cyber penetration methodologies.
This is an assessment of your organization’s capacity to withstand sophisticated HUMINT assisted
technical operations.

Near Real Time Integrated Risk Management
The real risks experienced by an organization have traditionally been viewed in silos of: financial
risks, business risks, operational risks and threat risks. An integrated risk management framework
establishes a formal analytical practice that correlates various perspectives of risk and delivers an
unequivocal and unified assessment of risk. Industry is working towards implementing this framework
through people, processes and technology. We are already seeing a convergence/synergy with
systems like: sales forecasting, market research, competitive intelligence, financial reporting,
infrastructure / network mapping, network operations centres, security operations centres, network
security event monitoring, tracking, logical and physical access control. We have found that
enormous sensor arrays can easily overwhelm correlation engines and analysis processes. Thus, a significant amount of development has already gone into the management of security and intelligence information.

Disruptive Technologies
The introduction of disruptive technologies characteristically affects the fabric of the business space by invoking social change (e.g., iPod, BlackBerry, social networking, MP3, DVD) and suppressing existing technologies (e.g., Walkman, cell phone, CD, VHS). Technologies are rarely neutral when it comes to security. They either favour the attacker or the defender, or are adopted early for illicit use. Often, information
technology has features which facilitate its re-use within a security context. Furthermore, for every
security use there is an anti-security component and an offensive mode of operation – hence dual-
use. It is therefore, highly advantageous to examine the emergence of new security mechanisms for
dual-use applications, particularly those that are disruptive.

Remote viewing
Extending our ability to predict, in time and cyberspace. The work would look at precognitive
cybernetic systems capable of accurately forecasting events in cyberspace using high-fidelity models,
and artificial intelligence tuned by operational research analysts. Trend forecasting and market
intelligence analysis, with real-time quantitative metrics, can make startlingly accurate predictions, particularly on strategic and operational timelines. Perhaps by analysing the past through a ‘forever
machine’ we can tune our view of the future.
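The trend-forecasting claim can be made concrete with a minimal sketch, assuming a hypothetical daily incident-count series (the numbers are invented): exponential smoothing produces a one-step-ahead forecast that weights recent observations most heavily.

```python
# Minimal sketch (invented data) of quantitative trend forecasting:
# exponential smoothing over a daily incident-count series to project the
# next observation. The series and alpha are hypothetical examples.
def ewma_forecast(series, alpha=0.5):
    """One-step-ahead forecast via exponentially weighted moving average."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

daily_incidents = [100, 120, 130, 150, 160]   # hypothetical daily counts
print(round(ewma_forecast(daily_incidents)))  # → 148
```

Real operational forecasting would of course layer seasonality, anomaly detection and analyst tuning on top of anything this simple.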

Annex A - Mandates, Authorities, Responsibilities and Capabilities

The following chapter is a detailed discussion on the interaction between mandates, authorities,
and capabilities, which intertwine with responsibilities and liabilities.

An inactive defence is a sin of indifference. Not applying proactive cyber defence in this country is
costing people their privacy to a far greater degree than even the most invasive perceived security measures. Citizens don’t want their ISP or government delivering toxic content to their homes.

There are a number of outstanding questions regarding cyber-security, privacy and law, which require
examination in the context of real-life botnet defence or cyber security at large.

Essentially, there are two basic perspectives to consider:

• The obligation to protect.

• The authority in which to do so.

A.1 Extraordinary circumstances

Botnets present extraordinary circumstances and hard decisions for both ISPs and authorities. Just
consider the following real scenario:

Proactive Cyber Defence and the perfect storm, Dave McMahon, Bell Canada 2009

The Storm Worm began infecting thousands of computers in Canada on Friday, January 19, 2007.
This was a zero day exploit that was undetectable by all AV software solutions. Soon the Storm
Worm accounted for 8% of all malware infections globally, and grew to a size of 10 million machines,
with as many as 40,000 storm nodes online at once. The Trojan horse it uses to infect systems changes its
packing code every 10 minutes, and, once installed, the bot uses fast flux to change the IP addresses
for its command and control servers. The worm also installs a rootkit on the infected machine. The
compromised machine is assimilated into a botnet. The Storm Worm seeds a botnet that acts in a
similar way to a peer-to-peer network, with no centralized control. Each compromised machine
connects to a list of a subset of the entire botnet that act as hosts. The infected hosts share lists of
other infected hosts, but no one machine has a full list of the entire botnet.
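The partial-peer-list behaviour described above can be modelled schematically. The topology, node count and subset size below are assumptions for illustration, not measurements of the actual Storm botnet: even after a round of list sharing, no single bot's view approaches the full membership roster.

```python
# Schematic model (invented topology) of partial peer lists: each bot knows
# only a random subset of the botnet, so seizing any one machine never
# yields the full roster. Node count and subset size are assumed values.
import random

random.seed(7)                       # deterministic toy run
botnet = [f"bot{i}" for i in range(200)]
SUBSET = 12                          # peers each bot knows -- an assumption

# Each bot starts with a random partial peer list, never the whole botnet.
views = {b: set(random.sample([p for p in botnet if p != b], SUBSET))
         for b in botnet}

# One round of "sharing lists of other infected hosts": every bot merges
# the peer list of one neighbour it already knows.
merged = {b: views[b] | views[next(iter(views[b]))] for b in botnet}

widest = max(len(v) for v in merged.values())
print(widest < len(botnet))  # → True: no single bot ever holds the roster
```

This is why takedown efforts against such botnets cannot simply enumerate membership from a seized node, and must instead crawl the network over time.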

Pretend for a moment that you are a major ISP in Canada. In the first few hours tens of thousands of
client machines are compromised, network bandwidth is being consumed at an alarming rate, spam
has quadrupled, Internet availability is slowing for business clients, phones at the help desk are
ringing. Financial metrics are showing losses in the millions per hour. A major bank has escalated a
problem ticket for slow connectivity that is affecting electronic funds transfer and market trading. The
problem worsens by the hour. The value of the Canadian dollar is falling. What do you do?

A.2 Outstanding Non-technical Questions

We asked the following questions of the legal and privacy community in public and private practice as
part of this study:

• In this instance, who is responsible and liable for the actions or inactions given the consequences and
pragmatic circumstances of botnet defence?

• What is the right balance in Canada today between security controls and privacy freedoms? What
legal and privacy discussions should we have to prepare us for tomorrow?

• Since much of the contemporary legal and privacy debate related to cyber security is framed by technologies, operations and tradecraft of the past, and laws, policies and precedents don’t necessarily exist for what is happening right now in Computer Network Operations amidst the global threat, how do we conduct operations in this legal void?

• What is the public policy for handing-off between warranted, non-warranted and professional
consulting engagements? Who covers the costs in each case?

• What are the handling provisions/rules governing: handoff, spin-off and spin back provisions with
respect to cyber intelligence? This means if you go in to investigate one crime and find state-sponsored spying, how can you report something for which you did not have the original authority to investigate?

• How should equities be handled particularly involving critical infrastructure owners and national
security agencies?

• What are the chain of evidence requirements in investigating and taking down botnets given the
obfuscatory nature of the tradecraft? What would stand up in court?

• What are the legal and privacy arguments for citizens, businesses and governments using private
browsing and e-mail anonymization?

• Investigative, enforcement and proactive (preventative) police work will necessarily involve the use of
force - sometimes lethal. What are the parallel rules of engagement for the police using Computer
Network Exploitation and Attack in cyberspace? What if bounty hunters or third parties are used?

• Police require the ability to conduct counter-surveillance and operational security in covert/undercover investigations or operations like witness protection. Are there any special
provisions to this work in cyberspace? Are police allowed to break the law in support of a cyber
investigation? If there are criminal code exemptions, are they still at risk of civil suits?

• When, to what extent and under what circumstances should traffic shaping be undertaken to stop
crime and mitigate risk owing to unauthorised use of services?

• To what extent can a carrier monitor sector specific legitimate and illegitimate (malicious) traffic for
assessing interdependency health, based upon synaptic or semantic data flows, for the purposes of
strategic analysis of cyber security?

• How should carriers provide malicious content filtering (clean pipes) to business clients and
consumers? Should there be an opt-out clause where clients get unfiltered connectivity at their own risk? Who is then liable if that client engages in unlawful activity on-line, or gets infected and attacks other clients or the rest of the network? Are carriers then justified in terminating service to stop an attack and prosecuting the client for the damage caused from mounting the attack? Can other users sue someone who attacked them because they opted for unfiltered access? When is filtering too much?

• In order to determine cyber-interdependencies and mitigate risks (like cascading nation wide failures)
in real-time, analysts need to see network connectivity (critical node analysis), traffic flows (signal
related information), malicious activity propagation, conduct traffic content analytics, security event
monitoring, and active network scanning. What controls, checks and balances need be in place? If
carriers are restricted from performing these activities, who is responsible for the ramifications?

• To what extent should network owners take unilateral action, to stop ongoing national level attacks or
outbreaks of malicious code, if there is no authority or police response available in time?

• Are citizens and businesses obliged to report a cybercrime as either a victim or witness?

• If a citizen or business reports a cyber crime to police, are they responsible for all the costs in
collecting evidence or assisting police in conducting analysis?

• If a police cyber investigation disrupts the business operations of a victim/witness, what is fair compensation?

• What is the privacy debate surrounding convergence security and data aggregation considering that
non-aggregated data is of much higher risk of being breached without detection?

• What impact will ISO Cybersecurity 27032_20090701, professional engineer code of conduct and
similar de facto standards have on liability surrounding cyber security?

• To what extent are organizations and individuals liable for inaction in providing cyber-security to their
own systems and processes?

• What are the limits of active-defence? Given the consequences of not being active, who is liable and who assumes the risks? Since a purely defensive posture is ineffective in mitigating risks, are
organizations that do not have a proactive-cyber-defence program negligent?

• When police take control of a botnet control server that contains millions of identities and private/personal records of Canadians, are they bound to PCI compliance, data matching, and privacy databank requirements? Consider that they do not have informed consent. Are the security policies for the handling of evidence sufficient?

• What is the public privacy debate about what organized crime and state-sponsored actors are doing to
Canadians over cyberspace?

• Are the mandates and laws regarding cyber security and Critical Information Infrastructure Protection
correct? Do the current legal interpretations and practices represent public expectations? Are the
sectors doing everything that the nation expects they should be doing? Or are they doing too
much or too little? What is the gap and where do the liabilities fall?

• Who enforces sector-wide cyber-security? What are the repercussions of poor performance?

• Who (name of person) has overall responsibility for the cyber security of the nation?

• If the government forces backdoors/vulnerabilities to facilitate lawful access, who is liable when the
privacy and security of businesses and Canadians is compromised through this vector?

• Do militaries require positive attribution and non-repudiation when they launch cyber attacks in a
limited conflict or war? Or can they act covertly?

• What does the public expect each sector should be doing in the field of cyber-security?

• Who has the roles, responsibility, authority and budget for protecting Canadians from the cyber-threat?

• How much should the Canadian public and private sectors be relying on the USA for their cyber-security?

• Who validates cyber-security data and the veracity of the analysis upon which public policy is based?

• What are the clear, written criteria that distinguish between need-to-know, need-to-share and disclosure
in the public good? At what level are these decisions made?

• How far should information sharing between critical sectors and dispensation of equities be
encouraged or mandated? If one sector fails to share information are they entirely liable for damages
incurred by another sector that would have been prevented with that knowledge?

• If the public sector does not or cannot preserve sovereignty, provide national security or critical
infrastructure protection, then do these roles revert to the private sector? If the telecoms/financial
sector is de facto responsible for the security of their part of cyberspace, then do they have the
authority to protect it?

• Is war theory valid as it applies to cyberspace?

• What is Canada’s foreign jurisdiction in stopping the cyber-threat? What are the limits of action taken
in preventing an attack against Canadian interests from a foreign threat?

• What is the publicly acceptable National policy for computer network exploitation and attack in
support of foreign intelligence, counter-espionage, counter-terrorism, criminal prosecution and critical
infrastructure protection?

• With lawful access, who pays, what are the controls, checks and balances, who protects those
enabling access, who does the R&D to counter the threat?

• What is the Privacy and national security balance when you cannot have privacy without national
security given the clear and present deliberate threat?

• What are Canadians’ expectations for availability, integrity and confidentiality of cyber-services?

• How far should information infrastructure owners provide censorship in an open society? How do you
handle exceptions? When do the exceptions become the rule?

• What are the rules and ethics regarding the extra-territorial intercept of communications?

• In what circumstances and to what extent should significant network owners engage in influence
operations in direct support of either force protection, military offensives or disruption of emerging
threats to the infrastructure?

• Should providers block access to toxic content, such as hate propaganda, obscenity, terrorist sites,
suicide assistance sites, copyright infringement, bomb-making sites, phishing sites, explicit child
pornography, snuff sites, malcode sites and other fraudulent or criminal sites?

• Is blocking criminal activity online a breach of the prohibition in s. 36 of the Telecommunications Act
against a Canadian carrier controlling the content or influencing the meaning or purpose of
telecommunications carried by it for the public?

• Can a company serve an Anton Piller order on an individual working in a government building, or against a government site, to gather evidence in the event of an attack or piracy where government systems or employees are implicated and the police choose not to investigate?

• Section 10.16 of the Government Security Policy requires departments to apply sanctions for IT security incidents when, in the opinion of the deputy head, there has been misconduct or negligence (see the Operational Security Standard on Sanctions and the Treasury Board Guidelines for Discipline). To what extent has this provision been enforced, given the number of incidents?

• Network owners become aware through security intelligence of a cyber incident involving (terrorism,
espionage or crime). How do they inform the authorities of the attacker?

• Network owners become aware through security intelligence of a cyber incident involving (terrorism,
espionage or crime). How do they inform the authorities of the victim?

• Personal victim information is collected at a crime scene. Does this form a personal data bank? How do police inform victims? Are they required to, given that the victim remains vulnerable and exploited while police know about it?
• The authorities become aware through security intelligence of a cyber incident involving (terrorism,
espionage or crime). How do they get information about the victim from Network owners?

• What are the legal and privacy issues in geotracking persons of interest not specifically listed in a
warrant but whose IP, domain, phone number or e-mail has been associated with a threat group?

• May all the voice and data of all Canadians be stored in an encrypted form for release at a future date
under a specific warrant?

• How long should the data of a warranted intercept be stored by the provider for police? Who pays for
the storage? Who is responsible for the security? Are LEAs not responsible for storing it on their own sites?

• Since the Internet is made up of private network owners/operators, what rights do citizens have to access it?

• When net neutrality affects privacy and security of Canadians, who is responsible?

• Does a provider have the right to Self-Defence and Citizen’s Arrest in cyberspace or real space to
stop a cyber crime?

• What criminal code exemptions do private network carriers have with respect to security surveillance,
and enforcement?

• How long should raw data files be retained in support of a security investigation? If they are erased
how is the chain of evidence preserved? What about the forever engine/machine?

• PIPEDA requires ISPs to report when they do not provide LEA with subscriber information without a
warrant. Is this a typo in PIPEDA?

• How does the Telecommunications Act address the realities of proactive botnet defence?

• Will botnet defence become a matter of compliance (SOX, PCI, SAS 70, etc.)?

• Are mandamus prerogative writs applicable to national policing, security, CIP and defence where the
public sector has provided itself national cyber mandates but is unable to exercise them?

• If a public or private organization is infected owing to poor security or fails to take action to prevent an
outbreak, and as a consequence attacks/infects others, can they be sued as a public nuisance?

• What ought the government and ISPs do to limit access to illegal content in Canada?

A.3 The pragmatics of a national cyber security strategy

Public policy, legislation and standards offer little in the way of guidance for many of the pragmatics
when it comes to combating robot networks today. This section is intended to educate the reader to
some of the non-technical issues around botnet defence and cybersecurity in general.

A.3.1 P3
“Mandate without means and means without market.”

There currently is no public private partnership (P3) in Canada around a National Cyber Security
Strategy or Critical Infrastructure Protection. Notwithstanding, there is a general consensus that there
needs to be a P3 to effectively combat botnets. The public sector has provided itself the mandate to combat cybercrime involving botnets, but not necessarily the means to do so unilaterally.
Conversely, carriers have the capabilities to proactively engage the botnet problem, but no market to
fund initiatives, and are fighting commoditization of services. ISPs do not get contracts to stop botnets, so all botnet mitigation efforts are motivated by other factors. A market can be created by
combining the government mandates and inherent capabilities of the provider to create what
essentially is a public service for Canadians.

“The takeaway for the GC is that Tier-1 Telecommunications providers and ISPs are vital to its CND
activities as well as CNE and CNA activities, and individuals with Internet-connected computers can
also play a significant CND role.” - Evaluating Canada’s cyber semantic gap, LCol Francis
Castonguay, 2009

A.3.2 Imperatives and Impediments

If one is to combat robot networks, expect no black-and-white answers, only hard choices. Do not
depend upon explicit policy guidance. You will have to assume some operational, business and legal
risks. There are things that you must do to solve the problem, but there will be roadblocks placed in
your way by special-interest or risk-averse groups who may not share the responsibility for the
outcome. It is all about making the best choice under the circumstances.

A.3.3 Cognitive dissonance

The two most frequently asked questions, after presenting cloud-based botnet mitigation strategies,
are: “why don’t carriers stop all malicious activity in cyberspace?” followed, in the same breath, by
“what authority do providers have to clean/shape any network traffic?” A similar line of contrary
questioning also arose during the Senate hearings on the new cyber crime bill this year. [The
Standing Senate Committee on Legal and Constitutional Affairs, Evidence to Bill S-4, An Act to
Amend the Criminal Code (identity theft and related misconduct), Ottawa, Thursday, May 14, 2009.]

The same people that would hold ISPs or police entirely responsible for the security of the network
would deny them any authority or means to do so. The dilemma is that it is not possible to have
absolute net neutrality and clean pipes, or complete privacy and security, for that matter.

Those who live at the fringes of the security and privacy debate, will, in the end, be neither satisfied
nor successful.

Ironically, privacy cannot exist without some security, and security cannot exist without privacy. Take
China, for example: it has one of the most infected computing bases globally, owing to blatant neglect
of privacy and copyright. Although China is one of the most aggressive threats to cyber security and
maintains an oppressive internal-security regime, most computers in China run pirated software that
does not receive updates/patches, and is thus vulnerable to botnet attacks.

The better question to ask might be “how do we strike the ‘right balance’ given that this balance-point
is in constant flux?” How much policing does society feel comfortable with today? An informed debate
and market research are required to accurately gauge societal perceptions and expectations for the
private and public sector entities entrusted to protect society in cyberspace.

Cyberspace has long since transcended an interconnection of networks. It embodies information
flows, knowledge, processes, and applications. We are now looking at a ‘knowledge-sphere’ that
augments reality in a virtual sense. Look no further than cloud computing, Massively Multiplayer
Online Role-Playing Games (MMORPG), augmented reality, and the online crime built around them.

Cyberspace is a hyper-complex, non-deterministic ecosystem like the Earth’s natural environment,
where security is modulated more by threats, market dynamics, and natural selection than by
intelligent design.

“It is the equation balancing itself out” – The Matrix

People still build houses on fault lines, below sea level, or in the path of planned auto routes because
the land is cheap, but act surprised when ‘stuff’ happens. Similarly, allowing the unrestricted
purchase of the dirtiest/cheapest Internet connectivity, ignoring the proliferation of smartphone
(wireless) use in an organization, or ignoring the dangerous advancement of criminal botnets in the
outside world is a recipe for disaster. Often people do not calculate real risk or the total cost of
ownership (TCO). Choices around security ought to be made in a worldlier context, relying on
evidence-based decision making to deliver street-smart solutions.

The architecture of enterprise security solutions should harmonize with the natural flow of emerging
trends and security pressures in the cyber-ecosystem. Look at the neighbourhood before you build.
Here are some real-world pressures on the cyber security ecosystem:

A.3.3.1 Influencers determining level of security

The factors that shape the provisioning of security at the Internet core are:

The law of diminishing returns. Simply stated, costs rise dramatically as we approach the notion
of absolute security. Given that there is persistent threat-pressure on an enterprise, a good
measurement of security is the quantity of malicious traffic into and out of the perimeter; we can
express this as the percentage of the pipe that is clean. Most organizations find it relatively easy to
obtain 30% clean bandwidth and ultimately reach a steady state at 50% clean. Most tier 1 carriers
provide 95% clean pipes to consumers. This represents a break-away point in effort-cost for
additional security: more advanced architectures and tradecraft are required between 95% and 99%,
and from then on every fraction of a percent to the right of the decimal point costs much more, with
99.99995% appearing to be the theoretical limit for network confidentiality, integrity and availability.
In summary, most bandwidth will be delivered to consumers at 95% clean because it is not
cost-competitive to clean it further. Some businesses, like banks, will pay extra for premium security
services. Most organizations will start with 95% clean but degrade to 30-50% clean by exposing their
network operations to dirty Internet connections and unsafe practices.
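The break-away behaviour described above can be sketched as a simple cost curve. This is a hedged illustration only: the function shape and `base_cost` parameter are assumptions chosen to reproduce the pattern in the text (cheap gains up to roughly 95% clean, exploding cost toward the 99.99995% theoretical limit), not a model taken from this study.

```python
# Illustrative diminishing-returns sketch (assumed model, not measured data):
# the cost of the next increment of cleaning grows as the inverse of the
# remaining "dirty" fraction of the pipe.
def marginal_cost(clean_pct, base_cost=1.0, limit=99.99995):
    """Relative cost of the next unit of cleaning at a given clean percentage."""
    remaining = limit - clean_pct
    if remaining <= 0:
        raise ValueError("at or beyond the theoretical limit")
    return base_cost / remaining

# Costs stay flat through the easy range and explode near the limit.
for pct in (30.0, 50.0, 95.0, 99.0, 99.999):
    print(f"{pct:>8.3f}% clean -> relative marginal cost {marginal_cost(pct):>10.3f}")
```

Any concave-cost model would show the same qualitative break-away point; the specific inverse form is only one plausible choice.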

Internal Economics. Botnets are driven by money, as is the funding of the defence against them. It is
a simple return on investment (ROI) calculation based upon actual losses and perceived total capital
at risk. The threat agent hopes to achieve high ROI, expecting to extract orders of magnitude more
wealth from a botnet than the cost of operating one. Most downstream organizations hope to just
break even, or stem the losses. Risk in Canada is estimated at $100B even after $5B of ICT security
expenditures. A typical organization in Canada loses 20x more than its security budget, based upon
what was observed of the business market in this study; this is a rough estimate based upon a
cross-correlation of security market spend and malicious activity detected at the perimeter. Few
organizations measure this ROI at all. An efficient enterprise security architecture can save 100x
more money than is spent on security, but this is rare. Organizations that have good security
programs also tend to measure ROI very accurately, and use it to drive decision making around
security investment. “You can’t manage what you can’t measure.”
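The ROI framing above can be made concrete with a minimal sketch. The formula is the standard ROI definition and the dollar figures are illustrative stand-ins loosely consistent with the report's rough estimates; none of this is a calculation method prescribed by the study.

```python
# Hedged sketch of the simple ROI calculation described above.
# Dollar figures below are invented for illustration.
def security_roi(losses_avoided, security_spend):
    """Standard ROI: (benefit - cost) / cost, with benefit = losses avoided."""
    return (losses_avoided - security_spend) / security_spend

# The threat agent's side: extract orders of magnitude more wealth than
# the botnet costs to operate.
print(security_roi(losses_avoided=50_000_000, security_spend=500_000))  # 99.0

# A typical defender's side: losses running ~20x the security budget mean
# the defence has not yet broken even.
budget, losses = 1_000_000, 20_000_000
print(f"losses are {losses / budget:.0f}x the security budget")
```

The asymmetry between the two printouts is the point: the attacker's ROI competes directly with the defender's, and whichever side measures it tends to win the resource-allocation argument.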

Here is an example of how it might work in a carrier environment. When a botnet strikes, it has two
effects: bandwidth is consumed and client computers manifest problems. These effects give rise to
network load and customer complaints. The operations centres and help desks experience increased
loads. Now, bandwidth and help-desk calls both cost money. When a carrier is facilitating EFT of
$174 billion a day, a drop in network performance can cost millions, and there are steep financial
penalties under SLAs for poor throughput. Therefore the botnet attack has an immediate and
measurable first-order cost to a carrier. These costs are registered by real-time financial monitoring,
the same systems which calculate bonuses for management. You can see how behaviour is quickly
shaped. Spend one million dollars in one hour on help-desk calls from consumers complaining about
slow Internet, and there is pressure on security staff to do something. Justifying capital purchases
becomes easier when there is a measurable and compelling event. The first stage is to provision
more network bandwidth. Over-provisioning is a risk-management strategy: the cost of transporting
and black-holing malicious (dirty) traffic is borne by the carrier as a cost of doing business. An
elegant security solution is more efficient at steady state but requires capital expense, development,
non-recurring engineering or a new product introduction, a painful procedural process for a large
carrier. Thus, the economic response to botnets is a balance between the financial impact the carrier
is experiencing and the cost of mitigation. The ROI is driven by margins and profitability calculations,
and the system takes care of the rest. For example, peer-to-peer traffic shaping of illicit bandwidth
(piracy and botnets) has saved carriers billions of dollars in bandwidth costs alone, and prevented
billions more in downstream losses. The program cost orders of magnitude less to implement,
realizing an ROI competitive with that of the threat. The quarterly management review of total cost of
ownership (TCO) of a major network re-balances security expenditures up or down.

External economics. How much security an ISP puts into the network, or how clean the pipes are,
also depends upon how much people are willing to pay for it. This can occur in a very direct way,
where a large enterprise contracts the ISP for trusted and secure network access, or indirectly
through market dynamics. The consumer and SMB market predominantly drives the assurance
levels. Even critical sectors purchase most bandwidth (almost 90% of their total requirements)
through the cheapest consumer channels rather than through trusted procurement vehicles. The
consumer telecommunications market in 2010 is preoccupied with commoditized pricing and
low-priced wireless applications. Citizens, businesses and governments are buying the cheapest
bandwidth, not the cleanest. Attempts by ISPs to introduce cleaner and more secure pipes are simply
not selling. Cleaning consumer bandwidth past 95% is not sellable at this time in a competitive
market, particularly when smaller unregulated providers can purchase bandwidth from regulated
tier 1s and resell it at cheaper rates. Regulation has had a profound negative impact on the security
of cyberspace for Canada, as can be seen from studies like HostExploit's Top 50 Bad Hosts and
Networks (2009), which ranks Canadian ISPs second worst, behind Russia. The CRS Report to
Congress on the Economic Impact of Cyber-Attacks, 01 April 2004, foretold many of the observations
we are making here.

Tax Economics. Thirty-four per cent of the government managers interviewed in the Cyber Critical
Infrastructure Interdependencies Study [0D160-063075/A, dated 2006-04-28] thought that Bell
Canada and the Royal Bank were crown corporations with a public mandate to provide security to
Canadians. The terms ‘mandate’ and ‘governance’ are often used in government literature and
discourse when talking about the private sector within a P3. It is worth noting that commercial
businesses do not have mandates and are not governed in the same sense as the public sector.
Corporations are not responsible to the Prime Minister or the lead security agency named under the
GSP. They report to a CEO, who is responsible to shareholders. They follow their own business plans
and are in business to make money, albeit within an explicit legal and ethical framework. In fact,
businesses are required by tax law to make every effort to be profitable: file year after year of
business losses and the CRA will likely audit the business and declare it an unviable enterprise. ISPs
are publicly traded companies with shareholders. They are neither public-sector crown corporations
nor not-for-profit organizations and, to date, they do not receive public funding for infrastructure
security. Tier 1 carriers paid $15 billion in taxes for government services. These services are to have
included national security, public safety, critical infrastructure protection and cyber security. Although
the government does procure a great deal of services from carriers (including security), most of the
$2 billion/yr that the government spends on IT security is in support of the public sector. The three
vehicles the government has to influence cyber security from ISPs are regulation, P3 and taxes. A
tax credit program for national cyber security initiatives was proposed by the Information Technology
Association of Canada (ITAC). Carriers must make significant investments in security to stay ahead
of the threat: the system lifecycle is often less than a year before a refresh or upgrades are required,
yet tax regulation does not allow amortization of these capital investments on such modern timelines.

Altruism and Code of Business Ethics. Corporations and departments are made up of employees:
people (citizens) possessing a moral centre and embodying values that reflect contemporary society.
Employees of an ISP and government are also consumers of Internet services, and would be
affected to the same degree as average Canadians by cyber-crime. Corporate responsibility and
collective altruism regularly provide more security on the core networks than would otherwise be
commercially viable if driven only by the markets. Clean-feed child safety and kids help line programs
represent the case in point. [See also Bell Canada’s and Telus’ corporate responsibility reports.] As
much as corporate responsibility is good marketing, the intentions are genuine. However,
sponsorship, charity, community outreach, and extra security have to be justifiable to shareholders,
customers and pension holders.

Shareholder value and Pensions. The viability and profitability of the telecommunications network is
important to many Canadians, even if indirectly. Consider how many investment portfolios hold
telecom shares, and how the income trust scandal ended up costing Canadians dearly. Many
citizens realized for the first time how they are financially tied to a company, and that company is
built on a network infrastructure that can be affected by botnets and crime. On paper this thread may
seem extended, but in cyberspace risks are conducted along interdependencies at the speed of light.
Going from inadequate security, to a botnet attack, to a dramatic plunge in a citizen’s RRSP value or
a stock market crisis is conceivable. Consider that today, trading on the stock market is driven by
network performance measurements discriminated to picoseconds by sophisticated instrumentation;
traders use faster networks to push through transactions electronically ahead of slower networks.

Value-added discriminator. Security can be engineered into systems economically insomuch as it is
perceived and sold as a value-added discriminator in the marketplace. The first places where security
is becoming marketable are SSL certificate/encryption use for electronic commerce and the
packaging of AV with Internet services. As 4G mobile devices draw more bandwidth for media-rich
applications, they will become highly susceptible to threats against the limited availability of on-air
bandwidth. Mobile 3.5G botnets are already causing problems. Cell phone bandwidth (air minutes) is
far more valuable than wire-line rates, and it is not a simple matter of carriers buying bigger routers
(over-provisioning) to absorb malicious traffic. The demand for fast and reliable mobile networks will
drive the market; fast means clean, usable bandwidth. It is envisioned that carriers will uniformly need
to improve security to meet secondary demands for speed. This means combating 4G botnets.

Counter-commoditization. At the same time organized crime is paying ‘top dollar for top talent’ and
refreshing itself with the latest technology, market trends have shown that the procurement of
security and telecommunications products and services has been a race to the bottom. A perennial
review of the types of professional services awarded under standard contract vehicles shows a
propensity toward lowest-cost bids for traditional ‘trade’ deliverables. The average TRA is less than
$25k. Very few awarded projects require advanced skills, capitalize on corporate competencies, or
are complex in nature.

Fraud reduction. ISPs are motivated to reduce insider and client fraud. This also includes monitoring
for acceptable use and abuse of services. Systems put in place for network health monitoring and
load measurement are in a position to detect crime through anomalous activity or excessive
consumption of services. There are dual-use applications run between the NOC and SOC
environments. Network pressures can invoke security solutions, which ultimately can improve the
assurance level of the network. Botnet activity shows up as vividly on network monitoring equipment
managing normal traffic loads as it does on security appliances dedicated to its detection.
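The dual-use detection idea above (botnet activity standing out in ordinary NOC load measurements) can be sketched as a crude excessive-consumption detector. The z-score approach, the threshold, and the traffic figures are all assumptions for illustration; production systems would use far richer telemetry than a single per-subscriber volume series.

```python
# Hedged sketch: flag load samples that sit far above the mean, the kind of
# anomaly a NOC load-monitoring system would surface. Threshold is assumed.
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard deviations
    above the mean (a crude excessive-consumption detector)."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(samples) if (s - mu) / sigma > threshold]

# Hourly outbound traffic (GB) for one subscriber; the 42 GB hour (index 7)
# stands out the way spam or DDoS participation would.
loads = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 42.0, 1.0, 0.9]
print(flag_anomalies(loads))
```

A mean/stdev rule is deliberately simplistic; it illustrates only that the same load instrumentation a carrier already runs for capacity planning can double as a first-pass security sensor.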

Compliance. The owner-operators of the national cyber infrastructure are not bound or influenced by
the Government Security Policy (GSP), MITS, MAF, or FAA; these are internal government policy
documents only. Government exerts compliance through CRTC regulation, tax law, privacy acts, and
lawful access legislation. The compliance instruments that most affect critical cyber infrastructure are
commercial standards like PCI, SAS70, ISO, and SOX. The effect that policies and standards have
on the assurance level of the backbone networks, cyber security in general, or the magnitude of
malicious traffic is questionable. Some policies have had a negative effect; others just cost the
organization money but do nothing to improve security. In the best case, demonstrable compliance
may facilitate trusted business relationships and enable commerce. Legislation and regulation
generally have had measurable negative impacts on the industry’s ability to secure cyberspace, and
cannot hope to be responsive to a rapidly evolving threat. Most of the Threat Risk Assessments
(TRA) from which security requirements are justified have no real threat or risk data or calculations,
nor do they follow a defendable scientific method. They are, for the most part, compliance audits
against policies and standards, based upon superficial (Type 1) evidence. An organization can be
fully certified and accredited and still be extremely exposed to a real threat. Having a generic incident
management plan sitting on the shelf may pass the audit, but unless it is a good plan and
operationalized, it means nothing in terms of threat-risk. This statement is supported by actual
measurements of threat activity exfiltrating data from compliant networks.

Compliance-risk influences behaviour around security, but not always in a way that mitigates threat-
risk. The case in point: there is nothing in MITS, MAF, FAA, GSP, CSEC guidance, ISO standards,
SOX, or SAS70 that will stop a DDoS or sophisticated botnet attack.

The question as to whether policies, standards or laws are good, bad or indifferent is a good one.
The interpretation of the documents by lawyers and engineers makes all the difference. Private and
public sectors rely on compliance differently in managing risk. Along the lines of Napoleonic law, the
public sector is generally reticent to launch new security initiatives unless explicitly mandated or
permitted to do so in policy, even though transgressions of policy have no real ramifications for
managers. Contrarily, industry tends to act unless explicitly prohibited by law, based upon threat and
market pressures, because failure to do so would bankrupt the business.

Measuring the success of legislation in reducing risk is difficult. Consider the following pieces of
public cyber-security-related policy, standards and legislation from the last decade:

• Criminal Code;
• GSP;
• Harmonized TRA;
• Identity Theft Act;
• Personal Information Protection and Electronic Documents Act;
• Electronic Commerce Protection Act;
• Privacy Act;
• An Act respecting the mandatory reporting of Internet child pornography by persons who
provide an Internet service;
• An Act to amend the Criminal Code, the Competition Act and the Mutual Legal Assistance in
Criminal Matters Act; and
• An Act regulating telecommunications facilities to support investigations.

When placed on a timeline and correlated with network flow over the same period using a t-test,
these instruments show no measurable effect on the actual bandwidth of malicious traffic across
private and public sectors. Contrast this with other security initiatives, which have had immediate and
profound effects on malicious traffic loads and measurable financial impacts.
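The comparison described above can be sketched as a two-sample test on malicious-traffic measurements taken before and after a piece of legislation comes into force. Welch's t-statistic is used here as one standard formulation; the weekly Gbps figures are invented for the sketch and are not the study's data.

```python
# Hedged sketch of the before/after legislation comparison described above.
# Data are invented; real flow measurements would come from carrier
# instrumentation over the relevant time period.
from statistics import mean, stdev
from math import sqrt

def welch_t(before, after):
    """Welch's t-statistic for two independent samples of unequal variance."""
    m1, m2 = mean(before), mean(after)
    v1, v2 = stdev(before) ** 2, stdev(after) ** 2
    return (m1 - m2) / sqrt(v1 / len(before) + v2 / len(after))

# Weekly malicious traffic (Gbps) around a hypothetical act's in-force date.
before = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4]
after  = [4.0, 4.2, 4.1, 4.3, 3.8, 4.2]
print(f"t = {welch_t(before, after):.2f}")  # small t: no measurable effect
```

A t-statistic near zero, as here, is what "no measurable effect" looks like numerically; a security initiative with real impact would shift the post-intervention mean enough to push the statistic well past conventional significance thresholds.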

Policy and law allow a company to transfer liability risk to those writing the standards. In some cases
they permit carriers to perform certain security operations without fear of litigation, such as child
safety initiatives.

The professional engineering associations in Canada are developing standards which will apply to
ICT security with the force of legislation. No longer will it be acceptable to practice security
engineering without a licence.

Insurance. Operational risk has entered the assessment criteria of buyers and raters of equity and
debt, and has become a component of financial risk management. Basel II started this trend in banks,
and it has since flowed to the rating houses, dealers, traders and large corporate entities. Good
operational risk management means lower set-asides, better ratings, cheaper capital and more
willing investors. The financial-risk world is populated by bankers; the operational-risk world holds
professions such as police, soldiers, technology and emergency-management professionals, and
bureaucrats. The phasing-in of the Basel II Capital Accord has nudged their practices and domains
from previously parallel trajectories into converging orbits, and created quantifiable linkages between
financial (credit and market) risk and operational risk. Rating houses are extending operational risk
management requirements to all public entities. Operational risk management can have a major
influence on organizational cost of capital, competitiveness and compliance. [Ref
%20Paper%20I%20-%20Feb%202008.pdf]

Private sector organizations like ISPs, and public sector entities such as the Government of Canada
or police services, receive credit ratings. These ratings affect their ability to borrow money, finance
programs or manage deficits, and their externally perceived cyber security assurance level influences
the rating. Whitelisting of domains based upon quantitative metrics will likely formalize this process.
Bogus self-assessments will no longer satisfy compliance within a shared-risk ecosystem.

Service Level Agreements. Most of the extraordinary security engineering requirements levelled on
providers are precipitated by chartered banks and enforced through SLAs, with strong compliance to
best practices (SAS70, ISO, PCI) verified through instrumentation. Network performance is ensured
through security.

Critical Infrastructure Protection (CIP). The telecommunications (cyber) sector forms the nervous
system that binds all other sectors. Telecoms, finance and energy form a symbiotic balance; each
one depends on the others’ services, telemetry and instrumentation. CIs are highly susceptible to
disruptions owing to cyber threats like botnets. The majority of cross-border trade does not travel over
the Detroit-Windsor bridges in trucks; wealth is transported electronically in cyberspace. ISPs invest
in security for reasons of Critical Infrastructure Protection. A comprehensive cyber CIP study, which
assessed the cyber interdependencies between all sectors, was provided to Public Safety Canada.
[PSC Cyber Critical Infrastructure Interdependencies Study 0D160-063075/A, dated 2006-04-28] The
authors published a more detailed analysis [Critical Infrastructure: Understanding Its Component
Parts, Vulnerabilities, Operating Risks, and Interdependencies, CRC Press, ISBN-13:
978-1420068351, 2008, by Tyson Macaulay] and advised on the Omni Films documentary series
System Crash. [System Crash, CLT Network, Omni Films, 4 x 60 documentary series, 2008]

“The distinction and separation between government and private infrastructure [is] practically
impossible.” - Evaluating Canada’s cyber semantic gap 2009, LCol Francis Castonguay

Peer pressure. ISPs see themselves as innovators or fast followers. The desire to appear to conform
to the best industry standards is a powerful motivator for new product introduction, business
development and marketing. There is also business risk and liability associated with lagging explicit
standards or de facto best practices. International ISP security standards are currently being drafted
which will include best practices for botnet mitigation. Note: government telecommunications
providers will also be bound by these standards.

Threat Effect. “What does not destroy you makes you stronger.”
The reason anyone does security is to counter a perceived risk created by the existence of a threat.
The threat drives security innovation. Attacks destroy weak network components and motivate us to
build stronger defences and smarter tactics. Ironically, a persistent threat is vital to a healthy
environment and the natural evolution of systems. Without a deliberate threat, networks would not be
as robust as they are now, and we would not have the immunity to withstand accidental or natural
pressures. The threat accelerates innovation and competitiveness and gives birth to dual-use
applications. Too much threat pressure can overload the system’s ability to respond; too little threat
load and we become complacent, and weak. The nature of conflict drives an organization to become
proactive. Since the telecoms and financial sectors experience the highest volume of cyber attacks, it
stands to reason that those networks have become the most secure, and their means of mitigating
risks the most proactive.

Downstream organizations may have formed critical dependencies on upstream services without
necessarily being aware. A catastrophic failure of upstream services would cascade through risk
conductors formed by critical interdependencies. Organizations without exposure to the threat may
collapse because they lack immunity. One way of maintaining immunity is to be immunized with a
vaccine. In the biological sense, this usually involves injecting inactive viruses to trigger the immune
system to form antibodies; in the cyber sense, it means being exposed to core intelligence from
upstream providers to achieve situational awareness (SA) through a common operating picture
(COP). This insight allows you to build the business case for security and then install appropriate
systems to match the threat at your doorstep. One also needs to measure and manage critical
interdependencies before treating or transferring risk effectively. Consideration of the real threat is
paramount to an effective security strategy, and it influences how much security is put into the
Internet core. There are an enormous number of threat-risks in the wild but, in a very pragmatic
sense, study the most dangerous cyber-criminal organization [RBN] using a sophisticated botnet
[Kneber, Zeus] to launch a devastating attack [40 Gbps DDoS] against your organization, and plan
for that scenario. The beneficial effects of a real threat need to be considered, and that is why
cleaning the pipe 100% may not always be a good thing. Weak infrastructure and systems should be
culled. There is ample evidence of programs funded well beyond their best-before date.

Integrated risk. Integrated risk management considers business operations, threat and reputation
(perception). Security must be justified within this model for the business to remain viable and the
network operational. The risk of action, as well as the risk of inaction, needs to be duly considered. In
big business, inaction is far more costly, and risk-taking is prevalent amongst the more successful
enterprises. Managing business risk is far more important to the private sector, which may choose to
relax some security mechanisms in order to manoeuvre in the marketplace at much higher velocities
than the competition or the threat. Threat-risk in the business environment where ISPs operate can
become muddled: partners, competitors, regulators and threats regularly swap roles. We often refer
to this as ‘coopetition’. An integrated risk management framework would look objectively at who or
what is costing the organization money or poses the greatest risk. The Russian mob may cost a
provider several million, but government regulation may cost billions. The greatest risk to capital
drives behaviour and resource allocation. The Government of Canada published the Integrated Risk
Management Framework, and it became public policy several years ago. It is one of the best policy
documents written in the public sector for security, yet it is rarely followed or invoked by contracts or
internal risk assessments.

Competition. Fair competition is generally seen as a good thing. It forces providers to continually offer
improved services at better prices. At a certain point, as with safety in the automobile industry, cyber
security will become a competitive advantage. Right now it is not.

Unfair competition hurts Internet safety and our ability to protect Canadians from botnets. There are a
number of issues that surface frequently for an ISP. Antitrust occurs when a competitor attempts to
force other providers out of the market through tactics such as predatory pricing, obtaining exclusive
access, or gross underbidding of services to buy the business. Using public funds to provide
telecommunications/Internet services, mandating use within the market, and regulating the pricing of
private sector entities is unfair competition and can be classified as a form of antitrust. A stated
intention to deliberately erode the market share of a competitor through targeted regulation is to
knowingly engage in unfair competition. When a service is mandated, it is no longer responsive to
market or competition drivers, like quality of service. Misappropriation of trade secrets or theft of
intellectual property can occur through espionage, partnership deals gone sour, head-hunting, using
bid responses to build an internal competing solution, or transferring this IP to a competitor. Tortious
interference occurs when an entity convinces a party to breach a contract with the incumbent. The
most grievous instance of conflict of interest occurs when a contract is awarded such that the person
issuing the contract stands to gain financially, without due process. Conflict of interest should not be
confused with having a vested interest. Having an expert who won a competitive contract for security
architecture design eventually build the system in subsequent work is NOT a conflict of interest: you
want the most qualified person designing and building your enterprise security, and disallowing
designers from building is ill advised. Price fixing occurs when parties agree to buy or sell a product
or service only at a fixed price, or conspire to control the market by manipulating supply and demand.
Bid rigging is a form of collusion; it happens when organizations predetermine the winner of a
competitive process and merely pretend to compete for a contract. Directed bids, or issuing
qualification criteria clearly written for a specific individual or company, are a form of bid rigging.

Technology. The rapid evolution of science and technology has a profound effect on cyber security.
Make no mistake: cyberspace is a technology-heavy environment. Technology shapes processes,
policies and society, and it facilitates criminal behaviour. Combating botnets requires smart
technological solutions. This is a short paragraph because this topic alone could fill a book.

Social values and expectations If technology tells us what we can do, it is society that tells providers
and police what we may do. Authorities must fully investigate all technological solutions, means and methods to combat botnets, but temper the operationalization of these potential solutions with a reading of public social values and expectations. A fear of transgressing social values must not halt an objective scientific examination of tradecraft or the prototyping of future capabilities within a lab or staged (realistic) environment on carrier premises. Authorities need to understand the engineering,
independently of yet-to-be-articulated laws, mandates, policies or social discourse. Our enemies,
including nation-states, are advancing CNO without the burden of privacy or human rights.
Operationalizing new potentially invasive techniques does require society’s involvement. Note that
lawyers, privacy advocates and policy makers can benefit more from seeing a working prototype in a controlled lab than from writing policy in a vacuum. The difficulty in gauging social values and expectations is that these issues are so vastly complex that their nuances are beyond the grasp of the average Canadian or executive, making an informed opinion difficult. Recent cyber security studies [PSC
Cyber Critical Infrastructure Interdependencies Study 0D160-063075/A, dated 2006-04-28]
interviewed and polled IT security professionals across all sectors. Overall, there was an expectation
that police, military, security and intelligence services, and the government in general have a much
more sophisticated capability to fight cyber threats like botnets than they may actually have. It is a
fair statement that this impression is echoed in society as a whole. Most representatives assumed
that someone else was doing security for the community.

Sovereignty The government in most countries has abdicated the role of national security to the private sector as it pertains to critical infrastructure protection. The effect is known as commercialization. Sovereignty in cyberspace is an elusive thing. The majority of Canadians maintain cyber identities that exist on servers globally. Interaction, information and wealth exist across cyberspace independent
of traditional borders or national infrastructure. Controlling national sovereignty within a global
infostructure is like grasping at smoke. Since governments no longer own, operate, or have any
meaningful view into, critical infostructures, it necessarily follows that any cyber-sovereignty issues
that remain, have shifted to the private sector. Globalization, virtualization, cloud computing have
rendered the concept of cyber sovereignty academic. The only question now is what role can be repatriated to the public sector.

“This level of fragmentation of the roles was in keeping with the various departmental mandates, as
assigned by the various Acts of Parliament such as the CSIS Act and the National Defence Act;
however, it in no way offered an integrated means of combating cyber threats.” - Evaluating
Canada’s cyber semantic gap, 2009, LCol Francis Castonguay

Cloud computing Cloud computing environments and value added services from the cloud are driving
cyber security from two perspectives: Security from the cloud is akin to upstream security services
offered by Tier 1 carriers/providers; Security for the cloud proposes solutions for safeguarding
processes, applications and information while they are in a cloud environment. Either trajectory
represents a brave new world for security experts and law enforcement, privacy and legal
communities. Canadians are subscribing to cloud services in the millions, while the laws, standards and best practices around security and privacy have yet to be written and are likely decades behind.
Yet the threat is actively operating in the cloud.

Net neutrality In most cases privacy enhances security. However, too much freedom and a lack of security controls, in the name of privacy, actually backfires and destroys privacy. Privacy significantly affects the level of malicious traffic that can be cleaned from the Internet and the degree to which ISPs can combat botnets. The original net neutrality concept comes from Lawful Access discussions, and means the ability of an ISP to maintain the integrity of its network architecture without police inserting intercept devices. A newer take on net neutrality has become the most contentious privacy topic.

The privacy definition of network neutrality (also net neutrality, Internet neutrality) is a principle that advocates no restrictions on Internet bandwidth, content, sites, platforms, or modes of communication.

The principle states that if a given user pays for a certain level of Internet access, what they do on the Internet is no one's business, particularly not the carrier's. Net neutrality advocates view the ISP's role as only to provide Internet access and routing. Net neutrality also holds that ISPs should not police the Internet in any way. This includes a prohibition on any form of security monitoring, cleaning or traffic shaping. Neutrality proponents have stated that ISPs ought not to suppress malicious traffic such as child pornography, criminal botnets, piracy, hacking, extortion, DDoS attacks, cyberwarfare, phishing, spam or hate; in this view, such traffic is the concern of police alone.

Neutrality proponents claim that telecom companies seek to impose a tiered service model in order to
control the pipeline and thereby remove competition, create artificial scarcity, and oblige subscribers
to buy their otherwise uncompetitive services. There have also been allegations that ISPs would use
deep packet inspection devices to read or manipulate private communications for financial gain.
However, net neutrality advocates have failed to prove these allegations, and all such claims have been struck down by Canadian courts.

Telecom Regulatory Policy CRTC 2009-657

Review of the Internet traffic management practices of Internet service providers
In this decision, the Commission sets out its determinations in the proceeding initiated by
Telecom Public Notice 2008-19 regarding the use of Internet traffic management
practices (ITMPs) by Internet service providers (ISPs). The Commission establishes a
principled approach that appropriately balances the freedom of Canadians to use the
Internet for various purposes with the legitimate interests of ISPs to manage the traffic
thus generated on their networks, consistent with legislation, including privacy legislation.

Internet service providers have intentionally slowed (shaped) peer-to-peer (P2P) communications to combat illicit use by pirates and botnet traffic. Data discrimination is necessary when it guarantees quality of service.
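
The traffic shaping referenced above is commonly implemented with a rate-limiting algorithm such as a token bucket. The following is an illustrative sketch only, with hypothetical parameters, and does not represent any carrier's actual implementation:

```python
class TokenBucket:
    """Minimal token-bucket shaper: a packet is forwarded only while tokens remain."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token refill rate, in bytes per second
        self.capacity = burst_bytes     # maximum burst size, in bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0                 # timestamp of the last decision

    def allow(self, now, packet_bytes):
        """Decide whether a packet conforms to the configured rate."""
        # Refill tokens for the time elapsed, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                    # non-conforming: the packet is delayed or dropped


# Hypothetical example: an 8 kbit/s limit with a one-packet (1500 byte) burst.
shaper = TokenBucket(rate_bps=8000, burst_bytes=1500)
print(shaper.allow(0.0, 1500))  # first full-size packet fits the burst allowance
print(shaper.allow(0.1, 1500))  # a second immediate packet exceeds the rate
```

The P2P throttling described above applies the same mechanism per traffic class rather than per flow; only the classification step differs.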

However, “allowing broadband carriers to control what people see and do online would fundamentally
undermine the principles that have made the Internet such a success.”
—Vinton Cerf in testimony before Congress February 7, 2006

The fear is that “without net neutrality, the Internet would start to look like cable TV. A handful of
massive companies would control access and distribution of content, deciding what you get to see
and how much it costs.”—Lawrence Lessig & Robert W. McChesney

End-to-end principle states that all like Internet content must be treated alike and move at the same
speed over the network. The owners of the Internet's wires cannot discriminate. This is the simple but
brilliant "end-to-end" design of the Internet that has made it such a powerful force for economic and
social good. —Lawrence Lessig & Robert W. McChesney

Bret Swanson of the Wall Street Journal said that "YouTube, MySpace and blogs are put at risk by net neutrality." Swanson says that YouTube streams as much data in three months (75 petabytes) as the world's radio, cable and broadcast television channels stream in one year. He argues that today's networks are not remotely prepared to handle what he calls the "exaflood", and that net neutrality would prevent broadband networks from being built, which would limit available bandwidth and thus endanger innovation.

“Poorly conceived legislation could make it difficult for Internet Service Providers to legally perform
necessary and generally useful packet filtering such as combating denial of service attacks, filtering
E-Mail spam and preventing the spread of computer viruses… it is very difficult to actually create
network neutrality laws which don't result in an absurdity like making it so that ISPs can't drop spam
or stop... attacks.” - Bram Cohen, the creator of BitTorrent.

“While the network neutrality debate continues, network providers often enter into peering
arrangements among themselves. These agreements often stipulate how certain information flows
should be treated. In addition, network providers often implement various policies such as blocking of
port 25 to prevent insecure systems from serving as spam relays, or other ports commonly used by
decentralized music search applications implementing peer-to-peer networking models. They also
present terms of service that often include rules about the use of certain applications as part of their
contracts with users.” - Wikipedia
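
Port-blocking policies of the kind described in the quotation above reduce, in essence, to a simple filter rule. A minimal sketch follows; the port list and the residential/business distinction are illustrative assumptions, not any provider's published policy:

```python
# Hypothetical outbound ports a provider might block for residential subscribers;
# port 25 (direct-to-MX SMTP) is the classic anti-spam-relay example cited above.
BLOCKED_RESIDENTIAL_PORTS = {25}

def should_block(residential, dst_port):
    """Return True if this (illustrative) policy would drop the connection."""
    return residential and dst_port in BLOCKED_RESIDENTIAL_PORTS

print(should_block(True, 25))    # residential host opening port 25: blocked
print(should_block(True, 587))   # authenticated submission port: allowed
print(should_block(False, 25))   # business-class subscriber: exempt in this sketch
```

In practice the same decision is expressed as a router or firewall access-control rule rather than application code; the logic is the point, not the mechanism.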

Mandate camping A condition politely called “mandate camping” can retard progress toward securing cyberspace. This occurs when one party assumes responsibility under a mandate to lead a cyber security initiative, program or operation in a partnership or patrimony, but does not deliver. The effect is to arrest the initiatives of other organizations and parallel programs.

Taxes and Revenue Recognition Security investments cost money and are taxed, but have no
extrinsic rate-of-return from an accounting perspective. To an audit-firm they are reported as a bad
investment. If telcos put a service in place to protect children from exploitation or to counter organized
crime or stop a DDoS attack against the government, there is no revenue recognition. Tax law is not
favourable to national cyber security initiatives.

Regulation Regulation is sometimes necessary to drive correct behaviour into areas where natural market forces will not go because there is no paying client, but where action is in the best long-term interests of society. The challenge is that cyberspace is a complex ecosystem, and regulation is a blunt instrument. An analogue exists in natural science: biologists have attempted to introduce non-indigenous species into ecosystems to regulate pests, and in nearly all cases this has resulted in unforeseen disasters like cane toads and killer bees. Similarly, regulating the telecommunications sector and cyberspace today is fraught with danger, because regulators do not have the ability to run high-fidelity models that can predict outcomes. Regulation in this environment is much like using the Farmer’s Almanac to make long-range weather predictions or forecast severe weather events. A botnet is a severe weather event. Regulation, if not both equitable and uniform, upsets the market and produces undesirable results. Regulation that was intended to open up the telecommunications market also contributed to halting broadband rollout to rural areas, colossal increases in malicious traffic on networks, the proliferation of child pornography, barriers to policing and lawful access, and forced commoditization with its proliferation of dirty pipes and criminally motivated unregulated providers. Complex systems theory, by which the Internet can be explained, does not respond predictably to ‘regulation’ in the mathematical sense, nor in a pragmatic manner. The variables of market dynamics and the threat are too often underestimated. The existence of unregulated public sector resellers of telecommunications services in the market is disruptive to the ecosystem, particularly where adoption is deemed mandatory for a significant sector.

“The rapid technological change and the need to permit flexibility in implementing security measures
militate against codifying requirements by regulation”. - Jennifer Chandler, “Liability for Botnet attacks:
using tort to improve cybersecurity”, Canadian Journal of Technology Law. March 2006

Counter-Commoditization Traditional telecommunications services are rapidly being commoditized. When we strip out security, many providers can offer similar Internet services - at least on the surface. The discriminating factor is price, and this is the basis upon which bad decisions are made. This represents a significant threat to large carriers who have invested billions in infrastructure that others can use cheaply to compete in the same markets. The cost of Internet access and phone service is dropping rapidly. Carriers will not be able to remain profitable if they rely on traditional telephony. This market-technology shift has necessitated that telcos build value-added services and find ways to distinguish their gateway/telephony services from the competition. Cloud security and clean pipes are some of these counter-commoditization initiatives. Commoditization has also facilitated botnets. Trusted Internet connectivity mitigates the ability of robot networks to propagate and operate. The difference between a cheap circuit with no security and a trusted one can be dramatic and demonstrable. [see HostExploit Top50 Bad Hosts and Networks]

Public apathy Cyber security is not ‘top-of-mind’ for the average Canadian. There is general ignorance, apathy and a feeling of helplessness amongst the population. Security is simply not one of the hottest iPhone apps. It is very difficult for providers to sell security if no one feels like buying it. Costs will remain high unless there are economies of scale. A sophisticated public awareness
campaign requires public and private investment. A campaign must advertise security in a persuasive
manner to drive specific actions. It must put in place the necessary conditions to allow providers to
sell security, and invest more in fighting cybercrime such as botnets. Such a public awareness
campaign falls clearly in the ‘swing zone’ of the public sector.

Legal aversion There is debilitating reticence on the part of the legal community to tackle hard security problems without precedent. There are notably few exceptions to this generalization. Most of what is being done, or could and should be accomplished, in fighting botnets is being discussed amongst the operators. Legal debate, informed opinion and precedent simply have not kept pace with technology and tradecraft. There is a widening gap between the law and the realities of cybercrime. Engineers and other lay persons are making ethical, legal and privacy decisions about cyber-security capability development and operations without the benefit of legal context, because counsel often choose to recuse themselves from these complex discussions. There is also a dearth of useful legal commentary and research in the area of botnets. That which is published lacks the benefit of operational experience fighting real botnets and tends to be hypothetical and technically flawed. The question is: do we have engineers, well studied in ethics and law, make the difficult decisions, or is it easier to train lawyers to understand the engineering and operational complexities?

Executive Focus Combating botnets is a complex affair, not one that can be easily compressed into a one-sheet briefing for senior management. Botnets are implicated in most cyber threats and can have enormous impact on an organization. Yet there is seldom C-level representation at the table to coordinate a national response to DDoS attacks or to combat million-machine botnets, whether those decisions involve dismantling a criminal organization such as the RBN or fighting a cyber-war against the PLA.

Liability for action The threat of law suits hangs over the heads of the most effective security solutions, particularly if they are neither mainstream nor well understood. Taking action always opens oneself up to criticism and requires justification. Within a public sector context, it is often more likely to get into trouble by doing something than by doing nothing (risk of failure). This has been particularly true in launching a proactive defence strategy or conducting CNO to support current wars. In the private sector, it is much more likely to get into trouble by not doing something (risk of lost revenue).

Jennifer Chandler suggests that legal or regulatory action is necessary to force ISPs to better protect their customers from Botnet attacks. She argues that ISPs could be held liable for their role in hosting part or all of a Botnet attack. She suggests that ISPs could take a proactive role by monitoring their end-users’ computers and quarantining any infected machines before they cause any harm. - Jennifer Chandler, “Liability for Botnet attacks: using tort to improve cybersecurity”, Canadian Journal of Technology Law, March 2006

Just war theory Politically, the concept of national self-defence to counter a war of aggression refers to a defensive war involving pre-emptive offensive strikes, and is one possible criterion in ‘Just War Theory’. Proactive defence has moved beyond theory; it has already been put into practice in theatres of operation. However, the theory states that only sovereign nation states can wage war. Prima facie evidence would tend to debunk this. Just war theorists argue that a private company which launched an attack pre-emptively in self-defence against an aggressor in a foreign country could precipitate a ground war. Given that the state has ceded sovereignty in cyberspace, and a de facto war is already being waged, the theory tends to read more as commentary. NATO, including Canada, has not yet doctrinally defined what cyber war is. Nation-crippling attacks against critical infrastructures launched by foreign states are referred to as ‘hacking’ under the Criminal Code.

Cdr Vida M. Antolin-Jenkins, “Defining the Parameters of Cyberwar Operations: Looking for Law in all
the Wrong Places?”, Naval Law Review, Vol 132, 2005, 166. General V.I. Tsymbal, in a speech at a
Russian-US conference on “Evolving Post Cold War Security Issues”, Moscow, 12-14 Sept., 1995
stated: “from a military point of view, the use of Information Warfare against Russia or its armed
forces will categorically not be considered non-military phase of a conflict whether there are
casualties or not… Russia retains the right to use nuclear weapons first against the means and forces
of Information Warfare, and then against the aggressor state itself.”

Chris Richter argues that pre-emptive self-defence has been in existence since the Caroline incident
of 1837.

Strike back doctrine Strike back doctrine aligns with pre-emptive and counter-attack tactics of a
proactive cyber defence strategy.

Marine Gen. James Cartwright told the House Armed Services Committee that “the best cyber defence is a good offence.” Federal Computer Week quotes the general as recounting that “history teaches us that a purely defensive posture poses significant risks," and that if "we apply the principle of warfare to the cyber-domain, as we do to sea, air and land, we realize the defence of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests."

On the home front, the entertainment industry has lobbied for the protection of the law to attack any
computer “suspected of distributing copyright-protected material.” This has motivated some to take a
generally opposing view to strike back, and by inference, proactive defence.

There is a dissenting viewpoint to strike back doctrine offered by Bruce Schneier of Counterpane Internet Security, Inc., who waded into the discussion of strike back as early as 15 Dec 2002 in his Crypto-Gram Newsletter and more recently in a 05 April 2007 article, Vigilantism Is a Poor Response to Cyberattack.

Bruce Schneier assesses that in warfare the notion of counterattack is extremely powerful. He agrees with author Richard Clarke's threat of ‘military-style reaction’ in the event of a cyberattack by a foreign country or a terrorist organization. However, in times other than war, Schneier asserts that “both vigilante counterattacks, and pre-emptive attacks, violate rights of the attackers who have not seen a fair trial.” He also “admits that it can be even harder to prove anything in court [and that] the international nature of many attacks exacerbates the problems; more and more cyber criminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces and no extradition treaties.”

Mr. Schneier observes “the common theme here is vigilantism: citizens and companies taking the law
into their own hands and going after their assailants. Viscerally, it's an appealing idea [and that]
revenge is appealingly straightforward, and treating the whole thing as a military problem is easier
than working within the legal system.”

Bruce Schneier argues implicitly against proactive cyber defence, including strike back, regardless of the magnitude or seriousness of the attacks or threat. He implies that to actively defend oneself would infringe on the rights of the attacker. He also suggests that the only option for non-government organizations, which would include critical infrastructure owner/operators, businesses and citizens, is to sustain the attack and seek justice through the legal system. He asserts that “revenge only becomes justice if carried out by the State” and that only “the State has more motivation to be fair.”

The counter-counter argument to the strike back doctrine is that restricting strike back and proactive defence to militaries in time of war rests on a number of errors. For one, it does not fully appreciate the nature and magnitude of cyber-attacks and the necessity of managing the risk with effective tradecraft. It does not recognize that ostensibly we are already in a cyber war, that the enemy is beyond the justice system, that it is the critical infrastructures owned and operated by the private sector that are on the front lines, and that the only means left is a proactive cyber defence strategy that includes strike back against enemies of the state. Without this strategy, cyberspace, including all telecommunications, could go dark within the hour.

Laws of Armed Conflict (LOAC) “Much analysis into the implications of information operations using
computers and the topic of CNA has pointed toward the existing Laws of Armed Conflict (LOAC) and
concluded that in times of war, the LOAC are still valid with little or no change. The applicability
concerns are more with the construct of International Law, which rests on the Peace of Westphalia
treaties that introduced the concept of state sovereignty. In the event that a CNA against a state is
initiated by or involves a non-state actor, particularly outside the context of war, then the LOAC would
not apply. The interesting point about this conundrum is that it indirectly supports the notion of
conducting CNE operations and active defence CND operations in order to make the determination of
the source of an attack as being state or non-state. The second point is that not all computer attacks
will happen in times of declared war, this is where International treaties such as the United Nations
Charter can be of use. Again, these are restricted to the signatories that are all states, however,
states must conduct themselves within these agreements to keep any legitimacy on the international
stage, Canada included. The UN Charter’s Chapter VII Article 51 offers states some allowances to
act in self-defence.” - Evaluating Canada’s cyber semantic gap By LCol Francis Castonguay

The “fundamental principles of military necessity, unnecessary suffering, proportionality, and distinction (discrimination), will apply to targeting decisions” - United States. Department of Defense. Joint Targeting. JP 3-60, 13 April 2007, Appendix E-1.

Legal ambiguity by design As it pertains to the thorny issues of Botnet defence, the legal, privacy and policy community has declined to wade into the discussion or publish strong opinions or straw man policy. There is a ‘wait-and-see’ posture when it comes to official policy. In military circles, CNO doctrine skirts around the key issues of Computer Network Exploitation and Attack, and obliquely references the issues in netcentric warfare.

Accountability vacuum In a general sense, there is no single entity in Canada responsible for cyber security. The vague response one often gets is that it is a shared responsibility between a parliamentary system of government and the private sector that owns the infrastructure. This accountability structure fails when tested by a real botnet. It is much like a volleyball game in which everyone on the team calls the ball, but no one goes for it as it strikes the floor. A national response to cybersecurity, or taking down the agents of a botnet, requires close cooperation between authorities, telecommunications carriers and banks. Most of all, it requires a single leader with vested authority, accountability and resources to make it happen. In the absence of central authority, bilateral relationships and partnerships will develop, an ad hoc matrix support system will emerge, and unilateral leadership action will be taken by those with the most vested interest in botnet defence.

A.3.3.2 Authority to conduct security

As much as providers are sometimes chastised for not cleaning the Internet of malicious content, contradictory criticisms are levelled at ISPs for conducting security operations on behalf of their subscriber base, or for assuming activities which may fall under national security, defence or policing. So what authority do providers have to conduct security operations, including botnet takedowns, or to play in areas historically the domain of government?

Private property The ISP’s network is private, not public. In point of fact, the Internet itself is private: negligible public funding goes into broadband services for Canadians. It is the prerogative of the owner of private property to set the house rules; the CRTC may establish a broader framework. Typically the rules are articulated in terms of acceptable use clauses, contracts and SLAs. ISPs are also businesses, and illegal activity, misuse or abuse of resources is not good for business. So providers have the right to keep their house in order.


You are prohibited from using the Service for activities that include, but are not limited to:

Uploading or downloading, transmitting, posting, publishing, disseminating, receiving, retrieving, storing or otherwise reproducing, distributing or providing access to information, software, files or other material which (i) are protected by copyright or other intellectual property rights, without prior authorization from the rights holder(s); (ii) are defamatory, obscene, child pornography or hate literature; or (iii) constitute invasion of privacy, appropriation of personality, or unauthorized linking or framing.

Transmitting, posting, receiving, retrieving, storing or otherwise reproducing, distributing or providing access to any program or information constituting or encouraging conduct that would constitute a criminal offence or give rise to civil liability.

Violating or breaching any applicable laws and/or regulations.

From TELUS’ Internet Services Account Agreement:

The TELUS Internet Services may be used only for lawful purposes. You agree that you will not:

Post, upload, reproduce, distribute, or otherwise transmit information or materials where such activity
constitutes a criminal offence, or otherwise engage in or assist others to engage in any criminal offence; such
offences include, but are not limited to, communicating hatred, pyramid selling, unauthorized use of a computer,
mischief in relation to data, fraud, defamatory libel, obscenity, child pornography, harassment, stalking and
uttering threats.

Strictly speaking, any malicious or suspicious traffic transiting the carrier networks is subject to security monitoring and mitigation. Penetration testing, red teaming, computer network exploitation or attack over the provider’s network without explicit authorization by the carrier will likely be detected and categorized as malicious. It is subject to monitoring and recording. Furthermore, such unauthorised use of carrier facilities constitutes an offence or breach of contract (acceptable use policy). Consequences may include blocking the traffic, blacklisting the organization, or civil action.

Acts, Laws and Regulations Telecommunications and Internet providers are empowered under the Telecommunications Act, the Privacy Act, regulatory law and special exemptions within the Criminal Code that allow them to monitor all voice and data traffic (communications intercept) for the purposes of conducting the business of a provider.

Criminal Code and Citizen’s arrest There exist provisions within the Criminal Code that permit self-defence and citizen’s arrest. These laws also apply to cyberspace. If a provider sees a cyber crime being committed, they are not obligated to intervene or report it to the police. Notwithstanding, they are fully within their rights to use all reasonable force to stop the crime from occurring, or to arrest those responsible. Citizen’s arrest is meant for situations when police cannot respond in time to a serious crime. In the case of a fast-flux botnet propagating across a provider’s subscriber base, unless the police can detect, reverse-engineer, take down the control channels and neutralize up to a million machines in under two minutes, the provider is within their rights to take unilateral action. Since reporting a crime is not obligatory in Canada, there is no incentive to follow up with paperwork.
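
Fast-flux botnets of the kind mentioned above are typically spotted by the rapid rotation of a domain's DNS answers across many unrelated networks, advertised with very short TTLs. The following heuristic is an illustration only; the thresholds are assumptions for demonstration, not operational detection values:

```python
def looks_fast_flux(ttl_seconds, ips_seen):
    """Flag a domain whose DNS answers rotate quickly across many /16 networks.

    ttl_seconds: the TTL advertised on the domain's A records.
    ips_seen:    the set of IPv4 addresses observed for the domain over time.
    Thresholds below are illustrative only.
    """
    # Crude /16 grouping: the first two octets of each observed address.
    networks = {".".join(ip.split(".")[:2]) for ip in ips_seen}
    return ttl_seconds <= 300 and len(ips_seen) >= 10 and len(networks) >= 5

# A stable host: many addresses, but all inside one network block.
stable = {"203.0.113." + str(i) for i in range(12)}
# A flux-like pattern: addresses scattered across a dozen unrelated /16s.
fluxy = {"198.%d.%d.%d" % (50 + i, i, i) for i in range(12)}

print(looks_fast_flux(60, stable))  # one /16: not flagged
print(looks_fast_flux(60, fluxy))   # many /16s with a short TTL: flagged
```

Operational detectors weigh additional signals (NS record rotation, ASN diversity, query volume); this sketch shows only the core intuition behind why takedown within minutes is beyond normal police response.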

Pre-emptive proactive imperative The pre-emptive proactive imperative is a fundamental ‘natural’ law of warfare and jurisprudence: if you fail to take the initiative and go on the offensive, you will lose the battle. This principle applies to cyber security. The consequence of not dealing with botnets in a proactive manner is the collapse of Canada’s critical information infrastructure, followed immediately by the financial sector and energy grid. Regardless of the vagaries of the legal, policy and privacy dissertations, proactive methods of botnet defence must be employed.

International law, professional engineering, and compliance with standards (ISO, SOX, PCI, SAS 70) and best practices obligate providers to combat robot networks if they are to maintain any security. The direction is not explicit when it comes to robot networks, but the actions of the ISP are justified in the spirit of international law and standards.

Peering relationships Providers have global peering connections and alliances that need to be respected. Administrative action is taken collectively against ISPs who pass malicious traffic or fail to cooperate with the community. Such an ISP will find itself blocked, black-listed or its communications throttled by the community. Action is often swift, even automated, and shapes behaviour.

Lawful Access legislation exists in all countries. Essentially all communications devices and systems put in place by providers have back doors or intercept points to facilitate the lawful intercept of communications.

Traffic Shaping Precedence. There is sufficient case law in Canada that justifies the provider’s right to
conduct traffic shaping.

Mandamus prerogative writs have been used by the private sector and individuals to compel the government to fulfil its mandate and public service. Such writs can be used to force public authorities to stop a botnet attack. Should the government not be able to fulfil its role in this regard, the private citizen or organization can use whatever means necessary to act, with the same authorities vested in government. Jurisprudence supports the citizen’s right to self-defence using reasonable force, including lethal force. In other case law, citizens have repaired roads as an urgent matter of public safety and invoiced the municipality when public works failed to act. Similar arguments hold for cyberspace.

Public Nuisance. A person or organization is guilty of a public nuisance who omits to discharge a
legal duty and thereby endangers the life, health, property, morals or comfort of the public, or
obstructs the public in the exercise or enjoyment of its rights, whether through action or failure to act.
An organization that propagates malicious network traffic is at risk of being sued or charged as a
public nuisance.

Civil Action. An Anton Piller order is a court order that provides ISPs the right to search premises
and seize evidence without prior warning. This prevents the destruction of incriminating evidence,
particularly in cases of alleged cyber crime or trademark, copyright or patent infringement. Typically
this order is granted only where there is an extremely strong prima facie case against the respondent,
the actual or potential damage is serious, and the target of the investigation holds evidence that they
would move to suppress or destroy if given the chance. Should a provider detect a botnet controller, it
could request an Anton Piller order to search and seize computers for forensic analysis and for use in
a civil law suit. Police are not involved because this is a civil order. Court orders of this type have
been used effectively against piracy and theft of intellectual property. The order gives the provider
the authority to take extraordinary means to investigate cases, including cyber attacks. An ISP can
use this authority to search for and seize evidence where large amounts of malicious traffic or attacks
are emanating from an external organization (public or private) and either transiting the provider’s
facilities or directed at the provider’s infrastructure.

A.3.4 Risk and liability of inaction

Considerable exposure has been created by an excessively cautious interpretation of
cyber security and infrastructure protection mandates. The source appears to be legal and policy
centres that have been so concerned about exceeding mandates that they have driven organizations
to fall short of basic expectations regarding the public trust and, in so doing, have raised risk
significantly for their organization and executive. This is what has caused botnet activity to rise steeply
in these organizations over the past few years.

“Canada has been slow at translating policy into capabilities. The lack of investment in Cyber-related
initiatives is a telling indicator of the maturity level of our current thinking about the Cyber
environment.” - Canada. Privy Council Office, Securing an Open Society : One Year Later : Progress
Report on the Implementation of Canada's National Security Policy (Ottawa: Privy Council Office,
2005), 53, 58, xv.

Introduction to Complex Risk Management and legal obligations.

An integrated risk management framework addresses the full spectrum of threats in a predictive and
scientific manner and puts a human face on the issues. Probable and possible outcomes can be
presented, qualified with measures of certainty, and communicated effectively to decision makers. In
theory, this is a far better way of implementing security than reacting to a crisis in an ad hoc and ill-
informed manner driven by the emotion of the incident.

Existing Methodologies

The applied science of risk has evolved considerably since the executive failures to act leading up to
9/11, SARS, Air India and the current large-scale cyber attacks against Canada’s critical
infrastructure. Yet policy and legal institutions in both the public and private sectors are by their very
nature risk-averse, to the point of causing grave injury to the public interest through highly restrictive
and unjustified self-limiting interpretations of business operations and mandates. These restrictions
are out of touch with public perceptions and expectations. Organizations have swung so far into a
regulated corporate culture as to deliberately avoid any transgression of mandate, or the taking of a
positive risk, at the expense of acting in the best interests of either their shareholders or constituents.

How people perceive risk and implications for policy and procedures.

Recent research by Slovic and his colleagues, Epstein and others on rational versus affective
perception of risk, and on behavioural factors in compliance, illuminates the concept of “probability
neglect” and its implications for policy. There is merit in basing communication planning on this
research; it also underlines the importance of building and maintaining trust, which will help systems
to function in an emergency phase and in long-term recovery from a terrorist incident. People require,
and reward, transparency, openness, and the opportunity to be involved. Recent work on risk analysis
and risk communication suggests that the public is more aware of the range of threats and that it
demands higher standards of preparedness and communication. The public wants information and
wants a sense of choice and control. The challenge will be to translate this into policy guidance and
practice. People fear uncertainty and inaction, equating them with risk.

Contextual concepts of risk management

There are important contextual aspects to consider in modeling behavioural responses to actual and
perceived threats and to decision-making bound to effective risk management:

Risk exists throughout a behavioural system at physical and semantic levels, often in contradictory
ways. The government often addresses risk as a protector of rights and quality of life for its citizens,
as a source of economic development, or in the interests of self-preservation of the establishment.

Notwithstanding, there is inherent risk in all decisions and in indecision. It must be recognized that there
is a relationship between the cost of any particular action and the cost of not acting. Actions to
minimize risks cannot be taken "at whatever the cost". In practical terms, the implications of a given
risk must be measured against the cost of addressing that risk, or of directing resources to other
priorities. Furthermore, to avoid or ignore a risk is to mismanage the risk and ultimately to accept it.

“A risk ignored is a risk accepted.”

Given that Ministers are accountable to the public, societal values and the public’s willingness to
accept or tolerate risk are relevant and legitimate considerations for public decision-making, whether
or not they are consistent with a scientific assessment of the risk. The common perception of risk
must be taken into account, as must the human behavioural response to a threat stimulus. As stated,
there is a correlation between the tolerance for risk and the perception of control over the activity
generating a given incident.

Risk typically has a negative connotation, but there are also positive opportunities arising from risk-
taking; innovation and risk frequently co-exist. Under the National Security Act, the Government Security
Policy and the Financial Administration Act, government managers are held responsible for ensuring
prudent risk management; this includes not only demonstrating wise risk mitigation but also taking
appropriate risks in pursuing opportunities, and ensuring the safety of Canadians in a proactive
manner. Legal advisors and public policy centres who deliberately stall, or who interpret legal mandates in
a risk-averse manner, must be held culpable for all ramifications resulting from a failure to act,
including breach of the public trust. Furthermore, individuals may be personally liable under the Financial
Administration Act and in civil suits for damages, particularly for not allowing all means and measures
to be taken in the protection of the public and private critical infrastructures. In risk analysis,
exposures owing to inaction are tabled as losses or negative impacts.

A horizontal analysis of the public policy environment surrounding risk management identifies several
key elements:

• Relationships exist between public policy decisions and the environment of risk in which they
are made;
• Managing uncertainty occurs within the process of making decisions;
• Public perceptions and concerns must be considered in all enterprise decision-making, and they
are a central and legitimate input to the process;
• Scientific uncertainty about risk, competing policy interests and public reluctance to accept risk
with high uncertainty have led to an increased focus on the precautionary approach; and
• Exposure arises from failing to act to the full extent and interpretation of legal mandates.

Risk management must be practiced at all stages of the decision-making process within an enterprise
such as government. The role of public servants in serving the public interest is to incorporate the
public element into decisions. The recognition and understanding of public concerns is critical to the
resolution of issues and is fundamental to the operation of good government.

The public expects that their government will most importantly act in their best interests.

To bring this concept into the decision-making process, the practice of risk assessment should
develop both empirical and public contexts. This approach leads to the development of relevant and
effective policy options. It should also be recognized that developing policy options inherently
involves difficult trade-offs and the need to balance competing objectives and priorities, often leading
to good rather than perfect solutions when viewed from a singular perspective.

Precautionary Approach to Risk Management

The precautionary approach to risk management is an increasingly important element of public policy
particularly as it applies to the uncertainties of risk assessment. The approach forces a conscious risk
management decision to act, or to not act. The prevalence of the precautionary approach itself
reinforces the need for a risk management framework in public policy.

In order to protect the public trust, the precautionary approach should be widely applied where there
are threats of serious or irreversible damage. As a matter of public policy, a lack of full scientific
certainty is not an acceptable reason for postponing cost-effective measures to prevent an event. In
other words, a lack of knowledge about the possible risks of a given situation may no longer be used
as an excuse to avoid action, and there is no excuse for failing to act to the full interpretation of a
legal mandate. Overly conservative legal or policy direction is considered a premeditated
circumvention of mandated responsibilities and the public trust.

The precautionary approach has been enacted in a number of federal and provincial statutes and policies, and
there is a policy position advancing the application of the approach more broadly within the Canadian
government. Enactment of an integrated risk management framework, including a precautionary
approach to addressing risk, has been slow to be operationalized within the public sector owing to a
paucity of skilled analysts, the level of familiarity with these concepts among management, and the
inherently risk-averse nature of policy makers and the legal profession. The precautionary approach
should be applied during the option-development and decision phases within the framework of
public risk management.

Constant considerations for risk management

A public policy decision-making process does not occur in isolation, and there are considerations that
require ongoing attention throughout the process. A particular challenge in developing a more
integrated approach to communication and consultation activities throughout the decision process is
the need to focus on the contextual aspects of risk, including the degree of public tolerance for risk,
which studies show is much greater than that of government.

Legal considerations are key concerns throughout a decision-making process. An initial review of the
main considerations in the context of public risk management suggests that they include exposure
to legal liability associated with a breach of the duty of care, and breach of the public trust for failing
to act within the public’s perceptions and expectations of the mandate.


Let’s look at three relevant case studies involving Botnets, providers and authorities:

Botnet crackdown in Canada. The police have no extraordinary view into the volume of malicious
traffic that flows over our national infostructure in a given day, so they need to ask the providers: who
are the cyber criminals running botnets in Canada? To answer this question, a tier 1 network owner
would need to monitor all network traffic for anomalous or malicious behaviour over a period of time,
with special sensors and tradecraft (discussed in earlier chapters). Critical node analysis could
identify addresses for more targeted investigation supported by cyber forensics. Eventually, virtual
identities could be resolved to a real name and a physical address in Canada. Providers have it as their
prerogative to conduct such operations in the regular security maintenance of their private network,
and can take civil legal action against the subscriber running the botnet in Canada. Providers may
also hand this information over to police at any time to press criminal charges. Provisions under
privacy law allow for disclosure of subscriber information to authorities when there is a crime or a
matter of national security, and child safety laws compel providers to inform police of child
pornography should they come across it in the normal course of network operations.

So why are police not overwhelmed by requests for support from ISPs to solve botnet crime? The
primary reason is that it is more expedient to take down a large botnet than to investigate who was
behind it and take them to court. Secondly, authorities do not possess the capability to resolve a
botnet incident in an expedient manner (typically minutes); the ISP would have to let the botnet
operate while it conducted an investigation at great expense, hand the evidence to police, and then
still be left to resolve the botnet incident. The third stumbling block is that subscriber information is
personal information. There is still a debate as to whether an IP address constitutes personal
information under certain circumstances. The provider may give an IP address and subscriber identity
to police, but it retains all the risk of being sued or charged by federal authorities. This assumption of
risk is seen as unreasonable by all ISPs, who will demand a federal warrant before releasing Internet
data of this type. Although dialled-number or toll information generally does not require a warrant
today, its release began with a warranted process until the courts and providers felt comfortable.
There are some exceptions, including emergencies, risk to life and child exploitation, where a provider
has released subscriber information in advance of the paperwork. Several thousand such requests are
entertained by providers each year, but none have involved botnets. In such cases, where subscriber
information is released to police, a letter is sent from the ISP to the individual notifying them of the
release. Should an official request from the police under PIPEDA be refused by the ISP, the ISP is to
send a notification to the Privacy Commissioner. For these reasons of disclosure, requests to release
subscriber information using PIPEDA for criminal or national security investigations are not
recommended.

For police to ask the question “who are the cyber criminals?” and get an answer requires that a
number of steps be followed. The initial workup of cyber intelligence is a professional consulting
engagement, and the ISP must be remunerated for work in this regard. Open-source security
intelligence sources such as black lists can be mined for targeting information. A summary of the
intelligence (less subscriber IPs) can be used in an affidavit by police to get a warrant, after which the
established lawful access process and billing can proceed.
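The targeting step above — mining an open-source black list against the provider's own traffic — can be sketched as follows. This is an illustration only: the flow-record tuples, addresses and blocklist contents are invented for the example, and a real workup would draw on NetFlow/IPFIX exports and a curated intelligence feed.

```python
from collections import Counter

# Hypothetical inputs: blocklisted C&C addresses from an open-source feed,
# and simplified NetFlow-style records of (source, destination, bytes).
blocklist = {"198.51.100.9", "203.0.113.44"}
flows = [
    ("10.0.0.5", "198.51.100.9", 4200),
    ("10.0.0.5", "198.51.100.9", 3100),
    ("10.0.0.8", "192.0.2.10", 900),
    ("10.0.0.9", "203.0.113.44", 12000),
]


def candidate_subscribers(flows, blocklist):
    """Count how often each internal source contacts a blocklisted address.
    Repeated beaconing to a known controller marks the subscriber IP as a
    candidate for targeted investigation (and, later, a warrant)."""
    hits = Counter()
    for src, dst, _ in flows:
        if dst in blocklist:
            hits[src] += 1
    return hits


print(candidate_subscribers(flows, blocklist))
# e.g. Counter({'10.0.0.5': 2, '10.0.0.9': 1})
```

The counts themselves (less the subscriber IPs) are the kind of summary intelligence that could support a police affidavit.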

Search for Victims. What if police have a domain, identity or IP address of a foreign bot controller, but
no victims? In this case the police may be interested in stopping a crime, or in understanding why
certain Canadians were targeted in a case of terrorism, espionage, targeted exploitation or spear
phishing. It is possible to trace communications from foreign addresses to computers in Canada.
Much of the same workup is required as discussed in the previous case. In this investigation, a
rolling surveillance warrant could be sought to tap outgoing connections to a foreign IP address, domain
or sub-domain; as these identifiers change, the warrant can be updated. Victim machines could then be
identified to police.
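The rolling-warrant workflow above can be sketched as a watchlist that is amended as the controller's identifiers change. This is an illustrative sketch, not an implementation of any particular lawful-intercept system; the class name, record format and addresses are invented for the example.

```python
class RollingWatchlist:
    """Identifiers (IPs or domains) of a foreign bot controller named in a
    warrant; entries are added or retired as the controller moves."""

    def __init__(self):
        self.entries = set()

    def update(self, add=(), remove=()):
        """Amend the watchlist when the warrant is updated."""
        self.entries |= set(add)
        self.entries -= set(remove)

    def match_victims(self, flows):
        """Given (source, destination) records of outgoing connections,
        return the internal sources seen contacting a watched identifier:
        the candidate victim machines."""
        return {src for src, dst in flows if dst in self.entries}


wl = RollingWatchlist()
wl.update(add=["controller.example", "198.51.100.9"])
flows = [("10.1.1.4", "controller.example"), ("10.1.1.7", "192.0.2.1")]
print(wl.match_victims(flows))  # {'10.1.1.4'}
wl.update(add=["203.0.113.5"], remove=["198.51.100.9"])  # controller moved
```

Keeping the match logic separate from the watchlist contents is what lets the warrant be updated without re-engineering the tap.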

Clear policy guidance is required in all of these cases. We also recommend that Secure Liaison
Officers (SLOs) from the provider be embedded within police operations and investigations to
manage the information exchange. Peace officers could also participate in an industry exchange.

The process of data sharing could be highly automated but the access and control of subscriber
information must remain with the carrier.
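One way to automate the exchange while leaving control of subscriber identity with the carrier is keyed-hash pseudonymization: the carrier shares stable tokens rather than raw subscriber IPs, and only the carrier, which holds the key, can re-identify a token (by recomputing it over its own subscriber records once a warrant is produced). This is a minimal sketch under stated assumptions; the key, field names and report format are invented for the example.

```python
import hashlib
import hmac

CARRIER_KEY = b"carrier-held secret"  # assumption: this key never leaves the carrier


def pseudonym(subscriber_ip: str) -> str:
    """Stable keyed-hash token for a subscriber IP. Police can correlate
    repeated activity by the same subscriber across automated reports, but
    only the carrier, holding the key, can re-identify the token."""
    digest = hmac.new(CARRIER_KEY, subscriber_ip.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]


# Automated report: subscriber identities stay with the carrier.
report = [{"subscriber": pseudonym(ip), "event": "C&C beacon"}
          for ip in ("10.0.0.5", "10.0.0.9", "10.0.0.5")]
assert report[0]["subscriber"] == report[2]["subscriber"]  # correlation works
```

An HMAC rather than a plain hash matters here: without the carrier-held key, an outside party could not simply hash candidate IPs to reverse the tokens.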


Disclosure without knowledge or consent (excerpt from PIPEDA)

(3) For the purpose of clause 4.3 of Schedule 1, and despite the note that accompanies that clause, an
organization may disclose personal information without the knowledge or consent of the individual only if the
disclosure is
(c.1) made to a government institution or part of a government institution that has made a request for the
information, identified its lawful authority to obtain the information and indicated that
(i) it suspects that the information relates to national security, the defence of Canada or the conduct of
international affairs,

Time limit
(3) An organization shall respond to a request with due diligence and in any case not later than thirty days after
receipt of the request.
(5) If an organization decides not to give access to personal information in the circumstances set out in
paragraph (3)(c.1), the organization shall, in writing, so notify the Commissioner, and shall include in the
notification any information that the Commissioner may specify.

Managed Security Service

What if clients want to monitor their own networks that are managed by the carrier?

Every system administrator is free to monitor their own network for security violations or instances of
botnets. Pragmatically, however, it is near impossible to catch a criminal robot network from inside your
own network. No organization in Canada really knows its complete network perimeter with absolute
certainty. Robot networks operating across your autonomous address space are best observed
upstream, at central locations within a major carrier, where your entire address space can be
monitored. However, the client cannot place their own sensors in the core, for a number of reasons:
the risk of widespread disruption or interference is too great, as is physical/logical access to sensitive
facilities and to the private communications of all Canadians. In the case of government, tapping too
close to the core raises the additional spectre of big brother. There are two basic solutions: the
carrier provides a managed security service (MSS) to collate network presence, monitor, report and
clean; or the provider aggregates the client’s traffic, taps the communications and delivers them to the
client in a hosted or remote environment physically and logically isolated from the core. In either
scenario, the provider needs to provide verifiable oversight, through architecture design, process
controls and technical means (deep packet inspection), ensuring the client receives all their traffic
(nothing less and nothing more). The direction of the feed can also be assured using technical and
procedural means.

Client-to-Client Communications - In this context, this should be treated like any other network
management or perimeter security initiative. The client would have an implied (or express) authority
to monitor the communications that take place over their network between members or agencies of
their organization. In essence, the communications are all between one legal person and therefore
there is generally no issue of consent.

Organization-to-Citizen Communications - The infrastructure also facilitates communications from
citizens to organizations. The issue in this case is whether the organization has advised its clients
that their information is going to be collected in the manner required, and whether that collection is
onside of the privacy rules that apply to the organization. The public is unlikely to view a transaction
with one division as deemed consent to have the information received by other parts of the
organization. Therefore, in these situations, the customary way to control this kind of risk is to obtain
an indemnity from the organization for any claims that might be brought forward by citizens (or any
third party) against the carrier due to the organization's failure to obtain consent.

Communications involving other levels of the organization – The infrastructure is also potentially in
use by other levels of the organization, which might not even be aware that their traffic is being
captured centrally. Those levels would not have sought the necessary consents from their citizens
and third parties. This risk would be controlled with the same indemnity recommended above.

Ultimately, the issue here is not whether a carrier can provide managed security services of this type
to an organization, nor is it about whether the solution is owned by the organization or carrier-
managed. However, just because the carrier is likely just a "supplier", it is not exempt from potential
risk and liability from third parties that may take issue with the practices adopted by the organization.
Therefore, any change request or service authorization that implements this solution should include
the following elements:

Disclaimers - Ensure that the carrier disclaims all liability to the organization in connection with any
third-party claims against the organization that are based upon the existence or use of this solution
by the organization. The carrier remains liable for failures to deliver the service, but it is not liable if
the organization needs to pay out to a third party to settle a lawsuit on this point.

Indemnity - Ensure that the organization provides the carrier an indemnity in respect of any third-party
claims that may be brought against the carrier occasioned by the existence or use of this solution by
the organization. Again, as the carrier does not control those third-party relationships, it is reasonable
to expect that the carrier would be made whole in the event of a claim directly against the carrier.

