
International Journal of Application or Innovation in Engineering & Management (IJAIEM)

Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com Volume 1, Issue 3, November 2012 ISSN 2319 - 4847

Securing Cloud Servers against Flooding Based DDoS Attacks


Niraj Suresh Katkamwar¹, Atharva Girish Puranik² and Purva Deshpande³

¹Department of Computer Engineering, NCET, Nagpur University
²Department of Computer Technology, YCCE, Nagpur University
³Department of Computer Technology, YCCE, Nagpur University

ABSTRACT
Cloud computing is a young and highly dynamic field in a buzzing IT industry. Virtually every industry, and even parts of the public sector, is adopting cloud computing today, either as a provider or as a consumer. Despite its youth, the field has not been left untouched by hackers, criminals and other attackers who break into web servers. Once compromised, these web servers can serve as a launching point for further attacks against users in the cloud. One such attack is the Denial of Service (DoS) attack and its distributed variant, the DDoS attack. This paper presents a simple distance-estimation-based technique to detect flooding-based DDoS attacks in the cloud and thereby protect other servers and users from their adverse effects.

Keywords: Cloud, Attacks, Security, DDoS.

1. INTRODUCTION
Cloud computing is a catchword in today's IT industry that nobody can escape. It uses modern web and virtualization technology to dynamically provide various kinds of electronically provisioned services. Over the last few years it has come into focus as a way to increase capacity or add new services without investing in new infrastructure, training new personnel, or licensing new software. It incorporates any paid or subscription-based service delivered over the Internet, extending the existing capabilities of the IT industry. Moreover, these services are available in a reliable and scalable form to multiple consumers whenever required. Another major advantage of cloud computing is that it hides the complexity of the underlying technology from front-end users and, to some extent, from developers.

A number of definitions of the cloud have been proposed in the literature; most of them share common features such as scalability, on-demand provisioning, pay-as-you-go pricing, self-configuration, self-maintenance and Software as a Service. Two widely cited definitions are:

"A large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically-scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet." - Foster et al. [1]

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." - National Institute of Standards and Technology (NIST) [2]

Cloud computing and virtualization can be abridged into a four-layered model architecture as in Figure 1 [3]:

Hardware - The highly capable computing and networking equipment, including efficient processing engines, storage solutions, networks, and faster and larger memories.

Infrastructure as a Service (IaaS) - In order to serve a larger number of users with limited resources, a suitable allocation scheme is necessary. Infrastructure refers to the operating system and its virtualization; different users are virtually allocated dedicated CPU and memory depending on their accountability.

Platform as a Service (PaaS) - The programming models, execution environment, programming language environment, database, and web server. This can include development, administration and management tools, run-time and data management engines, along with security and user management services.

Software as a Service (SaaS) - The most important layer from the user's perspective. In this model, cloud providers install and operate application software in the cloud. Cloud users access this software from cloud clients and do not directly access the cloud infrastructure and platform on which the application is running; that is, users access the software online and store their data back in the cloud, eliminating the need to install the application on their own computers. This provides simplified maintenance and support for different levels of user accountability [2].



Figure 1: Cloud layered model.

The cloud as defined by NIST has three main deployment models and a fourth one that is a composition of the others [2]. When a single organization operates the cloud infrastructure, the private cloud deployment model is used; the infrastructure in this model can be administered locally or by third parties, and resources may exist on premises or off premises. When several organizations with similar goals operate the cloud infrastructure, the community cloud model is used; administration and resource location can again be handled locally or by a third party. The third deployment model is the public cloud, in which the cloud infrastructure is available to the general public and the responsible organization may provide a variety of cloud services. Finally, the hybrid cloud is a composition of several deployment models that supports application portability.

1.1 DoS / DDoS Attack
A Denial of Service (DoS) attack is a type of attack focused on disrupting availability. Such an attack can take many shapes, ranging from an attack on the physical IT environment, to the overloading of network connection capacity, to the exploitation of application weaknesses. A DoS attack involves using one computer or Internet connection to flood a server with packets (TCP/UDP). The objective of the attack is to overload the server's bandwidth and other resources so that anyone trying to access the server is not served, hence the term "denial of service".

Figure 2: General architecture of DDoS attacks.

A DDoS (Distributed Denial of Service) attack is essentially the same as a DoS attack, but its results are far more destructive. As the name suggests, a DDoS attack is executed using a distributed computing method, often called a botnet army. Creating such an army involves infecting computers with malware that gives the botnet owner access to them; this access can range from simply using the computer's connection in the attack to gaining complete control over the machine. An attacker can aggregate hundreds, thousands or even more such machines and direct them at a server until it has no choice but to shut down under the overload of bandwidth, RAM and CPU power. It is therefore much harder for a server to withstand a DDoS attack than a simpler DoS incursion.


Distributed denial-of-service (DDoS) attacks pose a serious threat to network security. Many methodologies and tools have been devised to detect DDoS attacks and reduce the damage they cause; still, most of them cannot simultaneously achieve (1) efficient detection with a small number of false alarms and (2) real-time transfer of packets. DDoS attacks can be classified into three main categories. Bandwidth attacks are intended to overflow and consume the resources available to the victim (i.e., network bandwidth and equipment throughput); examples are the TCP SYN flood, ICMP flood and UDP flood. Protocol attacks take advantage of inherent protocol design (e.g., Smurf and DNS-based attacks). Software vulnerability attacks attempt to exploit a design flaw in a software program (e.g., the Land attack, Ping of Death, and fragmentation attacks). Jelena Mirkovic, Janice Martin and Peter Reiher [5] give a detailed classification of DDoS attacks based on the degree of automation, exploited vulnerability, attack rate dynamics and impact. Some common DDoS attacks are discussed below.

1.1.1 SYN Flood Attack
A SYN flood occurs when a host sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection by sending back a TCP/SYN-ACK (acknowledgement) packet and waiting for a response from the sender address. Because the sender address is forged, the responses never come. These half-open connections saturate the number of connections the server is able to make, keeping it from responding to legitimate requests until after the attack ends.

1.1.2 Smurf Attack
A Smurf attack is a particular variant of a flooding DoS attack on the public Internet. It relies on misconfigured network devices that allow packets to be sent to all hosts on a particular network via the network's broadcast address rather than to a specific machine; the network then serves as a "Smurf amplifier". In such an attack, the perpetrators send large numbers of IP packets with the source address forged to appear to be the address of the victim. The network's bandwidth is quickly used up, preventing legitimate packets from getting through to their destination [7].

1.1.3 ICMP Flood
Like the other flooding attacks, an ICMP flood is accomplished by broadcasting a large number of ICMP packets, usually ping packets. The idea is to send so much data to the system that it slows down drastically and is disconnected due to timeouts. In particular, ping flood attacks attempt to saturate a network by sending a continuous series of ICMP echo requests over a high-bandwidth connection to a target host on a lower-bandwidth connection; the receiver must send back an ICMP echo reply for each request.

1.1.4 Ping of Death
A Ping of Death involves sending a malformed or otherwise malicious ping to a computer. A ping is normally 32 bytes in size; the Ping of Death is caused by an attacker deliberately sending an IP packet larger than the 65,535 bytes allowed by the IP protocol. Many operating systems do not know what to do when they receive such an oversized packet, so they freeze, crash or reboot. Ping of Death attacks were particularly nasty because the identity of the attacker sending the oversized packet could easily be spoofed, and because the attacker did not need to know anything about the target machine except its IP address. By the end of 1997, operating system vendors had made patches available to avoid the Ping of Death. Many variants of the Ping of Death exist, including Jolt, sPING, ICMP bug, IceNewk and Ping o' Death [8]. However, most modern firewalls are capable of filtering such oversized packets.

1.1.5 Land Attack
A LAND attack consists of a stream of TCP SYN packets that have the source IP address and TCP port number set to the same values as the destination address and port number (i.e., those of the attacked host). Some TCP/IP implementations cannot handle this theoretically impossible condition, causing the operating system to go into a loop as it tries to resolve repeated connections to itself. Service providers can block LAND attacks that originate behind aggregation points by installing filters on the ingress ports of their edge routers to check the source IP addresses of all incoming packets: if the address is within the range of advertised prefixes, the packet is forwarded; otherwise it is dropped.

1.1.6 Teardrop
The Teardrop is an old attack that relies on poor TCP/IP implementations which are still around. It works by interfering with how network stacks reassemble IP packet fragments. As IP packets are sometimes broken up into smaller chunks, each fragment still carries the original IP packet's header, together with a field that tells the TCP/IP stack which bytes it contains; when everything works correctly, this information is used to put the packet back together again.


What happens with Teardrop, though, is that the stack is buried under IP fragments with overlapping fields. When the stack tries to reassemble them it cannot, and if it does not know to discard these garbage fragments it can quickly fail. Most systems now know how to deal with Teardrop packets, and a firewall can block them at the cost of a little extra latency on network connections, since it must then discard all broken packets. Of course, if a large enough volume of malformed Teardrop packets is thrown at a system, it can still crash. Many other variants, such as Targa, SynDrop, Boink, Nestea Bonk, TearDrop2 and NewTear, are available to accomplish this kind of attack. A minimal sketch of the overlap check a robust reassembly routine can apply is given below.
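The following Python sketch is not taken from any particular TCP/IP stack; it simply illustrates the sanity check described above. A fragment train whose byte ranges overlap is exactly the malformed pattern that Teardrop-style tools produce, and representing fragments as (offset, length) pairs is an illustrative simplification.

```python
# Illustrative sketch of the sanity check a reassembly routine can apply:
# a fragment train with overlapping byte ranges (the Teardrop pattern)
# should be discarded rather than reassembled.
def fragments_overlap(fragments):
    """fragments: iterable of (offset, length) pairs, in bytes."""
    prev_end = 0
    for offset, length in sorted(fragments):   # walk fragments in offset order
        if offset < prev_end:                  # starts before the previous one ends
            return True
        prev_end = offset + length
    return False

print(fragments_overlap([(0, 1480), (1480, 1480)]))  # False - well-formed train
print(fragments_overlap([(0, 1480), (1000, 1480)]))  # True  - Teardrop-style overlap
```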

2. METHOD TO PREVENT DDOS ATTACK


Early fixes for SYN flooding focused on increasing the length of the connection queues and reducing the timeout value, which controls how long an entry waits in the queue until an acknowledgement is received. The problem with simply making the queue longer is that there are actually many queues (one for each TCP server on the system: HTTP, FTP, SMTP, etc.), and lengthening them to very large values, for example eight kilobytes, results in the operating system requiring an enormous amount of memory (over 100 megabytes for a system with 25 server applications). Shortening the timeouts can help when used with longer queue lengths, because spoofed entries are removed from the queues more quickly, but it also affects new outgoing connections and remote users with slow links, who might otherwise never get connected to the server at all. It is more advisable that Internet Service Providers (ISPs) filter the packets they receive from their customers: packets whose return (source) addresses do not belong to the ISP's customers must be rejected, foiling this attack as well as other attacks that rely on source address spoofing. Unfortunately, it is unlikely that all ISPs will filter all IP traffic coming from their customers any time soon. Some security product vendors, such as Check Point and Internet Security Systems (ISS), have announced products for dealing with TCP SYN flooding. Check Point offers an add-on module for its Firewall-1 product that is intended to block this attack; Firewall-1 checks packets before they enter the IP layer of the TCP/IP stack and can reasonably be expected to work. ISS sells a product named RealSecure that watches network traffic, detects the packets involved in a TCP SYN flood, and sends "resets" to the affected servers to prevent the queues from filling up. Again, this approach is feasible, but it depends on how good the watcher is at distinguishing attack packets from valid ones. A minimal sketch of the ISP-side source-address filtering rule is given below.
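As an illustration of the ingress-filtering rule recommended above, the following Python sketch checks whether a packet's source address falls inside one of the prefixes assigned to the ISP's customers. The prefixes and test addresses are purely illustrative assumptions, not configuration from any real provider.

```python
# Illustrative sketch of ISP ingress filtering: forward a packet only if its
# source address belongs to a prefix advertised for the ISP's customers.
from ipaddress import ip_address, ip_network

CUSTOMER_PREFIXES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def permit_ingress(source_ip: str) -> bool:
    """Return True if the packet may be forwarded, False if it should be dropped."""
    addr = ip_address(source_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

print(permit_ingress("203.0.113.7"))  # True  - legitimate customer source
print(permit_ingress("192.0.2.99"))   # False - spoofed source, drop the packet
```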

3. PROPOSED METHOD TO PREVENT DDOS ATTACK


Commonly used DDoS detection techniques fall into two categories: IP-attribute-based detection and traffic-volume-based detection. The first category uses attributes such as IP protocol type and packet size, source IP prefix and TTL values, and server port number to determine anomalous behaviour. The second uses a multi-level tree that keeps packet-rate statistics for subnet prefixes at different aggregation levels: normal traffic usually has a proportional rate to or from hosts and subnets, so an attack is detected when a disproportional traffic rate is observed. Most techniques in these categories suffer either from a strong dependence on the attribute used to compute the entropy, from long delays due to complex computation, or from a weak connection between the selected attributes and DDoS attacks, making the detection scheme ineffective.

Another class of DDoS detection techniques relies on distance estimation. In this paper we use an average-distance-estimation-based detection technique, in which the mean value of distance in the next time period is estimated using exponential smoothing. This distance-based, traffic-separation detection technique uses an MMSE (Minimum Mean Square Error) linear predictor to estimate the traffic rates arriving from different distances. The distance value is calculated from the TTL field of the IP header: during transit, each intermediate router decrements the TTL value of an IP packet by one, so the distance travelled by the packet is its initial TTL value minus its final TTL value. The challenge in distance calculation is how the victim derives the initial TTL value from the final TTL value. Fortunately, most operating systems use only a few selected initial TTL values: 30, 32, 60, 64, 128, and 255 [6], and most Internet hosts can be reached within 30 hops. The initial value can therefore be determined by choosing the smallest of the possible initial values that is larger than the observed final TTL value, as in the sketch below.
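The following Python sketch illustrates this initial-TTL inference and the resulting hop-distance estimate. The function name and the treatment of a final TTL that exactly equals a common initial value (taken to mean zero hops) are our assumptions, not part of the original description.

```python
# Minimal sketch (assumed helper, not from the paper): infer hop distance
# from the final TTL of an arriving packet, using the common initial TTL
# values cited in [6].
COMMON_INITIAL_TTLS = (30, 32, 60, 64, 128, 255)

def hop_distance(final_ttl: int) -> int:
    """Estimate how many routers the packet has traversed.

    The initial TTL is assumed to be the smallest common value that is not
    below the observed final TTL (equality means the sender is zero hops away).
    """
    initial = next(t for t in COMMON_INITIAL_TTLS if t >= final_ttl)
    return initial - final_ttl

print(hop_distance(52))   # 60 - 52 = 8 hops
print(hop_distance(119))  # 128 - 119 = 9 hops
```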

The detection of an anomaly relies on a description of normality and of deviation from it. The exponential smoothing estimation technique is chosen because of its successful application in the real-time measurement of the round-trip time of IP traffic. The model predicts the mean distance value $\hat{d}_{t+1}$ at time $t+1$ using:

$$\hat{d}_{t+1} = \hat{d}_t + w\,(M_t - \hat{d}_t)$$
Here, $\hat{d}_t$ is the distance value at time $t$ predicted at time $t-1$, $M_t$ is the measured distance value at time $t$, $w$ is a smoothing gain, and $M_t - \hat{d}_t$ is the error in that prediction at time $t$. If $w$ is higher, the last prediction error carries more weight in predicting the next distance value, so the predicted values follow the actual distance fluctuations more closely. To determine whether the current distance value is abnormal, the mean absolute deviation (MAD) can be utilized:


$$\mathrm{MAD} = \frac{1}{n}\sum_{i=1}^{n} |e_i|$$

where $n$ is the number of all past errors and $e_i$ is the prediction error at time $i$. However, it is not realistic to maintain all past errors, so we again use exponential smoothing to calculate MAD based on the approximation:

$$\mathrm{MAD}_{t+1} = r\,|e_t| + (1 - r)\,\mathrm{MAD}_t$$
where $\mathrm{MAD}_t$ is the MAD value at time $t$ and $r$ is a smoothing gain. If the measured value at the next moment falls outside the legal scope around the prediction, an anomalous situation is detected. A minimal sketch of the resulting detection loop is given below.
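For concreteness, the following Python sketch, which is not the authors' NS-2 implementation, ties the pieces together: exponential smoothing predicts the mean hop distance, the MAD approximation tracks the typical prediction error, and an alarm is raised when the measured distance leaves the legal scope around the prediction. The gains w and r, the scope multiplier k, and the sample trace are illustrative assumptions.

```python
# Illustrative sketch (not the authors' NS-2 code) of the distance-based
# detection loop described above.
W, R, K = 0.3, 0.3, 4.0   # smoothing gains and scope multiplier - assumed values

def detect(distances, d0):
    """distances: per-interval mean hop distances M_t; d0: initial prediction."""
    d_hat, mad = d0, 0.0
    alarms = []
    for t, m in enumerate(distances):
        error = m - d_hat
        if mad > 0 and abs(error) > K * mad:   # outside the legal scope -> anomaly
            alarms.append(t)
        d_hat = d_hat + W * error              # d^(t+1) = d^(t) + w (M_t - d^(t))
        mad = R * abs(error) + (1 - R) * mad   # MAD(t+1) = r|e_t| + (1 - r) MAD(t)
    return alarms

# Normal traffic hovers around 12 hops; a flood from distant bots shifts it.
trace = [12, 12, 13, 12, 12, 13, 12, 25, 26, 27]
print(detect(trace, d0=12))   # [7] - the onset of the flood is flagged
```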

The algorithm above was implemented using the NS-2 simulator for a scenario with 100 nodes in the cloud, shown in Figure 3, where * represents the edge router with a direct 200 Mbps link and + indicates the wireless cloud nodes. A number of CBR flows try to overwhelm the web servers as attack traffic.

Figure 3: Cloud scenario for simulation.

Figure 4: DDoS traffic detected.

Figure 5: DDoS detection rate vs. false positive rate.

4. CONCLUSION
In this paper, we have used a distance-based DDoS detection technique that applies a simple but effective exponential smoothing method to predict the mean value of distance in the next time period. The proposed technique relies on an MMSE predictor to support efficient traffic-arrival-rate prediction for separated traffic. We tested the technique in an Internet-like network implemented in NS-2 with over 100 nodes. The experimental results show that the proposed technique is effective and can detect DDoS attacks with a high detection rate and a low false positive rate.


REFERENCES
[1] I. Foster, Y. Zhao, I. Raicu, et al., "Cloud Computing and Grid Computing 360-Degree Compared," in Grid Computing Environments Workshop (GCE), Austin, 2008.
[2] P. Mell and T. Grance, "The NIST Definition of Cloud Computing," Technical Report SP 800-145 (Draft), National Institute of Standards and Technology, Information Technology Laboratory, January 2011.
[3] M. Litoiu, M. Woodside, J. Wong, J. Ng, and G. Iszlai, "A Business Driven Cloud Optimization Architecture," in Proceedings of the ACM Symposium on Applied Computing (SAC '10), pp. 380-385.
[4] M. Cai, K. Hwang, and Y. Chen, "Hybrid Intrusion and Anomaly Detection with Weighted Signature Generation," IEEE Transactions on Dependable and Secure Computing, revised Sept. 2005.
[5] J. Mirkovic, J. Martin, and P. Reiher, "A Taxonomy of DDoS Attacks and DDoS Defense Mechanisms," Computer Science Department, University of California, Los Angeles.
[6] The Swiss Education and Research Network, "Default TTL Values in TCP/IP," http://secfr.nerim.net/docs/fingerprint/en/ttldefault.html, 2002.
[7] R. Zhou and K. Hwang, "Trust-Preserving Overlay Networks for Global Reputation Aggregation in Scalable P2P Systems," IEEE Transactions on Parallel and Distributed Systems (TPDS), revised March 2006.
[8] K. Houle, G. Weaver, N. Long, and R. Thomas, "Trends in Denial of Service Attack Technology," CERT Coordination Center Document, 2001, www.cert.org/archive/pdf/.
[9] D. Dittrich, "The Stacheldraht Distributed Denial of Service Attack Tool," http://staff.washington.edu/dittrich/, 2000.
[10] C. Papadopoulos, R. Lindell, J. Mehringer, A. Hussain, and R. Govindan, "COSSACK: Coordinated Suppression of Simultaneous Attacks," in Proceedings of the DARPA Information Survivability Conference and Exposition, 2003, pp. 2-13.

