While some of the emerging threats GTISC and the panel experts identified fit neatly within an established category or take advantage of a specific application, others share common characteristics and are often used in tandem by malicious elements. Financial gain is the primary motivator behind all five emerging threat categories.
As Web 2.0 makes Web applications more interactive and improves the user experience, it also pushes more code execution onto the client browser. Web 2.0 describes advanced Internet technology and applications including blogs, wikis, RSS and social bookmarking. Ajax programming techniques enable many of the technological advances found in Web 2.0, which primarily allow greater collaboration and interaction among Internet users, content providers, and enterprises. In the original Web medium, users simply viewed or downloaded Web site content. Web 2.0 lets users have more input into the nature and scope of Web content, with much of the Web site code execution occurring on the user's browser.
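The silent request/response pattern described above can be sketched end-to-end in Python, with the script standing in for the browser's JavaScript on the client side. The store-locator endpoint and its JSON payload here are hypothetical, chosen only to illustrate a background data fetch:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical Web 2.0 endpoint. In a real page, the browser's JavaScript
# would issue this request silently (e.g. via XMLHttpRequest) while the
# user keeps interacting with the page.
class StoreLocatorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"store": "Midtown", "lat": 33.77, "lon": -84.39}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging

# Run the server in the background on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), StoreLocatorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" fetches and parses the JSON without any visible page load.
with urlopen(f"http://127.0.0.1:{server.server_port}/stores") as resp:
    data = json.loads(resp.read())
server.shutdown()

print(data["store"])
```

Because the response is consumed and acted on automatically, anything the server injects into it is processed without the user ever seeing a page change, which is precisely what makes this pattern attractive to attackers.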
When browsing a Web 2.0 site, the user's browser silently makes requests and communicates with the Web application in the background. This scenario gives hackers the opportunity to embed malicious code on an otherwise legitimate Web site, which the user's browser will automatically execute. Ironically, the same time-tested exploit techniques: HTML code injection, SQL injection, cross-site scripting
Web applications to combine data/media from multiple sources, locations and coding styles. For example, a retail store locator Web page will add location information
each time a potential victim visits the malicious page, effectively creating a unique exploit with each request and making it impossible for signature-based protection en-
signature-based protection is able to detect one layer of Web exploit obfuscation, polymorphic exploitation poses a new problem. Proposed countermeasures for Web 2.0
• Educating Web developers on the need for secure coding throughout the development lifecycle, with emphasis on input validation.
"As the natural evolution of the Web progresses from 1.0 to 2.0 and beyond, more content and code from multiple and varied sources will be housed together on the client side, creating a highly complex environment for security governance and protection. In 2008, expect to see underground organizations shift tactics and focus more on Web 2.0, particularly mash-up technologies, leading to more abuses at the user end wherever possible."
"With Web 2.0, applications and data are moving off the PC and into the cloud, and threats and attacks will follow. Attacks now live in social networks and other client-side applications, and the everyday user is the target."
through e-mail, instant messaging (IM), peer-to-peer (P2P) networks and social networking communities will only increase.
attackers have had to invent new spam techniques to bypass filters, firewalls, gateways and more suspicious, knowledgeable recipients. GTISC predicts the following trends related to targeted messaging attacks in 2008:
ware purges most blatant spam e-mails, incidents of spam designed to look like business content will increase. This includes PDF spam, Excel spam and experimentation with oth-
embed links to malicious sites within otherwise legitimate instant messages. At the end of an IM conversation (and unbeknownst to the sender), a hacker will include a message urging the recipient to visit a malicious link. From there, various phishing scams will be carried out.
Evolving targeted messaging attacks will have far-reaching impacts on the Internet. Identity theft will increase, burdening financial institutions that agree to bear the cost
of restitution. More importantly, the lack of trust on the Internet will challenge the productivity gains users expect from the online environment.
Filter security technologies such as anti-virus, anti- spam, anti-phishing and anti-malware will continue to make progress in blocking unwanted and/or malicious
currently estimates that 10 percent of computers on the Internet\u2014representing tens of millions of computers\u2014are infected by botnets. While the existence of botnets is not
been used to launch spam and denial of service attacks, they will increasingly be utilized for information theft in the future, whether as a form of financial fraud or corpo-
million open recursive servers in a population of approximately 24 million open resolvers. Of these, nearly 600,000 were botnet-related. Further, approximately 2.4 percent of the open recursive servers would provide misleading or incorrect answers to questions about "phishable" domains, such as banks and anti-virus update sites.
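Measuring whether an open recursive resolver answers honestly amounts to querying it directly and comparing its answer for a "phishable" domain against a trusted resolver. A minimal stdlib sketch of the query side follows; the resolver IPs, the choice of domain, and the answer-section comparison are left as assumptions, not the methodology the report's figures came from:

```python
import random
import socket
import struct

def build_query(hostname):
    """Build a minimal DNS query: one A-record question, recursion desired."""
    txid = random.randint(0, 0xFFFF)
    # Header: id, flags (RD bit set), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question, txid

def query_resolver(resolver_ip, hostname, timeout=2.0):
    """Send the query to one resolver over UDP and return the raw response.

    Comparing the answer section from a suspect open resolver against a
    trusted resolver for the same bank or anti-virus domain would flag
    resolvers that hand back misleading addresses.
    """
    packet, txid = build_query(hostname)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (resolver_ip, 53))
        response, _ = sock.recvfrom(512)
    assert response[:2] == packet[:2], "transaction id mismatch"
    return response
```

A survey at the scale described above would repeat this probe across the open-resolver population and tally the resolvers whose answers diverge from trusted ground truth.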
via P2P networks in 2008 to avoid detection by traditional intrusion detection and prevention systems (IDS/IPS). Once botnet activity is detected and traced back to a con-