
Defragmentation

From Wikipedia, the free encyclopedia

Visualization of fragmentation and then of defragmentation

In the maintenance of file systems, defragmentation is a process that reduces the amount of fragmentation. It does this by physically organizing the contents of the mass storage device used to store files into the smallest number of contiguous regions (fragments). It also attempts to create larger regions of free space using compaction to impede the return of fragmentation. Some defragmentation utilities try to keep smaller files within a single directory together, as they are often accessed in sequence.

Defragmentation is advantageous and relevant to file systems on electromechanical disk drives. Moving the hard drive's read/write heads over different areas of the disk when accessing fragmented files is slower than reading the entire contents of a non-fragmented file sequentially, without repositioning the heads to seek other fragments.

Causes of fragmentation
See also: File system fragmentation#Cause

Fragmentation occurs when the file system cannot or will not allocate enough contiguous space to store a complete file as a unit, and instead puts parts of it in gaps between other files. These gaps usually exist because they formerly held a file that the operating system has since deleted, or because the file system allocated excess space for a file in the first place. Larger files and greater numbers of files also contribute to fragmentation and the consequent performance loss. Defragmentation attempts to alleviate these problems.

Example

Consider the following scenario, as shown by the image on the right: An otherwise blank disk has five files, A through E, each using 10 blocks of space. (For this section, a block is the allocation unit of the file system; it could be 1 KB, 100 KB or 1 MB and is not any specific size.) On a blank disk, all of these files are allocated one after the other (example (1) on the image).

If file B is deleted, there are two options: leave the space for B empty and use it again later, or move all the files after B so that the empty space is at the end. Since moving hundreds or thousands of files could be time consuming, in general the empty space is simply left there, marked in a table as available for later use, then used again as needed[1] (example (2) on the image). When a new file, F, is allocated requiring six blocks of space, it can be placed into the first six blocks of the space formerly holding file B, and the four blocks following it remain available (example (3) on the image). If another new file, G, is added and needs only four blocks, it can occupy the space after F and before C (example (4) on the image).

When F later needs to be expanded, the space immediately following it is no longer available, leaving two options:

1. Move the file F to where it can be recreated as one contiguous file of the new, larger size. This may not be possible, as the file may be larger than any single contiguous free space available, or so large that the move would take an undesirably long time. Some file systems relocate files as a low-priority background task.
2. Add a new block somewhere else and indicate that F has a second extent (example (5) on the image).

Repeat this hundreds of times and the file system will have many small free segments scattered in many places, with many files spread over many extents. When a new file (or a file which has been extended) occupies a large number of extents, access time for that file may become excessively long.

The process of creating, deleting and expanding files is sometimes referred to as churn, and can occur both at the level of the general root file system and in subdirectories. Fragmentation occurs not only at the level of individual files: different files in a directory (and perhaps its subdirectories) that are often read in sequence can also start to "drift apart" as a result of churn.

A defragmentation program must move files around within the free space available in order to undo fragmentation. This is an intensive operation and cannot be performed on a file system with no free space, and performance during the process is severely degraded. Depending on the algorithm used, it may be advantageous to perform multiple passes. The reorganization involved in defragmentation does not change the logical location of the files (defined as their location within the directory structure).
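The scenario above can be reproduced with a small sketch. This is a toy first-fit allocator, not any real file system's code; the file names, sizes and 60-block disk are taken from the example, and the extent bookkeeping is simplified for illustration.

```python
# Minimal first-fit block allocator illustrating the A-E / F / G example.
# Block values are file names; None marks a free block.

def allocate(disk, name, nblocks):
    """Place a file into free blocks first-fit; may split it into extents."""
    extents = []
    placed = 0
    i = 0
    while i < len(disk) and placed < nblocks:
        if disk[i] is None:
            start = i
            while i < len(disk) and disk[i] is None and placed < nblocks:
                disk[i] = name
                placed += 1
                i += 1
            extents.append((start, i - start))   # (start block, length)
        else:
            i += 1
    return extents

def delete(disk, name):
    for i, b in enumerate(disk):
        if b == name:
            disk[i] = None

disk = [None] * 60
for f in "ABCDE":                      # (1) five files, 10 blocks each
    allocate(disk, f, 10)
delete(disk, "B")                      # (2) B deleted, gap left behind
allocate(disk, "F", 6)                 # (3) F fills 6 of B's 10 blocks
allocate(disk, "G", 4)                 # (4) G takes the remaining 4
extra = allocate(disk, "F", 4)         # (5) F grows: a second extent
print(extra)                           # → [(50, 4)]
```

F's second extent lands at block 50, far from its first extent at block 10: reading F now requires an extra seek, which is exactly the cost defragmentation removes.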

Common countermeasures
Partitioning A common strategy to optimize defragmentation and to reduce the impact of fragmentation is to partition the hard disk(s) in a way that separates partitions of the file system that experience many more reads than writes from the more volatile zones where files are created and deleted frequently. The directories that contain the users' profiles are modified constantly (especially with the Temp directory and web browser cache creating thousands of files that are deleted in a few days). If files from user profiles are held on a dedicated partition (as is commonly done on UNIX systems[citation needed]), the defragmenter runs better since it does not need to deal with all the static files from other directories. For partitions with relatively little write activity, defragmentation performance greatly improves after the first defragmentation, since the defragmenter will need to defrag only a small number of new files in the future.

Offline defragmentation
The presence of immovable system files, especially a swap file, can impede defragmentation. These files can be safely moved when the operating system is not in use. For example, ntfsresize moves these files to resize an NTFS partition. The tool PageDefrag could defragment Windows system files such as the swap file and the files that store the Windows registry by running at boot time before the GUI is loaded. Since Windows Vista, the feature is not fully supported and has not been updated. If the NTFS Master File Table (MFT) must grow after the partition was formatted, it may become fragmented, and in early versions of Windows it could not be safely defragmented while the partition was in use. An increasing number of defragmentation programs are able to defragment the MFT in versions of Windows since XP with API support for this.[2]

User and performance issues


In a wide range of modern multi-user operating systems, an ordinary user cannot defragment the system disks, since superuser (or "Administrator") access is required to move system files. Additionally, file systems such as NTFS are designed to decrease the likelihood of fragmentation.[3][4]

Improvements in modern hard drives such as RAM cache, faster platter rotation speed, command queuing (SCSI TCQ/SATA NCQ), and greater data density reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits. However, modern systems profit enormously from the huge disk capacities currently available, since partially filled disks fragment much less than full disks,[5] and on a high-capacity HDD, the same partition occupies a smaller range of cylinders, resulting in faster seeks. However, the average access time can never be lower than half a rotation of the platters, and platter rotation speed (measured in rpm) is the characteristic of HDDs that has grown most slowly over the decades (compared to data transfer rate and seek time), so minimizing the number of seeks remains beneficial in most storage-heavy applications. Defragmentation is just that: ensuring that there is at most one seek per file, counting only the seeks to non-adjacent tracks.

When reading data from a conventional electromechanical hard disk drive, the disk controller must first position the head, relatively slowly, over the track where a given fragment resides, and then wait while the disk platter rotates until the fragment reaches the head. Since disks based on flash memory have no moving parts, random access of a fragment does not suffer this delay, making defragmentation to optimize access speed unnecessary. Furthermore, since flash memory can be written to only a limited number of times before it fails, defragmentation is actually detrimental.
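The "half a rotation" floor mentioned above is easy to put numbers on. The calculation below uses common consumer and enterprise drive speeds purely for illustration:

```python
# Average rotational latency = time for half a platter rotation.
# One revolution takes 60,000 / rpm milliseconds.

def avg_rotational_latency_ms(rpm):
    ms_per_rev = 60_000 / rpm          # one full revolution, in ms
    return ms_per_rev / 2              # on average, half a revolution

for rpm in (5400, 7200, 15000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
# 5400 rpm → ~5.56 ms, 7200 rpm → ~4.17 ms, 15000 rpm → 2.0 ms
```

Even a 15,000 rpm enterprise drive pays about 2 ms per random access, millions of times slower than RAM, which is why every avoided seek matters.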

Windows System Restore points may be deleted during defragmenting/optimizing


Running most defragmenters and optimizers can cause the Microsoft Shadow Copy service to delete some of the oldest restore points, even when the defragmenters/optimizers are built on the Windows API. This is because Shadow Copy tracks some movements of big files performed by the defragmenters/optimizers; when the total disk space used by shadow copies would exceed a specified threshold, older restore points are deleted until usage falls back under that threshold.[6]
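The deletion policy described above (not the actual Volume Shadow Copy implementation, just the oldest-first trimming behaviour it exhibits) can be sketched in a few lines; the sizes and threshold here are arbitrary illustrative numbers:

```python
# Oldest-first trimming: when total shadow-copy usage exceeds a limit,
# drop the oldest restore points until the remainder fits.

def trim_restore_points(points, limit):
    """points: list of (timestamp, size) oldest-first; returns survivors."""
    points = list(points)
    while points and sum(size for _, size in points) > limit:
        points.pop(0)                  # the oldest restore point goes first
    return points

snapshots = [(1, 40), (2, 30), (3, 25), (4, 20)]   # total usage: 115
print(trim_restore_points(snapshots, limit=80))     # → [(2, 30), (3, 25), (4, 20)]
```

A defragmenter that shuffles many large files inflates the "size" of recent snapshots, which is how it pushes the oldest restore points over the edge.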

Defragmenting and optimizing


Besides defragmenting program files, a defragmenting tool can also reduce the time it takes to load programs and open files. For example, the Windows 9x defragmenter included the Intel Application Launch Accelerator, which optimized the on-disk placement of programs.[7] The outer tracks of a hard disk have a higher transfer rate than the inner tracks, so placing files on the outer tracks increases performance.[8] In addition, a defragmenting tool may use free space on other partitions or drives in order to defragment volumes that are low on free space.

How to Scan Disk


Scan Disk is one of the most common maintenance tools bundled with an operating system such as Windows. It is an application that checks the computer's hard drive for errors and bad sectors. Once Scan Disk finds an error on the hard drive, it will attempt to fix it. There are a number of reasons for the errors found on a hard drive. These include:

- frequent system crashes
- critical system applications that have been improperly closed
- the existence of harmful programs such as viruses, trojans, etc.

What Does Scan Disk Do?

Scan Disk is designed to repair damaged sectors and clusters on your computer's hard drive. For the majority of errors it detects, the utility can recover the data stored in the damaged regions of the drive. When Scan Disk finds a bad or damaged cluster during a scan, it moves the information stored in that cluster to a new location on the hard drive. Scan Disk also checks and repairs the integrity of file systems such as FAT, FAT32, NTFS, etc.

Scan Disk requires exclusive access to a drive when it executes. Hence, if one or more files are open on the drive that you want to scan, Scan Disk may display a prompt asking if you want to schedule the drive check for the next time you restart your computer. Once Scan Disk finishes its task, it provides a report containing the errors it has found and the amount of disk space it has scanned. It is important to use Scan Disk because it helps keep data safe from corruption and the computer performing at optimum levels. The earliest Scan Disk version appeared in MS-DOS 6.2. In Windows 95 and 98, Scan Disk was given a graphical user interface (GUI). In this graphical environment, the user can find:

- progress bars
- buttons
- information regarding the status of the scan and the errors (if any)

How to Run Scan Disk in Windows 2000 and Windows XP


1. Click the Start button on the desktop
2. Double-click My Computer
3. Highlight the disk to be scanned for bad sectors in the list of hard disk drives
4. Open the File menu and select the Properties option
5. Select the Tools tab
6. Click the Check Now button
7. The scanning process will then begin

How to Run Scan Disk in Windows Vista


1. Click the Computer icon on the desktop
2. Right-click the drive to be scanned with Scan Disk and select Properties
3. Click the Tools tab. Under the Error-checking subheading, click the Check Now button
4. A window named Check Local Disk will appear. To attempt to correct errors, check the "Scan for and attempt recovery of bad sectors" checkbox
5. Click Start to initiate the disk scan

In Vista, Scan Disk must be scheduled to run at boot time, as Vista has mechanisms that do not allow it to run while the system is operating.

Run Scan Disk on Windows 7


In Windows 7, the Scan Disk functionality is provided by CHKDSK, which performs the same functions as the legacy application.

Run CHKDSK Using the Graphical User Interface (GUI)

Step 1: Select the Computer option from the Start menu.
Step 2: Right-click the drive to check for errors, then click Properties.
Step 3: Select the Tools tab, then click the Check Now button. If the drive is in use, the operating system will display a dialogue asking whether you want to schedule a full scan for the next restart.

Run CHKDSK from the Command Prompt

Alternatively, CHKDSK can be run from the command prompt on Windows 7.

Step 1: Open the prompt by selecting the Start and Run menu options.
Step 2: Enter cmd followed by the Enter key to open the command prompt.
Step 3: Enter chkdsk c: to initiate a check of the local hard drive. To have all errors found and fixed, enter chkdsk c: /F /R instead. If your hard drive is labeled with a letter other than c, just replace the letter in the example above with the actual drive letter on your computer.

Registry cleaner
A registry cleaner is a class of third-party software utility designed for the Microsoft Windows operating system, whose purported purpose is to remove redundant items from the Windows registry. Registry cleaners are not supported by Microsoft, but vendors of registry cleaners claim that they are useful for repairing inconsistencies arising from manual changes to applications, especially COM-based programs. However, a virtual machine or virtual application is a faster and more reliable means of reverting an operating system to a previous known good state in a testing or application sequencing scenario.[1] The necessity and usefulness of registry cleaners is a controversial topic, with experts in disagreement over their benefits. The problem is further clouded by the fact that malware and scareware are often associated with utilities of this type.[2]

Advantages and disadvantages


Due to the sheer size and complexity of the registry database, manually cleaning up redundant and invalid entries may be impractical, so registry cleaners try to automate the process of looking for invalid entries, missing file references or broken links within the registry and resolving or removing them. The correction of an invalid registry key can provide some benefit, but the most voluminous entries found will usually be quite harmless: obsolete records linked to COM-based applications whose associated files are no longer present. There is a popular misconception that the value of registry cleaning lies in reducing "registry bloat". Even a neglected registry will seldom contain more than two or three thousand redundant entries. Bearing in mind that a modern registry may contain more than a million entries, eliminating two or three thousand is not going to save any noticeable amount of scanning time.
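The core scan described above, finding entries whose referenced file no longer exists, can be sketched abstractly. This is only a toy model: the registry is represented as a plain dict of key-to-path mappings, whereas real cleaners read the live Windows registry through the Win32 API.

```python
# Toy model of a "missing file reference" scan: report registry entries
# whose referenced file path no longer exists on disk.
import os
import tempfile

def find_stale_entries(entries):
    """entries: {registry_key: file_path}; return keys pointing nowhere."""
    return [key for key, path in entries.items() if not os.path.exists(path)]

with tempfile.NamedTemporaryFile() as real_file:
    registry = {                       # both key names are invented examples
        r"HKCR\CLSID\{example-1}": real_file.name,        # file exists
        r"HKCR\CLSID\{example-2}": r"/no/such/file.dll",  # stale entry
    }
    print(find_stale_entries(registry))   # only the stale key is reported
```

Note what the sketch cannot know: whether a "stale" entry is actually harmless, scheduled for reuse, or load-bearing for some application, which is precisely the judgment problem discussed in the next section.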

Registry damage
Some registry cleaners make no distinction as to the severity of the errors they report, and many that do may erroneously categorize errors as "critical" with little basis to support this.[2] Removing or changing certain registry data can prevent the system from starting, or cause application errors and crashes. It is not always possible for a third-party program to know whether any particular key is invalid or redundant. A poorly designed registry cleaner may not be equipped to know for sure whether a key is still being used by Windows or what detrimental effects removing it may have. This may lead to loss of functionality and/or system instability,[3][4][5] and has led Microsoft to issue application compatibility updates that block problematic registry cleaners.[6] The Windows Installer CleanUp Utility was a Microsoft-supported utility for addressing Windows Installer related issues;[7] however, the program has since been deprecated because of unintended damage it caused.[8]

The level of skill necessary to use a registry cleaner to actually improve the performance of a machine is higher than the level of skill necessary to configure an easy incremental backup solution. With such a solution, the OS can be restored if any recent changes prove to be bad ones, which is safer than most registry cleaners. While it is true that some registry cleaners are safe, these cleaners do not improve performance. The rest are a mix of powerful and dangerous tools unsuited to non-professionals, snake oil, and actual malware.

Malware payloads

Registry cleaners have been used as a vehicle by a number of trojan applications to install malware, typically through social engineering attacks that use website popups or free downloads that falsely report problems which can be "rectified" by purchasing or downloading a registry cleaner.[9] The worst of the breed are products that advertise and encourage a "free" registry scan; the user typically then finds that the product must be purchased for a substantial sum before it will effect any of the anticipated "repairs". Rogue registry cleaners such as "WinFixer" have been ranked among the most prevalent pieces of malware in circulation.[10]

Scanners as scareware
Rogue registry cleaners are often marketed with alarmist advertisements that falsely claim to have preanalyzed your PC, displaying bogus warnings to take "corrective" action; hence the descriptive label "scareware". In October 2008, Microsoft and the Washington attorney general filed a lawsuit against two Texas firms, Branch Software and Alpha Red, producers of the "Registry Cleaner XP" scareware.[11] The lawsuit alleges that the company sent incessant pop-ups resembling system warnings to consumers' personal computers stating "CRITICAL ERROR MESSAGE! - REGISTRY DAMAGED AND CORRUPTED", before instructing users to visit a web site to download Registry Cleaner XP at a cost of $39.95.

Metrics of performance benefit


On Windows 9x computers, it was possible for a very large registry to slow down the computer's startup time. However, this is far less of an issue with NT-based operating systems (including Windows XP and Vista), due to a different on-disk structure of the registry, improved memory management and indexing.[12] Slowdown due to registry bloat is thus far less of an issue in modern versions of Windows. Conversely, defragmenting the underlying registry files (e.g. using the free Microsoft-supported PageDefrag tool),[13] rather than attempting to clean the registry's contents, has a measurable benefit and has therefore been recommended in the past by experts such as Mark Russinovich (defragmentation capability is now built directly into Windows Vista, making these tools redundant). The Windows Performance Toolkit is specifically designed to troubleshoot performance-related issues under Windows, and does not include registry cleaning as one of its optimisations.[14]

Undeletable registry keys


Registry cleaners cannot repair scenarios such as undeletable registry keys caused by embedded null characters in their names; only specialized tools such as the RegDelNull utility (part of the Sysinternals software) are able to do this.[15]

Recovery capability limitations


A registry cleaner cannot repair a registry hive that cannot be mounted by the system, which also rules out repair via "slave mounting" of the system disk on another machine. A corrupt registry can instead be recovered in a number of ways that are supported by Microsoft (e.g. Automated System Recovery, the "last known good" boot menu, re-running setup, or using System Restore). "Last known good" restores the last system registry hive (containing driver and service configuration) that successfully booted the system.

Malware removal
These tools are also difficult to manage in a non-boot situation, or during an infestation, compared to a full system restore from a backup. In the age of rapidly evolving malware, even a full system restore may be unable to rid a hard drive of a bootkit. Registry cleaners are likewise not designed for malware removal, although minor side-effects can be repaired, such as a turned-off System Restore. However, in complex scenarios where malware such as spyware, adware and viruses are involved, the removal of system-critical files may result.[16]

Application virtualization

A registry cleaner is of no use for cleaning registry entries associated with a virtualised application, since all registry entries in this scenario are written to an application-specific virtual registry instead of the real one.[17] The detailed interaction between the real and virtual registries also leaves the potential for incorrect removal of shortcuts and registry entries that point to "disappeared" files, and consequent confusion for the user of cleaner products. There is little competent information about this specific interaction, and no integration. In general, even if registry cleaners could arguably be considered safe in a normal end-user environment, they should be avoided in an application virtualization environment.

Antivirus software
Antivirus or anti-virus software is software used to prevent, detect and remove malware, such as computer viruses, adware, backdoors, malicious BHOs, dialers, fraudtools, hijackers, keyloggers, malicious LSPs, rootkits, spyware, trojan horses and worms. Computer security, including protection from social engineering techniques, is commonly offered in the products and services of antivirus software companies. This page discusses the software used for the prevention and removal of malware threats, rather than computer security more broadly.

A variety of strategies are typically employed. Signature-based detection involves searching for known patterns of data within executable code. However, it is possible for a computer to be infected with new malware for which no signature is yet known. To counter such so-called zero-day threats, heuristics can be used. One type of heuristic approach, generic signatures, can identify new viruses or variants of existing viruses by looking for known malicious code, or slight variations of such code, in files. Some antivirus software can also predict what a file will do by running it in a sandbox and analyzing its behaviour to see if it performs any malicious actions.

No matter how useful antivirus software can be, it also has drawbacks. Antivirus software can impair a computer's performance. Inexperienced users may have trouble understanding the prompts and decisions that antivirus software presents them with, and an incorrect decision may lead to a security breach. If the antivirus software employs heuristic detection, success depends on achieving the right balance between false positives and false negatives.
False positives can be as destructive as false negatives.[1] Finally, antivirus software generally runs at the highly trusted kernel level of the operating system, creating a potential avenue of attack.[2]

History
An example of free antivirus software: ClamTk 3.08.

See also: Timeline of notable computer viruses and worms

Most of the computer viruses written in the early and mid 1980s were limited to self-reproduction and had no specific damage routine built into the code.[3] That changed when more and more programmers became acquainted with virus programming and created viruses that manipulated or even destroyed data on infected computers. There are competing claims for the innovator of the first antivirus product. Possibly the first publicly documented removal of a computer virus in the wild was performed by Bernd Fix in 1987.[4][5] There were also two antivirus applications for the Atari ST platform developed in 1987: the first was G Data[6] and the second was UVK 2000.[7] Fred Cohen, who published one of the first academic papers on computer viruses in 1984,[8] began to develop strategies for antivirus software in 1988[9] that were picked up and continued by later antivirus software developers. In 1987, he published a demonstration that there is no algorithm that can perfectly detect all possible viruses.[10] Also in 1987, the first two heuristic antivirus utilities were released: Flushot Plus by Ross Greenberg and Anti4us by Erwin Lanting.[citation needed]

Also in 1988 a mailing list named VIRUS-L[11] was started on the BITNET/EARN network where new viruses and the possibilities of detecting and eliminating viruses were discussed. Some members of this mailing list like John McAfee or Eugene Kaspersky later founded software companies that developed and sold commercial antivirus software. Before internet connectivity was widespread, viruses were typically spread by infected floppy disks. Antivirus software came into use, but was updated relatively infrequently. During this time, virus checkers essentially had to check executable files and the boot sectors of floppy disks and hard disks. However, as internet usage became common, viruses began to spread online.[12] Over the years it has become necessary for antivirus software to check an increasing variety of files, rather than just executables, for several reasons:

- Powerful macros used in word processor applications, such as Microsoft Word, presented a risk. Virus writers could use the macros to write viruses embedded within documents, meaning that computers could now also be at risk from infection by opening documents with hidden attached macros.[13]
- The possibility of embedding executable objects inside otherwise non-executable file formats can make opening those files a risk.[14]
- Later email programs, in particular Microsoft's Outlook Express and Outlook, were vulnerable to viruses embedded in the email body itself. A user's computer could be infected by just opening or previewing a message.[15]

As always-on broadband connections became the norm, and more and more viruses were released, it became essential to update virus checkers more and more frequently. Even then, a new zero-day virus could become widespread before antivirus companies released an update to protect against it.

Identification methods
Malwarebytes' Anti-Malware version 1.46, a proprietary freeware antimalware product.

One of the few solid theoretical results in the study of computer viruses is Frederick B. Cohen's 1987 demonstration that there is no algorithm that can perfectly detect all possible viruses.[10] There are several methods which antivirus software can use to identify malware. Signature-based detection is the most common method: to identify viruses and other malware, antivirus software compares the contents of a file to a dictionary of virus signatures. Because viruses can embed themselves in existing files, the entire file is searched, not just as a whole but also in pieces.[16] Heuristic-based detection, such as malicious activity detection, can be used to identify unknown viruses. File emulation is another heuristic approach: it involves executing a program in a virtual environment and logging what actions the program performs. Depending on the actions logged, the antivirus software can determine whether the program is malicious and then carry out the appropriate disinfection actions.[17]
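The file-emulation idea can be sketched in miniature. Here a "program" is just a list of abstract action names run through a harness that only logs them; the action names and the suspicious set are invented for illustration and bear no relation to any real emulator.

```python
# Toy file-emulation sketch: "run" a program inside a harness that only
# records its actions, then judge the recorded log.

SUSPICIOUS = {"overwrite_boot_sector", "disable_antivirus", "mass_delete"}

def emulate(program):
    log = []
    for action in program:             # nothing is actually executed here
        log.append(action)
    return log

def is_malicious(log):
    """Flag the program if any logged action is in the suspicious set."""
    return any(action in SUSPICIOUS for action in log)

sample = ["open_file", "read_config", "overwrite_boot_sector"]
print(is_malicious(emulate(sample)))   # → True
```

The key design point mirrors real emulators: the verdict is based entirely on observed behaviour in the sandbox, not on the program's bytes, which is what lets this approach catch malware with no known signature.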

Signature-based detection
Traditionally, antivirus software has relied heavily upon signatures to identify malware. This can be very effective, but cannot defend against malware unless samples have already been obtained and signatures created. Because of this, signature-based approaches are not effective against new, unknown viruses. As new viruses are being created each day, the signature-based detection approach requires frequent updates of the virus signature dictionary. To assist the antivirus software companies, the software may allow the user to upload new viruses or variants to the company, allowing the virus to be analyzed and the signature added to the dictionary.[16] Although the signature-based approach can effectively contain virus outbreaks, virus authors have tried to stay a step ahead of such software by writing "oligomorphic", "polymorphic" and, more recently, "metamorphic" viruses, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as not to match virus signatures in the dictionary.[18]

Heuristics
Some more sophisticated antivirus software uses heuristic analysis to identify new malware or variants of known malware. Many viruses start as a single infection and through either mutation or refinements by other attackers, can grow into dozens of slightly different strains, called variants. Generic detection refers to the detection and removal of multiple threats using a single virus definition.[19] For example, the Vundo trojan has several family members, depending on the antivirus vendor's classification. Symantec classifies members of the Vundo family into two distinct categories, Trojan.Vundo and Trojan.Vundo.B.[20][21] While it may be advantageous to identify a specific virus, it can be quicker to detect a virus family through a generic signature or through an inexact match to an existing signature. Virus researchers find common areas that all viruses in a family share uniquely and can thus create a single generic signature. These signatures often contain non-contiguous code, using wildcard characters where differences lie. These wildcards allow the scanner to detect viruses even if they are padded with extra, meaningless code.[22] A detection that uses this method is said to be "heuristic detection."
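The wildcard idea behind generic signatures can be shown with a regular expression over bytes: fixed code fragments shared by a virus family are joined by bounded "don't care" gaps, so variants padded with extra, meaningless code still match. The byte patterns and the 16-byte gap limit below are invented for illustration.

```python
# Generic signature with wildcards: fixed byte runs joined by bounded
# gaps, so padded variants of the same family still match.
import re

def generic_signature(*chunks, max_gap=16):
    """Build a regex matching the chunks in order, up to max_gap bytes apart."""
    gap = b".{0,%d}" % max_gap
    return re.compile(gap.join(re.escape(c) for c in chunks), re.DOTALL)

family_sig = generic_signature(b"\x55\x8b\xec", b"\xcd\x21", b"\xc3")

variant_a = b"\x55\x8b\xec\xcd\x21\xc3"                      # unpadded
variant_b = (b"\x55\x8b\xec" + b"\x90" * 8                   # NOP padding
             + b"\xcd\x21" + b"\x90" * 3 + b"\xc3")

print(bool(family_sig.search(variant_a)),
      bool(family_sig.search(variant_b)))                    # → True True
```

One signature thus covers both variants, which is the efficiency argument for generic detection made above.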

Rootkit detection
Main article: Rootkit

Anti-virus software can attempt to scan for rootkits. A rootkit is a type of malware designed to gain administrative-level control over a computer system without being detected. Rootkits can change how the operating system functions and in some cases can tamper with the anti-virus program and render it ineffective. Rootkits are also difficult to remove, in some cases requiring a complete re-installation of the operating system.[23]

Real-time protection
Real-time protection, on-access scanning, background guard, resident shield, autoprotect, and other synonyms refer to the automatic protection provided by most antivirus, anti-spyware, and other anti-malware programs. This monitors computer systems for suspicious activity such as computer viruses, spyware, adware, and other malicious objects in "real time", in other words while data is loaded into the computer's active memory: when inserting a CD, opening an email, browsing the web, or when a file already on the computer is opened or executed.[24] This means all data in files already on the computer is analysed each time the user attempts to access them, which can prevent infection by not-yet-activated malware that entered the computer unrecognised before the antivirus received an update.[citation needed] Real-time protection and its synonyms are used in contrast to the expression "on-demand scan" and similar expressions meaning a user-activated scan of part or all of a computer.[citation needed]

Most real-time protection systems hook certain API functions provided by the operating system in order to scan files in real time. For example, on Microsoft Windows, an antivirus program may hook the CreateProcess API function, which executes programs. It can then scan programs that are about to be executed for malicious software. If malicious software is found, the antivirus program can block execution and inform the user.[citation needed]
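The hooking pattern described above, intercepting a launch function so every launch is scanned first, can be sketched as a wrapper. This is only an analogy in plain Python: `create_process` and the blocklist here are stand-ins, not the real Windows CreateProcess API, which is hooked at a much lower level.

```python
# Sketch of API hooking: wrap the function that launches programs so
# every launch is scanned before it proceeds.

BLOCKLIST = {"evil.exe"}               # invented stand-in for a scanner

def create_process(path):              # stand-in for the real launch API
    return f"started {path}"

def hook(launch):
    def guarded(path):
        if path in BLOCKLIST:          # "scan" happens before execution
            return f"blocked {path}"
        return launch(path)
    return guarded

create_process = hook(create_process)  # the hook replaces the original
print(create_process("notepad.exe"))   # → started notepad.exe
print(create_process("evil.exe"))      # → blocked evil.exe
```

The essential property is that callers never see the original function again: all launches flow through the guard, which is what lets real-time protection veto execution rather than merely observe it.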

Issues of concern
Unexpected renewal costs
Some commercial antivirus software end-user license agreements include a clause that the subscription will be automatically renewed, and the purchaser's credit card automatically billed, at the renewal time without explicit approval. For example, McAfee requires users to unsubscribe at least 60 days before the expiration of the present subscription,[25] while BitDefender sends notifications to unsubscribe 30 days before the renewal.[26] Norton AntiVirus also renews subscriptions automatically by default.[27]

Rogue security applications


Main article: Rogue security software

Some apparent antivirus programs are actually malware masquerading as legitimate software, such as WinFixer, MS Antivirus, and Mac Defender.[28]

Problems caused by false positives


A "false positive" is when antivirus software identifies a non-malicious file as a virus. When this happens, it can cause serious problems. For example, if an antivirus program is configured to immediately delete or quarantine infected files, a false positive in an essential file can render the operating system or some applications unusable.[29] In May 2007, a faulty virus signature issued by Symantec mistakenly removed essential operating system files, leaving thousands of PCs unable to boot.[30] Also in May 2007, the executable file required by Pegasus Mail was falsely detected by Norton AntiVirus as being a Trojan and it was automatically removed, preventing Pegasus Mail from running. Norton AntiVirus had falsely identified three releases of Pegasus Mail as malware, and would delete the Pegasus Mail installer file when that happened.[31] In response to this Pegasus Mail stated:

On the basis that Norton/Symantec has done this for every one of the last three releases of Pegasus Mail, we can only condemn this product as too flawed to use, and recommend in the strongest terms that our users cease using it in favour of alternative, less buggy antivirus packages.[31]

In April 2010, McAfee VirusScan detected svchost.exe, a normal Windows binary, as a virus on machines running Windows XP with Service Pack 3, causing a reboot loop and loss of all network access.[32][33] In December 2010, a faulty update to the AVG anti-virus suite damaged 64-bit versions of Windows 7, rendering them unable to boot due to an endless boot loop.[34] In October 2011, Microsoft Security Essentials (MSE) removed the Google Chrome web browser, rival to Microsoft's own Internet Explorer, after flagging Chrome as a Zbot banking trojan.[35] When Microsoft Windows becomes damaged by faulty anti-virus products, fixing the damage incurs technical support costs and businesses can be forced to close whilst remedial action is undertaken.[36][37]

System and interoperability related issues


Running multiple antivirus programs concurrently can degrade performance and create conflicts.[38] However, using a concept called multiscanning, several companies (including G Data[39] and Microsoft[40]) have created applications which can run multiple engines concurrently.

It is sometimes necessary to temporarily disable virus protection when installing major updates such as Windows Service Packs or updating graphics card drivers.[41] Active antivirus protection may partially or completely prevent the installation of a major update. Anti-virus software can also cause problems during the installation of an operating system upgrade, e.g. when upgrading to a newer version of Windows "in place" without erasing the previous version. Microsoft recommends that anti-virus software be disabled to avoid conflicts with the upgrade installation process.[42][43][44]

The functionality of a few software programs can be hampered by active anti-virus software. For example, TrueCrypt, a disk encryption program, states on its troubleshooting page that anti-virus programs can conflict with TrueCrypt and cause it to malfunction or operate very slowly.[45]

Support issues also exist around antivirus application interoperability with common solutions like SSL VPN remote access and network access control products.[46] These technology solutions often have policy assessment applications which require that an up-to-date antivirus is installed and running. If the antivirus application is not recognized by the policy assessment, whether because it has been updated or because it is not part of the policy assessment library, the user will be unable to connect.

Effectiveness
Studies in December 2007 showed that the effectiveness of antivirus software had decreased in the previous year, particularly against unknown or zero-day attacks. The computer magazine c't found that detection rates for these threats had dropped from 40-50% in 2006 to 20-30% in 2007. At that time, the only exception was the NOD32 antivirus, which managed a detection rate of 68 percent.[47]

The problem is magnified by the changing intent of virus authors. Some years ago it was obvious when a virus infection was present. The viruses of the day, written by amateurs, exhibited destructive behavior or pop-ups. Modern viruses are often written by professionals, financed by criminal organizations.[48]

Independent testing on all the major virus scanners consistently shows that none provide 100% virus detection. The best ones provided as high as 99.6% detection, while the lowest provided only 81.8% in tests conducted in February 2010. All virus scanners produce false positive results as well, identifying benign files as malware.[49] Although methodologies may differ, some notable independent quality testing agencies include AV-Comparatives, ICSA Labs, West Coast Labs, VB100 and other members of the Anti-Malware Testing Standards Organization.[50]

New viruses
Anti-virus programs are not always effective against new viruses, even those that use non-signature-based methods that should detect new viruses. The reason for this is that the virus designers test their new viruses on the major anti-virus applications to make sure that they are not detected before releasing them into the wild.[51] Some new viruses, particularly ransomware, use polymorphic code to avoid detection by virus scanners. Jerome Segura, a security analyst with ParetoLogic, explained:[52]

It's something that they miss a lot of the time because this type of [ransomware virus] comes from sites that use a polymorphism, which means they basically randomize the file they send you and it gets by well-known antivirus products very easily. I've seen people firsthand getting infected, having all the pop-ups and yet they have antivirus software running and it's not detecting anything. It actually can be pretty hard to get rid of, as well, and you're never really sure if it's really gone. When we see something like that usually we advise to reinstall the operating system or reinstall backups.[52]

A proof of concept virus has used the Graphics Processing Unit (GPU) to avoid detection from anti-virus software. The potential success of this involves bypassing the CPU in order to make it much harder for security researchers to analyse the inner workings of such malware.[53]

Rootkits
Detecting rootkits is a major challenge for anti-virus programs. Rootkits have full administrative access to the computer and are invisible to users and hidden from the list of running processes in the task manager. Rootkits can modify the inner workings of the operating system[54] and tamper with antivirus programs.

Damaged files
Files which have been damaged by computer viruses are normally damaged beyond recovery. Anti-virus software removes the virus code from the file during disinfection, but this does not always restore the file to its undamaged state. In such circumstances, damaged files can only be restored from existing backups; installed software that is damaged requires re-installation.[55]

Firmware issues
Active anti-virus software can interfere with a firmware update process.[56] Any writeable firmware in the computer can be infected by malicious code.[57] This is a major concern, as an infected BIOS could require

the actual BIOS chip to be replaced to ensure the malicious code is completely removed.[58] Anti-virus software is not effective at protecting firmware and the motherboard BIOS from infection.[59]

Other methods
A command-line virus scanner, ClamAV 0.95.2, running a virus signature definition update, scanning a file and identifying a Trojan.

Installed antivirus software running on an individual computer is only one method of guarding against viruses. Other methods are also used, including cloud-based antivirus, firewalls and on-line scanners.

Cloud antivirus
Cloud antivirus is a technology that uses lightweight agent software on the protected computer, while offloading the majority of data analysis to the provider's infrastructure.[60] One approach to implementing cloud antivirus involves scanning suspicious files using multiple antivirus engines. This approach was proposed by an early implementation of the cloud antivirus concept called CloudAV. CloudAV was designed to send programs or documents to a network cloud where multiple antivirus and behavioral detection programs are used simultaneously in order to improve detection rates. Parallel scanning of files using potentially incompatible antivirus scanners is achieved by spawning a virtual machine per detection engine, thereby avoiding conflicts between them. CloudAV can also perform "retrospective detection," whereby the cloud detection engine rescans all files in its file access history when a new threat is identified, thus improving new-threat detection speed. Finally, CloudAV is a solution for effective virus scanning on devices that lack the computing power to perform the scans themselves.[61]
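The multi-engine idea can be sketched as follows. The three "engines" below are trivial stand-ins invented for the example (real deployments run full commercial engines, each in its own virtual machine); the sketch only shows the fan-out-and-aggregate shape of the design.

```python
from concurrent.futures import ThreadPoolExecutor

# Three stand-in "detection engines"; each returns True if it flags the
# sample. All three are deliberately crude and purely illustrative.
def engine_signature(data: bytes) -> bool:
    return b"EICAR" in data            # toy signature match

def engine_heuristic(data: bytes) -> bool:
    return data.count(b"\x90") > 100   # toy heuristic: long NOP run

def engine_behavioral(data: bytes) -> bool:
    return b"CreateRemoteThread" in data

ENGINES = [engine_signature, engine_heuristic, engine_behavioral]

def cloud_scan(data: bytes, threshold: int = 1) -> bool:
    """Submit the sample to every engine in parallel and flag it as
    malicious if at least `threshold` engines agree."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        verdicts = list(pool.map(lambda engine: engine(data), ENGINES))
    return sum(verdicts) >= threshold

print(cloud_scan(b"harmless text"))          # False
print(cloud_scan(b"contains EICAR marker"))  # True
```

Raising `threshold` trades detection rate for fewer false positives, which is the aggregation decision a real multiscanning service has to make.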

Network firewall
Network firewalls prevent unknown programs and processes from accessing the system. However, they are not antivirus systems and make no attempt to identify or remove anything. They may protect against infection from outside the protected computer or network, and limit the activity of any malicious software which is present by blocking incoming or outgoing requests on certain TCP/IP ports. A firewall is designed to deal with broader system threats that come from network connections into the system and is not an alternative to a virus protection system.

Online scanning
Some antivirus vendors maintain websites with free online scanning of the entire computer, critical areas only, local disks, folders or files. Periodic online scanning is a good idea for those who run antivirus applications on their computers because those applications are frequently slow to catch threats. One of the first things that malicious software does in an attack is disable any existing antivirus software, and sometimes the only way to know of an attack is by turning to an online resource that is not installed on the infected computer.[62]

Specialist tools
Using rkhunter to scan for rootkits on an Ubuntu Linux computer.

Virus removal tools are available to help remove stubborn infections or certain types of infection. Examples include Trend Micro's Rootkit Buster,[63] rkhunter for the detection of rootkits, Avira's AntiVir Removal Tool,[64] PCTools Threat Removal Tool,[65] and AVG's Anti-Virus Free 2011.[66]

A rescue disk that is bootable, such as a CD or USB storage device, can be used to run antivirus software outside of the installed operating system, in order to remove infections while they are dormant. A bootable antivirus disk can be useful when, for example, the installed operating system is no longer bootable or has malware that is resisting all attempts to be removed by the installed antivirus software. Examples of such bootable disks include the Avira AntiVir Rescue System,[64] PCTools Alternate Operating System Scanner,[67] and AVG Rescue CD.[68] The AVG Rescue CD software can also be installed onto a USB storage device that is bootable on newer computers.[68]

Popularity
According to an FBI survey, major businesses lose $12 million annually dealing with virus incidents.[69] A survey by Symantec in 2009 found that a third of small- to medium-sized businesses did not use antivirus protection at that time, whereas more than 80% of home users had some kind of antivirus installed.[70]

System Restore
System Restore in Windows 7. Developer: Microsoft. Initial release: Windows Me / 4.90.3000 (June 19, 2000). Stable release: Windows 7 / 6.1.7600.16385 (July 22, 2009). Operating system: Microsoft Windows. Type: system administration. License: Microsoft EULA.

System Restore is a component of Microsoft's Windows Me, Windows XP, Windows Vista and Windows 7, but not Windows 2000,[1] operating systems that allows for the rolling back of system files, registry keys, installed programs, etc., to a previous state in the event of system malfunction or failure. The Windows Server operating system family does not include System Restore. The System Restore built into Windows XP can be installed on a Windows Server 2003 machine,[2] although this is not supported by Microsoft. In Windows Vista and later versions, System Restore has an improved interface and is based on Shadow Copy technology. In prior Windows versions it was based on a file filter that watched changes for a certain set of file extensions, and then copied files before they were overwritten.[3] Shadow Copy has the advantage that block-level changes in files located in any directory on the volume can be monitored and backed up regardless of their location.[4]

Overview
In System Restore, the user may create a new restore point manually, roll back to an existing restore point, or change the System Restore configuration. Moreover, the restore itself can be undone. Old restore points are discarded in order to keep the volume's usage within the specified amount. For many users, this can provide restore points covering the past several weeks. Users concerned with performance or space usage may also opt to disable System Restore entirely. Files stored on volumes not monitored by System Restore are never backed up or restored. System Restore backs up system files of certain extensions (.exe, .dll, etc.) and saves them for later recovery and use.[5] It also backs up the registry and most drivers.

Resources monitored
The following resources are backed up:[6]

- Registry
- Files in the Windows File Protection (Dllcache) folder (under Windows XP); on Windows Vista and later versions, all system file types are monitored on all paths on a volume
- Local user profile
- COM+ and WMI databases
- IIS Metabase
- Specific file types monitored[5]

The list of file types and directories to be included or excluded from monitoring by System Restore can be customized on Windows Me and Windows XP by editing %windir%\system32\restore\Filelist.xml.[7]

Disk space consumption


The amount of disk space System Restore consumes can be configured. Starting with Windows XP, the disk space allotted is configurable per volume and the data stores are also stored per volume. Files are stored using NTFS compression, and a Disk Cleanup handler allows deleting all but the most recent restore point to free up disk space. System Restore can be disabled completely to regain disk space, and it automatically disables itself if the disk free space is too low for it to operate.

Restore points
Restore points are created:

- When software is installed using the Windows Installer, Package Installer or other installers which are aware of System Restore.[8]
- When Windows Update installs new updates to Windows.
- When the user installs a driver that is not digitally signed by Windows Hardware Quality Labs.
- On Windows XP or Windows Vista, every 24 hours of computer use, or when the operating system starts after being off for more than 24 hours[8] (10 hours in Windows Me), or every 24 hours of calendar time, whichever happens first. This setting is configurable through the registry or using the deployment tools on Windows XP. Such a restore point is known as a system checkpoint. System Restore requires Task Scheduler to create system checkpoints, and system checkpoints are only created if the system is idle for a certain amount of time.[6] In Windows 7, automatic restore points are created only once every seven days, although a script can be used to silently create restore points more frequently.
- When the user manually creates a restore point.

On Windows Vista, because System Restore uses shadow copies, individual files or folders can also be restored, through the Previous Versions tab from Properties. In Windows XP, restore point files are stored in a hidden folder named System Volume Information on the root of every drive, partition or volume, including most external drives, and some USB flash drives. On drives or partitions that are not monitored by System Restore this folder will be very small in size or completely empty, unless Encrypting File System is in use or the Indexing Service is turned on. Note: If the System Volume Information folder is deleted, it will be recreated automatically. Older restore points are deleted as per the configured space constraint on a First In, First Out basis.
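The First In, First Out pruning described above can be sketched as a toy model. The class, names, and sizes below are illustrative only, not Microsoft's implementation; the point is that the oldest restore point is always the one sacrificed when the space limit is hit.

```python
from collections import deque

class RestorePointStore:
    """Toy model of System Restore's data store: restore points are kept
    in creation order and the oldest are discarded first (FIFO) whenever
    the configured space limit would be exceeded."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.points = deque()  # (name, size_bytes), oldest first

    def create(self, name: str, size_bytes: int):
        self.points.append((name, size_bytes))
        # Evict oldest points until the store fits the configured limit.
        while sum(size for _, size in self.points) > self.max_bytes:
            dropped, _ = self.points.popleft()
            print(f"Discarded restore point: {dropped}")

store = RestorePointStore(max_bytes=300)
for name, size in [("checkpoint-1", 120), ("driver-install", 150), ("update", 100)]:
    store.create(name, size)

print([name for name, _ in store.points])  # ['driver-install', 'update']
```

This is why, on a system with little space allocated, a problem noticed only after several days may already have outlived every restore point that predates it.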

Implementation differences
There are considerable differences between how System Restore works under Windows XP and later Windows versions.

Configuration UI - In Windows XP, there is a graphical slider to configure the amount of disk space allotted to System Restore. In Windows Vista, the GUI to configure the disk space utilized for System Restore points is not available; the space reserved can be adjusted using the command-line tool Vssadmin.exe or by editing the appropriate registry key.[9][10] The GUI to configure disk space is available once again starting with Windows 7.

Maximum space - In Windows XP, System Restore can be configured to use up to a maximum of 12% of the volume's space for most disk sizes;[6] however, this may be less depending on the volume's size. Restore points over 90 days old are automatically deleted, as specified by the registry value RPLifeInterval (Time to Live - TTL), whose default value is 7,776,000 seconds.

In Windows Vista and later, System Restore is designed for larger volumes.[11] By default, it uses 15% of the volume's space.[8]

File paths monitored - Up to Windows XP, files are backed up only from certain directories. On Windows Vista and later, this set of files is defined by monitored extensions outside of the Windows folder, and everything under the Windows folder.[12]

File types monitored - Up to Windows XP, System Restore excludes any file types used for users' personal data files, such as documents, digital photographs, media files, e-mail, etc. It also excludes the monitored set of file types (.DLL, .EXE, etc.) from folders such as My Documents. Microsoft recommends that if a user is unsure whether certain files will be modified by a rollback, they should keep those files under My Documents.[6] When a rollback is performed, the files that were being monitored by System Restore are restored and newly created folders are removed. On Windows Vista and later, however, System Restore excludes only document file types; it does not exclude any monitored system file type, regardless of its location, and operates on the entire volume.

Configuring advanced System Restore settings - In Windows XP only, several System Restore settings can be configured via the registry.[13] System Restore in Windows Vista and later versions no longer supports configuring its settings through the registry.[14] File types and file paths can also no longer be included in or excluded from monitoring by System Restore by editing %windir%\system32\restore\Filelist.xml as was possible in Windows XP; this file no longer exists in Windows Vista and later.[7]

FAT32 volume support - In Windows XP, System Restore works on FAT32 volumes and can be enabled for smaller disks of less than 1 GB. On Windows Vista and later, System Restore does not work on FAT32 disks and cannot be enabled on disks smaller than 1 GB.[11]
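The 90-day retention period and the RPLifeInterval registry default quoted above can be checked against each other with trivial arithmetic:

```python
# The XP-era RPLifeInterval default of 7,776,000 seconds corresponds
# exactly to the documented 90-day retention period.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in a day
rp_life_interval = 7_776_000     # registry default, in seconds

print(rp_life_interval / SECONDS_PER_DAY)  # 90.0
```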

Restoring the system


Up to Windows XP, the system can be restored only while it is in an online state, that is, as long as Windows boots normally or from Safe Mode; it is not possible to restore the system if Windows is unbootable. Under Windows Vista and later, the Windows Recovery Environment can be used to launch System Restore and restore a system in an offline state, that is, in case the Windows installation is unbootable.[4] However, for all operating systems including Windows XP, the Diagnostics and Recovery Toolset (DaRT) tools from the Microsoft Desktop Optimization Pack can be used to create a bootable recovery disc that can log on to an unbootable Windows installation and start System Restore.

Limitations & complications


A limitation which applies to System Restore in Windows versions prior to Windows Vista is that only certain file types, and files in certain locations on the volume, are monitored; therefore unwanted software installations, and especially in-place software upgrades, may be incompletely reverted by System Restore.[15] Consequently, there may be little or no practical benefit, and issues may arise when attempting to run or completely uninstall such an application. In contrast, various other utilities have been designed to provide much more complete reversal of system changes, including software upgrades. Beginning with Windows Vista, however, System Restore monitors all system file types on all file paths on a given volume, so there is no issue of incomplete restoration.

It is not possible to create a permanent restore point. All restore points will eventually be deleted after the time specified in the RPLifeInterval registry setting is reached, or earlier if allotted disk space is insufficient. Even if no user- or software-triggered restore points are generated, allotted disk space is consumed by automatic restore points.[6] Consequently, in systems with little space allocated, if a user does not notice a new problem within a few days, it may be too late to restore to a configuration from before the problem arose.

For data integrity purposes, System Restore does not allow other applications or users to modify or delete files in the directory where the restore points are saved. On NTFS volumes, the restore points are protected using ACLs.

Since its method of backup is fairly simplistic, System Restore may end up archiving malware such as viruses, for example in a restore point created before using antivirus software to clean an infection. Antivirus software is usually unable to remove infected files from System Restore;[16] the only way actually to delete the infected files is to disable System Restore, which will result in losing all saved restore points; otherwise they will remain until Windows deletes the affected restore points. However, stored infected files are in themselves harmless unless executed; they pose a threat only if the affected restore point is reinstated.

Changes made to a volume from another OS (in dual-boot scenarios) cannot be monitored. Also, a compatibility issue exists with System Restore when dual-booting Windows XP/Windows Server 2003 and Windows Vista or later operating systems which makes System Restore unusable in a dual-boot scenario: the shadow copies on the volume are deleted when the older operating system accesses (and therefore mounts) that NTFS volume, because the older operating system does not recognize the newer format of persistent shadow copies.[17]

Disk formatting
Formatting a hard drive using MS-DOS

Disk formatting is the process of preparing a hard disk drive or flexible disk medium for data storage. In some cases, the formatting operation may also create one or more new file systems. The formatting process that performs basic medium preparation is often referred to as "low-level formatting." The term "high-level formatting" most often refers to the process of generating a new file system. In certain operating systems (e.g., Microsoft Windows), the two processes[clarification needed] are combined and the term "format" is understood to mean an operation in which a new disk medium is fully prepared to store files. Illustrated to the right are the prompts and diagnostics printed by MS-DOS's FORMAT.COM utility as a hard drive is being formatted. As a general rule, formatting a disk is "destructive," in that existing data (if any) is lost during the process; for high-level formatting, some of this data might be recoverable with special tools.

History
A "block", a contiguous number of bytes, is the minimum unit of storage that is read from and written to a disk by a disk driver. The earliest disk drives had fixed block sizes (e.g. the block size of the IBM 350 disk storage unit of the late 1950s was 100 six-bit characters), but starting with the 1301,[1] IBM marketed subsystems that featured variable block sizes: a particular track could have blocks of different sizes. The disk subsystems on the IBM System/360 expanded this concept in the form of Count Key Data (CKD) and later Extended Count Key Data (ECKD); however, variable block sizes in HDDs fell out of use in the 1990s; one of the last HDDs to support them was the IBM 3390 Model 9, announced May 1993.[2]

Modern hard disk drives, such as Serial Attached SCSI (SAS)[3] and Serial ATA (SATA)[4] drives, appear at their interfaces as a contiguous set of fixed-size blocks, for many years 512 bytes long; beginning in 2009 and accelerating through 2011, all major hard disk drive manufacturers began releasing hard disk drive platforms using the Advanced Format of 4096-byte logical blocks.[5][6]

Floppy disks generally only used fixed block sizes, but these sizes were a function of the host's OS and its interaction with its controller, so that a particular type of media (e.g., 5-inch DSDD) would have different block sizes depending upon the host OS and controller. Optical disks generally only use fixed block sizes.

Disk formatting process


Formatting a disk for use by an operating system and its applications involves three different steps.
1. Low-level formatting (i.e., closest to the hardware) marks the surfaces of the disks with markers indicating the start of a recording block (typically today called sector markers) and other information, like block CRC, to be used later in normal operations by the disk controller to read or write data. This is intended to be the permanent foundation of the disk, and is often completed at the factory.

2. Partitioning creates data structures needed by the operating system. This level of formatting often includes checking for defective tracks or defective sectors.

3. High-level formatting creates the file system format within the structure of the intermediate-level formatting. This formatting includes the data structures used by the OS to identify the logical drive or partition's contents. This may occur during operating system installation, or when adding a new disk. Disk and distributed file systems may specify an optional boot block, and/or various volume and directory information for the operating system.

Low-level formatting of floppy disks


The low-level format of floppy disks (and early hard disks) is performed by the disk drive's controller. Consider a standard 1.44 MB floppy disk. Low-level formatting of the floppy disk normally writes 18 sectors of 512 bytes to each of 160 tracks (80 on each side), providing 1,474,560 bytes of storage on the disk. Physical sectors are actually larger than 512 bytes: in addition to the 512-byte data field they include a sector identifier field, CRC bytes (in some cases error correction bytes) and gaps between the fields. These additional bytes are not normally included in the quoted figure for overall storage capacity of the disk.

Different low-level formats can be used on the same media; for example, large records can be used to cut down on inter-record gap size. Several freeware, shareware and free software programs (e.g. GParted, FDFORMAT, NFORMAT and 2M) allowed considerably more control over formatting, allowing the formatting of high-density 3.5" disks with a capacity up to 2 MB. Techniques used include:

- head/track sector skew (moving the sector numbering forward at side change and track stepping to reduce mechanical delay),
- interleaving sectors (to minimize sector gap and thereby allow the number of sectors per track to be increased),
- increasing the number of sectors per track (while a normal 1.44 MB format uses 18 sectors per track, it is possible to increase this to a maximum of 21), and
- increasing the number of tracks (most drives could tolerate extension to 82 tracks, though some could handle more, while others could jam).
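The capacity figures above can be verified with a few lines of arithmetic:

```python
# Standard 1.44 MB high-density format: 80 tracks per side, 2 sides,
# 18 sectors per track, 512 bytes per sector.
tracks_per_side, sides, sectors_per_track, sector_bytes = 80, 2, 18, 512
standard = tracks_per_side * sides * sectors_per_track * sector_bytes
print(standard)  # 1474560 -- the quoted 1,474,560 bytes

# Pushing to 82 tracks and 21 sectors per track, as the formatting
# tools above allow:
extended = 82 * 2 * 21 * 512
print(extended)  # 1763328 -- roughly 20% more on the same medium
```

The jump to the full 2 MB claimed by tools such as 2M comes from larger records (fewer inter-record gaps) rather than from sector count alone.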

Linux supports a variety of sector sizes, and DOS and Windows support a large-record-size DMF-formatted floppy format.[citation needed]

Low-level formatting (LLF) of hard disks


Low-level format of a 10-megabyte IBM PC XT hard drive.

Hard disk drives prior to the 1990s typically had a separate disk controller that defined how data was encoded on the media. With the media, the drive and/or the controller possibly procured from separate vendors, low-level formatting was a potential user activity. Separate procurement also had the potential of incompatibility between the separate components, such that the subsystem would not reliably store data.[7]

User-instigated low-level formatting (LLF) of hard disk drives was common for minicomputer and personal computer systems until the 1990s. IBM and other mainframe system vendors typically supplied their hard disk drives (or media, in the case of removable-media HDDs) with a low-level format. Typically this involved subdividing each track on the disk into one or more blocks which would contain the user data and associated control information. Different computers used different block sizes, and IBM notably used variable block sizes, but the popularity of the IBM PC caused the industry to adopt a standard of 512 user data bytes per block by the middle 1980s.

Depending upon the system, low-level formatting was generally done by an operating system utility. IBM compatible PCs used the BIOS, invoked using the MS-DOS debug program to transfer control to a routine hidden at different addresses in different BIOSes.[8] The low-level format function may also be called "erase" or "wipe" in different tools; for best results, it is recommended to use tools created by the hard disk's manufacturer.

Transition away from LLF

Starting in the late 1980s, driven by the volume of IBM compatible PCs, HDDs became routinely available pre-formatted with a compatible low-level format. At the same time, the industry moved from historical (dumb) bit serial interfaces to modern (intelligent) bit serial interfaces and Word serial interfaces wherein the low level format was performed at the factory. Today, an end-user, in most cases, should never perform a low-level formatting of an IDE or ATA hard drive, and in fact it is often not possible to do so on modern hard drives outside of the factory.[9][10]
Disk reinitialization

This section needs additional citations for verification. (July 2009)

While it is generally impossible to perform a complete LLF on most modern hard drives (since the mid-1990s) outside the factory,[11] the term "low-level format" is still used for what could be called the reinitialization of a hard drive to its factory configuration (and even these terms may be misunderstood). Reinitialization should include identifying (and sparing out, if possible) any sectors which cannot be written to and read back from the drive correctly. The term has, however, been used by some to refer to only a portion of that process, in which every sector of the drive is written to, usually by writing a zero byte to every addressable location on the disk; this is sometimes called zero-filling.

The present ambiguity in the term "low-level format" seems to be due to both inconsistent documentation on web sites and the belief by many users that any process below a high-level (file system) format must be called a low-level format. Since much of the low-level formatting process can today only be performed at the factory, various drive manufacturers describe reinitialization software as LLF utilities on their web sites. Since users generally have no way to determine the difference between a complete LLF and reinitialization (they simply observe that running the software results in a hard disk that must be high-level formatted), both the misinformed user and mixed signals from various drive manufacturers have perpetuated this error. Note: whatever possible misuse of such terms may exist (search hard drive manufacturers' web sites for all these terms), many sites do make such reinitialization utilities available (possibly as bootable floppy diskette or CD image files), to both overwrite every byte and check for damaged sectors on the hard disk.
One popular method for performing only the zero-fill operation on a hard disk is writing zero-value bytes to the drive using the Unix dd utility, with the /dev/zero stream as the input file and the drive itself (or a specific partition) as the output file. The following command fills a 2 TB SCSI/SATA HDD attached as the second drive with zeros; it may take many hours, or even days, to complete:
dd if=/dev/zero of=/dev/sdb bs=1M count=2097152
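The same chunked zero-fill pass can be sketched in Python against an ordinary disk-image file. This is an illustration, not a replacement for dd: the function name is made up for this example, and pointing the target at a real block device (e.g. /dev/sdb) would destroy its contents exactly as dd does.

```python
# Minimal zero-fill sketch: overwrite the target with zero bytes in
# fixed-size chunks, analogous to dd's bs=1M behaviour.
import os

CHUNK = 1024 * 1024  # 1 MiB per write, like dd's bs=1M

def zero_fill(target: str, total_bytes: int) -> None:
    """Overwrite the first `total_bytes` of `target` with zero bytes."""
    zeros = bytes(CHUNK)
    with open(target, "r+b") as dev:
        written = 0
        while written < total_bytes:
            n = min(CHUNK, total_bytes - written)
            dev.write(zeros[:n])
            written += n
        dev.flush()
        os.fsync(dev.fileno())  # push the zeros through to the medium
```

Running this loop over a small image file and reading it back confirms every byte is zero; the same sector-by-sector write is why zero-filling a multi-terabyte drive takes so long.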

Another method, for SCSI disks, is to use the sg_format[12] command to issue a low-level SCSI FORMAT UNIT command.

Disk partitioning
Main article: Disk partitioning

Partitioning is the process of writing information into blocks of a storage device or medium that allows access by an operating system. Some operating systems allow the device (or its medium) to appear as multiple devices, i.e., partitioned into multiple devices. On MS-DOS, Windows, and UNIX-based operating systems (such as BSD, GNU/Linux and OS X) this is normally done with a partition editor, such as fdisk, parted, or Disk Utility. These operating systems support multiple partitions. In current IBM mainframe OSs derived from OS/360 and DOS/360, such as z/OS and z/VSE, this is done by the INIT command of the ICKDSF utility.[13] These OSs support only a single partition per device, called a volume. The ICKDSF functions include creating a volume label and writing a Record 0 on every track.
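As a concrete illustration of what a PC partition editor reads and writes, the classic MBR scheme stores four 16-byte partition entries at byte offset 446 of the first sector, followed by the 0x55 0xAA signature. The field layout below is the standard MBR format; the sample table is synthetic and the function name is chosen for this sketch.

```python
# Sketch of the MBR layout that tools like fdisk edit: four 16-byte
# partition entries at offset 446 of sector 0, then the 0x55 0xAA signature.
import struct

MBR_SIZE, ENTRY_OFFSET, ENTRY_SIZE = 512, 446, 16

def parse_mbr(sector: bytes):
    """Return the non-empty primary partition entries of an MBR sector."""
    assert len(sector) == MBR_SIZE and sector[510:512] == b"\x55\xaa"
    parts = []
    for i in range(4):
        e = sector[ENTRY_OFFSET + i * ENTRY_SIZE:ENTRY_OFFSET + (i + 1) * ENTRY_SIZE]
        lba_start, num_sectors = struct.unpack("<II", e[8:16])  # little-endian
        if e[4] != 0:  # partition type 0x00 marks an unused slot
            parts.append({"bootable": e[0] == 0x80, "type": e[4],
                          "lba_start": lba_start, "sectors": num_sectors})
    return parts

# A synthetic MBR with one bootable Linux (type 0x83) partition,
# starting at LBA 2048 and spanning 204800 sectors (100 MiB):
mbr = bytearray(MBR_SIZE)
mbr[446:462] = struct.pack("<B3sB3sII", 0x80, b"\0\0\0", 0x83, b"\0\0\0", 2048, 204800)
mbr[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(mbr)))
```

The CHS fields (three bytes each, zeroed here for brevity) are largely vestigial on modern systems; partition editors work from the LBA start and sector count.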

Floppy disks are not partitioned; however, depending upon the OS, they may require volume information in order to be accessed by the OS. Partition editors and ICKDSF today do not handle low-level functions for HDDs and optical disk drives, such as writing timing marks, and they cannot reinitialize a modern disk that has been degaussed or has otherwise lost the factory formatting.

High-level formatting
High-level formatting is the process of setting up an empty file system on the disk and, for PCs, installing a boot sector. This is a fast operation and is sometimes referred to as quick formatting. The entire logical drive or partition may optionally be scanned for defects, which may take considerable time. In the case of floppy disks, both high- and low-level formatting are customarily performed in one pass by the disk formatting software. In recent years, most floppies have shipped pre-formatted from the factory as DOS FAT12 floppies. In current IBM mainframe operating systems derived from OS/360 or DOS/360, this may be done as part of allocating a file, by a utility specific to the file system or, in some older access methods, on the fly as new data are written.

Host protected area


The host protected area, sometimes referred to as the hidden protected area,[14] is an area of a hard drive that is reserved so that it is not normally visible to the operating system (OS).

Reformatting
Reformatting is a high-level formatting performed on a functioning disk drive to free the contents of its medium. Reformatting is unique to each operating system because what actually is done to existing data varies by OS. The most important aspect of the process is that it frees disk space for use by other data. To actually "erase" everything requires overwriting each block of data on the medium, something that is not done by many PC high-level formatting utilities. Reformatting often carries the implication that the operating system and all other software will be reinstalled after the format is complete. Rather than fixing an installation suffering from malfunction or security compromise, it is sometimes judged easier to simply reformat everything and start from scratch. Various colloquialisms exist for this process, such as "wipe and reload", "nuke and pave", or "reimage".

Formatting
DOS, OS/2 and Windows
MS-DOS 6.22a FORMAT /U switch failing to overwrite content of partition.

Under MS-DOS, PC-DOS, OS/2 and Microsoft Windows, disk formatting can be performed by the format command. The format program usually asks for confirmation beforehand to prevent accidental removal of data, but some versions of DOS have an undocumented /AUTOTEST option; if used, the usual confirmation is skipped and the format begins right away. The WM/FormatC macro virus uses this command to format drive C: as soon as a document is opened. There is also the undocumented /U parameter, which performs an unconditional format that under most circumstances overwrites the entire partition,[15] preventing the recovery of data through software. Note, however, that the /U switch only works reliably with floppy diskettes (technically, because unless /Q is used, floppies are always low-level formatted in addition to being high-level formatted). Under certain circumstances with hard drive partitions, the /U switch merely prevents the creation of unformat information in the partition to be formatted, while otherwise leaving the partition's contents entirely intact (still on disk but marked deleted). In such cases the user's data remain ripe for recovery with specialist tools such as EnCase or disk editors. Reliance upon /U for secure overwriting of hard drive partitions is therefore inadvisable; purpose-built tools such as DBAN should be considered instead.

Under OS/2, the /L parameter specifies a long format, which causes format to overwrite the entire partition or logical drive. Doing so enhances the ability of CHKDSK to recover files.

Unix-like operating systems


High-level formatting of disks on these systems is traditionally done using the mkfs command. On Linux (and potentially other systems as well) mkfs is typically a wrapper around filesystem-specific commands which have the name mkfs.fsname, where fsname is the name of the filesystem with which to format the disk.[16] Some filesystems which are not supported by certain implementations of mkfs have their own manipulation tools; for example Ntfsprogs provides a format utility for the NTFS filesystem. Some Unix and Unix-like operating systems have higher-level formatting tools, usually for the purpose of making disk formatting easier and/or allowing the user to partition the disk with the same tool. Examples include GNU Parted (and its various GUI frontends such as GParted and the KDE Partition Manager) and the Disk Utility application on Mac OS X.
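The wrapper behaviour described above can be sketched as a small dispatcher: mkfs itself does little more than translate the requested filesystem type into an invocation of the matching mkfs.fsname tool. The function name below is illustrative for this sketch and is not part of any real mkfs implementation.

```python
# Hedged sketch of mkfs-style dispatch: map a filesystem type onto the
# filesystem-specific mkfs.<fsname> command line.
import shutil

def build_mkfs_command(fstype, device, extra_args=None, require_tool=False):
    """Build the argv that `mkfs -t <fstype> <device>` would effectively run."""
    tool = f"mkfs.{fstype}"
    if require_tool and shutil.which(tool) is None:
        # The filesystem-specific tool ships with that filesystem's utilities
        # (e.g. e2fsprogs for ext4), not with mkfs itself.
        raise FileNotFoundError(f"{tool} not installed")
    return [tool, *(extra_args or []), device]

# e.g. `mkfs -t ext4 -L data /dev/sdb1` dispatches roughly to:
print(build_mkfs_command("ext4", "/dev/sdb1", ["-L", "data"]))
```

This is why unsupported filesystems can still be formatted by calling their own tools directly, as with Ntfsprogs for NTFS: the dispatch convention, not mkfs itself, does the work.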

Recovery of data from a formatted disk



As in file deletion by the operating system, data on a disk are not fully erased during every[17] high-level format. Instead, the area on the disk containing the data is merely marked as available, and retains the old data until it is overwritten. If the disk is formatted with a different file system than the one which previously existed on the partition, some data may be overwritten that would not be if the same file system had been used. However, under some file systems (e.g., NTFS, but not FAT), the file indexes (such as $MFTs under NTFS, inodes under ext2/3, etc.) may not be written to the same exact locations. And if the partition size is increased, even FAT file systems will overwrite more data at the beginning of that new partition.

From the perspective of preventing the recovery of sensitive data through recovery tools, the data must either be completely overwritten (every sector) with random data before the format, or the format program itself must perform this overwriting, as the DOS FORMAT command did with floppy diskettes, filling every data sector with the byte value F6 in hex. However, there are applications and tools, especially those used in forensic information technology, that can recover data that has been conventionally erased. To prevent the recovery of sensitive data, governmental organizations and large companies use information-destruction methods such as the Gutmann method or the DoD 5220.22-M standard of the National Industrial Security Program.[18] For average users there are also special applications that can perform complete data destruction by overwriting previous information. Although there are applications that perform multiple writes, a single write is generally all that is needed on modern hard disk drives.
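A single overwrite pass in the spirit of the DOS floppy format — filling every data sector with the byte value F6 and then reading the medium back to confirm nothing of the old contents survives — can be sketched as follows. The image file stands in for a real device, and the function name is invented for this example; a real secure-erase tool such as DBAN targets the device itself.

```python
# Single-pass overwrite sketch: fill every sector with 0xF6 (the value the
# DOS FORMAT command wrote to floppy data sectors), then verify by re-reading.
import os

FILL, SECTOR = 0xF6, 512

def overwrite_and_verify(path: str) -> bool:
    """Overwrite every sector of an image with 0xF6 and confirm the result."""
    size = os.path.getsize(path)
    pattern = bytes([FILL]) * SECTOR
    with open(path, "r+b") as f:
        for off in range(0, size, SECTOR):
            f.write(pattern[:min(SECTOR, size - off)])
        f.flush()
        os.fsync(f.fileno())
    with open(path, "rb") as f:
        return all(b == FILL for b in f.read())
```

Because the pass touches every addressable sector, no conventional recovery tool can read the old contents back through the drive's interface afterwards; this is the sense in which one write generally suffices on modern drives.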
