

This term paper provides an overview of device management in the Windows operating system. It explains what the term device management means, how device management is carried out in Windows, and the main terms and concepts involved, covering the basic device management services that the Windows operating system provides.

To perform useful functions, processes need access to the peripherals connected to the computer, which the kernel controls through device drivers. A device driver is a computer program that enables the operating system to interact with a hardware device: it tells the operating system how to control and communicate with a particular piece of hardware. The driver is a vital link between the hardware and the programs that use it. The design goal of a driver is abstraction: its function is to translate the OS-mandated function calls (programming calls) into device-specific calls, so that, in theory, a device works correctly as long as a suitable driver is present. Engineers write drivers for devices such as video cards, sound cards, printers, scanners, modems, and LAN cards. The common levels of abstraction of device drivers are: 1. On the hardware side:

Interfacing directly with the hardware. Using a high-level interface (for example, the video BIOS). Using a lower-level device driver (for example, file drivers that use disk drivers). Simulating work with hardware while doing something entirely different.

2. On the software side:

Allowing the operating system direct access to hardware resources. Implementing only primitives. Implementing an interface for non-driver software (example: TWAIN). Implementing a language, sometimes a high-level one (example: PostScript).

For example, to show the user something on the screen, an application makes a request to the kernel, which forwards the request to its display driver; the driver is then responsible for actually plotting the character or pixel.[5] A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system, where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use), or detected by the operating system at run time (normally called plug and play). In a plug and play system, a device manager first scans the different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers. Because device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case the kernel has to provide the I/O mechanisms that let drivers physically access their devices through some port or memory location. Important decisions have to be made when designing the device management system, because in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing significant performance overhead.
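The scan-and-match step of plug and play can be sketched in C. Everything here is illustrative: the hardware IDs, the table, and the function names are invented, and a real plug and play manager consults driver packages (INF files on Windows) rather than a static array.

```c
#include <string.h>
#include <stddef.h>

/* A driver record: the hardware ID it claims, and its driver name. */
struct driver_entry {
    const char *hardware_id;   /* e.g. "PCI\\VEN_8086&DEV_100E" */
    const char *driver_name;
};

/* Hypothetical driver database; real systems consult INF files. */
static const struct driver_entry driver_db[] = {
    { "PCI\\VEN_8086&DEV_100E", "e1000.sys"  },
    { "USB\\VID_046D&PID_C077", "mouhid.sys" },
};

/* Given an ID reported by a bus scan, return the matching driver
 * name, or NULL when no installed driver claims the device. */
const char *match_driver(const char *detected_id)
{
    size_t i;
    for (i = 0; i < sizeof driver_db / sizeof driver_db[0]; i++)
        if (strcmp(driver_db[i].hardware_id, detected_id) == 0)
            return driver_db[i].driver_name;
    return NULL;
}
```

A bus enumerator would call `match_driver` once per detected device and load the returned driver, or prompt the user when `NULL` comes back.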

Device driver
In computing, a device driver or software driver is a computer program allowing higher-level computer programs to interact with a hardware device. A driver typically communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device. Once the device sends data back to the driver, the driver may invoke routines in the original calling program. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface. A device driver simplifies programming by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write the higher-level application code independently of the specific hardware device it will ultimately drive. Device drivers can be abstracted into logical and physical layers. Logical layers process data for a class of devices such as Ethernet ports or disk drives. Physical layers communicate with specific device instances. For example, a serial port needs to handle standard communication protocols such as XON/XOFF that are common to all serial port hardware; this is managed by the serial port logical layer. The physical layer, however, needs to communicate with a particular serial port chip: 16550 UART hardware differs from the PL-011, and the physical layer addresses these chip-specific variations. Conventionally, OS requests go to the logical layer first. In turn, the logical layer calls upon the physical layer to implement the requests in terms the hardware understands. Inversely, when a hardware device needs to respond to the OS, it uses the physical layer to speak to the logical layer. In Microsoft Windows, .sys files contain loadable device drivers.
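The logical/physical split can be illustrated with a small C sketch. The chip names echo the text, but the structures and function names are hypothetical; a real serial driver stack is far more involved.

```c
#define XOFF 0x13  /* flow-control byte handled by the logical layer */

/* Physical layer: one function pointer per chip-specific operation. */
struct uart_phys {
    const char *chip;                  /* e.g. "16550" or "PL-011" */
    void (*write_byte)(unsigned char); /* touches the actual registers */
};

static int bytes_sent;                 /* observable side effect for the demo */
static void fake_16550_write(unsigned char b) { (void)b; bytes_sent++; }

struct uart_phys chip_16550 = { "16550", fake_16550_write };

/* Logical layer: class-common policy (here, swallowing XOFF) that then
 * calls whatever physical layer was bound when the device attached. */
int serial_write(struct uart_phys *dev, const unsigned char *buf, int n)
{
    int i, written = 0;
    for (i = 0; i < n; i++) {
        if (buf[i] == XOFF)       /* protocol handling: same for every chip */
            continue;
        dev->write_byte(buf[i]);  /* chip-specific part */
        written++;
    }
    return written;
}
```

Swapping in a PL-011 would mean providing a different `uart_phys` instance; `serial_write`, the logical layer, stays untouched.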
The advantage of loadable device drivers is that they can be loaded only when necessary and then unloaded, thus saving kernel memory.

Bus
In computer architecture, a bus is a subsystem that transfers data between components inside a computer, or between computers. Early computer buses were literally parallel electrical wires with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrically parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.

At one time, "bus" meant an electrically parallel system, with electrical conductors similar or identical to the pins on the CPU. This is no longer the case, and modern systems are blurring the lines between buses and networks. Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in the 1-Wire and UNI/O serial buses. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference, and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can actually be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs. Most computers have both internal and external buses. An internal bus connects all the internal components of a computer to the motherboard (and thus, the CPU and internal memory). These types of buses are also referred to as local buses, because they are intended to connect to local devices, not to those in other machines or external to the computer. An external bus connects external peripherals to the motherboard. Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. The arrival of technologies such as InfiniBand and HyperTransport is further blurring the boundaries between networks and buses.
Even the lines between internal and external are sometimes fuzzy: I²C can be used as both an internal bus and an external bus (where it is known as ACCESS.bus), and InfiniBand is intended to replace both internal buses like PCI and external ones like Fibre Channel. In the typical desktop application, USB serves as a peripheral bus, but it also sees some use as a networking utility and for connectivity between different computers, again blurring the conceptual distinction.

PCI
Conventional PCI (PCI is an initialism formed from Peripheral Component Interconnect,[1] part of the PCI Local Bus standard and often shortened to PCI) is a computer bus for attaching hardware devices in a computer. These devices can take either the form of an integrated circuit fitted onto the motherboard itself, called a planar device in the PCI specification, or an expansion card that fits into a slot. The PCI Local Bus was implemented in PCs, where it displaced ISA and the VESA Local Bus as the standard expansion bus, and in other computer types. PCI is being replaced by PCI-X and PCI Express, but as of 2011, many motherboards are still made with one or more PCI slots. The PCI specification covers the physical size of the bus (including the size and spacing of the circuit board edge electrical contacts), electrical characteristics, bus timing, and protocols. The specification can be purchased from the PCI Special Interest Group (PCI-SIG).

Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as USB or serial, TV tuner cards, and disk controllers. PCI video cards replaced ISA cards until growing bandwidth requirements outgrew the capabilities of PCI; the preferred interface for video cards became AGP, and then PCI Express. PCI video cards remain available for use with old PCs without AGP or PCI Express slots.[2] Many devices previously provided on expansion cards are now either commonly integrated onto motherboards or available in USB and PCI Express versions. Modern PCs often have no cards fitted. However, PCI is still used for certain specialized cards.

Context switch
A context switch is the computing process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU, and it is an essential feature of a multitasking operating system. Context switches are usually computationally intensive, and much operating system design is aimed at optimizing their use. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system. Switching from one process to another requires a certain amount of administrative time: saving and loading registers and memory maps, updating various tables and lists, and so on.
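The save-and-restore administration described above can be reduced to a toy model in C: a mock register file plus two copies, one that stores the outgoing context and one that restores the incoming one. The types and names are invented for illustration; a real switch also swaps memory maps and kernel bookkeeping.

```c
#include <string.h>

/* A mock CPU context: just a program counter and a few registers. */
struct context { int pc; int regs[4]; };

struct process { int pid; struct context ctx; };

static struct context cpu;   /* the "hardware" register file */

/* Save the running context into the outgoing process, then load the
 * incoming one -- the two halves of every context switch. */
void context_switch(struct process *from, struct process *to)
{
    memcpy(&from->ctx, &cpu, sizeof cpu);  /* store state */
    memcpy(&cpu, &to->ctx, sizeof cpu);    /* restore state */
}
```

A simulated time slice would advance `cpu.pc` for a while, then the scheduler would call `context_switch` to hand the CPU to the next process; later, the saved context lets the first process resume exactly where it stopped.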

When to switch?
There are three situations in which a context switch needs to occur:

Multitasking. Most commonly, within some scheduling scheme, one process needs to be switched out of the CPU so another process can run. Within a preemptive multitasking operating system, the scheduler allows every task to run for a certain amount of time, called its time slice. If a process does not voluntarily yield the CPU (for example, by performing an I/O operation), a timer interrupt fires, and the operating system schedules another process for execution instead. This ensures that the CPU cannot be monopolized by any one processor-intensive application.

Interrupt handling. Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request and continue with other work. When the read is over, the CPU can be interrupted and presented with the result. For interrupts, a program called an interrupt handler is installed, and it is the interrupt handler that handles the interrupt from the disk.

When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the interrupt handler to start running). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed, in order to minimize the time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts; instead, the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored, so that the interrupted process can resume execution in its proper state.

User and kernel mode switching. When a transition between user mode and kernel mode is required in an operating system, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.
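The install-a-handler pattern can be imitated in portable C using the standard signal facility as a stand-in for a hardware interrupt. The handler does the minimum an interrupt service routine should: record what happened and return, leaving the interrupted code to resume. The names `disk_isr` and `read_complete` are purely illustrative.

```c
#include <signal.h>

/* The "device" reports completion by raising a signal; the handler
 * plays the role of an interrupt service routine. */
static volatile sig_atomic_t read_complete = 0;

static void disk_isr(int sig)
{
    (void)sig;
    read_complete = 1;   /* minimal work inside the handler; defer the rest */
}

void install_and_fire(void)
{
    signal(SIGINT, disk_isr);  /* install the handler */
    raise(SIGINT);             /* simulate the device interrupt */
}
```

After `raise` returns, control is back in the interrupted flow and the flag tells it the "read" finished, mirroring the issue-request-and-continue style described above.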


Direct Access Storage Devices

In mainframe computers and some minicomputers, a direct access storage device (DASD) is any secondary storage device with relatively low access time relative to its capacity. Historically, IBM introduced the term to cover three device types: 1. disk drives 2. magnetic drums 3. data cells. The direct access capability of those devices, occasionally and incorrectly called random access (although that term survives when referring to memory or RAM), stood in contrast to the sequential access used in tape drives, which required a proportionally long time to access a distant point in the medium.

I/O Subsystem Enhancements

The I/O subsystem consists of kernel components that provide an interface to hardware devices for applications and other mandatory system components. Windows XP enhances the I/O subsystem while retaining complete compatibility with drivers written for Windows 2000. This compatibility was essential because the I/O subsystem provides the interface to all devices, and too many changes to I/O processing could break existing applications and drivers. The enhancements were made by adding new routines that are available to drivers written to take advantage of the new Windows XP functionality. For this reason, while existing Windows 2000 drivers will work with Windows XP, they must be rewritten to take advantage of the new I/O improvements, which include the following:

New cancel queue
File system filter driver routines
Improved low-memory performance
I/O throttling
Direct Memory Access (DMA) improvements
WebDAV Redirector
System Restore
Volume Snapshot Service

New Cancel Queue
Rather than having drivers perform device queuing and handle the I/O request packet (IRP) cancellation race, Windows XP I/O automates this process. In Windows XP, drivers handle IRP queuing and do not have to handle IRP cancellations. Intelligence in the queuing process lets the I/O routines, rather than the drivers, handle requests in cases where the I/O is canceled. A common problem with cancellation of IRPs in a driver is synchronization between the cancel lock or the InterlockedExchange in the I/O Manager and the driver's queue lock. Windows XP abstracts the cancel logic into the I/O routines while allowing the driver to implement the queue and its associated synchronization. The driver provides routines to insert and remove IRPs from a queue, and it provides a lock to be held while calling these routines. The driver ensures that the memory for the queue comes from the correct pool. When the driver wants to insert something into the queue, it does not call its insertion routine directly, but instead calls IoCsqInsertIrp. To remove an IRP from the queue, the driver can either specify an IRP to be retrieved, or pass NULL, in which case the first IRP in the queue is retrieved. Once an IRP has been retrieved, it cannot be canceled; the driver is expected to process the IRP and complete it quickly.
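The division of labor can be sketched in portable C. This mirrors the spirit of the cancel-safe queue (the driver supplies insert/remove routines and a lock; the framework wraps them and owns the cancellation logic), but every type and function here is a simplified stand-in, not the actual IO_CSQ interface.

```c
#include <stddef.h>

/* A toy IRP and a toy cancel-safe queue in the spirit of IO_CSQ. */
struct irp { int id; int cancelled; struct irp *next; };

struct csq {
    void (*insert)(struct csq *, struct irp *);  /* driver-supplied */
    struct irp *(*remove)(struct csq *);         /* driver-supplied */
    int locked;              /* stand-in for the driver's spin lock */
    struct irp *head;
};

static void csq_lock(struct csq *q)   { q->locked = 1; }
static void csq_unlock(struct csq *q) { q->locked = 0; }

/* Framework side: like IoCsqInsertIrp, takes the lock around the
 * driver's routine so cancellation cannot race the queue. */
void csq_insert_irp(struct csq *q, struct irp *i)
{
    csq_lock(q);
    q->insert(q, i);
    csq_unlock(q);
}

/* Framework side: a retrieved IRP can no longer be cancelled, so
 * already-cancelled IRPs are skipped here, not by the driver. */
struct irp *csq_remove_next_irp(struct csq *q)
{
    struct irp *i;
    csq_lock(q);
    do { i = q->remove(q); } while (i && i->cancelled);
    csq_unlock(q);
    return i;
}

/* Driver side: only queue mechanics, no cancel logic at all. */
static void drv_insert(struct csq *q, struct irp *i)
{ i->next = q->head; q->head = i; }

static struct irp *drv_remove(struct csq *q)
{ struct irp *i = q->head; if (i) q->head = i->next; return i; }
```

The driver's two routines never mention cancellation; that is exactly the simplification the new cancel queue gives real Windows XP drivers.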

File System Filter Driver Routines
Several new kernel-mode support routines provide greater all-around reliability. Microsoft worked with third-party developers to test their filter drivers. Where a driver crashed attempting to perform illegal operations, the functionality it required was determined, and kernel-mode support routines were provided to let it accomplish what needed to be done without harming the rest of the system. These routines are included in the Windows Installable File System (IFS) Kit for Windows XP.

Improved Low-Memory Performance
Windows XP is more resilient during periods of low memory because "must succeed" allocations are no longer permitted. Earlier versions of the kernel and drivers contained memory allocation requests that had to succeed even when the memory pool was low, and these allocations would crash the system if no memory was available. Two important I/O allocation routines used "must succeed" semantics: one for IRP allocation, the other for Memory Descriptor List (MDL) allocations. If memory couldn't be allocated when these routines were used, the system would blue screen. In Windows XP, kernel components and drivers are no longer allowed to request "must succeed" allocations; memory allocation routines will not allocate memory if the pool is too low. These changes allow drivers and other components to take appropriate error actions, rather than an extreme measure such as bug checking the machine.

I/O Throttling
Another improvement for low-memory conditions is I/O throttling. If the system can't allocate memory, it throttles down to processing one page at a time, if necessary, using freely allocated resources. This allows the system to continue at a slower pace until more resources become available.

DMA Improvements
Three new entries are added to the end of the DMA_OPERATIONS structure. These three entries are accessible to any driver that uses IoGetDmaAdapter.
To safely check whether the new functionality exists, the driver should set the version field of the DEVICE_DESCRIPTION structure provided to IoGetDmaAdapter to DEVICE_DESCRIPTION_VERSION2. Current Hardware Abstraction Layers (HALs) that don't support the new interface will fail the operation because of the version number. HALs that support this feature will understand the new version and will succeed the request, assuming all the other parameters are in order. The driver should access these new function pointers only when it successfully gets the adapter using DEVICE_DESCRIPTION_VERSION2.

WebDAV Redirector
Windows XP includes a new component, the WebDAV redirector, which allows applications on Windows XP to connect to the Internet and natively read and write data there. The WebDAV protocol is an extension to the Hypertext Transfer Protocol (HTTP) that allows data to be written to HTTP targets such as the Microsoft MSN Web Communities. The WebDAV redirector provides file-system-level access to these servers in the same way that the existing redirector provides access to SMB/CIFS servers. One way to access a WebDAV share is to use the net use command, for example:
NET USE * http://webserver/davscratch

To connect to an MSN Community, use the community's URL as the target. The credentials you need in this case are your Passport credentials; enter these details in the Connect Using Different User Name dialog if you are using a mapped network drive, or use the /u: switch with the net use command. For example:
net use /

The simplest ways to create a WebDAV share are:

Use Microsoft Internet Information Server (IIS). In IIS, you only need to make a directory browsable to access it through WebDAV; if you also allow writes, you can save to it as well.
Use MSN Communities. File Cabinets in MSN Communities are WebDAV shares.

System Restore
System Restore is a combination of a file system filter driver and user-mode services that provides a way for users to unwind configuration operations and restore the system to an earlier configuration. System Restore includes a file system filter driver called Sr.sys, which helps to implement a copy-on-write process. System Restore is a feature only of Windows XP Personal and the 32-bit version of Windows XP Professional; it is not a feature of the server versions of Windows XP.

Volume Snapshot Service
A volume snapshot is a point-in-time copy of a volume. A snapshot is typically used by a backup application so that it can back up files in a consistent manner even though they may be changing during the backup. Windows XP includes a framework for orchestrating the timing of a snapshot, as well as a storage filter driver (not a file system filter driver) that uses a copy-on-write technique to create the snapshot. One important new snapshot-related I/O control (IOCTL) that affects file systems is IOCTL_VOLSNAP_FLUSH_AND_HOLD_WRITES. Although it is an IOCTL, it is actually intended for interpretation by file systems: every file system should pass it down to a lower-level driver that is waiting to process it after the file system. The choice of an IOCTL instead of an FSCTL ensures that even legacy file system drivers will pass it down. This IOCTL is sent by the Volume Snapshot Service. When a file system such as NTFS receives it, it should flush the volume and hold all file resources to make sure that nothing more gets dirty. When the IRP completes or is canceled, the file system releases the resources and returns.

Changes in Existing I/O Features
Windows XP includes several changes in existing I/O features, including:

FAT32 on DVD-RAM
DVD-RAM disks can appear as both CD/DVD devices and as rewriteable disks. Windows XP will allow DVD-RAM media in DVD-RAM drives to be formatted and used with the FAT32 file system.

Defragmentation APIs
Since the release of Windows NT 4.0, the NTFS file system has exposed APIs that allow a user-mode application to query the allocated ranges of files on disk and to optimize file arrangement in order to defragment (or carefully fragment) files so as to minimize seeks during file I/O. In Windows 2000, these APIs have a number of limitations; for example, they do not function on the master file table (MFT), the PageFile, or NTFS attributes. Windows XP changes the behavior on NTFS as follows:


o The defragmentation APIs will no longer defragment data by using the system cache. This means encrypted files will no longer need to be opened with read access.
o NTFS will now defragment at the cluster boundary for noncompressed files. In Windows 2000, this was limited to page granularity for noncompressed files.
o NTFS will now defragment the MFT. This was not allowed in Windows 2000. Defragmentation goes through the regular code path, so there is no limit to how much can be moved at once, and any part of the MFT can be moved other than the first 0x10 clusters. If there is no available space in the MFT to describe the change, the move is rejected. The API can move an MFT segment even if a file with its file entry in that segment is currently open.
o NTFS will now defragment for cluster sizes greater than 4 KB.
o NTFS will now defragment reparse points, bitmaps, and attribute lists. These can now be opened for file read attributes and synchronize. The files are named using the regular syntax (file:name:type); for example: foo:$i30:$INDEX_ALLOCATION, foo::$DATA, foo::$REPARSE_POINT, foo::$ATTRIBUTE_LIST

o NTFS's QueryBitmap FSCTL will now return results on a byte boundary rather than a page boundary.

o NTFS will now defragment all parts of a stream, up to and including the allocation size. In Windows 2000, it was not possible to defragment the file tail between the valid data length (VDL) and the end of file (EOF).
o You can now defragment into or out of the MFT Zone. The MFT Zone is now just an NTFS-internal hint for the NTFS allocation engine.
o It is possible to pin an NTFS file so that it may not be defragmented using FSCTL_MOVE_FILE. This is done by calling FSCTL_MARK_HANDLE and passing MARK_HANDLE_PROTECT_CLUSTERS as an argument. This stays in effect until the handle is closed.

Large Files

Windows XP and Windows 2000 Service Pack 2 are able to create sections on arbitrarily large mapped files. A constraint that existed in earlier versions of the memory manager (creating prototype page table entries for all pages in the section) no longer applies, because the Windows XP memory manager can reuse prototype page table entries (PPTEs) for any parts of a section that do not have a mapped view. In fact, it creates PPTEs only for active views, based on the view size (not the section size).

Verifiers
There are new Verifier levels, in addition to a new deadlock verifier.

Read-only NTFS
NTFS will now mount read-only on an underlying read-only volume. If the volume requires a log restart or a Chkdsk, the mount will fail.

New flag: FILE_READ_ONLY_VOLUME
GetVolumeInformation now returns the FILE_READ_ONLY_VOLUME flag for read-only volumes.

Remote Storage Service (RSS) on MO media

Encrypting File System (EFS)
The Client Side Caching database can now be encrypted.

Default NTFS ACL
The default access control list (ACL) on NTFS volumes has been strengthened.

Read-only flag on directories
The Read-only attribute has no defined effect on folders. In Windows XP, however, Windows Explorer uses this attribute on a folder to indicate that there is extra metadata in the folder that the shell should look at. This is a performance optimization.

Write-through mode

On hot-plug media, the FAT file system will work in write-through mode. This eliminates corruption that could occur on media such as CompactFlash when it is unplugged from the system without using the Safely Remove Hardware user interface.

Read-only Kernel and HAL Pages
On many Windows XP-based systems, the kernel and HAL pages are marked read-only. This affects drivers that attempt to patch system code, dispatch tables, or data structures. The change to read-only kernel and HAL pages does not happen on all systems:

o On systems with less than 256 MB of RAM, the read-only restriction is used.
o On systems with 256 MB or more of RAM, the read-only restriction isn't used, because Windows XP uses large pages to map the kernel and HAL.
o On all systems, the read-only restriction is used for all driver code, because drivers are never mapped with large pages. Driver Verifier disables large pages, so you can enable this restriction on any machine of any size in order to test your code.

New Filter Driver Functions
Windows XP includes several new filter driver functions, including:

SetFileShortName. This is a new Win32 function to set the short name of a file on NTFS.

GetVolumePathNamesForVolumeName. This new function allows you to list all volume paths on which a volume name may be mounted:
BOOL GetVolumePathNamesForVolumeName(
    LPCWSTR lpszVolumeName,
    LPWSTR  lpszVolumePathNames,
    DWORD   cchBufferLength,
    PDWORD  lpcchReturnLength
)

This routine returns a Multi-Sz list of volume path names for the given volume name. The returned lpcchReturnLength includes the extra trailing null character characteristic of a Multi-Sz, unless ERROR_MORE_DATA is returned; in that case, the returned list is as long as possible and may contain part of a volume path.

Parameters:
lpszVolumeName - Supplies the volume name.
lpszVolumePathNames - Returns the volume path names.
cchBufferLength - Supplies the size of the return buffer.
lpcchReturnLength - Returns the number of characters copied to the return buffer on success, or the total number of characters necessary for the buffer on ERROR_MORE_DATA.

Return value: TRUE on success, FALSE on failure.
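The Multi-Sz format the routine returns, a run of NUL-terminated strings closed by one extra NUL, can be walked with a few lines of C. This helper is illustrative only (and uses narrow characters for brevity, where the real API uses wide ones):

```c
#include <stddef.h>

/* Count the entries in a Multi-Sz list: consecutive NUL-terminated
 * strings followed by one extra NUL that marks the end of the list. */
size_t multi_sz_count(const char *p)
{
    size_t n = 0;
    while (*p != '\0') {             /* empty string == end of list */
        n++;
        while (*p != '\0') p++;      /* skip to this string's NUL */
        p++;                         /* step over it */
    }
    return n;
}
```

The same walk, with each inner string printed or copied out, is how a caller would enumerate the volume paths that GetVolumePathNamesForVolumeName writes into its buffer.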

FileIdBothDirectoryInformation and FileIdFullDirectoryInformation. These two new file information classes have been added to the file information class enumeration. They can be passed as FileInformationClass parameter values to ZwQueryInformationFile, ZwSetInformationFile, and IRP_MN_QUERY_DIRECTORY.

SetFileValidData. NTFS has the concept of a valid data length on a file stream. This is a way to preserve the C2 object reuse requirement without forcing file systems to write zeroes into file tails. Definitions:
o VDL = valid data length. Each stream has such a value.
o EOF = allocated file length. Each stream has such a value.
o File tail = the region from VDL to EOF. Each stream has such a region, and it may be zero length.
By definition, VDL must be less than or equal to EOF. Any reads from the file tail are implicitly returned as zeroes by NTFS. Any write into the file tail causes VDL to be increased to the end of the write, and any data between the previous VDL and the start of the write is written as zeroes. In Windows XP, an NTFS-only function call has been added to set the valid data length on a file, available to administrative users with SeManageVolumePrivilege (described later in this section). Expected users include:
o A restore application that can pour raw clusters directly onto the disk through a hardware channel. This provides a method for informing the file system that the range contains valid user data and can be returned to the user.
o Multimedia or database tools that need to create large files but cannot afford the zero-filling cost at file-extend time (where the cost is making the extend a synchronous operation) or at create time (where the cost is filling the file with zeroes).
o Served-metadata cluster file systems that need to remotely extend a file and then "pump in" the data directly to the disk device.

SeManageVolumePrivilege.
The SeManageVolumePrivilege lets nonadministrators and remote users perform administrative disk tasks on a machine. In Windows XP, this privilege is used only to allow nonadministrators and remote users to make the SetFileValidData call. In the future, it will allow validated users to perform actions on the disk that are currently restricted to administrators.
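The VDL/EOF semantics described under SetFileValidData can be modeled in a few lines of C. The struct and functions below are a toy model of the rules (tail reads return zeroes; a write beyond VDL zero-fills the gap and advances VDL to the end of the write), not NTFS code:

```c
#include <string.h>

#define MAXEOF 64

/* Toy NTFS-like stream: eof is the allocated length, vdl marks how
 * far real data extends; the tail [vdl, eof) must read back as zeroes. */
struct stream { char data[MAXEOF]; int vdl; int eof; };

char stream_read(const struct stream *s, int off)
{
    return off < s->vdl ? s->data[off] : 0;   /* tail reads as zero */
}

void stream_write(struct stream *s, int off, char c)
{
    if (off > s->vdl)                          /* zero-fill the gap */
        memset(s->data + s->vdl, 0, (size_t)(off - s->vdl));
    s->data[off] = c;
    if (off + 1 > s->vdl)
        s->vdl = off + 1;                      /* VDL advances to write end */
}
```

SetFileValidData corresponds to moving `vdl` forward without the zero-fill, which is exactly why it exposes whatever bytes are already on disk and therefore requires SeManageVolumePrivilege.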

IoAllocateWorkItem and IoQueueWorkItem. These routines supersede ExInitializeWorkItem and ExQueueWorkItem, and are essential for supporting driver unloading.

IoGetDiskDeviceObject. Returns the disk device object associated with a file system volume device object. The device object need not be an actual disk, but is in general associated with storage.