Geethanjali College Of Engineering and Technology Department Of CSE

SUB: Computer Programming and Data Structures

Topics:

1) Abstract Data Type

An Abstract Data Type (ADT) is defined as a mathematical model of the data objects that make up a data type, as well as the functions that operate on these objects. There are no standard conventions for defining them; a broad division may be drawn between "imperative" and "functional" definition styles.

Imperative abstract data type definitions

In the "imperative" view, which is closer to the philosophy of imperative programming languages, an abstract data structure is conceived as an entity that is mutable, meaning that it may be in different states at different times. Some operations may change the state of the ADT; therefore, the order in which operations are evaluated is important, and the same operation on the same entities may have different effects if executed at different times, just like the instructions of a computer or the commands and procedures of an imperative language. To underscore this view, it is customary to say that the operations are executed or applied, rather than evaluated. The imperative style is often used when describing abstract algorithms.

Abstract variable

Imperative ADT definitions often depend on the concept of an abstract variable, which may be regarded as the simplest non-trivial ADT. An abstract variable V is a mutable entity that admits two operations:
• store(V, x), where x is a value of unspecified nature; and
• fetch(V), which yields a value;



fetch(V) always returns the value x used in the most recent store(V, x) operation on the same variable V.

As in many programming languages, the operation store(V, x) is often written V ← x (or some similar notation), and fetch(V) is implied whenever a variable V is used in a context where a value is required. Thus, for example, V ← V + 1 is commonly understood to be a shorthand for store(V, fetch(V) + 1). In this definition, it is implicitly assumed that storing a value into a variable U has no effect on the state of a distinct variable V. To make this assumption explicit, one could add the constraint that

if U and V are distinct variables, the sequence { store(U,x); store(V,y) } is equivalent to { store(V,y); store(U,x) }.
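A minimal sketch of this abstract variable in Python (the class and method names are illustrative, not from any standard library):

```python
class AbstractVariable:
    """Model of an abstract variable supporting store(V, x) and fetch(V).

    fetch() returns the value passed to the most recent store(), and
    distinct variables do not affect each other's state.
    """

    def __init__(self, initial=None):
        self._value = initial

    def store(self, x):
        self._value = x

    def fetch(self):
        return self._value


U = AbstractVariable()
V = AbstractVariable()
U.store(10)
V.store(20)
U.store(U.fetch() + 1)   # the shorthand U <- U + 1, spelled out
print(U.fetch())         # 11
print(V.fetch())         # 20; storing into U had no effect on V
```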

More generally, ADT definitions often assume that any operation that changes the state of one ADT instance has no effect on the state of any other instance (including other instances of the same ADT), unless the ADT axioms imply that the two instances are connected in that sense. For example, when extending the definition of abstract variable to include abstract records, the operation that selects a field from a record variable R must yield a variable V that is aliased to that part of R.

2) Bucket sorting

Bucket sort, or bin sort, is a sorting algorithm that works by partitioning an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sorting algorithm. It is a distribution sort, and is a cousin of radix sort in the most-to-least significant digit flavour. Bucket sort is a generalization of pigeonhole sort. Since bucket sort is not a comparison sort, the Ω(n log n) lower bound is inapplicable; the computational complexity estimates involve the number of buckets.

Bucket sort works as follows:
1. Set up an array of initially empty "buckets."
2. Scatter: go over the original array, putting each object in its bucket.
3. Sort each non-empty bucket.
4. Gather: visit the buckets in order and put all elements back into the original array.
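The four steps of bucket sort can be sketched as follows, assuming for simplicity that the inputs are numbers uniformly distributed in [0, 1) (the function name and bucket count are illustrative):

```python
def bucket_sort(values, num_buckets=10):
    """Bucket sort for numbers in [0, 1): scatter, sort each bucket, gather."""
    buckets = [[] for _ in range(num_buckets)]      # 1. empty buckets
    for v in values:                                # 2. scatter
        buckets[int(v * num_buckets)].append(v)
    for b in buckets:                               # 3. sort each bucket
        b.sort()                                    #    (any algorithm works)
    return [v for b in buckets for v in b]          # 4. gather


data = [0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]
print(bucket_sort(data))
```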

SUB: Advanced Data Structures (SECOND YEAR)

Topics:

1) Fundamental concepts of OOP

Not all of these concepts are found in all object-oriented programming languages. In particular, prototype-based programming does not typically use classes; as a result, a significantly different yet analogous terminology is used there to define the concepts of object and instance, and object-oriented programming that does use classes is sometimes called class-based programming.

Benjamin C. Pierce and some other researchers view as futile any attempt to distill OOP to a minimal set of features. He nonetheless identifies fundamental features that support the OOP programming style in most object-oriented languages:

Dynamic dispatch: when a method is invoked on an object, the object itself determines what code gets executed by looking up the method at run time in a table associated with the object. This feature distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances. It is a programming methodology that gives modular component development while at the same time being very efficient.

Encapsulation (or multi-methods, in which case the state is kept separate)

Subtype polymorphism

Object inheritance

Open recursion: a special variable (syntactically it may be a keyword), usually called this or self, that allows a method body to invoke another method body of the same object. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.

Similarly, in his 2003 book Concepts in Programming Languages, John C. Mitchell identifies four main features: dynamic dispatch, abstraction, subtype polymorphism, and inheritance.
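Dynamic dispatch and open recursion, as described here, can be illustrated with a small sketch (the class names are hypothetical):

```python
class Shape:
    def name(self):
        return "shape"

    def describe(self):
        # Open recursion: self.name() is late-bound, so a subclass's
        # override is invoked even from this base-class method body.
        return "a " + self.name()


class Circle(Shape):
    def name(self):          # overrides Shape.name, defined later
        return "circle"


for s in [Shape(), Circle()]:
    # Dynamic dispatch: the object itself determines which name() runs.
    print(s.describe())
```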

Michael Lee Scott in Programming Language Pragmatics considers only encapsulation, inheritance, and dynamic dispatch.

Instance: one can have an instance of a class; the instance is the actual object created at run-time. In programmer vernacular, the Lassie object is an instance of the Dog class. The set of values of the attributes of a particular object is called its state; the object consists of state and the behavior that is defined in the object's classes.

2) Trees and Graphs

A labeled tree with 6 vertices and 5 edges (vertices: v; edges: v − 1; chromatic number: 2).

In mathematics, more specifically graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one simple path. In other words, any connected graph without cycles is a tree. A forest is a disjoint union of trees. The various kinds of trees used as data structures in computer science are not really trees in this sense, but rather types of ordered directed trees.

A tree is an undirected simple graph G that satisfies any of the following equivalent conditions:
• G is connected and has no cycles.
• G has no cycles, and a simple cycle is formed if any edge is added to G.
• G is connected, and it is not connected anymore if any edge is removed from G.
• Any two vertices in G can be connected by a unique simple path.
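The equivalent conditions for a graph to be a tree can be checked programmatically; a sketch using the connected-with-n − 1-edges characterization (the helper name and input format are illustrative):

```python
from collections import deque

def is_tree(n, edges):
    """Check whether an undirected graph on vertices 0..n-1 is a tree.

    Uses the equivalence: G is a tree iff it has exactly n - 1 edges
    and is connected.
    """
    if len(edges) != n - 1:
        return False
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {0}
    queue = deque([0])
    while queue:                      # BFS to test connectivity
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n


print(is_tree(6, [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]))  # True
print(is_tree(3, [(0, 1), (1, 2), (2, 0)]))                  # False: a cycle
```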

If G has finitely many vertices, say n of them, then the above statements are also equivalent to any of the following conditions:
• G is connected and has n − 1 edges.
• G has no simple cycles and has n − 1 edges.
• G is connected and the 3-vertex complete graph K3 is not a minor of G.

SUB: UNIX Shell Programming

Topic: 1) Inter-process communication

In computing, inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data being communicated. IPC may also be referred to as inter-thread communication and inter-application communication.

There are several reasons for providing an environment that allows process cooperation:
• Information sharing
• Computational speedup
• Modularity
• Convenience
• Privilege separation

IPC, on par with the address space concept, is the foundation for address space independence/isolation.
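As a small illustration of the message-passing style of IPC on a Unix system, a parent and child process can exchange data through a pipe (a sketch; the message text is arbitrary):

```python
import os

# Parent and child communicate through a pipe: a one-way
# message-passing channel provided by the operating system.
r, w = os.pipe()
pid = os.fork()            # Unix-only

if pid == 0:               # child process: writes a message
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                      # parent process: reads the message
    os.close(w)
    msg = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(msg.decode())
```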

2) Signal processing

Signal processing is an area of electrical engineering and applied mathematics that deals with operations on or analysis of signals, in either discrete or continuous time, to perform useful operations on those signals. Signals of interest can include sound, images, time-varying measurement values and sensor data, for example biological data such as electrocardiograms, control system signals, telecommunication transmission signals such as radio signals, and many others. Signals are analog or digital electrical representations of time-varying or spatial-varying physical quantities. In the context of signal processing, arbitrary binary data streams and on-off signals are not considered as signals, but only analog and digital signals that are representations of analog physical quantities.

Categories of signal processing

Analog signal processing

Analog signal processing is for signals that have not been digitized, as in classical radio, telephone, radar, and television systems. This involves linear electronic circuits such as passive filters, active filters, additive mixers, integrators and delay lines. It also involves non-linear circuits such as compandors, multiplicators (frequency mixers and voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators and phase-locked loops.

Discrete-time signal processing

Discrete-time signal processing is for sampled signals that are considered as defined only at discrete points in time, and as such are quantized in time, but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.

Digital signal processing

Digital signal processing is for signals that have been digitized. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors (DSP chips). Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and look-up tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, infinite impulse response (IIR) filter, Wiener filter, adaptive filter and Kalman filter.

Fields of signal processing
• Statistical signal processing: analyzing and extracting information from signals and noise based on their stochastic properties
• Audio signal processing: for electrical signals representing sound, such as speech or music
• Speech signal processing: for processing and interpreting spoken words
• Image processing: in digital cameras, computers, and various imaging systems
• Video processing: for interpreting moving pictures
• Array processing: for processing signals from arrays of sensors
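One of the algorithms named above, the FIR filter, can be sketched directly from its defining sum y[n] = Σ h[k]·x[n − k] (the coefficients below are an illustrative 4-tap moving average, not from any particular design):

```python
def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum over k of h[k] * x[n - k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y


# 4-tap moving-average filter smoothing a rectangular pulse
h = [0.25, 0.25, 0.25, 0.25]
x = [0.0, 0.0, 4.0, 4.0, 4.0, 4.0, 0.0, 0.0]
print(fir_filter(x, h))  # [0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0]
```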

SUB: Software Engineering

Topic: 1) Quality and Quality Assurance

In the context of software engineering, software quality measures how well software is designed (quality of design), and how well the software conforms to that design (quality of conformance),[1] although there are several different definitions. It is often described as the 'fitness for purpose' of a piece of software. Whereas quality of conformance is concerned with implementation (see Software Quality Assurance), quality of design measures how valid the design and requirements are in creating a worthwhile product.[2]

Source code quality

A computer has no concept of "well-written" source code. However, from a human point of view, source code can be written in a way that has an effect on the effort needed to comprehend its behavior. Many source code programming style guides, which often stress readability and usually language-specific conventions, are aimed at reducing the cost of source code maintenance. Some of the issues that affect code quality include:
• Readability
• Ease of maintenance, testing, debugging, fixing, modification and portability
• Low complexity
• Low resource consumption: memory, CPU
• Number of compilation or lint warnings
• Robust input validation and error handling, established by software fault injection

Methods to improve the quality:
• Refactoring
• Code inspection or software review
• Documenting code

2) Software reliability

Software reliability is an important facet of software quality. It is defined as "the probability of failure-free operation of a computer program in a specified environment for a specified time".[6]

One of reliability's distinguishing characteristics is that it is objective and measurable, and can be estimated, whereas much of software quality is subjective criteria.[7] This distinction is especially important in the discipline of Software Quality Assurance. These measured criteria are typically called software metrics.

History

With software embedded into many devices today, software failure has caused more than inconvenience. Software errors have even caused human fatalities; the causes have ranged from poorly designed user interfaces to direct programming errors. An example of a programming error that led to multiple deaths is discussed in Dr. Leveson's paper [1] (PDF). This has resulted in requirements for the development of some types of software. In the United States, both the Food and Drug Administration (FDA) and the Federal Aviation Administration (FAA) have requirements for software development.

The goal of reliability

The need for a means to objectively determine software reliability comes from the desire to apply the techniques of contemporary engineering fields to the development of software. That desire is a result of the common

observation, by both lay-persons and specialists, that computer software does not work the way it ought to. In other words, software is seen to exhibit undesirable behaviour, up to and including outright failure, with consequences for the data which is processed, the machinery on which the software runs, and by extension the people and materials which those machines might negatively affect. The more critical the application of the software to economic and production processes, or to life-sustaining systems, the more important is the need to assess the software's reliability.

Regardless of the criticality of any single software application, it is also more and more frequently observed that software has penetrated deeply into most every aspect of modern life through the technology we use. It is only expected that this infiltration will continue, along with an accompanying dependency on the software by the systems which maintain our society. As software becomes more and more crucial to the operation of the systems on which we depend, the argument goes, it only follows that the software should offer a concomitant level of dependability. In other words, the software should behave in the way it is intended, or even better, in the way it should.

SUB: Computer Organization

Topic: 1) RAID Levels

RAID 0

Diagram of a RAID 0 setup.

A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped) with no parity information for redundancy. RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a small number of large virtual disks out of a large number of small physical ones. A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be 200 GB.

Although RAID 0 was not specified in the original RAID paper, an idealized implementation of RAID 0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID 0 implementations with more than two disks are also possible, though the group reliability decreases with member size.

Reliability of a given RAID 0 set is equal to the average reliability of each disk divided by the number of disks in the set. That is, reliability (as measured by mean time to failure (MTTF) or mean time between failures (MTBF)) is roughly inversely proportional to the number of members, so a set of two disks is roughly half as reliable as a single disk. If there were a probability of 5% that a disk would fail within three years, then in a two-disk array that probability would be upped to 1 − (1 − 0.05)² ≈ 9.75%.

The reason for this is that the file system is distributed across all disks. When a drive fails, the file system cannot cope with such a large loss of data and coherency, since the data is "striped" across all drives (the data cannot be recovered without the missing disk). Data can be recovered using special tools; however, this data will be incomplete and most likely corrupt, and data recovery is typically very costly and not guaranteed.

RAID 0 performance

While the block size can technically be as small as a byte, it is almost always a multiple of the hard disk sector size of 512 bytes. This lets each drive seek independently when randomly reading or writing data on the disk. How much the drives act independently depends on the access pattern from the file system level. For reads and writes that are larger than the stripe size, such as copying files or video playback, the disks will be seeking to the same position on each disk, so the seek time of the array will be the same as that of a single drive. For reads and writes that are smaller than the stripe size, such as database access, the drives will be able to seek independently. If the sectors accessed are spread evenly between the two drives, the apparent seek time of the array will be half that of a single drive (assuming the disks in the array have identical access time characteristics). The transfer speed of the array will be the transfer speed of all the disks added together, limited only by the speed of the RAID controller.
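The idealized block striping described here can be sketched as follows (the function name, block size and sample data are illustrative):

```python
def stripe(data, num_disks, block_size):
    """Idealized RAID 0: split data into equal-sized blocks and deal
    them round-robin across the member disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), block_size):
        disks[(i // block_size) % num_disks].extend(data[i:i + block_size])
    return disks


data = bytes(range(8))
d0, d1 = stripe(data, num_disks=2, block_size=2)
print(list(d0))  # [0, 1, 4, 5]
print(list(d1))  # [2, 3, 6, 7]
```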

RAID 0 is useful for setups such as large read-only NFS servers where mounting many disks is time-consuming or impossible and redundancy is irrelevant. RAID 0 is also used in some gaming systems where performance is desired and data integrity is not very important. However, real-world tests with games have shown that RAID 0 performance gains are minimal, although some desktop applications will benefit.[1][2] Another article examined these claims and concludes: "Striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance."[3] Note that these performance scenarios are in the best case, with optimal access patterns.

RAID 1

Diagram of a RAID 1 setup.

A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks (see diagram), which increases reliability geometrically over a single disk. Since each member contains a complete copy of the data and can be addressed independently, ordinary wear-and-tear reliability is raised by the power of the number of self-contained copies. As a trivial example, consider a RAID 1 with two identical models of a disk drive, each with a 5% probability that the disk would fail within three years. Provided that the

failures are statistically independent, the probability of both disks failing during the three-year lifetime is 0.25%. Thus, the probability of losing all data is 0.25% over a three-year period if nothing is done to the array. If the first disk fails and is never replaced, then there is a 5% chance the data will be lost. If only one of the disks fails, no data would be lost. As long as a failed disk is replaced before the second disk fails, the data is safe.
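The failure probabilities used in the RAID 0 and RAID 1 examples follow directly from independence and can be checked quickly:

```python
# Combined failure probabilities for two independent disks, each with
# probability p of failing within the period (illustrative numbers).
p = 0.05  # 5% per-disk failure probability over three years

# RAID 0 (striping): the array is lost if ANY disk fails.
raid0_loss = 1 - (1 - p) ** 2

# RAID 1 (mirroring): data is lost only if BOTH disks fail.
raid1_loss = p ** 2

print(f"RAID 0: {raid0_loss:.4f}")  # 0.0975 -> 9.75%
print(f"RAID 1: {raid1_loss:.4f}")  # 0.0025 -> 0.25%
```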

include segments of both common operational and common user databases, as well as data generated and used only at a user's own site.

THIRD YEAR

SUB: Formal Languages and Automata Theory

Topic: Basic tools of compiler

A diagram of the operation of a typical multi-language, multi-target compiler.

A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as

object code). The most common reason for wanting to transform source code is to create an executable program.

The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g., assembly language or machine code). If the compiled program can only run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source-to-source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser.

A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis, code generation, and code optimization. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around, and compiler implementors invest a lot of time ensuring the correctness of their software.
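A sketch of the first of those operations, lexical analysis, for a toy expression language (the token names and grammar are illustrative, not from any particular compiler):

```python
import re

# Token specification for a toy expression language.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def lex(source):
    """Lexical analysis: turn a source string into (kind, text) tokens."""
    tokens = []
    for m in TOKEN_RE.finditer(source):
        if m.lastgroup != "SKIP":        # drop whitespace
            tokens.append((m.lastgroup, m.group()))
    return tokens


print(lex("x = 2 * (y + 41)"))
```

A parser would then consume this token stream to build a syntax tree, which is where parser generators (compiler-compilers) come in.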

SUB: Design and Analysis of Algorithms

Topic: Optimal storage on tapes

Types of storage

Solid-state memory devices

Digital computers make use of two forms of memory, known as RAM and ROM, and although the most common form today is RAM, designed to retain data while the computer is powered on, this was not always the case. Nor is active memory the only form used; in particular, passive memory devices are now in common use in digital cameras. These devices tend to be extraordinarily resilient: in a 2005 destructive test, a USB key survived boiling in a custard pie, being run over by a truck and fired from a mortar at a brick wall.[3] Although physically damaged after the final test, some deft soldering restored the device and data was successfully retrieved.

• Magnetic, or ferrite core: data retention is dependent on the magnetic properties of iron and its compounds.
• PROM, or programmable read-only memory, stores data in a fixed form during the manufacturing process.
• EPROM, or erasable programmable read-only memory, is similar to PROM but can be cleared by exposure to ultraviolet light.
• EEPROM, or electrically erasable programmable read-only memory, is the format used by flash memory devices and can be erased and rewritten electronically, with data retention dependent on the life expectancy of the device itself.

Magnetic media

Magnetic tapes consist of narrow bands of a magnetic medium bonded in paper or plastic. The magnetic medium passes across a semi-fixed head which reads or writes data. Typically magnetic media has a maximum lifetime of about 50 years,[4] although this assumes optimal storage conditions; life expectancy can decrease

rapidly, depending on storage conditions and the resilience and reliability of hardware components.

• magnetic tape reels
• magnetic stripe cards
• magnetic cards
• cassette tapes
• video cassette tapes

Magnetic disks and drums include a rotating magnetic medium combined with a movable read/write head.

• floppy disks
• zip drives
• hard disks and drums

Non-magnetic media

• punched paper-tape
• punched cards
• optical media (rotating media combined with a moveable read/write head comprising a laser), such as:
  o pressed CD-ROMs
  o WORMs, such as CD-R
  o DVDs
  o multi-layer DVDs

Printing technology

Although not a digital storage medium in itself, printing hard-copies of documents and images remains a popular means of representing digital data, and possibly acquires the qualities associated with original documents, especially their potential for endurance. More recent advances in printer technology have raised the quality of photographic images in particular. Unfortunately, the permanence of printed documents cannot be easily discerned from the documents themselves.

• wet-ribbon inked printers
• heat-sensitive papers, such as FAX rolls

SUB: Software Testing Methodologies

Topic: Quality Assurance certifications

Quality assurance, or QA for short, refers to a program for the systematic monitoring and evaluation of the various aspects of a project, service, or facility to ensure that standards of quality are being met. Unfortunately, QA cannot absolutely guarantee the production of quality products, but it makes this more likely. Two key principles characterise QA: "fit for purpose" (the product should be suitable for the intended purpose) and "right first time" (mistakes should be eliminated). QA is more than just testing the quality of aspects of a product, service or facility; it analyzes the quality to make sure it conforms to specific requirements and complies with established plans. QA includes regulation of the quality of raw materials, assemblies, products and components; services related to production; and management, production and inspection processes.

It is important to realize that quality is determined by the program sponsor and the intended users, clients or customers, not by society in general: it is not the same as 'expensive' or 'high quality'. Even goods with low prices can be considered quality items if they meet a market need.

Early efforts to control the quality of production

Early civil engineering projects needed to be built from specifications: for example, the four sides of the base of the Great Pyramid of Giza were required to be perpendicular to within 3.5 arcseconds. During the Middle Ages, guilds adopted responsibility for quality control of their members, setting and maintaining certain standards for guild membership. Royal governments purchasing material were interested in quality control as customers. For this reason, King John of England appointed William Wrotham to

report about the construction and repair of ships. Centuries later, Samuel Pepys, Secretary to the British Admiralty, appointed multiple such overseers.

Prior to the extensive division of labor and mechanization resulting from the Industrial Revolution, it was possible for workers to control the quality of their own products. Working conditions then were arguably more conducive to professional pride.

SUB: Computer Graphics

Topic: Color Models

A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called a color space. This section describes ways in which human color vision can be modeled.

Tristimulus color space

3D representation of the human color space.

One can picture this space as a region in three-dimensional Euclidean space if one identifies the x, y, and z axes with the stimuli for the long-wavelength (L), medium-wavelength (M), and short-wavelength (S) receptors. The origin, (S, M, L) = (0, 0, 0), corresponds to black. The human color space is a horse-shoe-shaped cone such as shown here (see also CIE chromaticity diagram below), extending from the origin to, in principle, infinity. White has no definite position in this diagram; rather, it is defined according to the color temperature or white balance as desired or as available from ambient lighting. In practice, the human color receptors will be saturated or even be damaged at extremely high light intensities,

describe the possible colors (gamut) that can be constructed from the red. green. The human tristimulus space has the property that additive mixing of colors corresponds to the adding of vectors in this space.  CIE XYZ color space Main article: CIE 1931 color space CIE 1931 Standard Colorimetric Observer functions between 380 nm and 780 nm (at 5 nm intervals). One can observe this by watching the screen of an overhead projector during a meeting: one sees black lettering on a white background. created by the International Commission on Illumination in 1931. This makes it easy to. See also color constancy. even though the "black" has in fact not become darker than the white screen on which it is projected before the projector was turned on. The most saturated colors are located at the outer rim of the region. The latter color names refer to orange and white light respectively. These data were measured for human observers and a 2Page 24 . with an intensity that is lower than the light from surrounding areas. with brighter colors farther removed from the origin.Geethanjali College Of Engineering and Technology Department Of CSE but such behavior is not part of the CIE color space and neither is the changing color perception at low light levels (see: Kruithof curve). The "black" areas have not actually become darker but appear "black" relative to the higher intensity "white" projected onto the screen around it. and blue primaries in a computer display. As far as the responses of the receptors in the eye are concerned. there is no such thing as "brown" or "gray" light. One of the first mathematically defined color spaces is the CIE XYZ color space (also known as CIE 1931 color space). for example.

However. Page 25 . Y and Z sensitivity curves can be measured with a reasonable accuracy. which is brighter than blue.y) = (0. the overall luminosity curve (which in fact is a weighted sum of these three curves) is subjective. y = Y/(X + Y + Z). In this diagram. The sensitivity curves in the CIE 1931 and 1964 xyz color space are scaled to have equal areas under the curves. This new color space would have a different shape. Along the same lines. blue has a strong coloring power when mixed with green or red. Even though the pure blue appears to be very dark and hardly discernible from black when observed from a distance. and Z curves are arbitrary. The shapes of the individual X. x and y are projective coordinates and the colors of the chromaticity diagram occupy a region of the real projective plane. Note that the tabulated sensitivity curves have a certain amount of arbitrariness in them. the relative magnitudes of the X. and Z tristimulus values under Human tristimulus color space above according to: x = X/(X + Y + Z).0. Y. Y. since it involves asking a test person whether two light sources have the same brightness. light with a flat energy spectrum corresponds to the point (x. and Z are obtained by integrating the product of the spectrum of a light beam and the published color-matching functions. green is brighter than red. Y.333. even if they are in completely different colors. Blue and red wavelengths do not contribute strongly to the luminosity. Mathematically.Geethanjali College Of Engineering and Technology Department Of CSE degree field of view. which is illustrated by the following example: red gree n blu e red+gree n green+blu e red+blu e red+green+blu e zero light For someone with normal color vision. One could as well define a valid color space with an X sensitivity curve that has twice the amplitude. In 1964. The values for X.333). x and y are related to the X. supplemental data for a 10-degree field of view were published. 
The figure on the right shows the related chromaticity diagram with wavelengths in nanometers. Because the CIE sensitivity curves have equal areas under the curves.
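The chromaticity relation above is simple enough to compute directly. The following is a small illustrative sketch (the function name is ours, not part of the CIE standard):

```python
# Chromaticity coordinates from tristimulus values, as defined above:
#   x = X/(X+Y+Z),  y = Y/(X+Y+Z)
# Equal tristimulus values (a flat-spectrum light, under the equal-area
# normalization of the sensitivity curves) land near (0.333, 0.333).
def chromaticity(X, Y, Z):
    total = X + Y + Z
    return X / total, Y / total

x, y = chromaticity(1.0, 1.0, 1.0)
# x and y both come out to 1/3, the flat-spectrum point
```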

With some forms of "red-green color blindness," the green is very slightly brighter than the blue, and the red is so dark it can barely be made out. Red traffic lights in bright daylight appear broken (no light), and the green traffic light appears dirty white and hard to distinguish from night street lights.

In the two-dimensional xy representation, all possible additive mixtures of two colors A and B form a straight line. However, the additive mixture of two colors does generally not lie on the mid-point of this line. The CIE xyz color space is a prism, as opposed to the cone-shaped tristimulus space above.

RGB color model
Main article: RGB color model

Media that transmit light (such as television) use additive color mixing with primary colors of red, green, and blue, each of which stimulates one of the three types of the eye's color receptors with as little stimulation as possible of the other two. This is called "RGB" color space. Mixtures of light of these primary colors cover a large part of the human color space and thus produce a large part of human color experiences. This is why color television sets or color computer monitors need only produce mixtures of red, green, and blue light. See Additive color.

Other primary colors could in principle be used, but with red, green and blue the largest portion of the human color space can be captured. Unfortunately there is no exact consensus as to what loci in the chromaticity diagram the red, green, and blue colors should have, so the same RGB values can give rise to slightly different colors on different screens.

Recognizing that the geometry of the RGB model is poorly aligned with the color-making attributes recognized by human vision, computer graphics researchers developed two alternate representations of RGB in the late 1970s: HSV (hue, saturation, value) and HSL (hue, saturation, lightness). HSV and HSL improve on the color cube representation of RGB by arranging colors of each hue in a radial slice around a central axis of neutral colors, which ranges from black at the bottom to white at the top. The fully saturated colors of each hue then lie in a circle, a color wheel.

HSV models itself on paint mixture, with its saturation and value dimensions resembling mixtures of a brightly colored paint with white and black, respectively. HSL tries to resemble more perceptual color models such as NCS or Munsell. It places the fully saturated colors in a circle of lightness ½, so that lightness 1 always implies white and lightness 0 always implies black.
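As a concrete illustration of these cylindrical representations, Python's standard colorsys module converts between RGB and HSV (this example is our own addition, not part of the notes):

```python
# Hue/saturation/value of two RGB colors, using the standard colorsys module.
import colorsys

h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)     # pure red
# fully saturated (s == 1.0) at full value (v == 1.0), hue angle 0

h2, s2, v2 = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)  # a mid gray
# grays sit on the central neutral axis, so saturation is 0
```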

SUB: Digital Logic Design

Simple Programmable Logic Devices

Simple Programmable Logic Devices (SPLDs)
– Developed in the 1970s (thus, pre-dates FPGAs)
– Prefabricated IC with a large AND-OR structure
– Connections can be "programmed" to create a custom circuit

• The circuit shown can implement any 3-input function of up to 3 terms, e.g., F = abc + a'c
• Fuse based – a "blown" fuse removes a connection (figure labels: Unblown Fuse, Blown Fuse)
• Memory based – a 1 creates a connection
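The AND-OR structure can be mimicked in software. The sketch below is our own illustration (not from the notes); it evaluates the programmed example function F = abc + a'c:

```python
# Software sketch of a programmed AND-OR (sum-of-products) plane.
# Each product term ANDs its selected literals; the OR plane combines them.
def F(a, b, c):
    term1 = a and b and c          # product term: abc
    term2 = (not a) and c          # product term: a'c
    return bool(term1 or term2)    # OR plane output

# Full truth table of F = abc + a'c over the 8 input combinations:
table = [(a, b, c, F(a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
```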

Finite-State Machine (FSM)

• A way to describe the desired behavior of a sequential circuit
• Use a state diagram to list states and transitions among states
• Example: make x toggle (0 to 1, or 1 to 0) every clock cycle
• Two states: "Off" (x=0) and "On" (x=1)
• Transition from Off to On, or On to Off, on each rising clock edge
• An arrow with no starting state points to the initial state (entered when the circuit first starts, by asynchronous reset)
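The toggle behavior can be simulated directly. A minimal sketch (the function and state names are ours):

```python
# Simulate the two-state toggle FSM: states "Off" (x=0) and "On" (x=1),
# switching on every rising clock edge, starting in the initial state Off.
def toggle_fsm(n_edges):
    state, outputs = "Off", []
    for _ in range(n_edges):                 # one rising clock edge per step
        state = "On" if state == "Off" else "Off"
        outputs.append(1 if state == "On" else 0)
    return outputs
```

Four clock edges produce x = 1, 0, 1, 0: the output toggles every cycle, as the state diagram specifies.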

Sub: MFCS

The Mersenne primes

The Mersenne primes are, by far, the largest primes known. A Mersenne number has the form n = 2^k - 1, and we're wondering if n is prime. Remember that k must be prime for n to be prime; hereinafter, we can assume k is odd. Let e1, e2, e3, etc. be the sequence where e1 = 4 and e(i+1) = ei^2 - 2 mod n. The number n is prime iff e(k-1) = 0. This is the Mersenne prime test, and it is an efficient algorithm that demonstrates primality.

Since k is odd, let z = 2^((k+1)/2), and note that z is a square root of 2 mod n: z^2 = 2^(k+1) = 2×(n+1) = 2 mod n. Thus 2 is a square mod n. To deal with 3, note that 2^k = 2 mod 3 for odd k, so n is 1 mod 3, and by reciprocity 3 is not a square mod n. Hence 6 = 2×3 is not a square mod n.

Consider the quadratic x^2 - zx - 1, which has discriminant z^2 + 4 = 6. Let a and b be the roots of the quadratic; a + b = z and a×b = -1. The roots lie in Zn iff the quadratic splits, and since 6 is not a residue mod n, it does not. Let R be the ring of polynomials mod n over the quadratic x^2 - zx - 1. The ring R has n^2 - 1 nonzero elements, and n^2 - 1 factors into n - 1 times n + 1.

Prove by induction that ei = a^(2^i) + b^(2^i). For the base case, evaluate 2 = z^2 = (a+b)^2 = a^2 + 2ab + b^2; the 2ab becomes -2, so a^2 + b^2 = 4 = e1. (As a check mod 3: e1 = 4 = 1 mod 3, and each ei beyond e1 = 2 mod 3.)

Now suppose e(k-1) = 0, so a^(2^(k-1)) = -b^(2^(k-1)). Remember that ab = -1; multiply through by an appropriate power of a and get a^(2^k) = -1. Squaring, a^(2^(k+1)) = 1, so the order of a is 2^(k+1), which is certainly larger than the square root of n. Let s = 2×(n+1) = 2^(k+1), set u = a, and note u^s = 1 while u^(s/2) = -1. Get ready to apply the n+1 prime test: n+1 is easily factored, as it is a power of 2, and the conditions of the theorem are satisfied, so n is prime. Technically we need to run one trial division, but it's n into n, so we can skip that step. Looks like we're on the right track.

Conversely, let n be prime. Since 6 is not a square, the quadratic is irreducible, giving a finite field of order n^2. Every finite field extension is Galois, so there is but one nontrivial field automorphism, which can be expressed as conjugation, swapping the roots a and b, or as the Frobenius automorphism, raising to the nth power. Thus a^n has to be a conjugate of a: the polynomial with roots a and a^n is the polynomial with roots a and b, giving a^n = b, and similarly b^n = a. Remember that ab = -1 and 2^k = n + 1. With this in mind, consider the square of e(k-1):

    e(k-1)^2 = a^(2^k) + b^(2^k) + 2 = a^n×a + b^n×b + 2 = ba + ab + 2 = 0

In a field, this square is 0 iff e(k-1) is 0. Thus, when n is prime, e(k-1) = 0; and contrapositively, if e(k-1) is not 0, n is not prime.

Procedure: Given an exponent k, derive n = 2^k - 1, and run the strong pseudoprime test to see if n is prime. If it is, evaluate e(k-1) to prove n is prime. This entails k iterations of a square mod n. The mod is a simple operation, given the structure of n: assuming the numbers are in binary, chop the number into two pieces and add them together. By analogy, reducing a decimal number mod 999 is easy; for example, 292,681 becomes 292 + 681 = 973 mod 999. So the only difficult step is the square. When k is in the millions, and n is millions of bits long, computers use the FFT multiplication algorithm, which runs much faster than k^2. Using this, the algorithm has complexity k^2×log(k).
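The procedure above is short enough to state as runnable code. A sketch, for odd prime exponents k > 2 (small-number demonstration only; real searches use FFT multiplication as noted above):

```python
# Lucas-Lehmer-style Mersenne test for n = 2**k - 1, following the text:
# iterate e_{i+1} = e_i**2 - 2 mod n from e_1 = 4; n is prime iff e_{k-1} = 0.
def mersenne_is_prime(k):
    n = 2**k - 1
    e = 4
    for _ in range(k - 2):        # advances e_1 up to e_{k-1}
        e = (e * e - 2) % n
    return e == 0
```

For example, k = 13 gives the Mersenne prime 8191, while k = 11 gives 2047 = 23 × 89, which the test rejects.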

THE FIBONACCI AND LUCAS NUMBERS

The great Italian mathematician Leonardo of Pisa (c. 1170-1250), who is known today as Fibonacci (an abbreviation of filius Bonacci), expanded on the Arabic algebra of North Africa and introduced algebra into Europe. The solution of a problem in his book Liber Abacci uses the sequence

    (F) 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

One of the many applications of this Fibonacci sequence is a theorem about the number of steps in an algorithm for finding the greatest common divisor of a pair of large integers. We study the sequence here because it provides a wonderful opportunity for discovering mathematical patterns.

The numbers shown in (F) are just the beginning of the unending Fibonacci sequence. The rule for obtaining more terms is as follows:

RECURSIVE PROPERTY. The sum of two consecutive terms in (F) is the term immediately after them. For example, the term after 55 in (F) is 34 + 55 = 89, and the term after that is 55 + 89 = 144.

To aid in stating properties of the Fibonacci sequence, we use the customary notation F0, F1, F2, F3, ... for the integers of the Fibonacci sequence. That is, F0 = 0, F1 = 1, F2 = F0 + F1 = 1, F3 = F1 + F2 = 2, F4 = F2 + F3 = 3, F5 = F3 + F4 = 5, F6 = F4 + F5 = 8, and so on. When Fn stands for some term of the sequence, the term just after Fn is represented by Fn+1, the term after Fn+1 is Fn+2, and so on. Also, the term just before Fn is Fn-1, the term just before Fn-1 is Fn-2, and so on.

We now can define the Fibonacci numbers formally as the sequence F0, F1, F2, ... having the two following properties:

    INITIAL CONDITIONS: F0 = 0 and F1 = 1.
    RECURSION RULE: Fn + Fn+1 = Fn+2 for n = 0, 1, 2, ...

Next let Sn stand for the sum of the Fibonacci numbers from F0 through Fn. That is,

    S0 = F0 = 0
    S1 = F0 + F1 = 0 + 1 = 1
    S2 = F0 + F1 + F2 = 0 + 1 + 1 = 2
    S3 = F0 + F1 + F2 + F3 = S2 + F3 = 2 + 2 = 4

and in general, Sn = F0 + F1 + F2 + ... + Fn = Sn-1 + Fn. We tabulate some of the values and look for a pattern.
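A few lines of code carry out the suggested tabulation (our own sketch; the pattern it reveals, Sn = Fn+2 - 1, is a well-known Fibonacci identity):

```python
# Tabulate F_n and the running sums S_n = F_0 + ... + F_n.
def fib_and_sums(n_max):
    F = [0, 1]
    while len(F) < n_max + 3:                # extra terms so F_{n+2} is available
        F.append(F[-1] + F[-2])
    S, running = [], 0
    for n in range(n_max + 1):
        running += F[n]
        S.append(running)
    return F, S

F, S = fib_and_sums(8)
# S_n turns out to equal F_{n+2} - 1 for every tabulated n
```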

Sub: Advanced Data Structures

Reference counting smart pointers

Reference counting is a powerful and often used strategy. In the first part of this section a reference counting implementation will be introduced. In the second part some issues that arise are discussed.

Reference counting implementation

The target of the section is a simplified lightweight version of a reference counting smart pointer. There are two things that are shared between the instances of smart pointers that reference the same data: the data pointer and the reference counter for that data. The member count is of type int *. That is because incrementation or decrementation of the counter in one pointer must be visible in all pointers for the appropriate data.

Figure 1: Smart pointer structure

To indicate a pointer with no target (a null pointer) the shared counter pointer and data pointer are set to null. There are only two possible cases: data and counter pointer are both null or both not null.

Figure 2: Null smart pointer

The following member functions define code for incrementing and decrementing the shared counter:

    void ref_ptr::increment(void)
    {
        if (ptr == 0)
            return;
        ++(*count);
    }

    void ref_ptr::decrement(void)
    {
        if (ptr == 0)
            return;
        if (--(*count) == 0)
        {
            delete count;
            delete ptr;
        }
    }

The Boost Library

The Boost Library comes with several smart pointers which will be part of future versions of the C++ Standard Library.

scoped_ptr: The scoped pointer is a lightweight and simple pointer for sole ownership of single objects. It stores a pointer to a dynamically allocated object, which is reclaimed either on destruction of the scoped pointer or via an explicit reset(). It has no shared ownership or ownership transfer semantics, is non-copyable, and is thus safer and faster for objects that should not be copied. The scoped pointer does not meet the CopyConstructible and Assignable requirements for STL containers. There is also a version called scoped_array which is for sole ownership of arrays.

shared_ptr: The shared pointer is an external reference counting pointer with ownership shared among multiple pointers. The shared pointer does meet the requirements for STL containers. There is also a version called shared_array which is for array ownership among multiple pointers.

intrusive_ptr: The intrusive pointer is a shared ownership pointer (like the shared pointer) with an embedded reference count; increment() and decrement() operations are forwarded to user-defined member functions. It is a lightweight shared pointer for objects which already have internal reference counting. Thus, the intrusive pointer is smaller and faster than the shared pointer. The intrusive pointer does meet the requirements for STL containers.

weak_ptr: The weak pointer is a non-owning observer of a shared-pointer-owned object. There is no need for an appropriate weak_array variant because weak pointers never reclaim the data. The weak pointer does meet the requirements for STL containers.

SUB: Microprocessors and Interfacing

Working of microprocessor

Microprocessor Progression: Intel

The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip, introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared around 1982). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III to the Pentium 4. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium 4 can execute any piece of code that ran on the original 8088, but it does it about 5,000 times faster! The following table helps you to understand the differences between the different processors that Intel has introduced over the years.

    Name                  Date  Transistors  Microns  Clock speed  Data width            MIPS
    8080                  1974        6,000   6         2 MHz      8 bits                0.64
    8088                  1979       29,000   3         5 MHz      16 bits, 8-bit bus    0.33
    80286                 1982      134,000   1.5       6 MHz      16 bits               1
    80386                 1985      275,000   1.5      16 MHz      32 bits               5
    80486                 1989    1,200,000   1        25 MHz      32 bits              20
    Pentium               1993    3,100,000   0.8      60 MHz      32 bits, 64-bit bus  100
    Pentium II            1997    7,500,000   0.35    233 MHz      32 bits, 64-bit bus  ~300
    Pentium III           1999    9,500,000   0.25    450 MHz      32 bits, 64-bit bus  ~510
    Pentium 4             2000   42,000,000   0.18     1.5 GHz     32 bits, 64-bit bus  ~1,700
    Pentium 4 "Prescott"  2004  125,000,000   0.09     3.6 GHz     32 bits, 64-bit bus  ~7,000

Compiled from The Intel Microprocessor Quick Reference Guide and TSCP Benchmark Scores.

Information about this table:
• The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
• Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
• Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
• Clock speed is the maximum rate that the chip can be clocked at. Clock speed will make more sense in the next section.
• Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute four instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction. In many cases, the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
• MIPS stands for "millions of instructions per second" and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.
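The four-instruction point can be made concrete with a little simulation (our own sketch, not from the notes): an 8-bit ALU adds a 32-bit pair one byte at a time, propagating the carry between steps.

```python
# Add two 32-bit numbers using only 8-bit additions, as an 8-bit ALU must:
# four add-with-carry steps, low byte first.
def add32_with_8bit_alu(x, y):
    result, carry = 0, 0
    for byte in range(4):                    # the four 8-bit ALU operations
        xb = (x >> (8 * byte)) & 0xFF
        yb = (y >> (8 * byte)) & 0xFF
        s = xb + yb + carry
        carry = s >> 8                       # carry out of this byte
        result |= (s & 0xFF) << (8 * byte)
    return result                            # wraps mod 2**32, like the hardware
```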

SUB: Operating systems

Distributed process scheduling

The primary objective of scheduling is to enhance overall system performance metrics such as process completion time and processor utilization. The existence of multiple processing nodes in distributed systems presents a challenging problem for scheduling processes onto processors and vice versa.

A system performance model

Partitioning a task into multiple processes for execution can result in a speedup of the total task completion time. The speedup factor S is a function

    S = F(Algorithm, System, Schedule)

S can be written as:

    S = OSPT/CPT = (OSPT/OCPTideal) × (OCPTideal/CPT) = Si × Sd

where
• OSPT = optimal sequential processing time
• CPT = concurrent processing time
• OCPTideal = optimal concurrent processing time
• Si = the ideal speedup
• Sd = the degradation of the system due to actual implementation compared to an ideal system

Si can be rewritten as:

    Si = (RC/RP) × n

where

    RP = (sum of Pi for i = 1..m) / OSPT    and    RC = (sum of Pi for i = 1..m) / (OCPTideal × n)

and n is the number of processors. The term "sum of Pi" is the total computation of the concurrent algorithm, where m is the number of tasks in the algorithm.

Sd can be rewritten as:

    Sd = 1 / (1 + rho)    where    rho = (CPT - OCPTideal) / OCPTideal

RP is the Relative Processing: how much loss of speedup is due to the substitution of the best sequential algorithm by an algorithm better adapted for concurrent implementation. RC is the Relative Concurrency, which measures how far from optimal the usage of the n processors is; it reflects how well adapted the given problem and its algorithm are to the ideal n-processor system.

The final expression for speedup S is

    S = (RC/RP) × (1/(1 + rho)) × n

The term rho is called the efficiency loss. It is a function of scheduling and the system architecture. It could be decomposed into two independent terms, rho = rho_sched + rho_syst, but this is not easy to do since scheduling and the architecture are interdependent. The best possible schedule on a given system hides the communication overhead (overlapping it with other computations).

The unified speedup model integrates three major components:
• algorithm development
• system architecture
• scheduling policy
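A numeric sketch ties the formulas together (the example values below are made up for illustration): plugging RP, RC, and rho into S = (RC/RP)(1/(1+rho))n collapses back to the direct definition OSPT/CPT.

```python
# Speedup from the model above: RP = sum(P)/OSPT, RC = sum(P)/(OCPT_ideal*n),
# rho = (CPT - OCPT_ideal)/OCPT_ideal, and S = (RC/RP) * (1/(1+rho)) * n.
def speedup(P, OSPT, OCPT_ideal, CPT, n):
    total = sum(P)
    RP = total / OSPT
    RC = total / (OCPT_ideal * n)
    rho = (CPT - OCPT_ideal) / OCPT_ideal
    return (RC / RP) * (1 / (1 + rho)) * n

# Example: four tasks of 10 units each, run on n = 2 processors.
S = speedup(P=[10, 10, 10, 10], OSPT=36, OCPT_ideal=20, CPT=24, n=2)
# algebraically this equals OSPT/CPT = 36/24 = 1.5
```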

Distributed computing

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one computer.[3]

The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. In this article, the computational entities are called computers or nodes.

While there is no single definition of a distributed system, the following defining properties are commonly used:
• There are several autonomous computational entities, each of which has its own local memory.
• The entities communicate with each other by message passing.

A distributed system may have a common goal, such as solving a large computational problem. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users. Other typical properties of distributed systems include the following:

• The system has to tolerate failures in individual computers.
• The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.
• Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.


SUB: Compiler Design

Recursive Descent Parsing

The manufacturing of an abstract syntax tree (AST) for the above grammar can be thought of as a factory method, makeAST(), of some abstract factory, IASTFactory. How the AST is created is a variant, as there are many ways to parse the input stream. We shall implement a special parsing technique called "recursive descent parsing" (RDP). In RDP, we use a "tokenizer" to scan and "tokenize" the input from left to right, and build the AST from the top down, based on the value of the tokens.

Building an AST from the top down means building it by first looking at the grammar rule for the start symbol E and trying to create an AST that represents E.
• As we look at the grammar rule for the start symbol E in the above, we can see that in order for RDP to make an AST, it needs to know how to make a Leaf AST and a BinOp AST.
• Looking at the rule for Leaf, we see that in order to make a Leaf AST, RDP needs to know how to make a VarLeaf and an IntLeaf. Making VarLeaf and IntLeaf is easy and can be done directly.
• Looking at the rule for BinOp, we see that in order to make a BinOp AST, RDP needs to know how to make an AddOp AST and a MulOp AST.
• Looking at the rule for AddOp and MulOp, we see that in order to make an AddOp AST and a MulOp AST, RDP needs to know how to make an AST, which comes from the rule for E. This completes the recursive construction.

Here is some pseudo-code.

makeAST():
• asks the tokenizer for the next token t, and then asks t to call the appropriate factory method
  o the int token and the id token call makeLeaf()
  o the left parenthesis token calls makeBinOp()
  o all other tokens should flag an error!
  o does the above "smell" like the visitor pattern to you or not? Who are the hosts and who are the visitors?

makeLeaf():
• the int token calls makeIntLeaf(), the id token calls makeVarLeaf(), and all other tokens should flag an error.

makeIntLeaf(): create the IntLeaf directly.

makeVarLeaf(): create the VarLeaf directly.

makeBinOp():
• asks the tokenizer for the next token t, then asks t to call the appropriate factory method
  o the + token calls makeAddOp()
  o the * token calls makeMulOp()
  o all other tokens should flag an error!

makeAddOp():
• calls makeAST() to create the first AST
• calls makeAST() again to create the second AST
• asks the tokenizer for the next token t, then asks t to create the Add AST:
  o the right parenthesis token will instantiate an appropriate Add AST
  o all other tokens should flag an error.
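The pseudo-code maps naturally onto a tiny recursive-descent parser. The sketch below assumes a grammar of the shape E -> int | id | "(" E op E ")", consistent with the factory methods named above (the exact grammar is not reproduced in these notes), and a trivial whitespace tokenizer:

```python
# Minimal recursive-descent parse of E -> int | id | "(" E op E ")".
def parse(tokens):
    t = tokens.pop(0)                        # ask the tokenizer for the next token
    if t == "(":                             # left parenthesis -> a BinOp AST
        left = parse(tokens)                 # first sub-AST
        op = tokens.pop(0)                   # + -> AddOp, * -> MulOp
        if op not in ("+", "*"):
            raise SyntaxError("expected + or *")
        right = parse(tokens)                # second sub-AST
        if tokens.pop(0) != ")":
            raise SyntaxError("expected )")
        return ("BinOp", op, left, right)
    if t.isdigit():                          # int token -> IntLeaf
        return ("IntLeaf", int(t))
    if t.isidentifier():                     # id token -> VarLeaf
        return ("VarLeaf", t)
    raise SyntaxError("unexpected token: " + t)

ast = parse("( 3 + ( x * 4 ) )".split())
```

Each grammar alternative became one branch, and each nonterminal one recursive call, mirroring makeAST()/makeBinOp() above.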

Brute Force Parsing Technique

Bottom-up parsing (also known as shift-reduce parsing) is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first, and then to infer higher-order structures from them. It attempts to build trees upward toward the start symbol. It occurs in the analysis of both natural languages and computer languages.

In linguistics, an example of bottom-up parsing would be analyzing a sentence by identifying words first, and then using properties of the words to infer grammatical relations and phrase structures to build a parse tree of the complete sentence.

In programming language compilers, bottom-up parsing is a parsing method that works by identifying terminal symbols first, and combines them successively to produce nonterminals. This means that rather than beginning with the starting symbol and generating an input string, we shall examine the string and attempt to work our way back to the starting symbol; we can gain some power by starting at the bottom and working our way up. The productions of the parser can be used to build a parse tree of a program written in human-readable source code that can be compiled to assembly language or pseudocode. Different computer languages require different parsing techniques, although it is not uncommon to use a parsing technique that is more powerful than that actually required.

It is common for bottom-up parsers to take the form of general parsing engines, which can either parse or generate a parser for a specific programming language given a specification of its grammar. Perhaps the most well known generalized parser generators are YACC and GNU bison.

Example

A trivial example illustrates the difference. Here is a trivial grammar:

    S → Ax
    A → a
    A → b

Top-down example

For the input sentence ax, the leftmost derivation is S → Ax → ax, which also happens to be the rightmost derivation, as there is only one nonterminal ever to replace in a sentential form.

An LL(1) parser starts with S and asks "which production should I attempt?" Naturally, it predicts the only alternative of S. From there it tries to match A by calling method A (in a recursive-descent parser). Lookahead a predicts production A → a. The parser matches a, returns to S, and matches x. Done. The derivation tree is:

     S
    / \
   A   x
   |
   a
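For contrast with the top-down trace, here is a toy bottom-up (shift-reduce) pass over the same trivial grammar. Reducing greedily whenever a right-hand side appears on top of the stack happens to work here; real shift-reduce parsers decide between shifting and reducing with lookahead tables.

```python
# Shift-reduce parse of "ax" with the grammar S -> A x, A -> a, A -> b.
def shift_reduce(tokens):
    rules = [("S", ["A", "x"]), ("A", ["a"]), ("A", ["b"])]
    stack, trace = [], []
    for tok in tokens + [None]:              # None marks end of input
        reduced = True
        while reduced:                       # reduce while any RHS tops the stack
            reduced = False
            for lhs, rhs in rules:
                if len(stack) >= len(rhs) and stack[-len(rhs):] == rhs:
                    del stack[-len(rhs):]
                    stack.append(lhs)
                    trace.append("reduce " + lhs)
                    reduced = True
        if tok is not None:
            stack.append(tok)                # shift the next token
            trace.append("shift " + tok)
    return stack, trace
```

The trace (shift a, reduce A, shift x, reduce S) builds the same tree as the derivation above, but from the leaves upward.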

SUB: Computer Networks

IEEE 802

IEEE 802 refers to a family of IEEE standards dealing with local area networks and metropolitan area networks. More specifically, the IEEE 802 standards are restricted to networks carrying variable-size packets. (By contrast, in cell relay networks data is transmitted in short, uniformly sized units called cells. Isochronous networks, where data is transmitted as a steady stream of octets, or groups of octets, at regular time intervals, are also out of the scope of this standard.) The number 802 was simply the next free number IEEE could assign, though "802" is sometimes associated with the date the first meeting was held: February 1980.

The services and protocols specified in IEEE 802 map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. In fact, IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC), so that the layers can be listed like this:

    Data link layer
        LLC sublayer
        MAC sublayer
    Physical layer

The IEEE 802 family of standards is maintained by the IEEE 802 LAN/MAN Standards Committee (LMSC). An individual Working Group provides the focus for each area. The most widely used standards are for the Ethernet family, Token Ring, Wireless LAN, Bridging and Virtual Bridged LANs.

Because frequencies used in one cell cluster can be reused in other cells, service providers could increase the number of potential customers in an area fourfold. The cellular concept employs variable low-power levels, which allow cells to be sized according to the subscriber density and demand of a given area. Like the early mobile radio system, the cellular radio equipment (base station) can communicate with mobiles as long as they are within range; radio energy dissipates over distance, so the mobiles must be within the operating range of the base station. The base station communicates with mobiles via a channel. The channel is made of two frequencies: one for transmitting to the base station and one to receive information from the base station.

Systems based on areas with a one-kilometer radius would have one hundred times more channels than systems with areas 10 kilometers in radius. Speculation led to the conclusion that by reducing the radius of areas to a few hundred meters, millions of calls could be served. As the population grows, cells can be added to accommodate that growth, and conversations can be handed off from cell to cell to maintain constant phone service as the user moves between cells.

SUB: Information Security

Radix-64 Conversion
• To provide transparency for e-mail applications, an encrypted message may be converted to an ASCII string using radix-64 conversion.
• Radix-64 expands a message by 33%.
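Python's base64 module demonstrates both bullet points (using a stand-in byte string, since no actual ciphertext appears in the notes): every 3 input bytes become 4 printable ASCII characters, hence the 4/3 ≈ 1.33 expansion.

```python
# Radix-64 (base64) conversion: binary data -> printable ASCII, 4 chars per 3 bytes.
import base64

raw = b"Encrypted message bytes!"          # 24 bytes, a stand-in for ciphertext
ascii_form = base64.b64encode(raw)         # 32 ASCII characters
expansion = len(ascii_form) / len(raw)     # 4/3, i.e. 33% larger
```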

Information security management system

An information security management system (ISMS) is a set of policies concerned with information security management. The concept arose primarily out of ISO 27001. The governing principle behind an ISMS is that an organization should design, implement and maintain a coherent set of policies, processes and systems to manage risks to its information assets, thus ensuring acceptable levels of information security risk. As with all management processes, an ISMS must remain effective and efficient in the long term, adapting to changes in the internal organization and external environment. ISO/IEC 27001 therefore incorporates the typical "Plan-Do-Check-Act" (PDCA), or Deming cycle, approach: The Plan phase is about designing the ISMS, assessing information security risks and selecting appropriate controls.
  

The Do phase involves implementing and operating the controls.

The Check phase objective is to review and evaluate the performance (efficiency and effectiveness) of the ISMS. In the Act phase, changes are made where necessary to bring the ISMS back to peak performance.

The best known ISMS is described in ISO/IEC 27001 and ISO/IEC 27002 and related standards published jointly by ISO and IEC. A competing ISMS is the Information Security Forum's Standard of Good Practice (SOGP). It is more best-practice-based, as it derives from the ISF's industry experience.


Other frameworks such as COBIT and ITIL touch on security issues, but are mainly geared toward creating a governance framework for information and IT more generally. The Information Security Management Maturity Model (known as ISM-cubed or ISM3) is another form of ISMS. ISM3 builds on standards such as ISO 20000, ISO 9001, CMM, ISO/IEC 27001, and general information governance and security concepts. ISM3 can be used as a template for an ISO 9001-compliant ISMS. While ISO/IEC 27001 is controls-based, ISM3 is process-based and includes process metrics. A Capability Maturity Model for system security was standardized in ISO/IEC 21827.


SUB: Wireless Service

WIRELESS SERVICES

DEFINITION: Wireless transmission first went on the air in the early 20th century using radiotelegraphy (Morse code). Later, as modulation made it possible to transmit voices and music via wireless, the medium came to be called "radio." With the advent of television, fax, data communication, and the effective use of a larger portion of the spectrum, the term "wireless" has been resurrected.

Common examples of wireless equipment in use today include:
• Cellular phones and pagers -- provide connectivity for portable and mobile applications, both personal and business
• Global Positioning System (GPS)


SUB: Human Computer Interaction

Computer vision

Computer vision is the science and technology of machines that see, where see in this case means that the machine is able to extract information from an image that is necessary to solve some task. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Computer vision is closely related to the study of biological vision. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, studies and describes the processes implemented in software and hardware behind artificial vision systems. Interdisciplinary exchange between biological and computer vision has proven fruitful for both fields.

Computer vision is, in some ways, the inverse of computer graphics. While computer graphics produces image data from 3D models, computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, as explored in augmented reality. Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation, and image restoration.

Examples of applications of computer vision include systems for:
• Controlling processes (e.g., an industrial robot or an autonomous vehicle).
• Detecting events (e.g., for visual surveillance or people counting).
• Organizing information (e.g., for indexing databases of images and image sequences).
• Modeling objects or environments (e.g., industrial inspection, medical image analysis or topographical modeling).
• Interaction (e.g., as the input to a device for computer-human interaction).
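As a minimal illustration of what "extracting information from an image" can mean, the sketch below thresholds a grayscale image, represented as a 2D array of intensity values in the range 0-255, and counts the bright pixels. This is a toy primitive of the kind event-detection or inspection systems build on; the image representation, function name, and threshold are assumptions for the example, not a real computer vision library API.

```javascript
// Toy example: extract one piece of information from an "image",
// here a 2D array of grayscale intensities (0..255).
function countBrightPixels(image, threshold) {
  let count = 0;
  for (const row of image) {
    for (const pixel of row) {
      if (pixel >= threshold) count++; // "bright" = at or above threshold
    }
  }
  return count;
}

// Example: a 2x3 image with three pixels at or above 128.
const image = [
  [0, 200, 90],
  [255, 10, 128],
];
// countBrightPixels(image, 128) → 3
```

Real systems replace this counting step with far richer measurements (edges, motion vectors, object detections), but the shape of the task is the same: image data in, task-relevant information out.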

Sonic Interaction Design

Sonic Interaction Design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic Interaction Design is at the intersection of Interaction Design and Sound and Music Computing. If Interaction Design is about designing objects people interact with, and such interactions are facilitated by computational means, in Sonic Interaction Design sound is mediating interaction either as a display of processes or as an input medium. Research in this area focuses on experimental scientific findings about human sound reception in interactive contexts.

Research and development in this area relies on studies from other disciplines, such as:
• product sound quality
• acoustic ecology, i.e. the study of the relationship, mediated through sound, between living beings and their environment
• film sound
• computer and video game sound
• sound culture, i.e. the study of how the production and consumption of sound have changed throughout history and within different societies

Product design in the context of Sonic Interaction Design deals with methods and experiences for designing interactive products having a salient sonic behavior. Products, in this context, are either tangible and functional objects that are designed to be manipulated, or usable simulations of such objects, as in virtual prototyping. In design research on sonic products, a set of practices have been inherited from a variety of fields. Among these practices it suffices to mention:
• body storming, especially when combined with vocal sketching [7], where participants produce vocal imitations to mimic the sonic behavior of objects while they are being interacted with
• video prototyping with sonic overlays
• basic design
• Foley artistry in filmmaking
• acting out sound dramas
Such practices have been tested in contexts where research and pedagogy naturally intermix, based on demonstrations and intersubjectivity.

Interactive art and music

Artistic research in Sonic Interaction Design is about productions in the interactive arts and performing arts, exploiting the role of enactive engagement with sound-augmented interactive objects.

Sonification

Within the field of Sonification, Sonic Interaction Design acknowledges the importance of human interaction for understanding and using auditory feedback.

SUB: Client Server Computing

Group Ware

Groupware is software that groups of people use together over computer networks and the Internet. It lets users communicate with one another, coordinate activities, and easily share information, either in real time or over a period of time. It is based on the assumption that computer networks can help people increase their productivity by collaborating and sharing information. Some example groupware applications are outlined here:
• A scheduling program that schedules a group of people into meetings after evaluating their current personal schedules.
• A network meeting application that allows users to hold meetings over the network. Attendees sit at their workstations and collaborate on a joint project by opening documents on the screen and working on those documents together.
• A videoconferencing application that works in conjunction with the network meeting applications described above so attendees can see one another and collaborate on computers at the same time.
• Instant messaging, which lets people contact one another in real time via pop-up messages. Initially, these messages are typed, but as network bandwidth and multimedia capabilities improve, instant messaging will use voice and video. See Instant Messaging.

Electronic mail is a form of groupware, and it is the foundation and data transport system of many groupware applications. Electronic mail is asynchronous communication: a message sits in a message queue for other people to read and respond to at any time. Some amount of time may pass before a person responds to a message, or until the message falls out of the queue.

Chat and instant messaging are forms of synchronous communications. Like a voice telephone call, a chat or instant messaging session is live and each user responds to the other in real time.

Groupware also comes in the form of bulletin board, threaded discussion, and chat room applications. These applications provide a place to post messages that other users see and can respond to. In a discussion forum, all dialogs can be archived for future reference and users can respond to them at any time. The archive provides a record of events, activities, problems, and solutions that can be referred to at any time.
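The asynchronous model behind email and bulletin-board groupware (senders post at any time, readers fetch whenever they choose, and old messages can fall out of the queue) can be sketched in a few lines. The class name, method names, and capacity policy below are invented for the illustration, not taken from any groupware product:

```javascript
// Minimal sketch of an asynchronous message queue, the data structure
// underlying email and bulletin-board style groupware.
class MessageQueue {
  constructor(capacity) {
    this.capacity = capacity;
    this.messages = [];
  }
  post(from, text) {
    this.messages.push({ from, text });
    // When the queue is full, the oldest message "falls out".
    if (this.messages.length > this.capacity) this.messages.shift();
  }
  read() {
    return this.messages.slice(); // readers see all currently queued messages
  }
}
```

A reader who checks the queue hours later still sees the earlier posts, which is exactly what distinguishes asynchronous groupware from a live chat session, where messages are exchanged in real time.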

Ideally, groupware should be able to help each person in a collaborative project perform his or her specific job in a more efficient way. In fact, rather than being a special application from a single vendor, groupware simply defines ways of using existing applications to share information and help users collaborate. Here are some expectations and advantages of groupware:
• Groupware stimulates cooperation within an organization and helps people communicate and collaborate on joint projects.
• Groupware promotes a new form of instant global communication and collaboration.
• Groupware coordinates people and processes.
• Groupware provides a unique way for users to share information by building it into structured, compound documents. The document becomes the central place where shared information is stored.
• Once groupware applications are in place and users begin to take advantage of them, traditional methods of communicating fall by the wayside. Meetings seem inconvenient due to travel and an inefficient use of time. Ideally, meetings become events that take place over days, with attendees making contributions via electronic mail or the bulletin board system.
• In the case of discussion forums and email, communication is delayed, so respondents have time to think about their response and gather information from other sources before responding. These two forms of communication, which are accessible to any Internet user from just about any Web-attached system, may be the most profound aspect of the Internet.

One aspect of groupware is called workflow, which combines electronic messaging with document management and imaging. Groupware helps define the flow of documents and then defines the work that must be done to complete a project. A document moves through various stages of processing by being sent to the appropriate people, who work on the documents, authorize them, and validate them. The messaging system is used as a transport for documents, which flow sequentially through different processes. Part of this automated process is the use of digital certificates, so that a person receiving a document knows that it has come from an authorized person. Accounting and procurement systems can use workflow management. See Workflow Management for more information.
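The workflow idea (a document passes through successive stages, gathering sign-offs as it goes) can be sketched as a simple state progression. The stage names, field names, and email addresses below are invented for illustration; a real workflow system would also verify digital certificates rather than merely recording a signer:

```javascript
// Sketch of a document moving through a workflow: each stage must be
// signed off before the document advances to the next one.
const STAGES = ["drafted", "reviewed", "authorized", "validated"];

function advance(doc, signer) {
  const i = STAGES.indexOf(doc.stage);
  if (i === STAGES.length - 1) throw new Error("workflow already complete");
  // Record who signed off, standing in for a digital-certificate check.
  return {
    stage: STAGES[i + 1],
    signatures: doc.signatures.concat(signer),
  };
}

// A purchase order working its way through approval:
let doc = { stage: "drafted", signatures: [] };
doc = advance(doc, "reviewer@example.com");
doc = advance(doc, "manager@example.com");
// doc.stage is now "authorized", with two signatures on record.
```

Routing each `advance` step over the messaging system is what turns this state machine into the email-transported workflow the text describes.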

How AJAX Works

AJAX is Based on Internet Standards

AJAX is based on internet standards, and uses a combination of:
• XMLHttpRequest object (to exchange data asynchronously with a server)
• JavaScript/DOM (to display/interact with the information)
• CSS (to style the data)
• XML (often used as the format for transferring data)
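These components come together in the classic XMLHttpRequest round trip sketched below. The URL, callback, and DOM usage are placeholders, and XMLHttpRequest is a browser API, so the snippet will not run under plain Node.js:

```javascript
// Classic AJAX round trip using the browser's XMLHttpRequest object.
function loadDoc(url, callback) {
  const xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    // readyState 4 = request finished and response is ready
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseText); // e.g. insert into the DOM, or parse as XML
    }
  };
  xhr.open("GET", url, true); // third argument true = asynchronous
  xhr.send();
}

// In a browser one might call (element id is a placeholder):
// loadDoc("/data.xml", text => {
//   document.getElementById("out").textContent = text;
// });
```

Because the third argument to `open` is `true`, `send` returns immediately and the page stays responsive; the callback fires only when the server's response arrives, which is the asynchronous exchange the first bullet above refers to.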