Published by: Kalyana Chakravarthi on Apr 10, 2012
Copyright: Attribution Non-commercial

A process is a unique combination of tools, materials, methods, and people engaged in producing a measurable output; for example, a manufacturing line for machine parts. All processes have inherent statistical variability, which can be evaluated by statistical methods. Process capability is a measurable property of a process relative to its specification, expressed as a process capability index (e.g., Cpk or Cpm) or as a process performance index (e.g., Ppk or Ppm). The output of this measurement is usually illustrated by a histogram and by calculations that predict how many parts will be produced out of specification (OOS). Process capability is also defined, in ISO 15504, as the ability of a process to meet its purpose as managed by an organization's management and process definition structures. Assessing process capability has two parts: 1) measure the variability of the output of a process, and 2) compare that variability with a proposed specification or product tolerance.
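As an illustrative sketch (not from the source), the Cp and Cpk indices described above can be estimated from sample data; the spec limits and measurements below are hypothetical:

```python
import statistics

def capability_indices(data, lsl, usl):
    """Estimate Cp and Cpk from sample measurements.

    Cp  = (USL - LSL) / (6 * sigma)                   -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)   -- accounts for centering
    """
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical shaft diameters (mm) against a spec of 10.00 +/- 0.05
measurements = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 9.99]
cp, cpk = capability_indices(measurements, lsl=9.95, usl=10.05)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

When the process is perfectly centered between the limits, as in this made-up data, Cp and Cpk coincide; an off-center mean drives Cpk below Cp.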

Measure the process
The input of a process usually has at least one measurable characteristic that is used to specify outputs. These can be analyzed statistically; where the output data shows a normal distribution, the process can be described by the process mean (average) and the standard deviation.

A process needs to be established with appropriate process controls in place. A control chart analysis is used to determine whether the process is "in statistical control". If the process is not in statistical control, then capability has no meaning; process capability therefore involves only common cause variation, not special cause variation.

A batch of data needs to be obtained from the measured output of the process. The more data that is included, the more precise the result; however, an estimate can be achieved with as few as 17 data points. The data should include the normal variety of production conditions, materials, and people in the process. With a manufactured product, it is common to include at least three different production runs, including start-ups.

The process mean (average) and standard deviation are calculated. With a normal distribution, the "tails" can extend well beyond plus and minus three standard deviations, but this interval should contain about 99.73% of production output. Therefore, for normally distributed data, process capability is often described as the relationship between six standard deviations and the required specification.

Capability study
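The 99.73% figure, and the predicted out-of-specification (OOS) count, follow from the normal distribution; a minimal sketch, assuming normally distributed output (the numbers are made up):

```python
import math

def fraction_within(k_sigma):
    """Fraction of a normal distribution within +/- k standard deviations."""
    return math.erf(k_sigma / math.sqrt(2))

def expected_oos_ppm(mean, sigma, lsl, usl):
    """Predict out-of-specification parts per million, assuming normal output."""
    def cdf(x):
        # Cumulative distribution function of N(mean, sigma^2)
        return 0.5 * (1 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    oos = cdf(lsl) + (1 - cdf(usl))
    return oos * 1e6

print(f"Within 3 sigma: {fraction_within(3):.4%}")   # ~99.73%
print(f"OOS ppm: {expected_oos_ppm(10.0, 0.013, 9.95, 10.05):.0f}")
```

This is the calculation behind the histogram-plus-prediction presentation mentioned above: once the mean and standard deviation are estimated, the tail areas outside the specification limits give the expected OOS rate.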


The output of a process is expected to meet customer requirements, specifications, or product tolerances. Engineering can conduct a process capability study to determine the extent to which the process can meet these expectations. The ability of a process to meet specifications can be expressed as a single number using a process capability index, or it can be assessed using control charts. Either case requires running the process to obtain enough measurable output so that engineering is confident that the process is stable and so that the process mean and variability can be reliably estimated. Statistical process control defines techniques to properly differentiate between stable processes, processes that are drifting (experiencing a long-term change in the mean of the output), and processes that are growing more variable. Process capability indices are only meaningful for processes that are stable (in a state of statistical control).

For information technology, ISO 15504 specifies a process capability measurement framework for assessing process capability. This framework consists of six levels of process capability, from incomplete processes (Capability Level 0) to optimizing processes (Capability Level 5). The measurement framework has been generalized so that it can be applied to non-IT processes. There are currently two process reference models, covering software and systems. The Capability Maturity Model in its latest version (CMMI continuous) also follows this approach.
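The six capability levels can be represented as a simple lookup; the level names below follow common ISO/IEC 15504 usage but are an assumption here and should be verified against the standard itself:

```python
# Capability levels in the ISO/IEC 15504 measurement framework.
# Level names are quoted from common usage of the standard (assumption).
CAPABILITY_LEVELS = {
    0: "Incomplete",
    1: "Performed",
    2: "Managed",
    3: "Established",
    4: "Predictable",
    5: "Optimizing",
}

def describe_level(level: int) -> str:
    """Return a human-readable label for a capability level 0-5."""
    return f"Capability Level {level}: {CAPABILITY_LEVELS[level]}"

print(describe_level(0))  # lowest level
print(describe_level(5))  # highest level
```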

Geometric tolerance
Geometric dimensioning and tolerancing (GD&T) is a system for defining and communicating engineering tolerances. It uses a symbolic language on engineering drawings and computer-generated three-dimensional solid models for explicitly describing nominal geometry and its allowable variation. It tells the manufacturing staff and machines what degree of accuracy and precision is needed on each facet of the part.

Geometric dimensioning and tolerancing (GD&T) is used to define the nominal (theoretically perfect) geometry of parts and assemblies, to define the allowable variation in form and possible size of individual features, and to define the allowable variation between features. Dimensioning and tolerancing specifications are used as follows:
- Dimensioning specifications define the nominal, as-modeled or as-intended geometry. One example is a basic dimension.
- Tolerancing specifications define the allowable variation for the form and possibly the size of individual features, and the allowable variation in orientation and location between features. Two examples are linear dimensions and feature control frames using a datum reference.

There are several standards available worldwide that describe the symbols and define the rules used in GD&T. One such standard is American Society of Mechanical Engineers (ASME) Y14.5-2009. This article is based on that standard, but other standards, such as those from the International Organization for Standardization (ISO), may vary slightly. The Y14.5 standard has the advantage of providing a fairly complete set of standards for GD&T in one document. The ISO standards, in comparison, typically address only a single topic at a time; there are separate standards that provide the details for each of the major symbols and topics (e.g. position, flatness, profile).

Dimensioning and tolerancing philosophy

According to the ASME Y14.5-2009 standard, the purpose of geometric dimensioning and tolerancing (GD&T) is to describe the engineering intent of parts and assemblies. This is not a completely correct explanation of the purpose of GD&T, or of dimensioning and tolerancing in general. The purpose of GD&T is more accurately defined as describing the geometric requirements for part and assembly geometry. Proper application of GD&T will ensure that the allowable part and assembly geometry defined on the drawing leads to parts that have the desired form and fit (within limits) and function as intended.

There are some fundamental rules that need to be applied (these can be found on page 6 of the 2009 edition of the standard):
- All dimensions must have a tolerance. Every feature on every manufactured part is subject to variation; therefore, the limits of allowable variation must be specified. Plus and minus tolerances may be applied directly to dimensions or applied from a general tolerance block or general note. For basic dimensions, geometric tolerances are indirectly applied in a related feature control frame. The only exceptions are dimensions marked as minimum, maximum, stock, or reference.
- Dimensioning and tolerancing shall completely define the nominal geometry and allowable variation. Measurement and scaling of the drawing is not allowed except in certain cases.
- Engineering drawings define the requirements of finished (complete) parts. Every dimension and tolerance required to define the finished part shall be shown on the drawing. If additional dimensions would be helpful, but are not required, they may be marked as reference.
- Dimensions should be applied to features and arranged in such a way as to represent the function of the features.
- Descriptions of manufacturing methods should be avoided; the geometry should be described without explicitly defining the method of manufacture.
- If certain sizes are required during manufacturing but are not required in the final geometry (due to shrinkage or other causes), they should be marked as non-mandatory.
- All dimensioning and tolerancing should be arranged for maximum readability and should be applied to visible lines in true profiles.
- When geometry is normally controlled by gage sizes or by code (e.g. stock materials), the dimension(s) shall be included with the gage or code number in parentheses following or below the dimension.
- Angles of 90° are assumed when lines (including center lines) are shown at right angles but no angular dimension is explicitly shown. (This also applies to other orthogonal angles of 0°, 180°, 270°, etc.)
- Dimensions and tolerances are valid at 20 °C / 101.3 kPa unless stated otherwise.
- Unless explicitly stated otherwise, all dimensions and tolerances are only valid when the item is in a free state.
- Dimensions and tolerances apply to the full length, width, and depth of a feature, including form variation.
- Dimensions and tolerances only apply at the level of the drawing where they are specified. It is not mandatory that they apply at other drawing levels, unless the specifications are repeated on the higher-level drawing(s).
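A plus/minus tolerance check like those described in the rules above can be sketched as follows; the part, dimensions, and class names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ToleranceSpec:
    """A plus/minus tolerance on a nominal dimension (illustrative sketch)."""
    nominal: float
    plus: float
    minus: float

    def within(self, measured: float) -> bool:
        """True if the measured value lies inside the tolerance zone."""
        lower = self.nominal - self.minus
        upper = self.nominal + self.plus
        return lower <= measured <= upper

# Hypothetical hole diameter: 25.00 mm +0.10 / -0.05
spec = ToleranceSpec(nominal=25.00, plus=0.10, minus=0.05)
print(spec.within(25.08))  # True  -- inside the tolerance zone
print(spec.within(24.90))  # False -- below the lower limit
```

Note that this covers only simple linear dimensions; geometric tolerances (position, flatness, profile, etc.) require evaluating measured geometry against datum reference frames, which is well beyond this sketch.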

Human-machine interface

A user interface is the system by which people (users) interact with a machine. The user interface includes hardware (physical) and software (logical) components. User interfaces exist for various systems, and provide a means of:
- Input, allowing the users to manipulate a system
- Output, allowing the system to indicate the effects of the users' manipulation

The human-machine interface is the part of the machine that handles the human-machine interaction. In complex systems, the human-machine interface is typically computerized; the term human-computer interface refers to this kind of system. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology.

The engineering of human-machine interfaces is advanced by considering ergonomics (human factors). The corresponding disciplines are Human Factors Engineering (HFE) and Usability Engineering (UE), which is part of Systems Engineering. Tools used for incorporating human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages.

The user interface, in the industrial design field of human-machine interaction, is the space where interaction between humans and machines occurs. The goal of interaction between a human and a machine at the user interface is effective operation and control of the machine, and feedback from the machine which aids the operator in making operational decisions. Generally, the goal of human-machine interaction engineering is to produce a user interface which makes it easy, efficient, and enjoyable to operate a machine in the way which produces the desired result.

Interface design
Typical human-machine interface design consists of the following stages: interaction specification, interface software specification, and prototyping:
- Common practices for interaction specification include user-centered design, persona, activity-oriented design, scenario-based design, and resiliency design.
- Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
- Common practices for prototyping are based on interactive design using libraries of interface elements (controls, decoration, etc.). Primary methods used in interface design include prototyping and simulation.

This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the human. For example, when driving an automobile, the driver uses the steering wheel to control the direction of the vehicle, and the accelerator pedal, brake pedal and gearstick to control the speed of the vehicle. The driver perceives the position of the vehicle by looking through the windshield, and the exact speed of the vehicle by reading the speedometer. The user interface of the automobile is on the whole composed of the instruments the driver can use to accomplish the tasks of driving and maintaining the automobile.

The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency).

Terminology
Other terms for user interface include human-computer interface (HCI) and man-machine interface (MMI). There is a difference between a user interface and an operator interface or a human-machine interface:
- The term "user interface" is often used in the context of (personal) computer systems and electronic devices.
- Where a network of equipment or computers are interlinked through an MES (Manufacturing Execution System) or host, an HMI is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
- Ever since the increased use of personal computers and the relative decline in societal awareness of heavy machinery, the term "user interface" has taken on overtones of the graphical user interface, while industrial control panel and machinery control design discussions more commonly refer to human-machine interfaces.

To work with a system, users have to be able to control and assess the state of the system.

The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human-machine interface (HMI). HMI is a modification of the original term MMI (man-machine interface). In practice, the abbreviation MMI is still frequently used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but this is more commonly used for human-computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the terms refer to the 'layer' that separates a human operating a machine from the machine itself. In science fiction, HMI is sometimes used to refer to what is better described as a direct neural interface; however, this latter usage is seeing increasing application in the real-life use of (medical) prostheses, the artificial extensions that replace missing body parts (e.g., cochlear implants).

In some circumstances computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze, and so on have been used experimentally. This is particularly relevant to immersive interfaces.

Usability
Main article: Usability. See also: mental model, human action cycle, usability testing, ergonomics, and List of human-computer interaction topics.

User interfaces are considered by some authors to be a prime ingredient of computer user satisfaction. The design of a user interface affects the amount of effort the user must expend to provide input for the system and to interpret the output of the system, and how much effort it takes to learn how to do this. Usability is the degree to which the design of a particular user interface takes into account the human psychology and physiology of the users, and makes the process of using the system effective, efficient and satisfying.

It describes how well a product can be used for its intended purpose by its target users with efficiency, effectiveness, and satisfaction, also taking into account the requirements from its context of use. Usability is mainly a characteristic of the user interface, but it is also associated with the functionalities of the product and the process used to design it.

User interfaces in computing
In computer science and human-computer interaction, the user interface (of a computer program) refers to the graphical, textual and auditory information the program presents to the user, and the control sequences (such as keystrokes with the computer keyboard, movements of the computer mouse, and selections with the touchscreen) the user employs to control the program.

Types
Direct manipulation interface is the name of a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond at least loosely to the physical world. Currently (as of 2009) the following types of user interface are the most common:
- Graphical user interfaces (GUI) accept input via devices such as the computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: object-oriented user interfaces (OOUIs) and application-oriented interfaces.
- Web-based user interfaces or web user interfaces (WUI) are a subclass of GUIs that accept input and provide output by generating web pages which are transmitted via the Internet and viewed by the user using a web browser program. Newer implementations utilize Java, AJAX, Adobe Flex, Microsoft .NET, or similar technologies to provide real-time control in a separate program, eliminating the need to refresh a traditional HTML-based web browser. Administrative web interfaces for web servers, servers and networked computers are often called control panels.
- Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing number of mobile devices and many types of point of sale, industrial processes and machines, self-service machines, etc.
- Touch user interfaces are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators, etc.

User interfaces that are common in various fields outside desktop computing:
- Command line interfaces, where the user provides the input by typing a command string with the computer keyboard and the system provides output by printing text on the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.

Other types of user interfaces:
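A command line interface of the kind described above can be sketched with Python's argparse; the program name, flag, and behavior are illustrative assumptions, not from the source:

```python
import argparse
import os

# A minimal command line interface sketch: the user supplies a command
# string (a path plus optional flag) and the program prints text output.
parser = argparse.ArgumentParser(description="Report a file's size.")
parser.add_argument("path", help="file to inspect")
parser.add_argument("--human", action="store_true", help="report in KiB")

def report(args: argparse.Namespace) -> str:
    size = os.path.getsize(args.path)
    return f"{size // 1024} KiB" if args.human else f"{size} bytes"

# Simulated invocation; on a real command line the user would type
# something like:  python filesize.py somefile.txt --human
args = parser.parse_args(["somefile.txt", "--human"])
print(args.path, args.human)
```

The text-in, text-out shape is the defining property: all interaction flows through the typed command string and the printed result.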

- Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings, and the level of detail of the messages presented to the user.
- Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance of batch processing and receives the output when all the processing is done. The computer does not prompt for further input after the processing has started.
- Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form.
- Crossing-based interfaces are graphical user interfaces in which the primary task consists of crossing boundaries instead of pointing.
- Gesture interfaces are graphical user interfaces which accept input in the form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
- Intelligent user interfaces are human-machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).
- Motion tracking interfaces monitor the user's body motions and translate them into commands, currently being developed by Apple.[1]
- Multi-screen interfaces employ multiple displays to provide a more flexible interaction. This is often employed in computer game interaction in both the commercial arcades and more recently the handheld markets.
- Natural-language interfaces are used for search engines and on webpages. The user types in a question and waits for a response.
- Non-command user interfaces observe the user to infer his or her needs and intentions, without requiring that he or she formulate explicit commands.
- Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.
- Reflexive user interfaces, where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically this is only possible with very rich graphical user interfaces.
- Tangible user interfaces place a greater emphasis on touch and the physical environment or its elements.
- Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.
- Text user interfaces are user interfaces which output text, but accept other forms of input in addition to or in place of typed command strings.
- Voice user interfaces accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or by responding verbally to the interface.
- Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs.
- Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.

See also: Archy, an experimental keyboard-driven modeless user interface by Jef Raskin, arguably more efficient than mouse-driven user interfaces for document editing and programming.

History
The history of user interfaces can be divided into the following phases according to the dominant type of user interface:
- Batch interface, 1945-1968
- Command-line user interface, 1969 to present
- Graphical user interface, 1981 to present (see History of the GUI for a detailed look)

Consistency
A property of a good user interface is consistency. Good user interface design is about getting a user to have a consistent set of expectations, and then meeting those expectations. Like any other principle, though, consistency has its limits:[2] it can be bad when it is not used for a purpose and serves no benefit for the end user. In some cases, a violation of consistency principles can provide sufficiently clear advantages that a wise and careful user interface designer may choose to violate consistency to achieve some other important goal.[3] There are three aspects identified as relevant to consistency.[4]

First, the controls for different features should be presented in a consistent manner so that users can find the controls easily. For example, users find it difficult to use software when some commands are available through menus, some through icons, some through right-clicks, some under a separate button at one corner of a screen, some grouped by function, some grouped by "common", and some grouped by "advanced". A user looking for a command should have a consistent search strategy for finding it. The more search strategies a user has to use, the more frustrating the search will be; the more consistent the grouping, the easier the search. The principle of monotony of design in user interfaces states that ideally there should be only one way to achieve a simple operation.[5]

Second, there is the "principle of least astonishment": various features should work in similar ways.[6] For example, some features in Adobe Acrobat are "select tool, then select text to which to apply it";[7] others are "select text, then apply action to selection". Commands should work the same way in all contexts. Consistency is one quality to trade off in user interface design, as described by the cognitive dimensions framework.

Third, consistency counsels against user interface changes version-to-version. Change should be minimized, and forward-compatibility should be maintained. Generally, less mature software has fewer users who are entrenched in any status quo; simple, more broadly used software must more carefully hew to the status quo to avoid disruptive costs. For example, the change from the menu bars of Microsoft Office 2003 to the ribbon toolbar of Microsoft Office 2007 caused mixed reactions.[8] The new interface caused rejection among advanced users, who reported losses in productivity,[9] while average users reported improved productivity and a fairly good acceptance.[10] A usual solution in providing a new user interface is to provide a backwards-compatibility mode, at least as an option, so that a product's most intensive users are not forced to bear the costs of the change. A second strategy is to introduce big changes in small increments, so that an overall redesign can be achieved without breaking consistency and while providing user feedback at any single step.[11]

Modalities and modes
Main articles: Modality (human-computer interaction) and Mode (computer interface)

Two words are used in UI design to describe different ways in which a user can utilize a product. A modality is a path of communication employed by the user interface to carry input and output. Examples of modalities:
- Input: a computer keyboard allows the user to enter typed text; a digitizing tablet allows the user to create free-form drawings.
- Output: a computer monitor allows the system to display text and graphics (vision modality); a loudspeaker allows the system to produce sound (auditory modality).

The user interface may employ several redundant input modalities and output modalities, allowing the user to choose which ones to use for interaction.

A mode is a distinct method of operation within a computer program, in which the same input can produce different perceived results depending on the state of the program. For example, caps lock sets an input mode in which typed letters are uppercase by default; the same typing produces lowercase letters when not in caps lock mode. Heavy use of modes often reduces the usability of a user interface, as the user must expend effort to remember current mode states and switch between them as necessary. Modality refers to several alternate interfaces to the same product, while mode describes different states of the same interface.

Object-oriented programming concepts
Object-oriented programming (OOP) is a programming paradigm using "objects" – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. Many modern programming languages now support OOP.

Older, non-OOP programs may be one "long" list of statements (or commands). More complex programs will often group smaller sections of these statements into functions or subroutines, each of which might perform a particular task.
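The caps lock example above can be sketched as a small program; the class and method names are illustrative:

```python
class TextInput:
    """Sketch of a modal interface: the same keystroke produces a
    different result depending on the caps-lock mode state."""

    def __init__(self):
        self.caps_lock = False  # current mode state

    def toggle_caps_lock(self):
        self.caps_lock = not self.caps_lock

    def type_key(self, key: str) -> str:
        # Identical input, mode-dependent output
        return key.upper() if self.caps_lock else key.lower()

kbd = TextInput()
print(kbd.type_key("a"))   # "a" -- caps lock off
kbd.toggle_caps_lock()
print(kbd.type_key("a"))   # "A" -- same input, different mode
```

The usability cost described above shows up directly here: a user who has forgotten the value of `caps_lock` cannot predict what `type_key` will do.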

As hardware and software became increasingly complex, manageability often became a concern. Researchers studied ways to maintain software quality and developed object-oriented programming in part to address common problems by strongly emphasizing discrete, self-sufficient, reusable units of programming logic. The technology focuses on data rather than processes, with programs composed of self-sufficient modules ("classes"), each instance of which ("objects") contains all the information needed to manipulate its own data structure ("members"). This is in contrast to the modular programming that had been dominant for many years, which focused on the function of a module rather than specifically the data, but equally provided for code reuse and self-sufficient reusable units of programming logic, enabling collaboration through the use of linked modules (subroutines). This more conventional approach, which still persists, tends to consider data and behavior separately.

In conventional designs, in which a program is seen as a list of tasks (subroutines) to perform, it is common for some of the program's data to be 'global', accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects. In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are either bundled in with the data or inherited from "class objects". These act as the intermediaries for retrieving or modifying the data they control. The programming construct that combines data with a set of methods for accessing and managing those data is called an object. An object-oriented program may thus be viewed as a collection of interacting objects.

An object-oriented program will usually contain different types of objects, each type corresponding to a particular kind of complex data to be managed, or perhaps to a real-world object or concept such as a bank account, a hockey player, or a bulldozer. A program might well contain multiple copies of each type of object, one for each of the real-world objects the program is dealing with; for instance, there could be one bank account object for each real-world account at a particular bank. Each copy of the bank account object would be alike in the methods it offers for manipulating or reading its data, but the data inside each object would differ, reflecting the different history of each account.

Objects can be thought of as wrapping their data within a set of functions designed to ensure that the data are used appropriately, and to assist in that use. The object's methods will typically include checks and safeguards that are specific to the types of data the object contains. An object can also offer simple-to-use, standardized methods for performing particular operations on its data, while concealing the specifics of how those tasks are accomplished. In this way, alterations can be made to the internal structure or methods of an object without requiring that the rest of the program be modified. This approach can also be used to offer standardized methods across different types of objects: several different types of objects might offer print methods, for example, each implemented in a different way reflecting the different kinds of data each contains, but all callable in the same standardized manner from elsewhere in the program. These features become especially useful when more than one programmer is contributing code to a project or when the goal is to reuse code between projects. The practice of using subroutines to examine or modify certain kinds of data was also quite commonly used in non-OOP modular programming, well before the widespread use of object-oriented programming. Object-oriented programming has roots that can be traced to the 1960s.
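To make the bank account discussion concrete, here is a minimal Python sketch. The class and method names are my own illustration, not part of the original article: each instance carries its own data, while the methods act as intermediaries that enforce safeguards specific to that data.

```python
class BankAccount:
    """One instance exists for each real-world account at the bank."""

    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance  # internal state, reached only via the methods below

    def deposit(self, amount):
        # A safeguard specific to this kind of data: reject bad amounts.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance


alice = BankAccount("Alice")
bob = BankAccount("Bob", 100)
alice.deposit(50)   # both objects are alike in the methods they offer...
bob.withdraw(30)    # ...but the data inside each differs
```

Because callers only go through deposit, withdraw, and balance, the internal representation of the balance could be changed without requiring that the rest of the program be modified.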

In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent "machine" with a distinct role or responsibility. The actions (or "methods") on these objects are closely associated with the object: OOP data structures tend to "carry their own operators around with them" (or at least "inherit" them from a similar object or class), except when they have to be serialized.

[edit]History

The terms "objects" and "oriented" in something like the modern sense of object-oriented programming seem to make their first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes).[1][2] Alan Kay was later to cite a detailed understanding of LISP internals as a strong influence on his thinking in 1966.[3] Another early MIT example was Sketchpad, created by Ivan Sutherland in 1960-61; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.[4] Also, an MIT ALGOL version, AED-0, linked data structures ("plexes", in that dialect) directly with procedures, prefiguring what were later termed "messages", "methods", and "member functions".[5][6]

Objects as a formal concept in programming were introduced in the 1960s in Simula 67, a major revision of Simula I, a programming language designed for discrete event simulation, created by Ole-Johan Dahl and Kristen Nygaard of the Norwegian Computing Center in Oslo.[7] Simula 67 was influenced by SIMSCRIPT and C.A.R. "Tony" Hoare's proposed "record classes".[5][8] Simula introduced the notion of classes and instances or objects (as well as subclasses, virtual methods, coroutines, and discrete event simulation) as part of an explicit programming paradigm. Simula was used for physical modeling, such as models to study and improve the movement of ships and their content through cargo ports. The ideas of Simula 67 influenced many later languages, including Smalltalk, derivatives of LISP (CLOS), Object Pascal, and C++.

The Smalltalk language, which was developed at Xerox PARC (by Alan Kay and others) in the 1970s, introduced the term object-oriented programming to represent the pervasive use of objects and messages as the basis for computation. Smalltalk's creators were influenced by the ideas introduced in Simula 67, but Smalltalk was designed to be a fully dynamic system in which classes could be created and modified dynamically rather than statically as in Simula 67.[9] The language also used automatic garbage collection, which had been invented earlier for the functional programming language Lisp. Smalltalk, and with it OOP, were introduced to a wider audience by the August 1981 issue of Byte Magazine.

In the 1970s, Kay's Smalltalk work had influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (like LOOPS and Flavors, introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System (CLOS, a part of the first standardized object-oriented programming language, ANSI Common Lisp), which integrates functional programming and object-oriented programming and allows extension via a Meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory, but these were not successful; examples include the Intel iAPX 432 and the Linn Smart Rekursiv. Object-oriented programming developed as the dominant programming methodology in the early and mid 1990s, when programming languages supporting the techniques became widely available.

Languages that became widely available in this period included Visual FoxPro 3.0, C++, and Delphi. OOP's dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP). Some feel that association with GUIs (real or perceived) was what propelled OOP into the programming mainstream.

At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and such. The approach is unlike Smalltalk, and very unlike C++.

Object-oriented features have been added to many existing languages during that time, including Ada, BASIC, Fortran, Pascal, and others. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code. More recently, a number of languages have emerged that are primarily object-oriented yet compatible with procedural methodology, such as Python and Ruby. Probably the most commercially important recent object-oriented languages are Visual Basic.NET (VB.NET) and C#, both designed for Microsoft's .NET platform, and Java, developed by Sun Microsystems. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language. Developers usually compile Java to bytecode, allowing Java to run on any operating system for which a Java virtual machine is available. VB.NET and C# make use of the Strategy pattern to accomplish cross-language inheritance, whereas Java makes use of the Adapter pattern. Both frameworks show the benefit of using OOP by creating an abstraction from implementation in their own way.

Just as procedural programming led to refinements of techniques such as structured programming, modern object-oriented software design methods include refinements such as the use of design patterns, design by contract, and modeling languages (such as UML).

[edit]Fundamental features and concepts

See also: List of object-oriented programming terms

A survey by Deborah J. Armstrong of nearly 40 years of computing literature identified a number of "quarks", or fundamental concepts, found in the strong majority of definitions of OOP.[13] Not all of these concepts are to be found in all object-oriented programming languages. For example, object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, a significantly different yet analogous terminology is used to define the concepts of object and instance. OOP is a programming methodology that gives modular component development while at the same time being very efficient.

Benjamin C. Pierce and some other researchers view as futile any attempt to distill OOP to a minimal set of features. He nonetheless identifies fundamental features that support the OOP programming style in most object-oriented languages:[14]

- Dynamic dispatch – when a method is invoked on an object, the object itself determines what code gets executed by looking up the method at run time in a table associated with the object. This feature distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances.
- Encapsulation (or multi-methods, in which case the state is kept separate)
- Subtype polymorphism
- Object inheritance (or delegation)
- Open recursion – a special variable (syntactically it may be a keyword), usually called this or self, that allows a method body to invoke another method body of the same object. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.

Similarly, in his 2003 book Concepts in programming languages, John C. Mitchell identifies four main features: dynamic dispatch, abstraction, subtype polymorphism, and inheritance.[15] Michael Lee Scott in Programming Language Pragmatics considers only encapsulation, inheritance, and dynamic dispatch.[16]

Additional concepts used in object-oriented programming include:

- Classes of objects
- Instances of classes
- Methods which act on the attached objects
- Message passing
- Abstraction

[edit]Decoupling

Decoupling refers to careful controls that separate code modules from particular use cases, which increases code re-usability. A common use of decoupling in OOP is to polymorphically decouple the encapsulation (see Bridge pattern and Adapter pattern), for example by using a method interface which an encapsulated object must satisfy.

[edit]Formal semantics

See also: Formal semantics of programming languages

There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of OOP concepts:

- coalgebraic data types[17]
- abstract data types (which have existential types), which allow the definition of modules but do not support dynamic dispatch
- recursive types
- encapsulated state
- inheritance
- records, which are a basis for understanding objects if function literals can be stored in fields (as in functional programming languages); the actual calculi, however, need be considerably more complex to incorporate essential features of OOP. Several extensions of System F<: that deal with mutable objects have been studied;[18] these allow both subtype polymorphism and parametric polymorphism (generics).

Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects, for formal definitions of many OOP concepts and constructs).[18]
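The standardized print methods described earlier give a simple picture of dynamic dispatch. Here is a hypothetical Python sketch (the class and method names are invented for illustration): the call site is identical for every type of object, and the object itself determines at run time which method body is executed.

```python
class Invoice:
    def summary(self):
        return "Invoice: 2 line items"


class Report:
    def summary(self):
        return "Report: 10 pages"


def show(document):
    # The same call works for any object that provides summary();
    # the object looks up which summary body to run at run time.
    return document.summary()


results = [show(Invoice()), show(Report())]
```

Each type implements summary in a different way, reflecting the different kinds of data each contains, yet both are called in the same standardized manner from show.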

Definitions of OOP vary, and often diverge widely: some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping").

OBJECT:=>> Objects are the run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data, or any item that the program has to handle.

The Basics

An important part of building a robot is the incorporation of sensors. Sensors translate between the physical world and the abstract world of Microcontrollers. This month in Basics, I will explain the common types of sensors used in personal robotics.

Sensor Output Values

Sensors help translate physical-world attributes into values that the computer on a robot can use. The translation produces some sort of output value that the Microcontroller can use. In general, most sensors fall into one of two categories:

1. Analog Sensors
2. Digital Sensors

An 'Analog Signal' is one that can assume any value in a range. An analog sensor, such as a CdS cell (Cadmium Sulfide cells measure light intensity), might be wired into a circuit in a way that it will have an output that ranges from 0 volts to 5 volts. The value can assume any possible value between 0 and 5 volts. An interesting way to think about this is that an Analog Signal works like the tuner on an older radio: you can turn it up or down in a continuous motion, and you can fine-tune it by turning the knob ever so slightly.

Digital sensors generate what is called a 'Discrete Signal'. This means that there is a range of values that the sensor can output, but the value must increase in steps; there is a known relationship between any value and the values preceding and following it. 'Discrete Signals' typically have a stair-step appearance when they are graphed on a chart. If you consider a television set's tuner, it allows you to change channels in steps, unlike the continuous tuner of an older radio.

A digital compass, for example, may provide you with your current heading by sending a 9-bit value with a range from 0 to 359. In this case, the Discrete Signal has 360 possibilities. Other 'discrete' sensors might provide you with a binary value. The most common discrete sensors used in robotics provide you with a binary output which has two discrete states. For example, consider a push button switch: it has two discrete values. It is on, or it is off. This is one of the simplest forms of sensors. Much of this article assumes a digital signal to be a binary signal; I will point out exceptions to this rule as we go.

The distinction between Analog and Digital is important when you are deciding which type of sensor you wish to use. Part of this decision depends on the type of resources available on your Microcontroller.

Analog to Digital Conversions

Microcontrollers almost always deal with discrete values. Controllers such as the 68HC11 deal with 8-bit values. An important part of using an Analog Signal is being able to convert it to a Discrete Signal, such as an 8-bit digital value. This allows the Microcontroller to do things like compute values and perform comparisons. Fortunately, most modern controllers have a resource called an Analog to Digital converter (A/D converter). The function of the A/D converter is to convert an Analog signal into a digital value. It is typical for the range of an A/D converter to be 0 to +5 volts. It does this with a mapping function that assigns discrete values to the entire range of voltages:

>= Volts   < Volts   Conversion
0.0000     0.0195    0
0.0195     0.0391    1
0.0391     0.0586    2
0.0586     0.0781    3
0.0781     0.0977    4
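The voltage-to-code mapping described above can be sketched as a short Python function. This assumes an ideal converter with a 0 to +5 volt range; it is an illustration of the arithmetic, not vendor code for any particular chip.

```python
def adc_convert(voltage, vref=5.0, bits=8):
    """Map an analog voltage onto one of 2**bits discrete conversion values."""
    step = vref / (2 ** bits)        # ~0.0195 V per unit for 8 bits at 5 V
    code = int(voltage / step)       # floor: find which voltage band the sample is in
    return min(code, 2 ** bits - 1)  # clamp at the full-scale value
```

For example, a 0.05 V sample falls in the 0.0391 to 0.0586 band and converts to 2, matching the table.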

For example, the table on the right shows 5 samples of an Analog Signal that have been converted into digital values. The sample numbers are shown along the X axis at the bottom. The left-hand Y axis indicates the voltage of the Analog sample that was fed into the A/D converter. On the right-hand side, the 8-bit value assigned to the conversion is shown. As you can see from the blue line, this was an analog function just like the original Analog Signal graph shown above; the A/D converter has mapped a set of discrete values onto this graph. The range of the Analog Signal is 0 to +5 volts, and the converter is an 8-bit A/D converter. The table shows how voltages map to specific conversion values. I have only included the first five, but the table would continue up to conversion value 255.

There are many types of A/D converters on the market. An important feature is the resolution of the converter. An 8-bit converter, which has 256 discrete values, is fairly common on Microcontrollers; in this case, the A/D converter divides 5 volts by 256 to yield approximately 0.0195 volts per unit. A 10-bit converter, for example, will divide the range into 1024 steps, and a 16-bit A/D converter can produce 65536 discrete values. The higher the resolution, the greater the accuracy. The resolution required for your application depends on the accuracy your sensor requires.

Useful Analog Sensors

Remember, to successfully use an Analog sensor, you need some way to convert the data into a digital form. All of the circuits shown in this section are intended to be connected to an A/D converter port. Many Microcontrollers, such as the 68HC11, have A/D ports built in. Others require that you add an additional support chip, such as the ADC805 or other equivalent chip.
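The resolution figures above follow directly from dividing the reference voltage by the number of steps. A quick Python check (illustrative only, assuming a 5 V reference):

```python
def volts_per_unit(vref=5.0, bits=8):
    """Smallest voltage difference one conversion step can resolve."""
    return vref / (2 ** bits)


# Compare common converter widths: 8-bit ~0.0195 V per unit,
# 10-bit ~0.00488 V, 16-bit ~0.0000763 V.
resolutions = {bits: volts_per_unit(bits=bits) for bits in (8, 10, 16)}
```

The finer the step, the smaller the change in sensor voltage that the Microcontroller can distinguish.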

CdS Cells

Cadmium-Sulfide is an interesting compound: its resistance changes readily when exposed to light energy. This is useful for measuring the intensity of light. Note that as the light increases, the resistance decreases; the more light, the lower the resistance.

CdS cell wiring diagram

The CdS cell, shown as P1 in the schematic to the left, has a resistance of 10k in average operating light. You should test the CdS cell that you are planning to use to determine its average value. I have chosen R1 to have the value of 10k based on this. Using the resistor divider equation, I know that the voltage going to the A/D port, in average light, will be:

V = Vcc * P1 / (P1 + R1) = 5.0 * (10k / (10k + 10k)) = 2.5 volts

Using the 0.0195 voltage value from above, the A/D port should read around 128 in average light. Choosing the value for R1 based on the average reading for the CdS cell centers the 'average' reading at half of Vcc; by setting the values close to each other, the average value will be halfway through the range of possible values. Doing so allows you to have maximum range on your sensor.

Now assume that it is bright enough for P1 to be only 2k:

Vcc * (2k / (2k + 10k)) = 0.83 volts

I would expect the A/D conversion result to be approximately 42. In summary, as the light gets brighter, the value from the A/D drops!

Potentiometers

An often overlooked but extremely useful sensor is the good old POT. Potentiometers are especially useful for making angular measurements, and they are great for determining the angles of a robot arm, since most pots only turn approximately 270 degrees or so.
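The divider arithmetic in this section can be checked with a few lines of Python. This is a sketch of the article's equations only (the helper names are my own), not a circuit simulator:

```python
def divider_voltage(vcc, p1, r1):
    """Voltage at the A/D port for the CdS divider: V = Vcc * P1 / (P1 + R1)."""
    return vcc * p1 / (p1 + r1)


def expected_reading(voltage, vref=5.0, bits=8):
    """Approximate conversion value for that voltage on an 8-bit converter."""
    return min(int(voltage / (vref / 2 ** bits)), 2 ** bits - 1)


average = divider_voltage(5.0, 10e3, 10e3)  # CdS at 10k in average light
bright = divider_voltage(5.0, 2e3, 10e3)    # CdS drops to 2k in bright light
```

Here average comes out to 2.5 V (a reading near 128) and bright to about 0.83 V (a reading near 42), so brighter light produces a lower value, just as the article notes.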

Actually, a Potentiometer is a resistive sensor, and there is an easy mathematical relationship between the angular position and the resistance.

Potentiometer Sensor wiring diagram

As you can see by comparing the schematic on the right with the CdS schematic, the circuit works just like the CdS example. Almost all resistive sensors are wired in a similar fashion: the key is to make the resistive sensor part of a voltage divider. A few things to point out. It is important that the POT be connected to both Vcc and GND; otherwise, the divider network is broken and will not function properly.

Notice in this circuit the current-limiting resistor R3. This resistor is there to handle the case when the sweep on the POT is turned all the way to the 'top' position. Without it, a large amount of current could flow if the output was accidentally connected to the wrong port, or if the A/D port on your Microcontroller was bi-directional.

Using the values in the shown schematic, you can calculate what voltage ranges the pot will allow. With the sweep all the way to the 'top', the value for R2 at the sweep is 10k. The voltage drop across R2 = Vcc * (R2 / (R2 + R3)) = 5.0 * (10k / (10k + 330)) = 4.84 volts. Thus, the highest digital value will be 4.84 / 0.0195 = 248; actually, it will be 247, since the A/D conversions are zero-based. The lowest value should be zero, since with the sweep all the way to the bottom, the A/D port will be connected to GND. Thus, the limiting resistor has reduced the useful range of the POT. To increase the range, you can increase the value of R2. For example, using a 100K pot means 5.0 * (100k / (100k + 330)) = 4.98 volts, and 4.98 / 0.0195 = 255, which will be 254 when adjusted for the zero-based conversion.

You also want to ensure your POT is large enough not to allow too much current to flow. A POT with a resistance of > 1k should be fine, since the amount of current consumed by the circuit is extremely low; a POT with > 100k of resistance is also a good choice.

There are two types of potentiometers on the market: Audio and Linear. An 'Audio Taper' or 'Audio' pot changes its value on a logarithmic scale; these are not well suited as positional sensors. A Linear pot changes its value at a linear rate.

Useful Digital Sensors
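The effect of the current-limiting resistor on the pot's usable range can be sketched the same way. Again this is just the article's divider arithmetic with hypothetical helper names, using a floor-based conversion:

```python
def pot_top_voltage(vcc, r_pot, r_limit):
    """Divider output with the sweep at the 'top': Vcc * R2 / (R2 + R3)."""
    return vcc * r_pot / (r_pot + r_limit)


def top_code(vcc, r_pot, r_limit, bits=8):
    """Highest conversion value the pot can reach through the limiting resistor."""
    step = vcc / (2 ** bits)
    return min(int(pot_top_voltage(vcc, r_pot, r_limit) / step), 2 ** bits - 1)
```

With a 10k pot and a 330 ohm limiter, the highest reading works out to 247 instead of 255, matching the article; by this floor-based arithmetic, a 100k pot reaches the top of the range, and with no limiter at all the full 0 to 255 span is available.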

There are many different types of digital input sensors, and many of them are wired in the same form.

Switches

One of the most basic of all sensors is a simple switch. Switches are used in bumper sensors, to detect limits of motion, for user input, and a whole host of other things. Switches come in two types: normally open (NO) and normally closed (NC). Many microswitch designs actually have one common terminal and both a NO terminal and a NC terminal; if you have a switch that has three terminals, chances are this is the arrangement. I recommend using a NO switch to limit the amount of power consumed. With a 10k pull-up, the amount of current is small, but many switches can add up to some noticeable power.

The wiring diagram for a switch is very easy. Important points are to use a pull-up resistor, which forces the line high and doubles as a current-limiting resistor, and to limit the amount of current that can flow. In the event that your program accidentally switches the input port to an output port, having the current-limiting resistor will keep from frying your Microcontroller. If you have questions about pull-ups and current-limiting resistors, you might like to check out The Basics - Very Basic Circuits for more information about these subjects.

Infrared Detectors

Infra-red detection is a common thing to add to a robot. It allows the robot to determine when it has come in close proximity to an object without coming into physical contact. A typical way of detecting infra-red is to use a Sharp G1U52X module. To learn more about this, check out Implementing Infrared Object Detection.
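The pull-up reasoning can be made concrete with Ohm's law. A small Python illustration (assuming a 5 V supply; the function names are my own):

```python
def pullup_current_ma(vcc=5.0, r_pullup=10e3):
    """Current that flows to ground while a NO switch is held closed (Ohm's law)."""
    return vcc / r_pullup * 1000  # reported in milliamps


def switch_pressed(pin_level):
    """With a pull-up, the line idles high (1); a closed switch pulls it to 0."""
    return pin_level == 0
```

A 10k pull-up at 5 V passes only 0.5 mA per closed switch, which is small on its own, though as noted above many switches can add up. Note also that the logic is inverted: a pressed switch reads low.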

Wiring a Sharp IR detector

The basic wiring diagram for the Sharp module is shown on the right. The connections are to power, ground, and the output signal. The output from the Sharp detector is a digital signal, and R5 acts as a pull-up resistor, similar to other digital inputs. Capacitor C1 acts as a bypass capacitor. Another unusual connection is between ground and the case: the module expects the case to be grounded, so be sure to make an electrical connection between ground and the case by soldering a wire directly to the metal housing. Most of the Sharp modules are intended to be mounted on a circuit board.

Summary

Sensors usually output one of two types of signal: an Analog signal or a Discrete signal. Microcontrollers usually deal with discrete or digital signals, so you need to be aware of what type of output a sensor provides. Many Microcontrollers have A/D converters built in; an Analog to Digital converter allows the output of an analog device to be used by a Microcontroller. Interfacing sensors is fairly straightforward, but current-limiting resistors are important in interfacing Microcontrollers to sensors, and you need to be careful not to create a path that allows too much current to flow.

