Control Engineering: Control engineering, or control systems engineering, is the engineering discipline that applies control theory to design systems with predictable behaviors. The practice uses sensors to measure the output performance of the device being controlled (often a vehicle), and those measurements can be used to give feedback to the input actuators, which can make corrections toward the desired performance. When a device is designed to perform without the need for human input for correction, it is called automatic control (such as cruise control for regulating a car's speed). Multi-disciplinary in nature, control systems engineering activities focus on the implementation of control systems, mainly derived from mathematical modeling of a diverse range of systems.

Control Theory: Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems. The desired output of a system is called the reference. When one or more output variables of a system need to follow a certain reference over time, a controller manipulates the inputs to the system to obtain the desired effect on its output. The usual objective of control theory is to calculate the proper corrective action from the controller that results in system stability; that is, the system will hold the set point and not oscillate around it. The input and output of the system are related to each other by what is known as a transfer function (also known as the system function or network function). The transfer function is a mathematical representation, in terms of spatial or temporal frequency, of the relation between the input and output of a linear time-invariant system. Extensive use is usually made of a diagrammatic style known as the block diagram.

To describe how a control system works: the output of the system y(t) is fed back through a sensor measurement F and compared to the reference value r(t). The controller C then takes the error e (the difference between the reference and the measured output) to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop or feedback controller. A system with one input and one output is called a single-input single-output (SISO) control system; MIMO (multi-input multi-output) systems, with more than one input and output, are also common. In such cases, variables are represented by vectors instead of simple scalar values. For some distributed-parameter systems the vectors may be infinite-dimensional (typically functions).
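The loop just described can be sketched in a few lines of code. The following is a minimal, illustrative simulation (the plant model, gains, and function name are assumptions, not from the text): a purely proportional controller C drives a simple first-order plant P, with an ideal sensor F = 1 feeding the output back to the reference.

```python
# Minimal sketch of the closed-loop structure described above, assuming an
# ideal sensor (F = 1), a proportional controller C, and a first-order
# plant P with dy/dt = u - y. All constants are illustrative.

def simulate_closed_loop(r=1.0, Kp=2.0, steps=200, dt=0.05):
    """Drive the plant output y toward the reference r via feedback."""
    y = 0.0  # plant output y(t), initially at rest
    for _ in range(steps):
        e = r - y        # error: reference minus sensed output
        u = Kp * e       # controller C: proportional corrective action
        y += (u - y) * dt  # plant P: first-order lag, dy/dt = u - y
    return y

print(round(simulate_closed_loop(), 3))  # → 0.667
```

Note that the output settles at Kp·r/(1 + Kp) = 0.667 rather than at the reference 1.0: a proportional-only loop leaves a steady-state offset, which is one motivation for the integral action discussed under PID control below.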

If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e. the coefficients of their transfer functions C(s), P(s), and F(s) do not depend on time), the system above can be analysed using the Laplace transform on the variables.
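Under these assumptions, the closed-loop transfer function follows directly from the loop equations (this is the standard result for the negative-feedback configuration described above):

```latex
% Loop equations in the Laplace domain, then the closed-loop transfer function.
\begin{aligned}
Y(s) &= P(s)\,C(s)\,E(s), \qquad E(s) = R(s) - F(s)\,Y(s) \\[4pt]
\Rightarrow\quad H(s) &= \frac{Y(s)}{R(s)} = \frac{P(s)\,C(s)}{1 + F(s)\,P(s)\,C(s)}
\end{aligned}
```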

PID Controller: A proportional-integral-derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems; a PID is the most commonly used feedback controller. A PID controller calculates an "error" value as the difference between a measured process variable and a desired setpoint. The controller attempts to minimize the error by adjusting the process control inputs.

The PID controller calculation (algorithm) involves three separate constant parameters, and is accordingly sometimes called three-term control: the proportional, the integral and the derivative values, denoted P, I, and D. Some applications may require using only one or two of these actions to provide the appropriate system control; this is achieved by setting the other parameters to zero. A PID controller is called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are fairly common in power plant control systems, since derivative action is sensitive to measurement noise, whereas the absence of an integral term may prevent the system from reaching its target value (a steady-state offset remains). Heuristically, the P, I and D values can be interpreted in terms of time: P depends on the present error, I on the accumulation of past errors, and D is a prediction of future errors, based on the current rate of change. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve, or the power supplied to a heating element.

Proportional Term: (Figure: plot of PV vs. time for three values of Kp, with Ki and Kd held constant.) The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain. The proportional term is given by:

Pout = Kp e(t)

Integral Term: The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The accumulated error is multiplied by the integral gain Ki and added to the controller output. The integral term is given by:

Iout = Ki ∫₀ᵗ e(τ) dτ

Derivative Term: The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain Kd. The derivative term is given by:

Dout = Kd de(t)/dt
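The three terms above can be combined into a short discrete-time PID sketch. This is an illustrative implementation (the class name, gains, setpoint, and plant model are assumptions): the integral is accumulated as a running sum and the derivative is approximated by the slope between successive errors.

```python
# Minimal discrete-time PID controller combining the three terms defined
# above: output = Pout + Iout + Dout. Gains and setpoint are illustrative.

class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0    # accumulation of past errors (I term)
        self.prev_error = 0.0  # previous error, for the slope (D term)

    def update(self, setpoint, measurement):
        e = setpoint - measurement                    # present error e(t)
        self.integral += e * self.dt                  # approximate integral of e
        derivative = (e - self.prev_error) / self.dt  # approximate de/dt
        self.prev_error = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * derivative

# Drive a first-order process toward a setpoint of 1.0:
pid = PID(Kp=2.0, Ki=1.0, Kd=0.1, dt=0.05)
y = 0.0
for _ in range(400):
    u = pid.update(1.0, y)
    y += (u - y) * 0.05  # simple first-order plant, dy/dt = u - y
print(round(y, 3))
```

Thanks to the integral term, the output converges to the setpoint itself with no steady-state offset. In practice the derivative is often computed on the measurement rather than the error, to avoid a large "kick" when the setpoint changes abruptly.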

Process Control System: Process control is a statistics and engineering discipline that deals with architectures, mechanisms and algorithms for maintaining the output of a specific process within a desired range. Process control is extensively used in power plants and many other industries. Process control enables automation, with which a small staff of operating personnel can operate a complex process from a central control room.

Types of control systems: In practice, process control systems can be characterized as one or more of the following forms:

1) Discrete: Found in many manufacturing, motion and packaging applications. Robotic assembly, such as that found in automotive production, can be characterized as discrete process control. Most discrete manufacturing involves the production of discrete pieces of product, such as metal stamping.

2) Batch: Some applications require that specific quantities of raw materials be combined in specific ways for particular durations to produce an intermediate or end result. One example is the production of adhesives and glues, which normally requires the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).

3) Continuous: Often, a physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds).

Applications having elements of discrete, batch and continuous process control are often called hybrid applications.
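The heating-jacket example above, maintaining a process output within a desired range, can be sketched with one of the simplest control schemes, on/off (hysteresis) control. Everything here is illustrative (the temperature band, heat-input and heat-loss constants are assumptions, not from the text):

```python
# Illustrative sketch: keep a heating-jacket water temperature within a
# desired band using on/off (hysteresis) control. All constants are assumed.

def thermostat_step(temp, heater_on, low=58.0, high=62.0):
    """Switch the heater to hold temp between low and high (deg C)."""
    if temp < low:
        heater_on = True
    elif temp > high:
        heater_on = False
    return heater_on

temp, heater_on = 20.0, False  # start at ambient temperature
history = []
for _ in range(2000):
    heater_on = thermostat_step(temp, heater_on)
    heating = 1.5 if heater_on else 0.0       # heat input per step when on
    temp += heating - 0.02 * (temp - 20.0)    # heat loss toward 20 deg ambient
    history.append(temp)

# After the initial warm-up, the temperature cycles around the 58-62 band.
print(round(min(history[500:]), 1), round(max(history[500:]), 1))
```

The output oscillates slightly beyond the band edges (the cost of on/off control); a PID controller as described above would hold the temperature at a single setpoint instead.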

Automatic Control System: An automatic control system is the application of concepts derived from the research area of modern control theory. Its implementation requires prior analysis and modeling of the subject to be controlled. Automatic control covers all kinds of technical implementations that allow systems to save energy and to prevent self-destruction. The systems studied within automatic control design are mostly complex systems; for ease of modeling, they are partially reduced, under the operational conditions of interest, to somewhat simplified or partially linear systems. Designing a system with features of automatic control generally requires feeding in, e.g., electrical and/or mechanical energy to enhance the dynamic features of an otherwise sluggish, variant, or even errant system. The control is applied with a controller, i.e. a computer regulating the energy feed.

Components of Automatic Control System: The components of an automatic control system are:

1. Sensor: A sensor (also called a detector) is a device that measures a physical quantity and converts it into a signal which can be read by an observer or by an instrument.

2. Controller: In control theory, a controller is a device which monitors and affects the operational conditions of a given dynamical system. The operational conditions are typically referred to as output variables of the system, which can be affected by adjusting certain input variables.

3. Actuator: An actuator is a type of motor for moving or controlling a mechanism or system. It is operated by a source of energy, usually in the form of an electric current, hydraulic fluid pressure or pneumatic pressure, and converts that energy into some kind of motion. An actuator is the mechanism by which an agent acts upon an environment; the agent can be either an artificial intelligent agent or any other autonomous being.
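The three components listed above can be wired together into one loop, as a hypothetical sketch (all class names, gains, and the plant model are illustrative assumptions): the sensor reads the plant, the controller computes a correction, and the actuator converts that command into motion.

```python
# Hypothetical sketch wiring the three components into one control loop.
# All names and constants are illustrative, not from the text.

class Sensor:
    def read(self, plant_state):
        return plant_state  # ideal sensor: measurement equals true state

class Controller:
    def __init__(self, Kp, setpoint):
        self.Kp, self.setpoint = Kp, setpoint
    def compute(self, measurement):
        return self.Kp * (self.setpoint - measurement)  # proportional action

class Actuator:
    def apply(self, plant_state, command, dt=0.1):
        # convert the command into motion (first-order plant response)
        return plant_state + (command - plant_state) * dt

sensor = Sensor()
controller = Controller(Kp=2.0, setpoint=5.0)
actuator = Actuator()

state = 0.0
for _ in range(500):
    measurement = sensor.read(state)      # 1. sensor measures the system
    command = controller.compute(measurement)  # 2. controller decides
    state = actuator.apply(state, command)     # 3. actuator acts on it
print(round(state, 3))  # → 3.333
```

As with any proportional-only loop, the state settles at Kp·setpoint/(1 + Kp) rather than at the setpoint itself; swapping in the PID controller sketched earlier would remove that offset.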