Ans1A
A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and is used to refer to many different kinds of systems. The one common theme that unites all knowledge-based systems is an attempt to represent knowledge explicitly, via tools such as ontologies and rules, rather than implicitly via code the way a conventional computer program does. A knowledge-based system has two types of sub-systems: a knowledge base and an inference engine. The knowledge base represents facts about the world, often in some form of subsumption ontology. The inference engine applies logical rules, usually expressed as IF-THEN rules, to the knowledge base to derive new facts. [1]
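To make the architecture concrete, here is a minimal forward-chaining sketch in Python (illustrative only; the facts and rules are invented, and real systems use much richer representations):

facts = {"has_fever", "has_rash"}

# Each rule pairs a set of IF-conditions with a THEN-conclusion.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

changed = True
while changed:  # keep applying rules until no new facts are derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # the inference engine adds the derived fact
            changed = True

print(facts)  # now includes 'suspect_measles' and 'recommend_specialist'

Note how the rules are data, separate from the fixed inference loop; that separation is exactly the knowledge base/inference engine split described above.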
Knowledge-based systems were first developed by artificial intelligence researchers. These early knowledge-based systems were primarily expert systems; in fact, the term is often used synonymously with expert systems. The difference is in the view taken to describe the system. Expert system refers to the type of task the system is trying to solve: to replace or aid a human expert in a complex task. Knowledge-based system refers to the architecture of the system: that it represents knowledge explicitly rather than as procedural code. While the earliest knowledge-based systems were almost all expert systems, the same tools and architectures can be, and have since been, used for a whole host of other types of systems. That is, virtually all expert systems are knowledge-based systems, but many knowledge-based systems are not expert systems.
The first knowledge-based systems were rule-based expert systems. One of the most famous was MYCIN, a program for medical diagnosis. These early expert systems represented facts about the world as simple assertions in a flat database and used rules to reason about, and as a result add to, these assertions. Representing knowledge explicitly via rules had several advantages:
1. Acquisition and maintenance. Using rules meant that domain experts could often define and maintain the rules themselves rather than going through a programmer.
2. Explanation. Representing knowledge explicitly allowed systems to reason about how they came to a conclusion and use this information to explain results to users: for example, to follow the chain of inferences that led to a diagnosis and use these facts to explain the diagnosis.
3. Reasoning. Decoupling the knowledge from the processing of that knowledge enabled general-purpose inference engines to be developed. These systems could develop conclusions that followed from a data set that the initial developers may not have even been aware of.[2]
As knowledge-based systems became more complex, the techniques used to represent the knowledge base became more sophisticated. Rather than representing facts as assertions about data, the knowledge base became more structured, representing information using techniques similar to object-oriented programming, such as hierarchies of classes and subclasses, relations between classes, and behavior of objects. As the knowledge base became more structured, reasoning could occur both by independent rules and by interactions within the knowledge base itself. For example, procedures stored as demons on objects could fire and could replicate the chaining behavior of rules. [3]
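As a rough illustration of the demon idea (a toy Python sketch, not any real frame system; the class and the fever threshold are invented):

class Patient:
    # Toy frame-like object: a 'demon' procedure fires whenever a slot is set.
    def __init__(self):
        self.alerts = []

    def __setattr__(self, slot, value):
        object.__setattr__(self, slot, value)  # store the slot value
        if slot == "temperature" and value > 38.0:
            self.alerts.append("fever")  # the demon reacts to the slot change

p = Patient()
p.temperature = 39.2
print(p.alerts)  # ['fever'] -- derived without any separate rule firing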
Another advancement was the development of special-purpose automated reasoning systems called classifiers. Rather than statically declaring the subsumption relations in a knowledge base, a classifier allows the developer to simply declare facts about the world and let the classifier deduce the relations. In this way a classifier can also play the role of an inference engine.[4]
The most recent advancement of knowledge-based systems has been to adopt these technologies for the development of systems that use the Internet. Internet systems often have to deal with complex, unstructured data that cannot be relied on to fit a specific data model. The technology of knowledge-based systems, and especially the ability to classify objects on demand, is ideal for such systems. The model for these kinds of knowledge-based Internet systems is known as the Semantic Web. [5]

Ans1B
Supervised learning is the machine learning task of inferring a function from labeled training data.[1] The training
data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm
analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal
scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning
algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias).
The parallel task in human and animal psychology is often referred to as concept learning.
In order to solve a given problem of supervised learning, one has to perform the following steps:
1. Determine the type of training examples. Before doing anything else, the user should decide what kind of data is
to be used as a training set. In the case of handwriting analysis, for example, this might be a single handwritten
character, an entire handwritten word, or an entire line of handwriting.
2. Gather a training set. The training set needs to be representative of the real-world use of the function. Thus, a set
of input objects is gathered and corresponding outputs are also gathered, either from human experts or from
measurements.
3. Determine the input feature representation of the learned function. The accuracy of the learned function depends
strongly on how the input object is represented. Typically, the input object is transformed into a feature vector,
which contains a number of features that are descriptive of the object. The number of features should not be too
large, because of the curse of dimensionality; but should contain enough information to accurately predict the
output.
4. Determine the structure of the learned function and corresponding learning algorithm. For example, the engineer
may choose to use support vector machines or decision trees.
5. Complete the design. Run the learning algorithm on the gathered training set. Some supervised learning
algorithms require the user to determine certain control parameters. These parameters may be adjusted by
optimizing performance on a subset (called a validation set) of the training set, or via cross-validation.
6. Evaluate the accuracy of the learned function. After parameter adjustment and learning, the performance of the
resulting function should be measured on a test set that is separate from the training set.
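As a hedged sketch of steps 4 through 6 in Python with scikit-learn (assuming scikit-learn is installed; the dataset and the choice of a decision tree are purely illustrative):

# Illustrative: steps 4-6 above, using scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # input feature vectors and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)  # hold out a separate test set

model = DecisionTreeClassifier(max_depth=10)  # step 4: choose the structure
model.fit(X_train, y_train)  # step 5: run the learning algorithm
print(accuracy_score(y_test, model.predict(X_test)))  # step 6: evaluate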
A wide range of supervised learning algorithms is available, each with its strengths and weaknesses. There is no single
learning algorithm that works best on all supervised learning problems (see the No free lunch theorem).
There are four major issues to consider in supervised learning: the bias-variance tradeoff, the complexity of the true function relative to the amount of training data, the dimensionality of the input space, and noise in the output values.

Answer 2A
In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) describes
the probability of an event, based on conditions that might be related to the event. For example, suppose one is interested
in whether a woman has cancer, and that she is 65. If cancer is related to age, information about her age can be used to
more accurately assess the probability of her having cancer using Bayes' Theorem.

When applied, the probabilities involved in Bayes' theorem may have different probability interpretations. In one of these
interpretations, the theorem is used directly as part of a particular approach to statistical inference. With the Bayesian
probability interpretation the theorem expresses how a subjective degree of belief should rationally change to account for
evidence: this is Bayesian inference, which is fundamental to Bayesian statistics. However, Bayes' theorem has
applications in a wide range of calculations involving probabilities, not just in Bayesian inference.
Bayes' theorem is named after Rev. Thomas Bayes (1701-1761), who first showed how to use new evidence to update beliefs. It was further developed by Pierre-Simon Laplace, who first published the modern formulation in his 1812 Théorie analytique des probabilités. Sir Harold Jeffreys put Bayes' algorithm and Laplace's formulation on an axiomatic basis. Jeffreys wrote that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry".
Bayes' theorem is stated mathematically as the following equation: [2]

P(A | B) = P(B | A) P(A) / P(B)

where A and B are events and P(B) is not zero.

P(A) and P(B) are the probabilities of A and B without regard to each other.

P(A | B), a conditional probability, is the probability of observing event A given that B is true.

P(B | A) is the probability of observing event B given that A is true.
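A small worked example in Python, echoing the cancer/age example above (the numbers are made up purely for illustration):

p_cancer = 0.01  # P(A): hypothetical prior probability of cancer
p_age_given_cancer = 0.30  # P(B | A): probability a patient with cancer is 65
p_age = 0.05  # P(B): probability that any patient is 65

# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
p_cancer_given_age = p_age_given_cancer * p_cancer / p_age
print(p_cancer_given_age)  # 0.06: the 1% prior is revised upward to 6%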

Ans2B
Reason is the capacity for consciously making sense of things, applying logic, establishing and verifying facts, and
changing or justifying practices, institutions, and beliefs based on new or existing information.[1] It is closely associated with
such characteristically human activities as philosophy, science, language, mathematics, and art and is normally
considered to be a definitive characteristic of human nature.[2] The concept of reason is sometimes referred to
as rationality and sometimes as discursive reason, in opposition to intuitive reason.[3]
Reason or "reasoning" is associated with thinking, cognition, and intellect. Reason, like habit or intuition, is one of the
ways by which thinking comes from one idea to a related idea. For example, it is the means by which rational beings
understand themselves to think about cause and effect, truth and falsehood, and what is good or bad. It is also closely
identified with the ability to self-consciously change beliefs, attitudes, traditions, and institutions, and therefore with the
capacity for freedom and self-determination.[4]
In contrast to reason as an abstract noun, a reason is a consideration which explains or justifies some event,
phenomenon or behaviour.[5] The field of logic studies ways in which human beings reason through argument.[6]
Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and
neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated
reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question
of whether animals other than humans can reason.
A non-monotonic logic is a formal logic whose consequence relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences (cf. defeasible reasoning), i.e., a kind of inference in which reasoners draw tentative conclusions, enabling reasoners to retract their conclusion(s) based on further evidence.[1]
Most studied formal logics have a monotonic consequence relation, meaning that adding a formula to a theory never produces a reduction of its set of consequences. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. A monotonic logic cannot handle various reasoning tasks such as reasoning by default (consequences may be derived only because of lack of evidence to the contrary), abductive reasoning (consequences are only deduced as most likely explanations), some important approaches to reasoning about knowledge (the ignorance of a consequence must be retracted when the consequence becomes known), and, similarly, belief revision (new knowledge may contradict old beliefs). For example, from "birds typically fly" and "Tweety is a bird" one tentatively concludes that Tweety flies, and retracts that conclusion on learning that Tweety is a penguin.
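The Tweety example can be sketched in a few lines of Python (a toy illustration; real non-monotonic logics are far more general):

# Default rule: a bird flies unless an exception (penguin) is known.
def conclusions(facts):
    concluded = set(facts)
    if "bird" in facts and "penguin" not in facts:
        concluded.add("flies")  # derived only from lack of contrary evidence
    return concluded

print(conclusions({"bird"}))             # {'bird', 'flies'}
print(conclusions({"bird", "penguin"}))  # adding a fact retracts 'flies'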
Ans4A
Knowledge acquisition is the process used to define the rules and ontologies required for a knowledge-based system. The phrase was first used in conjunction with expert systems to describe the initial tasks associated with developing an expert system, namely finding and interviewing domain experts and capturing their knowledge via rules, objects, and frame-based ontologies.
Expert systems were one of the first successful applications of artificial intelligence technology to real world business
problems.[1] Researchers at Stanford and other AI laboratories worked with doctors and other highly skilled experts to
develop systems that could automate complex tasks such as medical diagnosis. Until this point computers had mostly
been used to automate highly data intensive tasks but not for complex reasoning. Technologies such as inference
engines allowed developers for the first time to tackle more complex problems. [2][3]
As expert systems scaled up from demonstration prototypes to industrial-strength applications, it was soon realized that the acquisition of domain expert knowledge was one of the most critical tasks, if not the most critical task, in the knowledge engineering process. This knowledge acquisition process became an intense area of research on its own.
One approach to knowledge acquisition that was investigated was to use natural language parsing and generation to facilitate knowledge acquisition. Natural language parsing could be performed on manuals and other expert documents, and an initial pass at the rules and objects could be developed automatically. Text generation was also extremely useful in generating explanations for system behavior, which greatly facilitated the development and maintenance of expert systems. [4]

Ans4B
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study which studies how to create computers and computer software that are capable of intelligent behavior. Major AI researchers and textbooks define this field as "the study and design of intelligent agents",[1] in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2]
John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with
each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular
institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields
focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a
particular tool or towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language
processing (communication), perception and the ability to move and manipulate objects.[6] General intelligence is still
among the field's long-term goals.[7] Currently popular approaches include statistical methods, computational
intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and
mathematical optimization, logic, methods based on probability and economics, and many others. The AI field is
interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology.
The field was founded on the claim that a central property of humans, human intelligence (the sapience of Homo sapiens), "can be so precisely described that a machine can be made to simulate it."[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.[11] Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science. [12]

Ans6A
Means-Ends Analysis[1] (MEA) is a problem-solving technique used commonly in Artificial Intelligence (AI) for limiting search in AI programs.
It is also a technique used at least since the 1950s as a creativity tool, most frequently mentioned in engineering books on
design methods. MEA is also related to Means-Ends Chain Approach used commonly in consumer behavior analysis. [2] It
is also a way to clarify one's thoughts when embarking on a mathematical proof.

Problem-solving as search
An important aspect of intelligent behavior as studied in AI is goal-based problem solving, a framework in which the
solution of a problem can be described by finding a sequence of actions that lead to a desirable goal. A goal-seeking
system is supposed to be connected to its outside environment by sensory channels through which it receives information
about the environment and motor channels through which it acts on the environment. (The term "afferent" is used to
describe "inward" sensory flows, and "efferent" is used to describe "outward" motor commands.) In addition, the system
has some means of storing in a memory information about the state of the environment (afferent information) and
information about actions (efferent information). Ability to attain goals depends on building up associations, simple or
complex, between particular changes in states and particular actions that will bring these changes about. Search is the
process of discovery and assembly of sequences of actions that will lead from a given state to a desired state. While this
strategy may be appropriate for machine learning and problem solving, it is not always suggested for humans
(e.g. cognitive load theory and its implications).

How MEA works
The MEA technique is a strategy to control search in problem-solving. Given a current state and a goal state, an action is
chosen which will reduce the difference between the two. The action is performed on the current state to produce a new
state, and the process is recursively applied to this new state and the goal state.
Note that, in order for MEA to be effective, the goal-seeking system must have a means of associating to any kind of
detectable difference those actions that are relevant to reducing that difference. It must also have means for detecting the
progress it is making (the changes in the differences between the actual and the desired state), as some attempted
sequences of actions may fail and, hence, some alternate sequences may be tried.
When knowledge is available concerning the importance of differences, the most important difference is selected first to
further improve the average performance of MEA over other brute-force search strategies. However, even without the
ordering of differences according to importance, MEA improves over other search heuristics (again in the average case)
by focusing the problem solving on the actual differences between the current state and that of the goal.
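A minimal MEA loop can be sketched in Python (illustrative only: the state, goal, difference measure, and operator table below are all invented):

# Invented toy problem: drive an integer state toward a goal value.
operators = {
    "far_below": lambda s: s + 10,  # big step up
    "just_below": lambda s: s + 1,  # small step up
    "above": lambda s: s - 1,       # small step down
}

def difference(state, goal):
    # Classify the current difference; None means the goal is reached.
    if state < goal - 5:
        return "far_below"
    if state < goal:
        return "just_below"
    if state > goal:
        return "above"
    return None

def means_ends(state, goal):
    # Repeatedly pick the operator associated with the observed difference.
    while (d := difference(state, goal)) is not None:
        state = operators[d](state)
    return state

print(means_ends(3, 27))  # 27

The dictionary mapping differences to operators plays the role that the table of connections played in GPS, described below.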

Some AI systems using MEA
The MEA technique as a problem-solving strategy was first introduced in 1961 by Allen Newell and Herbert A. Simon in
their computer problem-solving program General Problem Solver (GPS).[3][4] In that implementation, the correspondence
between differences and actions, also called operators, is provided a priori as knowledge in the system. (In GPS this knowledge was in the form of a table of connections.)
When the action and side-effects of applying an operator are penetrable, the search may select the relevant operators by inspection of the operators and do without a table of connections. This latter case, of which the canonical example is STRIPS, an automated planning computer program, allows task-independent correlation of differences to the operators which reduce them.
Prodigy, a problem solver developed in a larger learning-assisted automated planning project started at Carnegie Mellon
University by Jaime Carbonell, Steven Minton and Craig Knoblock, is another system that used MEA.
Professor Morten Lind, at the Technical University of Denmark, has developed a tool called multilevel flow modeling (MFM). It performs means-ends based diagnostic reasoning for industrial control and automation systems.

Ans6B
Fuzzy logic is not as fuzzy as you might think: it has been working quietly behind the scenes for more than 20 years, in more places than most admit. Fuzzy logic is a rule-based system that can rely on the practical experience of an operator; it is particularly useful for capturing experienced operator knowledge. Fuzzy logic is a form of artificial intelligence software; since it performs a form of decision making, it can be loosely included as a member of the AI software toolkit. Here's what you need to know to consider using fuzzy logic to help solve your next application.
Fuzzy logic has been around since the mid-1960s; however, it was not until the 1970s that a practical application was demonstrated. Since that time the Japanese have traditionally been the largest producers of fuzzy logic applications. Fuzzy logic has appeared in cameras, washing machines, and even in stock trading applications. In the last decade the United States has started to catch on to the use of fuzzy logic. Many applications use fuzzy logic but fail to tell us of its use, probably because the term fuzzy logic may have a negative connotation.
Fuzzy logic can be applied to non-engineering applications, as illustrated by the stock trading application. It has also been used in medical diagnosis systems and in handwriting recognition applications. In fact, a fuzzy logic system can be applied to almost any type of system that has inputs and outputs. Fuzzy logic systems are well suited to nonlinear systems and systems that have multiple inputs and multiple outputs; any reasonable number of inputs and outputs can be accommodated. Fuzzy logic also works well when the system cannot be modeled easily by conventional means.
Many engineers are afraid to dive into fuzzy logic due to a lack of understanding. Fuzzy logic does not have to be hard to
understand, even though the math behind it can be intimidating, especially to those of us who have not been in a math
class for many years.
Binary logic is either 1 or 0. Fuzzy logic is a continuum of values between 0 and 1. This may also be thought of as 0% to
100%. An example is the variable YOUNG. We may say that age 5 is 100% YOUNG, 18 is 50% YOUNG, and 30 is 0%
YOUNG. In the binary world everything below 18 would be 100% YOUNG, and everything above would be 0% YOUNG.
The design of a fuzzy logic system starts with a set of membership functions for each input and a set for each output. A
set of rules is then applied to the membership functions to yield a crisp output value.
For this process control explanation of fuzzy logic, TEMPERATURE is the input and FAN SPEED is the output. Create a
set of membership functions for each input. A membership function is simply a graphical representation of the fuzzy
variable sets. For this example, use three fuzzy sets, COLD, WARM, and HOT. We will then create a membership function
for each of three sets of temperature as shown in the cold-normal-hot graphic, Figure 1.

We will use three fuzzy sets for the output, SLOW, MEDIUM, and FAST. A set of functions is created for each output set
just as for the input sets.

It should be noted that the membership functions do not need to be triangles, as we have used in Figure 1 and Figure 2. Various shapes can be used, such as trapezoid, Gaussian, sigmoid, or user-definable shapes. By changing the shape of the membership functions, the user can tune the system to provide optimum response.
Now that we have our membership functions defined, we can create the rules that will define how the membership
functions will be applied to the final system. We will create three rules for this system.

If HOT then FAST

If WARM then MEDIUM

If COLD then SLOW

The rules are then applied to the membership functions to produce the crisp output value to drive the system. For
simplicity we will illustrate using only two input and two output functions. For an input value of 52 degrees we intersect the
membership functions. We see that in this example the intersection will be on both functions, thus two rules are applied.
The intersection points are extended to the output functions to produce an intersecting point. The output functions are
then truncated at the height of the intersecting points. The area under the curves for each membership function is then
added to give us a total area. The centroid of this area is calculated. The output value is then the centroid value. In this
example 44% is the output FAN SPEED value. This process is illustrated in Figure 3.

Figure 3

This is a very simple explanation of how fuzzy logic systems work. In a real working system there would be many inputs and possibly several outputs, resulting in a fairly complex set of functions and many more rules. It is not uncommon for a system to have 40 or more rules. Even so, the principles remain the same as in our simple system.
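The whole pipeline can be sketched in a few lines of Python (purely illustrative: the triangular breakpoints below are invented, and a weighted-average defuzzifier is used as a common simplification of the truncated-area centroid, so the result only happens to land near the 44% of Figure 3):

def tri(x, a, b, c):
    # Triangular membership: rises from a to b, falls from b to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    # Fuzzify the input against the COLD / WARM / HOT sets.
    cold = tri(temp, 0, 0.001, 60)  # invented breakpoints, degrees F
    warm = tri(temp, 40, 60, 80)
    hot = tri(temp, 60, 90, 120)
    # Rules: if COLD then SLOW, if WARM then MEDIUM, if HOT then FAST.
    centers = {"SLOW": 20.0, "MEDIUM": 50.0, "FAST": 90.0}  # % fan speed
    weights = {"SLOW": cold, "MEDIUM": warm, "FAST": hot}
    total = sum(weights.values())
    # Defuzzify: weighted average of the output set centers.
    return sum(centers[k] * weights[k] for k in weights) / total

print(round(fan_speed(52), 1))  # about 44.5 (% fan speed)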
National Instruments has included in LabVIEW a set of palette functions and a fuzzy system designer to greatly simplify the task of building a fuzzy logic system, along with several demo programs in the examples to get you started. In the graphical environment, the user can easily see the effects as the functions and rules are built and changed.
The user should remember that a fuzzy logic system is not a silver bullet for all control system needs. Traditional control methods are still very much a viable solution; in fact, they may be combined with fuzzy logic to produce a dynamically changing system. The validation of a fuzzy logic system can be difficult because it is a non-formal system, so its use in safety systems should be considered with care.
I hope this short article will inspire the exploration and use of fuzzy logic in some of your future designs. I encourage the
reader to do further study on the subject. There are numerous books and articles that go into much more detail. This
serves as a simple introduction to fuzzy logic controls.
- Norm Dingle is a senior system engineer with EMP Technical Group of Noblesville, IN. He holds a BSE from Purdue University.

ANS7a
Conflict resolution, otherwise known as reconciliation, is conceptualized as the methods and processes involved in facilitating the peaceful ending of conflict and retribution. Committed group members attempt to resolve group conflicts by actively communicating information about their conflicting motives or ideologies to the rest of the group (e.g., intentions; reasons for holding certain beliefs), and by engaging in collective negotiation.[1] Dimensions of resolution typically parallel the dimensions of conflict in the way the conflict is processed. Cognitive resolution is the way disputants understand and view the conflict: their beliefs, perspectives, understandings and attitudes. Emotional resolution is the way disputants feel about a conflict, the emotional energy. Behavioral resolution is how the disputants act, their behavior.[2] Ultimately, a wide range of methods and procedures for addressing conflict exist, including but not limited to negotiation, mediation, diplomacy, and creative peacebuilding.
The term conflict resolution may also be used interchangeably with dispute resolution, where arbitration and litigation processes are critically involved. Furthermore, the concept of conflict resolution can be thought to encompass the use of nonviolent resistance measures by conflicted parties in an attempt to promote effective resolution.[3] Conflict resolution as an academic field is relatively new: George Mason University in Fairfax, VA, was the first university to offer a PhD program in the field.

Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) dedicated to representing
information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a
medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from
psychology about how humans solve problems and represent knowledge in order to design formalisms that will make
complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings
from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.
Examples of knowledge representation formalisms include semantic nets, systems architecture, Frames, Rules,
and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.

ANS7b
A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
It is a medium of human expression, i.e., a language in which we say things about the world.

Understanding the roles and acknowledging their diversity has several useful consequences. First, each role
requires something slightly different from a representation; each accordingly leads to an interesting and
different set of properties we want a representation to have.
Second, we believe the roles provide a framework useful for characterizing a wide variety of representations.
We suggest that the fundamental "mindset" of a representation can be captured by understanding how it views
each of the roles, and that doing so reveals essential similarities and differences.
Third, we believe that some previous disagreements about representation are usefully disentangled when all five
roles are given appropriate consideration. We demonstrate this by revisiting and dissecting the early arguments
concerning frames and logic.
Finally, we believe that viewing representations in this way has consequences for both research and practice.
For research, this view provides one direct answer to a question of fundamental significance in the field. It also
suggests adopting a broad perspective on what's important about a representation, and it makes the case that one
significant part of the representation endeavor--capturing and representing the richness of the natural world--is
receiving insufficient attention. We believe this view can also improve practice by reminding practitioners about
the inspirations that are the important sources of power for a variety of representations.

Terminology and Perspective


Two points of terminology will assist in our presentation. First, we use the term inference in a generic sense, to
mean any way to get new expressions from old. We are only rarely talking about sound logical inference and
when doing so refer to that explicitly.
Second, to give them a single collective name, we refer to the familiar set of basic representation tools like
logic, rules, frames, semantic nets, etc., as knowledge representation technologies.
It will also prove useful to take explicit note of the common practice of building knowledge representations in
multiple levels of languages, typically with one of the knowledge representation technologies at the bottom
level. Hayes' ontology of liquids [12], for example, is at one level a representation composed of concepts
like pieces of space, that have portals, faces, sides, etc. The language at the next more primitive (and as it
turns out, bottom) level is first order logic, where, for example, In(s1,s2) is a relation expressing that
space s1 is contained in s2.
This view is useful in part because it allows our analysis and discussion to concentrate largely on the KR
technologies. As the primitive representational level at the foundation of KR languages, they encounter all of
the issues central to knowledge representation of any variety. They are also useful exemplars because they are
widely familiar to the field and there is a substantial body of experience with them to draw on.

-------------------------------------------------- PAPER 2 --------------------------------------------------

ANS1
Ubuntu (like all UNIX-like systems) organizes files in a hierarchical tree, where relationships are thought of in terms of children and parents. Directories can contain other directories as well as regular files,
which are the "leaves" of the tree. Any element of the tree can be referenced by a path name; an absolute
path name starts with the character / (identifying the root directory, which contains all other directories
and files), then every child directory that must be traversed to reach the element is listed, each separated
by a / sign.
A relative path name is one that doesn't start with /; in that case, the directory tree is traversed starting
from a given point, which changes depending on context, called the current directory. In every directory,
there are two special directories called . and .., which refer respectively to the directory itself, and to its
parent directory.
The fact that all files and directories have a common root means that, even if several different storage
devices are present on the system, they are all seen as directories somewhere in the tree, once they
are mounted to the desired place.
File permissions are another important part of the file organization system: they are superimposed on the directory structure and assign permissions to each element of the tree, ultimately deciding by whom it can be accessed and how.
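For example (a shell transcript sketch; the user and directory names are invented):

$ cd /home/alice/docs    # absolute path: starts from the root directory /
$ cd ../music            # relative path: .. first climbs to /home/alice
$ pwd
/home/alice/music
$ ls -l notes.txt        # the leading mode field shows the permissions
-rw-r--r-- 1 alice alice 120 Jan 1 12:00 notes.txt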

Main directories
The standard Ubuntu directory structure mostly follows the Filesystem Hierarchy Standard, which can be
referred to for more detailed information.
Here, only the most important directories in the system will be presented.
/bin is a place for most commonly used terminal commands, like ls, mount, rm, etc.
/boot contains files needed to start up the system, including the Linux kernel, a RAM disk image
and bootloader configuration files.
/dev contains all device files, which are not regular files but instead refer to various hardware devices on
the system, including hard drives.
/etc contains system-global configuration files, which affect the system's behavior for all users.
/home home sweet home, this is the place for users' home directories.
/lib contains very important dynamic libraries and kernel modules.
/media is intended as a mount point for external devices, such as hard drives or removable media
(floppies, CDs, DVDs).
/mnt is also a place for mount points, but dedicated specifically to "temporarily mounted" devices, such as
network filesystems.
/opt can be used to store additional software for your system, which is not handled by the package
manager.
/proc is a virtual filesystem that provides a mechanism for kernel to send information to processes.
/root is the superuser's home directory, not in /home/ to allow for booting the system even if /home/ is not
available.
/run is a tmpfs (temporary file system) available early in the boot process where ephemeral run-time data
is stored. Files under this directory are removed or truncated at the beginning of the boot process.
(It deprecates various legacy locations such as /var/run, /var/lock, /lib/init/rw in otherwise non-ephemeral directory trees, as well as /dev/.* and /dev/shm, which are not device files.)
/sbin contains important administrative commands that should generally only be employed by
the superuser.
/srv can contain data directories of services such as HTTP (/srv/www/) or FTP.
/sys is a virtual filesystem that can be accessed to set or obtain information about the kernel's view of the
system.
/tmp is a place for temporary files used by applications.
/usr contains the majority of user utilities and applications, and partly replicates the root directory structure, containing for instance, among others, /usr/bin/ and /usr/lib.
/var is dedicated to variable data, such as logs, databases, websites, and temporary spool (e-mail etc.)
files that persist from one boot to the next. A notable directory it contains is /var/log where system log
files are kept.

Security through obscurity" may be a catchy phrase, but it's not the only thing that's catching
among Windows users.

The expression is intended to suggest that proprietary software is more secure by virtue of its
closed nature. If hackers can't see the code, then it's harder for them to create exploits for it--or
so the thinking goes.
Unfortunately for Windows users, that's just not true--as evidenced by the never-ending parade
of patches coming out of Redmond. In fact, one of Linux's many advantages over Windows is that it
is more secure--much more. For small businesses and other organizations without a dedicated
staff of security experts, that benefit can be particularly critical.
Five key factors underlie Linux's superior security:
1. Privileges
Linux systems are by no means infallible, but one of their key advantages lies in the way account
privileges are assigned. In Windows, users are generally given administrator access by default,
which means they pretty much have access to everything on the system, even its most crucial
parts. So, then, do viruses. It's like giving terrorists high-level government positions.
With Linux, on the other hand, users do not usually have such "root" privileges; rather, they're
typically given lower-level accounts. What that means is that even if a Linux system is
compromised, the virus won't have the root access it would need to do damage systemwide;
more likely, just the user's local files and programs would be affected. That can make the
difference between a minor annoyance and a major catastrophe in any business setting.
2. Social Engineering
Viruses and worms often spread by convincing computer users to do something they shouldn't,
like open attachments that carry viruses and worms. This is called social engineering, and it's all
too easy on Windows systems. Just send out an e-mail with a malicious attachment and a subject
line like, "Check out these adorable puppies!"--or the porn equivalent--and some proportion of
users is bound to click without thinking. The result? An open door for the attached malware, with
potentially disastrous consequences organizationwide.
Thanks to the fact that most Linux users don't have root access, however, it's much harder to
accomplish any real damage on a Linux system by getting them to do something foolish. Before
any real damage could occur, a Linux user would have to read the e-mail, save the attachment,
give it executable permissions and then run the executable. Not very likely, in other words.
3. The Monoculture Effect
However you want to argue the exact numbers, there's no doubt that Microsoft Windows still
dominates most of the computing world. In the realm of e-mail, so too do Outlook and Outlook
Express. And therein lies a problem: It's essentially a monoculture, which is no better in
technology than it is in the natural world. Just as genetic diversity is a good thing in the natural
world because it minimizes the deleterious effects of a deadly virus, so a diversity of computing
environments helps protect users.
Fortunately, a diversity of environments is yet another benefit that Linux offers. There's Ubuntu,
there's Debian, there's Gentoo, and there are many other distributions. There are also many
shells, many packaging systems, and many mail clients; Linux even runs on many architectures
beyond just Intel. So, whereas a virus can be targeted squarely at Windows users, since they all use pretty much the same technology, reaching more than a small fraction of Linux users is much more difficult. Who wouldn't want to give their company that extra layer of assurance?
4. Audience Size
Hand-in-hand with this monoculture effect comes the not particularly surprising fact that the
majority of viruses target Windows, and the desktops in your organization are no exception.
Millions of people all using the same software make an attractive target for malicious attacks.
5. How Many Eyeballs
"Linus' Law"--named for Linus Torvalds, the creator of Linux--holds that, "given enough eyeballs,
all bugs are shallow." What that means is that the larger the group of developers and testers
working on a set of code, the more likely any flaws will be caught and fixed quickly. This, in other
words, is essentially the polar opposite of the "security through obscurity" argument.
With Windows, it's a limited set of paid developers who are trying to find problems in the code.
They adhere to their own set timetables, and they don't generally tell anyone about the problems
until they've already created a solution, leaving the door open to exploits until that happens. Not
a very comforting thought for the businesses that depend on that technology.
In the Linux world, on the other hand, countless users can see the code at any time, making it
more likely that someone will find a flaw sooner rather than later. Not only that, but users can
even fix problems themselves. Microsoft may tout its large team of paid developers, but it's unlikely that team can compare with a global base of Linux user-developers. Security can only benefit through all those extra "eyeballs."
Once again, none of this is to say that Linux is impervious; no operating system is. And there are
definitely steps Linux users should take to make their systems as secure as possible, such as
enabling a firewall, minimizing the use of root privileges, and keeping the system up to date. For
extra peace of mind there are also virus scanners available for Linux, including ClamAV. These
are particularly good measures for small businesses, which likely have more at stake than
individual users do.
It's also worth noting that security firm Secunia recently declared that Apple products have more security vulnerabilities than any others, including Microsoft's.
Either way, however, when it comes to security, there's no doubt that Linux users have a lot less
to worry about.

ANS2a
In August 1997 Miguel de Icaza released a GUI under the name GNOME (GNU Network Object Model Environment). GNOME is designed to be a product that truly fits the Free Software Foundation's standards. GNOME is built from three main components:

1. The GNOME graphical desktop environment, which is easy to use.
2. The GNOME development platform, a collection of tools, libraries and components for building applications for Linux.
3. GNOME Office, a set of productivity applications for the office.

Why use GNOME? The following are a few reasons:

1. FREE. GNOME is the first project to provide a graphical working environment based completely on free software.
2. USER FRIENDLY. GNOME has always endeavored to be easy to use, even by beginners, and the GNOME Usability Project aims to further increase GNOME's ease of use.
3. CUTTING EDGE. GNOME always uses the latest technologies: CORBA for network transparency, XML throughout, and an implementation in the C language for speed and portability.
4. DEVELOPER FRIENDLY. Ease of use is not enough; GNOME also comes with an intuitive programming environment.
5. INTERNATIONAL. The GNOME developers are spread widely throughout the world, and you can also contribute. With the GNOME i18n features, you can work with any popular language, complete with documentation.
6. ACCESSIBLE. For those who are not able to use the standard features of GNOME, a project under the name of the GNOME Accessibility Project was developed to actively support the use of GNOME by anyone.

KDE stands for K Desktop Environment. It was released by Matthias Ettrich on October 14, 1996. KDE is a desktop environment and platform. KDE can be found on other systems such as Linux, BSD, and Solaris, and it can be used on Mac OS (with the help of the X11 layer) and on Windows (with the help of Cygwin). Beyond its beautiful look, KDE's advantages are its ease of use, flexibility, portability and wealth of features.

Differences between GNOME and KDE in terms of appearance and performance:

1. KDE puts more emphasis on its appearance, so KDE looks more beautiful and flexible than GNOME, and the display can be freely edited to match what we want. GNOME does not emphasize the display as much as KDE, although GNOME's appearance is also attractive.
2. GNOME puts more emphasis on memory (RAM) performance, so GNOME's performance is relatively faster than KDE's; one reason is precisely that GNOME does not emphasize the display as much. If you do not believe it, install the GNOME GUI (Ubuntu) and the KDE GUI (Kubuntu, Mandriva), and then feel the difference.

ANS2b
vi is a powerful editor with many features. It can handle very large files much more easily than a program like Microsoft Word. Unlike Word, vi is only a TEXT EDITOR and you cannot include graphics or fancy fonts in your file.
Although there are lots of different commands in vi, it is possible to get started using vi knowing only a small subset of them. As you feel more comfortable with vi, you can learn the more advanced features.
vi has 3 modes:
write mode, used for entering text
command mode and command-line mode, used for entering commands to vi
Remember, in vi, the mouse cannot be used to issue editor commands or move the cursor.
This is also similar to the GCG sequence editor SeqEd, which has a command mode and an editing mode. It's best to use programs like GCG's SeqLab Editor to edit the sequences instead of using vi or a word processor. If you do edit a GCG sequence, then run the program reformat before using the sequence.
Write Mode
When you first enter the editor, you are in the command mode. To enter the write mode, type the letter a for append. This is one of the four possible commands for entering the write mode. vi is case sensitive: lower case commands are different from upper case commands.
Command Mode
You are in command mode whenever you hit esc to leave the write mode. In command mode, you can move the cursor anywhere in the file. The x key deletes individual characters, while dd deletes an entire line. To enter the insert mode, type the i key. When you're done inserting, hit the "esc" key to return to the command mode.
Command-Line Mode
Command-line mode is used for such things as writing changes and exiting the editor. To enter command-line mode, type : while in command mode. The : will now appear at the bottom of the screen and the command which you type will appear on that line.
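Putting the three modes together, a minimal session might look like this (a sketch; the file name is arbitrary, and <esc> stands for the escape key):

$ vi notes.txt   # open (or create) the file; vi starts in command mode
i                # switch to write (insert) mode
...type some text...
<esc>            # back to command mode
dd               # delete the current line
:wq              # command-line mode: write the file and quit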


ANS3a
The X Window System (X11, X, and sometimes informally X-Windows) is a windowing system for bitmap displays, common on UNIX-like computer operating systems.
system for bitmap displays, common on UNIX-like computer operating systems.


X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface; this is handled by individual
programs. As such, the visual styling of X-based environments varies greatly; different programs may present
radically different interfaces.
X originated at the Massachusetts Institute of Technology (MIT) in 1984. The protocol version has been X11 since
September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org
Server, available as free and open source software under the MIT License and similar permissive licenses.

Simple example: the X server receives input from a local keyboard and mouse and displays to a screen. A web browser and a terminal emulator run on the user's workstation, and a terminal emulator runs on a remote computer but is controlled and monitored from the user's machine.

ANS3b
Most of our programming languages today are able to make decisions based on conditions we set. A condition is an expression that evaluates to a Boolean value, true or false. Any programmer can make his program smart based on the decision and logic he puts into his program. The bash shell supports if and switch (case) decision statements.
If statement
If is a statement that allows the programmer to make a decision in the program based on conditions he specified. If the condition is met, the program will execute certain lines of code; otherwise, the program will execute other tasks the programmer specified. The following is the supported syntax of the if statement in the bash shell.

General Syntax
Single decision:
if <condition>
then
### series of code goes here
fi

Double decision:
if <condition>
then
### series of code if the condition is satisfied
else
### series of code if the condition is not satisfied
fi

Multiple if condition:

if <condition1>
then
### series of code for condition1
elif <condition2>
then
### series of code for condition2
else
### series of code if the condition is not satisfied
fi

Single-bracket syntax
if [ condition ]
then

### series of code goes here


fi

Double-parenthesis syntax

if ((condition))
then
### series of code goes here
fi

The single-bracket syntax is the oldest supported syntax in the bash shell. It is used together with all conditional statements in Linux. Meanwhile, the double-parenthesis syntax is used for number-based conditional statements to provide a familiar syntax to programmers. All types of if statements need a specified condition in order to execute a task.
Conditional Statements in Linux
Conditional statements are used together with a decision control statement. There are different types
of conditional statements that you can use in the bash shell, the most common ones are: file-based,
string-based and arithmetic-based conditions.

File-based condition
File-based conditions are unary expressions and are often used to examine the status of a file. The following list shows the most commonly used file-based conditions in the bash shell.

-a file : Returns true if file exists
-b file : Returns true if file exists and is a block special file
-c file : Returns true if file exists and is a character special file
-d file : Returns true if file exists and is a directory
-e file : Returns true if file exists
-r file : Returns true if file exists and is readable
-s file : Returns true if file exists and has a size greater than zero
-w file : Returns true if file exists and is writable
-x file : Returns true if file exists and is executable
-N file : Returns true if file exists and has been modified since it was last read
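For instance, a short bash script combining two of the file tests above (the file name is arbitrary):

#!/bin/bash
# Check that a file exists and whether we may write to it.
file="/etc/passwd"
if [ -e "$file" ]
then
    if [ -w "$file" ]
    then
        echo "$file exists and is writable"
    else
        echo "$file exists but is read-only for you"
    fi
else
    echo "$file does not exist"
fi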

ANS4a
I/O Redirection
In this lesson, we will explore a powerful feature used by many command line programs
called input/output redirection. As we have seen, many commands such as ls print their output
on the display. This does not have to be the case, however. By using some special notation we
can redirect the output of many commands to files, devices, and even to the input of other
commands.

Standard Output
Most command line programs that display their results do so by sending their results to a facility
called standard output. By default, standard output directs its contents to the display. To redirect
standard output to a file, the ">" character is used like this:
[me@linuxbox me]$ ls > file_list.txt
In this example, the ls command is executed and the results are written in a file named
file_list.txt. Since the output of ls was redirected to the file, no results appear on the display.
Each time the command above is repeated, file_list.txt is overwritten (from the beginning) with
the output of the command ls. If you want the new results to be appended to the file instead,
use ">>" like this:
[me@linuxbox me]$ ls >> file_list.txt

When the results are appended, the new results are added to the end of the file, thus making
the file longer each time the command is repeated. If the file does not exist when you attempt to
append the redirected output, the file will be created.

Standard Input
Many commands can accept input from a facility called standard input. By default, standard
input gets its contents from the keyboard, but like standard output, it can be redirected. To
redirect standard input from a file instead of the keyboard, the "<" character is used like this:
[me@linuxbox me]$ sort < file_list.txt
In the above example we used the sort command to process the contents of file_list.txt. The
results are output on the display since the standard output is not redirected in this example. We
could redirect standard output to another file like this:
[me@linuxbox me]$ sort < file_list.txt > sorted_file_list.txt
As you can see, a command can have both its input and output redirected. Be aware that the
order of the redirection does not matter. The only requirement is that the redirection operators
(the "<" and ">") must appear after the other options and arguments in the command.

Pipes
By far, the most useful and powerful thing you can do with I/O redirection is to connect multiple
commands together with what are calledpipes. With pipes, the standard output of one command
is fed into the standard input of another. Here is my absolute favorite:
[me@linuxbox me]$ ls -l | less
In this example, the output of the ls command is fed into less. By using this "| less" trick, you can make any command have scrolling output. I use this technique all the time.
By connecting commands together, you can accomplish amazing feats. Here are some examples
you'll want to try:
Examples of commands used together with pipes:

ls -lt | head : Displays the 10 newest files in the current directory.

du | sort -nr : Displays a list of directories and how much space they consume, sorted from the largest to the smallest.

find . -type f -print | wc -l : Displays the total number of files in the current working directory and all of its subdirectories.

Filters
One class of programs you can use with pipes is called filters. Filters take standard input and
perform an operation upon it and send the results to standard output. In this way, they can be
used to process information in powerful ways. Here are some of the common programs that can
act as filters:
Common filter commands:

sort : Sorts standard input then outputs the sorted result on standard output.

uniq : Given a sorted stream of data from standard input, it removes duplicate lines of data (i.e., it makes sure that every line is unique).

grep : Examines each line of data it receives from standard input and outputs every line that contains a specified pattern of characters.

fmt : Reads text from standard input, then outputs formatted text on standard output.

pr : Takes text input from standard input and splits the data into pages with page breaks, headers and footers in preparation for printing.

head : Outputs the first few lines of its input. Useful for getting the header of a file.

tail : Outputs the last few lines of its input. Useful for things like getting the most recent entries from a log file.

tr : Translates characters. Can be used to perform tasks such as upper/lowercase conversions or changing line termination characters from one type to another (for example, converting DOS text files into Unix style text files).

sed : Stream editor. Can perform more sophisticated text translations than tr.

awk : An entire programming language designed for constructing filters. Extremely powerful.

Performing tasks with pipes

1. Printing from the command line. Linux provides a program called lpr that accepts
   standard input and sends it to the printer. It is often used with pipes and filters. Here are
   a couple of examples:

   cat poorly_formatted_report.txt | fmt | pr | lpr
   cat unsorted_list_with_dupes.txt | sort | uniq | pr | lpr

   In the first example, we use cat to read the file and output it to standard output, which is
   piped into the standard input of fmt. fmt formats the text into neat paragraphs and
   outputs it to standard output, which is piped into the standard input of pr. pr splits the
   text neatly into pages and outputs it to standard output, which is piped into the standard
   input of lpr. lpr takes its standard input and sends it to the printer.
   The second example starts with an unsorted list of data with duplicate entries.
   First, cat sends the list into sort which sorts it and feeds it into uniq which removes any
   duplicates. Next pr and lpr are used to paginate and print the list.

2. Viewing the contents of tar files. Often you will see software distributed as a gzipped
   tar file. This is a traditional Unix style tape archive file (created with tar) that has been
   compressed with gzip. You can recognize these files by their traditional file extensions,
   ".tar.gz" or ".tgz". You can use the following command to view the directory of such a file
   on a Linux system:

   tar tzvf name_of_file.tar.gz | less
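
   Assuming a GNU-style tar (standard on Linux), a closely related command actually
   unpacks the archive instead of just listing it:

   tar xzvf name_of_file.tar.gz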
ANS4B Most configuration files are stored in the /etc directory. Content can be viewed using

the cat command, which sends text files to the standard output (usually your monitor). The syntax is
straight forward:
cat file1 file2 ... fileN
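For example, to view one of the configuration files discussed below (any readable file in /etc will do):
cat /etc/fstab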
In this section we try to give an overview of the most common configuration files. This is certainly
not a complete list. Adding extra packages may also add extra configuration files in /etc. When
reading the configuration files, you will find that they are usually quite well commented and self-explanatory. Some files also have man pages which contain extra documentation, such
as man group.
Table 3-3. Most common configuration files

aliases
    Mail aliases file for use with the Sendmail and Postfix mail server. Running a
    mail server on each and every system has long been common use in the UNIX
    world, and almost every Linux distribution still comes with a Sendmail package.
    In this file local user names are matched with real names as they occur in
    E-mail addresses, or with other local addresses.

apache
    Config files for the Apache web server.

bashrc
    The system-wide configuration file for the Bourne Again SHell. Defines
    functions and aliases for all users. Other shells may have their own
    system-wide config files, like cshrc.

crontab and the cron.* directories
    Configuration of tasks that need to be executed periodically: backups, updates
    of the system databases, cleaning of the system, rotating logs etc.

default
    Default options for certain commands, such as useradd.

filesystems
    Known file systems: ext3, vfat, iso9660 etc.

fstab
    Lists partitions and their mount points.

ftp*
    Configuration of the ftp-server: who can connect, what parts of the system are
    accessible etc.

group
    Configuration file for user groups. Use the shadow utilities groupadd, groupmod
    and groupdel to edit this file. Edit manually only if you really know what you
    are doing.

hosts
    A list of machines that can be contacted using the network, but without the
    need for a domain name service. This has nothing to do with the system's
    network configuration, which is done in /etc/sysconfig.

inittab
    Information for booting: mode, number of text consoles etc.

issue
    Information about the distribution (release version and/or kernel info).

ld.so.conf
    Locations of library files.

lilo.conf, silo.conf, aboot.conf etc.
    Boot information for the LInux LOader, the system for booting that is now
    gradually being replaced with GRUB.

logrotate.*
    Rotation of the logs, a system preventing the collection of huge amounts of
    log files.

mail
    Directory containing instructions for the behavior of the mail server.

modules.conf
    Configuration of modules that enable special features (drivers).

motd
    Message Of The Day: shown to everyone who connects to the system (in text
    mode), may be used by the system admin to announce system
    services/maintenance etc.

mtab
    Currently mounted file systems. It is advised to never edit this file.

nsswitch.conf
    Order in which to contact the name resolvers when a process demands
    resolving of a host name.

pam.d
    Configuration of authentication modules.

passwd
    Lists local users. Use the shadow utilities useradd, usermod and userdel to
    edit this file. Edit manually only when you really know what you are doing.

printcap
    Outdated but still frequently used printer configuration file. Don't edit this
    manually unless you really know what you are doing.

profile
    System-wide configuration of the shell environment: variables, default
    properties of new files, limitation of resources etc.

rc*
    Directories defining active services for each run level.

resolv.conf
    Order in which to contact DNS servers (Domain Name Servers only).

sendmail.cf
    Main config file for the Sendmail server.

services
    Connections accepted by this machine (open ports).

sndconfig or sound
    Configuration of the sound card and sound events.

ssh
    Directory containing the config files for secure shell client and server.

sysconfig
    Directory containing the system configuration files: mouse, keyboard, network,
    desktop, system clock, power management etc. (specific to RedHat)

X11
    Settings for the graphical server, X. RedHat uses XFree, which is reflected in
    the name of the main configuration file, XFree86Config. Also contains the
    general directions for the window managers available on the system, for
    example gdm, fvwm, twm, etc.

xinetd.* or inetd.conf
    Configuration files for Internet services that are run from the system's
    (extended) Internet services daemon (servers that don't run an independent
    daemon).

ANS7A Exim is a mail transfer agent (MTA) used on Unix-like operating systems. Exim is free software distributed

under the terms of the GNU General Public License, and it aims to be a general and flexible mailer with extensive
facilities for checking incoming e-mail.
Exim has been ported to most Unix-like systems, as well as to Microsoft Windows using the Cygwin emulation layer.
Exim 4 is currently the default MTA on Debian GNU/Linux systems.
A large number of Exim installations exist, especially within Internet service providers[2] and universities in the UK.
Exim is also widely used with the GNU Mailman mailing list manager, and cPanel.
In November 2015 in a study performed by E-Soft, Inc.,[3] approximately 53% of the publicly reachable mail-servers
on the Internet ran Exim.

Origin[edit]
The first version of Exim was written in 1995 by Philip Hazel for use in the University of Cambridge Computing
Services e-mail systems. The name initially stood for EXperimental Internet Mailer.[4] It was originally based on an
older MTA, Smail-3, but it has since diverged from Smail-3 in its design and philosophy.[5][6]

Design model[edit]
Exim, like Smail, still follows the Sendmail design model, where a single binary controls all the facilities of the MTA.
Exim has well-defined stages during which it gains or loses privileges.[7]
Exim's security record has been fairly clean, with only a handful of serious security problems diagnosed over the
years.[8] Since the redesigned version 4 was released there have been four remote code execution flaws and one
conceptual flaw concerning how much trust it is appropriate to place in the run-time user; the latter was fixed in a
security lockdown in revision 4.73, one of the very rare occasions when Exim has broken backwards
compatibility with working configurations. This issue would not have been prevented by using a non-monolithic
design.

Configuration[edit]
Exim is highly configurable, and therefore has features that are lacking in other MTAs. It has always had substantial
facilities for mail policy controls, providing facilities for the administrator to control who may send or relay mail
through the system. In version 4.x this has matured to an Access Control List based system allowing very detailed
and flexible controls. The integration of a framework for content scanning, which allowed for easier integration
of anti-virus and anti-spam measures, happened in the 4.x releases. This made Exim very suitable for enforcing
diverse mail policies.
The configuration is done through a (typically single) configuration file, which must include the main section with
generic settings and variables, as well as the following optional sections:

the access control list (ACL) section which defines behaviour during the SMTP sessions,

the routers section which includes a number of processing elements which operate on addresses (the
delivery logic), each tried in turn,

the transports section which includes processing elements which transmit actual messages to destinations,

the retry section where policy on retrying messages that fail to get delivered at the first attempt is defined,

the rewrite section, defining if and how the mail system will rewrite addresses on incoming e-mails

the authenticators section with settings for SMTP AUTH, a rule per auth mechanism.

The configuration file permits inclusion of other files, which leads to two different configuration styles.
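
As a rough sketch of that layout (the "begin" keywords are Exim's actual section markers, but the settings shown here are illustrative assumptions rather than a working configuration):

# main section: generic settings and variables
primary_hostname = mail.example.com
domainlist local_domains = example.com

begin acl
# SMTP-session policy (the ACL section)

begin routers
# address-processing elements, each tried in turn

begin transports
# elements that actually transmit messages to destinations

begin retry
# retry policy for deliveries that fail at the first attempt

begin rewrite
# rules for rewriting addresses on incoming e-mail

begin authenticators
# SMTP AUTH settings, one rule per mechanism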

Configuration styles[edit]
There are two main schools of configuration style for Exim. The native school keeps the Exim configuration in one
file and external files are only used as data sources; this is strongly influenced by Philip Hazel's preferences and
notes on performance as the configuration file is re-read at every exec, which happens post-fork for receiving
inbound connections and at delivery.
The second commonly encountered style is the Debian style which is designed to make it easier to have an installed
application automatically provide mail integration support without having the administrator edit configuration files.
There are a couple of variants of this and Debian provide documentation of their approach as part of the packages.
In these approaches, a debconf configuration file is used to build the Exim configuration file, together with templates
and directories with configuration fragments. The meta-config is tuned with variables which have names
starting dc_.
Because the Debian approach diverges significantly from the Exim one it is common to find a lack of support for the
Debian approach on the regular Exim mailing-lists, with people advised to ask Debian questions on the
Debian-managed mailing-list.[9][10] The Ubuntu packaging[11] still advises users to use the Debian mailing-list.

Documentation[edit]
Exim has extensive and exhaustive documentation; if a feature or some behaviour is not documented then this is
classed as a bug. The documentation consists of The Exim Specification and two ancillary files: the experimental
specification for features that might disappear and "NewStuff", which tracks very recent changes that might not have
been fully integrated into the main specification. The Exim Specification is available in multiple formats, including
online in HTML and in plain-text for fast searching. The document preparation system ensures that the plain-text
format is highly usable.

Performance[edit]
Exim has been deployed in busy environments, often handling thousands of emails per hour efficiently. Exim is
designed to deliver email immediately, without queueing. However, its queue processing performance is
comparatively poor when queues are large (which happens rarely on typical low-traffic sites, but can happen
regularly on high-traffic sites).
Unlike qmail, Postfix, and ZMailer, Exim does not have a central queue manager (i.e. an equivalent of qmail-send, qmgr, or scheduler). There is thus no centralized load balancing, either of queue processing (leading to
disproportionate amounts of time being spent on processing the same queue entries repeatedly) or of system-wide
remote transport concurrency (leading to a "thundering herd" problem when multiple messages addressed to a
single domain are submitted at once). In Philip Hazel's own words:[12]
"The bottom line is that Exim does not perform particularly well in environments where the queue regularly
gets very large. It was never designed for this; deliveries from the queue were always intended to be
'exceptions' rather than the norm."
However, the interfaces to the spool system are well defined and various people have written their own spool
management daemons to use instead of asking the listening daemon to periodically fork queue runners.
In 1997, Philip Hazel replaced Exim's POSIX regular expression library written by Henry Spencer with a new
library he developed called PCRE (Perl Compatible Regular Expressions). Perl regular expressions are much
more powerful than POSIX and other common regular expressions, and PCRE has become popular in
applications other than Exim.

Updates[edit]
Historically, Exim used a peculiar version numbering scheme where the first decimal digit is updated only
whenever the main documentation is fully up to date; until that time, changes were accumulated in the file
NewStuff. For this reason, a 0.01 version change can signify important changes, not necessarily fully
documented.[13] In 2005, changes to Exim's version numbering were on the table of discussion. [14]
In more recent times, the document preparation system for Exim has been overhauled and changes are much
more likely to just go immediately into The Exim Specification. The 4.70 release just followed on naturally from
4.69 and the 4.6x releases had up-to-date documentation.
Philip Hazel retired from the University of Cambridge in 2007 and maintenance of Exim transitioned to a team of
maintainers. Exim continues to be maintained actively, with frequent releases.
ANS7B compress is a Unix shell compression program based on the LZW compression algorithm.[1] Compared to

more modern compression utilities such as gzip and bzip2, compress performs faster and with less memory usage,
at the cost of a significantly lower compression ratio.
The uncompress utility will restore files to their original state after they have been compressed using
the compress utility. If no files are specified, the standard input will be uncompressed to the standard output.

Files compressed by compress are typically given the extension ".Z" (modeled after the earlier pack program, that
used the extension ".z"). Most tar programs will pipe their data through compress when given the command line
option " -Z ". (The tar program in its own does not compress; it just stores multiple files within one tape archive file.)
Files can be returned to their original state using uncompress. The usual action of uncompress is not merely to
create an uncompressed copy of the file, but also to restore the timestamp and other attributes of the compressed
file.
For files produced by compress on other systems, uncompress supports 9- to 16-bit compression.

PAPER 3
ANS1a
Visual Basic is initiated by using the Programs option > Microsoft Visual Basic 6.0 > Visual Basic
6.0. Clicking the Visual Basic icon, we can view a copyright screen listing the details of the license holder of the copy of
Visual Basic 6.0. Then it opens into a new screen as shown in figure 1 below, with interface elements such as the
MenuBar, the ToolBar, and the New Project dialog box. These elements permit the user to build different types of Visual Basic
applications.

The Integrated Development Environment


One of the most significant changes in Visual Basic 6.0 is the Integrated Development Environment (IDE). IDE is a term
commonly used in the programming world to describe the interface and environment that we use to create our
applications. It is called integrated because we can access virtually all of the development tools that we need from one
screen called an interface. The IDE is also commonly referred to as the design environment, or the program.
The Visual Basic IDE is made up of a number of components:

Menu Bar

Tool Bar

Project Explorer

Properties window

Form Layout Window

Toolbox

Form Designer

Object Browser

In previous versions of Visual Basic, the IDE was designed as a Single Document Interface (SDI). In a Single Document
Interface, each window is a free-floating window that is contained within a main window and can move anywhere on the
screen as long as Visual Basic is the current application. But, in Visual Basic 6.0, the IDE is in a Multiple Document
Interface (MDI) format. In this format, the windows associated with the project will stay within a single container known as
the parent. Code and form-based windows will stay within the main container form.
Figure 1 The Visual Basic startup dialog box

Menu Bar

This Menu Bar displays the commands that are required to build an application. The main menu items have sub menu
items that can be chosen when needed. The toolbars in the menu bar provide quick access to the commonly used
commands and a button in the toolbar is clicked once to carry out the action represented by it.

Toolbox
The Toolbox contains a set of controls that are placed on a Form at design time, thereby creating the user interface
area. Additional controls can be included in the toolbox by using the Components menu item on the Project menu. A
Toolbox is represented in figure 2 shown below.
Controls commonly available in the Toolbox window:

Pointer
    Provides a way to move and resize the controls on a form.

PictureBox
    Displays icons/bitmaps and metafiles. It displays text or acts as a visual
    container for other controls.

TextBox
    Used to display messages and enter text.

Frame
    Serves as a visual and functional container for controls.

CommandButton
    Used to carry out the specified action when the user chooses it.

CheckBox
    Displays a True/False or Yes/No option.

OptionButton
    OptionButton control, which is part of an option group, allows the user to
    select only one option even if it displays multiple choices.

ListBox
    Displays a list of items from which a user can select one.

ComboBox
    Contains a TextBox and a ListBox. This allows the user to select an item from
    the dropdown ListBox, or to type in a selection in the TextBox.

HScrollBar and VScrollBar
    These controls allow the user to select a value within the specified range of
    values.

Timer
    Executes the timer events at specified intervals of time.

DriveListBox
    Displays the valid disk drives and allows the user to select one of them.

DirListBox
    Allows the user to select the directories and paths, which are displayed.

FileListBox
    Displays a set of files from which a user can select the desired one.

Shape
    Used to add a shape (rectangle, square or circle) to a Form.

Line
    Used to draw a straight line on the Form.

Image
    Used to display images such as icons, bitmaps and metafiles, but with less
    capability than the PictureBox.

Data
    Enables the user to connect to an existing database and display information
    from it.

OLE
    Used to link or embed an object, and to display and manipulate data from other
    Windows-based applications.

Label
    Displays text that the user cannot modify or interact with.

Project Explorer
Docked on the right side of the screen, just under the toolbar, is the Project Explorer window. The Project Explorer, as
shown in figure 3, serves as a quick reference to the various elements of a project, namely forms, classes and modules. All
of the objects that make up the application are packed in a project. A simple project will typically contain one form, which is
a window that is designed as part of a program's interface. It is possible to develop any number of forms for use in a
program, although a program may consist of a single form. In addition to forms, the Project Explorer window also lists
code modules and classes.
Figure 3 Project Explorer

Properties Window
The Properties Window is docked under the Project Explorer window. The Properties Window exposes the various
characteristics of selected objects. Each and every form in an application is considered an object. Now, each object in
Visual Basic has characteristics such as color and size. Other characteristics affect not just the appearance of the object
but the way it behaves too. All these characteristics of an object are called its properties. Thus, a form has properties and
any controls placed on it will have properties too. All of these properties are displayed in the Properties Window.

Object Browser
The Object Browser allows us to browse through the various properties, events and methods that are made available to
us. It is accessed by selecting Object Browser from the View menu or pressing the key F2. The left column of the Object
Browser lists the objects and classes that are available in the projects that are opened and the controls that have been
referenced in them. It is possible for us to scroll through the list and select the object or class that we wish to inspect. After
an object is picked up from the Classes list, we can see its members (properties, methods and events) in the right column.
A property is represented by a small icon that has a hand holding a piece of paper. Methods are denoted by little green
blocks, while events are denoted by a yellow lightning bolt icon.
Object naming conventions of controls (prefix)
Form -frm
Label-lbl
TextBox-txt
CommandButton-cmd
CheckBox -chk
OptionButton -opt
ComboBox -cbo
ListBox-lst
Frame-fme
PictureBox -pic
Image-img
Shape-shp
Line -lin
HScrollBar -hsb
VScrollBar -vsb
ANS1b In computer programming, event-driven programming is a programming paradigm in which the flow of the

program is determined by events such as user actions (mouse clicks, key presses), sensor outputs,
or messages from other programs/threads. Event-driven programming is the dominant paradigm used in graphical
user interfaces and other applications (e.g. JavaScript web applications) that are centered on performing certain
actions in response to user input.
In an event-driven application, there is generally a main loop that listens for events, and then triggers a callback
function when one of those events is detected. In embedded systems the same may be achieved using hardware
interrupts instead of a constantly running main loop. Event-driven programs can be written in any programming
language, although the task is easier in languages that provide high-level abstractions, such as closures.

A trivial event handler[edit]


Because the code for checking for events and the main loop do not depend on the application, many programming
frameworks take care of their implementation and expect the user to provide only the code for the event handlers. In
this simple example there may be a call to an event handler called OnKeyEnter() that includes an argument with a
string of characters, corresponding to what the user typed before hitting the ENTER key. To add two numbers,
storage outside the event handler must be used. The implementation might look like below.

globally declare the counter K and the integer T.


OnKeyEnter(character C)
{
convert C to a number N
if K is zero store N in T and increment K
otherwise add N to T, print the result and reset K to zero
}

While keeping track of history is straightforward in a batch program, it requires special attention and planning in an
event-driven program.
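
A minimal Visual Basic sketch of this handler might look like the following; the control name, the KeyDown wiring and the message text are assumptions added for illustration, not part of the pseudocode above:

' Hypothetical form-level state, mirroring K and T above
Dim K As Integer = 0   ' 0 = waiting for the first number, 1 = waiting for the second
Dim T As Integer = 0   ' stores the first number

Private Sub TextBox1_KeyDown(ByVal sender As Object, ByVal e As KeyEventArgs) _
        Handles TextBox1.KeyDown
    If e.KeyCode = Keys.Enter Then
        Dim N As Integer = CInt(Val(TextBox1.Text))  ' convert the typed text to a number
        If K = 0 Then
            T = N               ' store the first number
            K = 1
        Else
            MsgBox("Sum: " & (T + N))  ' print the result
            K = 0               ' reset for the next pair
        End If
        TextBox1.Clear()
    End If
End Sub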

Exception handlers[edit]
In PL/1, even though a program itself may not be predominantly event driven, certain abnormal events such as a
hardware error, overflow or "program checks" may occur that possibly prevent further processing. Exception
handlers may be provided by "ON statements" in (unseen) callers to provide housekeeping routines to clean up
afterwards before termination.

Creating event handlers[edit]


The first step in developing an event-driven program is to write a series of subroutines, or methods, called event-handler routines. These routines handle the events to which the main program will respond. For example, a single
left-button mouse-click on a command button in a GUI program may trigger a routine that will open another window,
save data to a database or exit the application. Many modern day programming environments provide the
programmer with event templates, allowing the programmer to focus on writing the event code.
The second step is to bind event handlers to events so that the correct function is called when the event takes
place. Graphical editors combine the first two steps: double-click on a button, and the editor creates an (empty)
event handler associated with the user clicking the button and opens a text window so you can edit the event
handler.
The third step in developing an event-driven program is to write the main loop. This is a function that checks for the
occurrence of events, and then calls the matching event handler to process it. Most event-driven programming
environments already provide this main loop, so it need not be specifically provided by the application
programmer. RPG, an early programming language from IBM, whose 1960s design concept was similar to event
driven programming discussed above, provided a built-in main I/O loop (known as the "program cycle") where the
calculations responded in accordance to 'indicators' (flags) that were set earlier in the cycle.

Criticism and best practice[edit]


Event-driven programming is widely used in graphical user interfaces, for instance the Android concurrency
frameworks are designed using the Half-Sync/Half-Async pattern, [1] where a combination of a single-threaded event
loop processing (for the main UI thread) and synchronous threading (for background threads) is used. This is

because the UI-widgets are not thread-safe, and while they are extensible, there is no way to guarantee that all the
implementations are thread-safe; thus, the single-thread model alleviates this issue.
The design of those toolkits has been criticized, e.g., by Miro Samek, for promoting an over-simplified model of
event-action, leading programmers to create error prone, difficult to extend and excessively complex application
code. He writes,
Such an approach is fertile ground for bugs for at least three reasons:
1. It always leads to convoluted conditional logic.
2. Each branching point requires evaluation of a complex expression.
3. Switching between different modes requires modifying many variables, which all can easily lead to
inconsistencies.
Miro Samek, Who Moved My State?, C/C++ Users Journal, The Embedded Angle column (April 2003)

and advocates the use of state machines as a viable alternative.[2]

Stackless threading[edit]
An event driven approach is used in hardware description languages. A thread context only needs a CPU stack
while actively processing an event; once done, the CPU can move on to process other event-driven threads, which
allows an extremely large number of threads to be handled. This is essentially a finite-state machine approach.
ANS 2 The InputBox and MsgBox functions

InputBox and MsgBox are two useful functions. Each opens a dialog window,
which closes when the user responds. The InputBox is used to get input from
the user and MsgBox is used for output. These are illustrated in this
simple message program. The following is its listing:
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs)
Handles Button1.Click
Dim strName As String
strName = InputBox("What is your name?")
MsgBox("Thanks, " & strName & ". I have been waiting weeks for someone to do that.")
End Sub

In this example, the InputBox function has one argument, a string that is used
to prompt the user. The function returns whatever the user enters. (If the
user clicks on the cancel button, a string of length zero is returned).
The MsgBox function also has one argument in this example. It is a string to
be displayed in the message box.
The InputBox function requires at least one argument (the prompt), but it has
4 optional arguments. The optional arguments are a title (string), a default
input value (string), and X and Y coordinates (numeric) that determine the
position of the input window on the screen.
For example, this program:

Dim strUserIn As String
strUserIn = InputBox("This is the prompt", "This is the title", "This is the default input", 1, 1)

produces this InputBox in the upper left corner (1, 1) of the screen:

Similarly, the MsgBox function requires one argument, a string with the
message, but has 2 optional arguments, a numeric code that indicates which
buttons to display on the message box and a string title for the message box.
The button code is most interesting. Using it, you can put OK, Cancel, Retry,
and other buttons on the message box. The following are some of the
allowable button code values:
Code    Buttons displayed
0       OK (the default)
1       OK and Cancel
2       Abort, Retry, and Ignore
3       Yes, No, and Cancel
4       Yes and No
5       Retry and Cancel

(These numeric codes are the standard Visual Basic MsgBox button constants.)

You can experiment with these and check the online help for other MsgBox
options.
For example, this program:
Dim intButton As Integer
intButton = MsgBox("This is the message", 3, "This is the title")

produces this MsgBox:

If there are multiple buttons on a message box, the programmer might also
want to know which one the user clicked. The following table shows the values
the MsgBox function can return:

The user clicked    Value returned
OK                  1
Cancel              2
Abort               3
Retry               4
Ignore              5
Yes                 6
No                  7

Finally, we should note that the Show method of the MessageBox class may be
used as an alternative to the MsgBox function. For example, the following
statement displays a MessageBox with a greeting:
MessageBox.Show("Hiya kids, Hiya! Hiya!")

Like the MsgBox function, the Show method returns a value that can be tested
later in the program to see which button the user had clicked on, for example:
Dim bob As Integer
bob = MessageBox.Show("Hiya kids, Hiya! Hiya!")
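
For example, a minimal sketch that combines the button-code and return-value tables above (the file-deletion scenario is invented for illustration):

Dim intButton As Integer
' 4 displays Yes and No buttons (see the button-code table above)
intButton = MsgBox("Delete this file?", 4, "Confirm")
If intButton = 6 Then   ' 6 means the user clicked Yes
    MsgBox("Deleting the file... (illustration only)")
End If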

ANS3 In Visual Basic 6.0, the CommonDialog ActiveX control is used to display various common dialogs
(Open, Save, Color, Font, Print, and Help) to your application.
In Visual Basic 2008, the CommonDialog control is replaced by individual components for displaying
dialogs: the OpenFileDialog, SaveFileDialog, ColorDialog, FontDialog, and PrintDialog components.

Note:

There is no direct equivalent for showing a Help dialog in Visual Basic 2008. The CommonDialog control
only supported Windows Help; Visual Basic 2008 only supports HTML Help. Visual Basic 2008 uses
the HelpProvider component to display help for your application. For more information, see Help Support for
Visual Basic 6.0 Users.

Code Changes for the CommonDialog Control

The following examples illustrate the differences in coding techniques between Visual Basic 6.0 and Visual
Basic 2008 for some common uses of the CommonDialog control.

Code Changes for Displaying a File Open Dialog Box


The following code demonstrates displaying a File Open dialog box, initialized to the Program Files
directory.
' Visual Basic 6.0
' Uses a CommonDialog control.
CommonDialog1.InitDir = "C:\Program Files"
CommonDialog1.ShowOpen
VB
' Visual Basic
' Uses an OpenFileDialog component.
OpenFileDialog1.InitialDirectory = "C:\Program Files"
OpenFileDialog1.ShowDialog()
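
In practice one usually checks which button the user pressed before using the dialog's result; a minimal Visual Basic 2008 sketch (the message text is invented):

If OpenFileDialog1.ShowDialog() = System.Windows.Forms.DialogResult.OK Then
    ' The user picked a file; FileName holds its full path.
    MsgBox("You chose: " & OpenFileDialog1.FileName)
End If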

Code changes for Displaying a File Save Dialog Box


The following code demonstrates displaying a File Save dialog box, saving the file to the application's
folder.
' Visual Basic 6.0
' Uses a CommonDialog control.
CommonDialog1.InitDir = App.Path
CommonDialog1.ShowSave
VB
' Visual Basic
' Uses a SaveFileDialog component.
SaveFileDialog1.InitialDirectory = My.Application.Info.DirectoryPath
SaveFileDialog1.ShowDialog()

Code changes for Displaying a Print Dialog Box


The following code demonstrates displaying a Print dialog box, printing a file located in the application's
folder.
' Visual Basic 6.0
' Uses a CommonDialog control.
CommonDialog1.FileName = App.Path & "\MyFile.txt"
CommonDialog1.ShowPrinter
VB
' Visual Basic
' Uses PrintDocument and PrintDialog components.
PrintDocument1.DocumentName = My.Application.Info.DirectoryPath _
& "\MyFile.txt"
PrintDialog1.Document = PrintDocument1
PrintDialog1.ShowDialog()

Code changes for Displaying Help


The following code demonstrates displaying a Help file from your application, opening it to the table of
contents.
' Visual Basic 6.0
' Uses a CommonDialog control.
CommonDialog1.HelpFile = "C:\Windows\Help\calc.hlp"
CommonDialog1.HelpCommand = cdlHelpContents
CommonDialog1.ShowHelp
VB
' Visual Basic
' Uses the Help.ShowHelp method.
Help.ShowHelp(Me, "file://C:\Windows\Help\calc.chm", _
HelpNavigator.TableOfContents)

CommonDialog Control Property and Method Equivalencies


The following tables list Visual Basic 6.0 properties and methods and their Visual Basic 2008 equivalents.
Properties and methods with the same names and behaviors are not listed. Where applicable, constants
are indented beneath the property or method. All Visual Basic 2008 enumerations map to
the System.Windows.Forms namespace unless otherwise noted.
Links are provided as necessary to topics explaining differences in behavior. Where there is no direct
equivalent in Visual Basic 2008, links are provided to topics that present alternatives.

Properties

Visual Basic 6.0 property, followed by its Visual Basic 2008 equivalent:

Action
    New implementation. The Visual Basic 6.0 Action property determines which dialog to
    display; Visual Basic 2008 uses a separate component for each dialog.

CancelError
    Cancel

Copies
    Copies

DialogTitle
    Title (OpenFileDialog and SaveFileDialog components only)
    New implementation for the other components. Standard Windows titles (Color, Font,
    and Print) are displayed and cannot be overridden.

FileName
    FileNames

FileTitle
    New implementation. The Visual Basic 6.0 FileTitle property returns the FileName
    without the path; you can parse the FileNames property to get the name without
    the path.

Flags
    The Visual Basic 6.0 Flags property provides constants for setting various attributes of
    the different common dialogs. Rather than using constants, the dialog components
    provide properties for setting the attributes.

Font, FontBold, FontItalic, FontName, FontSize, FontStrikethrough, FontUnderline
    Font
    Note: Fonts are handled differently in Visual Basic 2008. For more information,
    see Font Handling for Visual Basic 6.0 Users.

FromPage
    FromPage

hDC
    New implementation. For more information see Graphics for Visual Basic 6.0 Users.

HelpCommand
    HelpNavigator

HelpFile
    HelpNamespace

HelpKey
    The parameter parameter of the ShowHelp method.

Index
    New implementation. For more information, see Control Arrays for Visual Basic 6.0
    Users.

InitDir
    InitialDirectory

Left
    Left
    Note: Coordinates are handled differently in Visual Basic 2008. For more
    information, see Coordinate System for Visual Basic 6.0 Users.

Max
    MaxSize (FontDialog component)
    MaximumPage (PrintDialog component)

Min
    MinSize (FontDialog component)
    MinimumPage (PrintDialog component)

MaxFileSize
    New implementation. This Visual Basic 6.0 property allocates memory for extremely
    long file names; it is no longer necessary in managed code.

Orientation
    Landscape

Parent
    FindForm method

PrinterDefault
    New implementation. This Visual Basic 6.0 property is used in conjunction with
    the hDC property to print using graphics device interface methods; this is no longer
    supported.

Top
    Top
    Note: Coordinates are handled differently in Visual Basic 2008. For more
    information, see Coordinate System for Visual Basic 6.0 Users.

ToPage
    ToPage

Methods

Visual Basic 6.0 method, followed by its Visual Basic 2008 equivalent:

AboutBox
    New implementation. The AboutBox property displayed an About box for
    the CommonDialog control, which was created for Microsoft by a third party.

ShowColor
    ShowDialog (ColorDialog component)

ShowFont
    ShowDialog (FontDialog component)

ShowHelp
    ShowHelp

ShowOpen
    ShowDialog (OpenFileDialog component)

ShowPrinter
    ShowDialog (PrintDialog component)

ShowSave
    ShowDialog (SaveFileDialog component)

ANS5a The TreeView control is designed to display data that is hierarchical in nature, such as organization
trees, the entries in an index, or the files and directories on a disk.
Typical TreeView

Possible Uses

To create an organization tree that can be manipulated by the user.

To create a tree that shows at least two or more levels of a database.

Setting Node Object Properties


A "tree" is comprised of cascading branches of "nodes," and each node typically consists of an image (set
with the Image property) and a label (set with the Text property). Images for the nodes are supplied by an
ImageList control associated with the TreeView control. For more information on using the ImageList control
with other controls, see "Using the ImageList control."
A node can be expanded or collapsed, depending on whether or not the node has child nodes, that is, nodes
which descend from it. At the topmost level are "root" nodes, and each root node can have any number of
child nodes. The total number of nodes is not limited (except by machine constraints). Figure 2.41 shows a
tree with two root nodes; "Root 1" has three child nodes, and "Child 3" has a child node itself. "Root 2" has
child nodes, as indicated by the "+" sign, but is unexpanded.
Figure 2.41

Root and child nodes

Each node in a tree is actually a programmable Node object, which belongs to the Nodes collection. As in
other collections, each member of the collection has a unique Index and Key property which allows you to
access the properties of the node. For example, the code below uses the Index of a particular node ("7") to
set the Image and Text properties:
tvwMyTree.Nodes(7).Image = "closed"
tvwMyTree.Nodes(7).Text = "IEEE"
However, if a unique key, for example "7 ID" had been assigned to the node, the same code could be
written as follows:

tvwMyTree.Nodes("7 ID").Image = "closed"


tvwMyTree.Nodes("7 ID").Text = "IEEE"

Node Relationships and References to Relative Nodes


Each node can be either a child or a parent, depending on its relationship to other nodes. The Node object
features several properties which return various kinds of information about children or parent nodes. For
example, the following code uses the Children property to return the number of children, if any, a node
has:
MsgBox tvwMyTree.Nodes(10).Children
However, some of the properties do not return information, as the Children property does, but instead
return a reference to another node object. For example, the Parent property returns a reference to the
parent of any particular node (as long as the node is not a root node). With this reference, you can
manipulate the parent node by invoking any methods, or setting properties, that apply to Node objects. For
example, the code below returns the Text and Index properties of a parent node:
MsgBox tvwMyTree.Nodes(10).Parent.Text
MsgBox tvwMyTree.Nodes(10).Parent.Index
Tip Use the Set statement with an object variable of type Node to manipulate references to other Node
objects. For example, the code below sets a Node object variable to the reference returned by the Parent
property. The code then uses the object variable to return properties of the relative node:
Dim tempNode As Node ' Declare object variable.
' Set object variable to returned reference.
Set tempNode = tvwMyTree.Nodes(10).Parent
MsgBox tempNode.Text ' Returns parent's Text.
MsgBox tempNode.Index ' Returns parent's Index.
ANS5b

Set up the program to recognize the two numbers as values. Programmers can do this
either by defining the numbers as constants or variables. Variables are preferred over constants for
many reasons, mainly because they can be changed easily. For example, a variable can be
changed by a user entering a number in a visual text box, whereas a constant cannot.
Defining items in Visual Basic requires a "dimension" statement, abbreviated as "Dim". To define
your two numbers as integers, write the following code "above the fold", in the declarations section
before any procedures: Dim A As Integer and Dim B As Integer. Here, A and B will
be your two numbers.

Identify your numbers. After dimensioning the two numbers, you'll need to either enter values
for them in code, or provide instructions for users to populate them during the program. A simple
command like A = 5 is sufficient.

Create another variable for the sum. Write this code into the same declarations section:
Dim C As Integer

Write the code needed to identify the third number as the sum of the first two. With the
above example, your code is this: C = A + B

Provide for the display of results. You can include a visual text box to display the sum and
create a statement like: TextBox1.Text = CStr(C)

Work with the result. Add the variable C back into other equations for more functionality
within the program. There are many ways that programmers can take advantage of an added
number to further influence outcomes within an executable program.

Run the program to catch bugs. Sometimes, small errors can produce bugs. Run the program and
any available diagnostics to make sure the program works and calculates the sum correctly.

We can define a new variable as Dim Sum As Integer and compute the result directly from the text boxes:

Dim Sum As Integer
Sum = Val(TextBox1.Text) + Val(TextBox2.Text)
TextBox3.Text = Sum
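
Putting it all together, a minimal sketch of the complete click handler (the control names Button1 and TextBox1 through TextBox3 are assumptions matching the snippets above):

Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) _
        Handles Button1.Click
    Dim A As Integer = CInt(Val(TextBox1.Text))  ' first number
    Dim B As Integer = CInt(Val(TextBox2.Text))  ' second number
    Dim Sum As Integer = A + B                   ' the sum
    TextBox3.Text = CStr(Sum)                    ' display the result
End Sub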

-------------------------------------------------------------------------------------------------------------------------------------------------------------------
