Introduction
Management information systems (MIS) provide data that help a company make decisions,
reduce waste, and increase profits. In company management, MIS provides a broad picture of
company performance, acts as a planning tool, and highlights strengths and weaknesses. It
helps improve performance and illuminates levels of organizational efficiency. All levels of
management, departments, and even customers can use the information generated through
various MIS data to inform decisions such as buying, hiring, reorganizing, pricing, marketing and
budgeting.
Applications of MIS
MIS application in business falls into several different categories that provide information on all
forms of functioning within an organization. Executives and departments within an organization
could obtain any of the following forms of data:
● Business Intelligence System: In BI, all levels of management and executives can print data and
graphs showing information or trends relating to growth, costs, strategic control, efficiency, risk
and performance.
● Executive Information System: An EI system provides the same information as a BI system, but
with greater attention to detail and more confidential information, designed to help top-level
executives make choices that impact the entire organization.
● Marketing Information System: MI systems provide data about past marketing campaigns so
that marketing executives can determine what works, what does not work and what they need
to change in order to achieve the desired results.
● Transaction Processing System: TPS handles sales transactions and makes it possible for
customers to sort search results by size, color or price. This system can also track trends related
to sales and search results.
● Customer Relationship Management System: Keeping up with customers is key to overall
success, and CRMS helps companies know when and how to follow up with customers in order
to encourage an ongoing sales relationship with them.
● Sales Force Automation System: Gone are the days when sales teams had to do everything
manually. SFA systems automate much of what must be done to process orders and obtain
customer information.
● Human Resource Management System: HRM systems track how much and when employees
are paid, and how they are performing. Companies can use this information to help improve
performance or the bottom line.
● Knowledge Management System: Customers with questions want answers right away and
knowledge management systems allow them to access frequently asked questions or
troubleshoot on their own timetable.
● Financial Accounting System: Financial accounting systems help to track accounts receivable
and accounts payable, in order to best manage the cash flow of a company.
● Supply Chain Management System: SCM systems record and manage the flow of finances,
goods and data from the point of origin, domestic or foreign, all the way to its destination in
the hands of a customer.
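To make the transaction-processing behavior above concrete, here is a minimal sketch of a product search that filters results and sorts them by a chosen attribute such as color or price. The product records and field names are invented for illustration, not taken from any real TPS.

```python
# Minimal sketch of a TPS-style product search: filter by attribute,
# then sort the results by a chosen key (size, color, or price).
# The product records below are invented example data.

products = [
    {"name": "T-shirt", "color": "red", "size": "M", "price": 12.50},
    {"name": "Hoodie", "color": "blue", "size": "L", "price": 34.00},
    {"name": "Cap", "color": "red", "size": "S", "price": 9.99},
]

def search(items, color=None, sort_by="price"):
    """Return matching products sorted by the requested attribute."""
    hits = [p for p in items if color is None or p["color"] == color]
    return sorted(hits, key=lambda p: p[sort_by])

red_by_price = search(products, color="red")
print([p["name"] for p in red_by_price])  # cheapest red item first
```

A real TPS would query a database rather than an in-memory list, but the filter-then-sort shape of the operation is the same.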
Decision Support Systems (DSS)
A decision support system (DSS) is an information system that aids a business in decision-making
activities that require judgment, determination, and a sequence of actions. It assists the mid-
and high-level management of an organization by analyzing huge volumes of unstructured data
and accumulating information that can help solve problems and support decision-making. A
DSS may be human-powered, automated, or a combination of both.
A decision support system produces detailed information reports by gathering and analyzing
data. Hence, a DSS is different from a normal operations application, whose goal is to collect
data and not analyze it.
In an organization, a DSS is used by the planning departments – such as the operations
department – which collects data and creates a report that can be used by managers for
decision-making. Mainly, a DSS is used in sales projection, for inventory and operations-related
data, and to present information to customers in an easy-to-understand manner.
In a just-in-time (JIT) inventory system, the organization requires real-time data on its inventory
levels so it can place orders “just in time”, preventing production delays and the negative
domino effect they would cause.
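The JIT requirement above reduces to a reorder-point check against real-time stock. Here is a minimal sketch of such a check; the demand, lead-time, and stock figures are made-up example values, not real data.

```python
# Sketch of a JIT reorder check: flag an order the moment projected
# stock (on hand plus already on order) falls to the reorder point.
# All figures below are illustrative example values.

def needs_order(on_hand, on_order, daily_demand, lead_time_days, safety_stock=0):
    """True when stock will not cover demand during the supplier lead time."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    return (on_hand + on_order) <= reorder_point

# 40 units left, none inbound, using 10/day with a 5-day lead time: order now.
print(needs_order(on_hand=40, on_order=0, daily_demand=10, lead_time_days=5))
```

A DSS built around this rule would pull `on_hand` and `on_order` from live inventory data rather than taking them as parameters.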
Therefore, a DSS is more tailored to the individual or organization making the decision than a
traditional system.
1. Model Management System
The model management system stores models that managers can use in their decision-making.
The models are used in decisions regarding the financial health of the organization and in
forecasting demand for a good or service.
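As a concrete (hypothetical) example of a stored model, here is a simple moving-average demand forecast of the kind a model management system might keep; the sales history is invented.

```python
# A model management system stores reusable models such as this
# simple moving-average demand forecast. Sales history is invented.

def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_sales = [120, 135, 150, 160, 155]
print(moving_average_forecast(monthly_sales))  # mean of 150, 160, 155
```

Real DSS model bases hold far richer models (regressions, optimization models), but the idea is the same: a named, reusable calculation that managers apply to current data.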
2. User Interface
The user interface includes tools that help the end-user of a DSS to navigate through the
system.
3. Knowledge Base
The knowledge base includes information from internal sources (such as information collected
in a transaction processing system) and external sources (such as newspapers and online
databases).
Types of Decision Support Systems
● Communication-driven: Allows companies to support tasks that require more than one
person to work on the task. It includes integrated tools such as Microsoft SharePoint
Workspace and Google Docs.
Disadvantages of a DSS
● The cost to develop and implement a DSS is a huge capital investment, which makes it
less accessible to smaller organizations.
● A company can develop a dependence on a DSS as it becomes integrated into daily
decision-making to improve efficiency and speed. Managers may then rely on the
system too heavily, which removes the element of subjective judgment from
decision-making.
● A DSS may lead to information overload because an information system tends to
consider all aspects of a problem. It creates a dilemma for end-users, as they are left
with multiple choices.
Group Decision Support Systems (GDSS)
The tools and techniques provided by a group decision support system improve the quality
and effectiveness of group meetings. Groupware and web-based tools for electronic
meetings and videoconferencing also support some group decision-making processes, but
their main function is to enable communication between the decision-makers.
In a group decision support system (GDSS) electronic meeting, each participant is provided with
a computer. The computers are connected to each other, to the facilitator’s computer and to the
file server. A projection screen is available at the front of the room. The facilitator and the
participants can both project digital text and images onto this screen.
A group decision support system (GDSS) meeting comprises different phases, such as idea
generation, discussion, voting, vote counting and so on. The facilitator manages and
controls the execution of these phases. The use of various software tools in the meeting is also
controlled by the facilitator.
● Hardware: It includes electronic hardware like the computer, equipment used for
networking, electronic display boards and audiovisual equipment. It also includes the
conference facility, including the physical set up – the room, the tables, and the chairs –
laid out in such a manner that they can support group discussion and teamwork.
● Ease of Use: It consists of an interactive interface that makes working with GDSS simple
and easy.
● Better Decision Making: It provides the conference room setting and various software
tools that enable users at different locations to make decisions as a group, resulting in
better decisions.
● Emphasis on Semi-structured and Unstructured Decisions: It provides important
information that assists middle and higher-level management in making semi-structured
and unstructured decisions.
● Specific and General Support: The facilitator controls the different phases of the group
decision support system meeting (idea generation, discussion, voting, vote counting,
etc.), what is displayed on the central screen, and the type of ranking and voting that
takes place. In addition, the facilitator provides general support to the group and helps
them use the system.
● Supports all Phases of Decision Making: It can support all four phases of decision
making: intelligence, design, choice, and implementation.
● Supports Positive Group Behavior: In a group meeting, as participants can share their
ideas more openly without the fear of being criticized, they display more positive group
behavior towards the subject matter of the meeting.
Group decision support system software tools help the decision-makers in organizing their
ideas, gathering required information and setting and ranking priorities. Some of these tools are
as follows:
● Tools for Setting Priority: These include a collection of techniques, such as simple
voting, rank ordering and weighted techniques, that are used for voting and setting
priorities in a group meeting.
● Policy Formation Tool: It provides the support necessary for turning the wording of
policy statements into an agreed position.
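The priority-setting tools above can be sketched as a small weighted-voting routine: each participant scores the ideas, optionally with different voter weights, and the ideas are ranked by total score. The voter names, scores, and weights are illustrative assumptions, not part of any real GDSS product.

```python
# Sketch of a GDSS priority-setting tool: participants score each idea,
# optionally with per-voter weights, and ideas are ranked by total score.
# All names, scores, and weights below are invented for illustration.

def rank_ideas(votes, weights=None):
    """votes: {voter: {idea: score}}. Returns ideas sorted best-first."""
    totals = {}
    for voter, scores in votes.items():
        w = (weights or {}).get(voter, 1.0)  # unweighted voters count as 1.0
        for idea, score in scores.items():
            totals[idea] = totals.get(idea, 0.0) + w * score
    return sorted(totals, key=totals.get, reverse=True)

votes = {
    "alice": {"new CRM": 5, "redesign site": 3},
    "bob":   {"new CRM": 2, "redesign site": 4},
}
# Giving alice's vote double weight: new CRM totals 12, redesign site 10.
print(rank_ideas(votes, weights={"alice": 2.0}))
```

Simple voting is the special case where every weight is 1.0 and every score is 0 or 1.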
Knowledge Management Systems (KMS)
Using knowledge management software can help keep documentation up to date, assist
customers in finding their own answers, and manage knowledge access and permissions across
user groups. It’s a tool that’s valuable both to small businesses that are just starting out and to
global enterprises that need to distribute knowledge to a wide variety of audiences.
Explicit knowledge
This is the knowledge that needs to be documented and is usually easy to turn into an article. It
is a description about, or a set of steps towards, achieving something. Examples include clothing
measurements and fabric information or where to change your login information on a software
application. Gather explicit knowledge through fact-finding with your subject matter experts.
Implicit knowledge
This is information customers need to infer from explicit knowledge. It requires customers to
interpret existing pieces of explicit knowledge as described above, or general knowledge to
create desired outcomes. For example, how to combine software features to achieve a business
need or knowing a certain material is waterproof. Gather implicit knowledge by documenting
your customers' use cases and then explain how to combine other knowledge to achieve them.
Tacit knowledge
This is knowledge coming from experience and typically requires a lot of context and practice to
acquire. It could be something like knowing immediately what to do during an emergency or
that a specific shoe brand doesn't give you enough arch support. Tacit knowledge is hard to
gather because it is often specific and requires individual testing. Start by getting specialists or
senior members of your team together to disseminate complex ideas and use that to build
larger training content.
Bringing these all together: Explicit knowledge is knowing what apples, cinnamon, flour, and
sugar are. Implicit knowledge is knowing they can be combined to make a pie. Tacit knowledge
is knowing the exact combination of the ingredients that makes the most delicious pie.
Whether you’re a SaaS company supporting business customers, a consumer product shipping
out retail items, or a helpdesk manager dealing with internal customers, a knowledge
management portal will help you effectively deliver information to the people who need it. Not
only is a knowledge management system great for business, but it’s also great for your
customers. Providing a thorough knowledge management system is key to helping customers
help themselves and improving the overall customer experience.
Knowledge Management System for small business
If you’re a smaller business, you might think that you don’t have enough knowledge to require a
system to manage it. But small businesses benefit just as much from using a KMS. Providing a
way for customers to help themselves is even more important because your team doesn’t
always have a ton of extra time - and scaling as you grow is important.
● It’s all hands on deck, so having an easy-to-use tool that encourages team members to
document their work is important.
● As processes change regularly, keeping internal documentation up to date will help small
businesses maintain order in the chaos of growth.
● While analytics might be essential for revisions and improvement, it probably won’t be
the priority of small business teams that are focused on getting things done. It will come
into play as they grow.
For enterprise companies, effective knowledge management has exponential returns as the
number of customers that will receive help grows - but their knowledge base requires much
more effort to maintain and scale.
● Workflow management and permissions are more important as more people become
involved in updating existing articles.
● Internal documentation tools are essential for growing enterprise teams who have
complex and large amounts of knowledge to share internally.
● Analytics and reporting will be extremely crucial for enterprise teams who will likely
want to integrate their KMS with Google Analytics for robust insights that will also be
delivered to marketing and product teams.
An internal KMS is only accessible to employees (or even a specific group of employees) and
hosts private, internal information that customers shouldn’t have access to like policies, specific
troubleshooting requirements, or HR material. Developing an internal knowledge base together
can help scale your customer support team more effectively, and help onboard new agents as
you grow.
● As your company grows, there’s more information and processes required for the
smooth running of the organization. If this information isn’t documented, it’s tough to
keep everyone on the same page.
● When you scale, instead of continuing to hire more and more support agents, a robust
internal knowledge management system equips your existing team to provide speedy
answers and work efficiently.
● An internal KMS fosters collaboration among different teams and breaks down the
departmental silos that exist in your company.
● Creating onboarding guides in your internal knowledge management system will help
transfer knowledge to new employees more effectively. New employees will have a
go-to place to search for help before asking another team member.
How to implement a knowledge management system
Now that you know all about KMS, you can work on your knowledge management strategy to
decide the right knowledge to share, whom the information is for, the best format to convey it,
and the optimal way to organize the information.
Decide what information you want to record in your knowledge management system. It could
be product information, onboarding guides, how-to tutorials, FAQs, or troubleshooting
instructions for common issues. Find out which customer inquiries are commonly submitted at
your support helpdesk and build your knowledge repository around those needs.
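One way to ground the knowledge base in real customer needs, as suggested above, is to count the most frequent helpdesk inquiry topics and write articles for those first. This is a minimal sketch with invented ticket topics.

```python
# Sketch: mine helpdesk tickets for the most frequent topics so the
# knowledge base is built around real customer needs. Tickets invented.

from collections import Counter

tickets = [
    "password reset", "billing question", "password reset",
    "shipping delay", "password reset", "billing question",
]

def top_topics(ticket_topics, n=2):
    """Return the n most common inquiry topics with their counts."""
    return Counter(ticket_topics).most_common(n)

print(top_topics(tickets))  # "password reset" dominates, so write that article first
```

In practice the topics would come from a helpdesk export (e.g. ticket tags or categories) rather than a hand-written list.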
You need to start by thinking about who will be searching for the information and when. You
can do this by analyzing your customer journey and figuring out the information that’s required
at each stage, and identifying the best way to efficiently convey it. For example, as you move
down the customer journey, you’ll want to restrict some content like information on referral or
loyalty programs to logged-in customers. Or, for an internal KMS, you can set your support
agents up for success with deeper product details and pricing specifics.
In order to measure the success of your KMS, you need to tap into user feedback. Add feedback
surveys at the end of each article and guide to understand if the information was useful or not.
For example, Freshdesk articles offer an option for readers to vote Yes or No to “Did you find it
helpful?” at the bottom of each article. If many customers report that an article is not helpful,
it’s almost certainly time for an update.
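The helpfulness votes described above can feed a simple update queue: aggregate Yes/No votes per article and flag those whose helpfulness rate falls below a threshold. The vote counts and the 50% cutoff here are arbitrary illustrative choices.

```python
# Sketch: aggregate "Did you find it helpful?" votes per article and
# flag articles whose helpfulness rate falls below a chosen threshold.
# Vote counts and the 50% threshold are illustrative assumptions.

def articles_needing_update(feedback, threshold=0.5):
    """feedback: {article: (yes_votes, no_votes)}. Returns flagged titles."""
    flagged = []
    for article, (yes, no) in feedback.items():
        total = yes + no
        if total and yes / total < threshold:  # skip articles with no votes
            flagged.append(article)
    return flagged

feedback = {"Reset your password": (40, 10), "Configure SSO": (5, 20)}
print(articles_needing_update(feedback))  # only the low-rated article
```

A sensible refinement is to also require a minimum number of votes before flagging, so a single "No" on a new article does not trigger a rewrite.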
Modern knowledge management software has built-in analytics that track article feedback and
article view counts on intuitive dashboards. Integrating your online knowledge management
system with Google Analytics gives deeper insight into how users navigate within your KMS
and how relevant your content is.
Update your KMS regularly
Rarely is any knowledge static. You need to include a process that constantly revises your
knowledge base as the product expands, as customers express confusion or dissatisfaction, or
as your offerings change. Invite multiple stakeholders within your organization, such as the
customer support team or the sales department, to collaborate on, contribute to, and update
the knowledge shared.
Imagine a company that is making great profits and achieving business success. It is reaching its
goals quite easily, but the problem lies in the way these goals are being met. Business
processes play an integral part in achieving goals, and their inefficiency can cause great harm to
your business in the long run.
As the business expands, it becomes even more challenging to change and modify processes
to get the desired outcomes. This happens because of old habits and past investments in the
existing methods. The fact is that no process can be improved without making suitable
amendments and changes to it.
Just making a plan isn’t enough! Effective execution is needed. Proper Business Process
Re-engineering (BPR) execution can prove to be a game-changer for any business. It has the
potential to perform miracles on even a failing or deteriorating company, by escalating the
profits and propelling business growth.
The concept of business process re-engineering is not the simplest to grasp. It involves
imposing change within an organization – scrapping the old ways and making space for the
new ones.
And trust me, it isn’t an easy task at all. This is a thorough guide that will help you understand
the A to Z of Business Process Re-engineering. Let us jump straight into what business process
re-engineering is.
Here are certain steps to follow for efficient Business Process Re-engineering:
Step #1: Identify the Need for Change
For small startups, this step is probably very easy. You can go for BPR when you realize that your
product has a huge user drop-off rate. The next thing to do is inform the co-founder, suggest a
direction to pivot, and you are good to go for the further steps.
For a large business, the first step is the biggest hurdle itself. You will always find individuals
who are satisfied and happy with the existing ways of working. These individuals can come
from both management and the workforce. Management will most probably be afraid of
sinking their investments, and employees might see the change as a threat to job security.
Before anything else, you will have to win them over and convince them why the change is
required for the firm. This shouldn’t be difficult if your company isn’t doing well.
Perform thorough research and, in case of dilemma, try answering these questions: Which of
the processes might not be efficient? Where are you lagging behind your competition? Are
you even part of the competition, or is the situation worse?
Step #2: Put Together a Team of Experts
Operational Manager: This is the person who knows the ins and outs of the process. Their
process knowledge can prove to be a great asset in building a new, more effective and
efficient process.
Reengineering Experts: These are specialists in fields ranging from IT to manufacturing. They
will discover where and how the right changes should be implemented to yield the best
outcomes. The changes might involve anything – software, workflows, hardware, etc.
Step #3: Define Key Performance Indicators (KPI) for the Inefficient Processes
After the team is ready and you are all set to launch the initiative, you will need to define the
correct Key Performance Indicators (KPIs). BPR is introduced to optimize your process, so
formulate BPR strategies that bend to your business requirements, not the other way around.
KPIs usually differ a lot depending on the type of process you’re optimizing. The following are
the most typical ones:
Manufacturing
Cycle Time – The total time taken from initiating to concluding a process.
Changeover Time – The time required to switch the line from making one product to making
the next.
Inventory Turnover – The time taken in the manufacturing process to convert inventory into
products.
Planned vs. Emergency Maintenance – The ratio of planned maintenance occurrences to
emergency maintenance occurrences.
IT
Mean Time to Repair – The average time spent to repair the app, software, or system after any
emergency.
Support Ticket Closure rate – The ratio of number of support tickets closed by the support team
to the number opened.
Application Development – The time spent developing a new application from scratch.
Cycle Time – The time required to get the network back up after a security breach.
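Two of the IT KPIs above – mean time to repair and support-ticket closure rate – are simple enough to compute directly. The incident durations and ticket counts below are example values, not real measurements.

```python
# Sketch of two IT KPIs from the list above: mean time to repair (MTTR)
# and support-ticket closure rate. All figures are example values.

def mean_time_to_repair(repair_hours):
    """Average hours spent restoring service per incident."""
    return sum(repair_hours) / len(repair_hours)

def closure_rate(closed, opened):
    """Share of opened tickets that the support team has closed."""
    return closed / opened

print(mean_time_to_repair([2.0, 4.0, 6.0]))  # 4.0 hours on average
print(closure_rate(closed=90, opened=120))   # 0.75, i.e. 75% of tickets closed
```

Tracking these over time (before and after the re-engineered process goes live) is what turns them into BPR evidence rather than one-off numbers.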
Perform business process mapping to know exactly where the KPIs need to be defined in the
individual processes. Use the step-by-step strategy to perform BPR effectively.
1. Process Flowcharts – It is the most basic technique. Just grab a pen and a blank paper
and jot down the processes stepwise.
2. Business Process Management (BPM) Software – Technology makes anything easy!
Using a BPM software for process analysis makes everything clearer and easier to work
with.
For example, you can use BPM software to digitize processes, set deadlines, and so on. Such
software will most likely help you optimize those processes, since it allows easier collaboration
among employees.
Now all that's left to do is put your theories into practice and see how the KPIs hold up. Once
you realize that the new solution works better, start scaling it gradually. Eventually, put it into
action within other company processes as well.
If the new solution doesn’t prove to be that fruitful, then you need to start the process all over
again. The cycle of finding loopholes and solutions to them repeats until you form a desirable,
effective process.
Business Process Reengineering (BPR) is a sensational initiative for change. Its methodology is
based on core areas such as the following:
1. Refocus: Align company values with customer needs and demands.
2. Redesign: Draft and design core processes to enable improvements using information
technology (IT).
3. Rethink: Think about the basic organizational needs and the issues people face with the
current system.
4. Improve: Keep in mind all the business processes across the organization and work to
improve them.
What are the Advantages of Implementing BPR in your Business?
There are many benefits of business process re-engineering to your business. Some of them are
as follows:
Business Process Reengineering eliminates unproductive and futile activities within an
organization, drastically reducing costs and cycle times for the employees performing them.
With team reorganization, the need for extra management layers is eliminated.
It also enhances the flow of information, eliminating the errors and rework caused by multiple
handoffs.
Business Process Reengineering minimizes work fragmentation and establishes clear
responsibility for and ownership of processes, which improves the overall process. Performance
can be measured easily with prompt feedback, giving workers insight into their responsibility
for the output.
The System Development Life Cycle (SDLC)
The system development life cycle framework provides a sequence of activities for system designers
and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses
the results of the previous one.
The SDLC comprises phases that are essential for developers—such as planning, analysis, design,
and implementation—which are explained in the section below. Planning includes evaluation of the
currently used system, information gathering, feasibility studies, and request approval. A number of
SDLC models have been created, including waterfall, fountain, spiral, build and fix, rapid prototyping,
incremental, and synchronize-and-stabilize. The oldest of these, and the best known, is the waterfall
model: a sequence of stages in which the output of each stage becomes the input for the next.
These stages can be characterized and divided up in different ways, including the following:
Preliminary analysis: Begin with a preliminary analysis, propose alternative solutions, describe costs
and benefits, and submit a preliminary plan with recommendations.
Conduct the preliminary analysis: Discover the organization's objectives and the nature and scope of
the problem under study. Even if a problem refers only to a small segment of the organization itself,
find out what the objectives of the organization itself are. Then see how the problem being studied
fits in with them.
Propose alternative solutions: After digging into the organization's objectives and specific problems,
several solutions may have been discovered. However, alternate proposals may still come from
interviewing employees, clients, suppliers, and/or consultants. Insight may also be gained by
researching what competitors are doing.
Cost-benefit analysis: Analyze and describe the costs and benefits of implementing the proposed
changes. In the end, the ultimate decision on whether to leave the system as is, improve it, or
develop a new system will be guided by this and the rest of the preliminary analysis data.
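The cost-benefit step above can be sketched as a small calculation of net benefit and return on investment; all cost and benefit figures below are invented for illustration.

```python
# Sketch of the cost-benefit analysis step: compare projected benefits
# against costs and report net benefit and ROI. Figures are invented.

def cost_benefit(costs, benefits):
    """Return (net_benefit, roi) for a proposed system change."""
    total_cost = sum(costs)
    total_benefit = sum(benefits)
    net = total_benefit - total_cost
    roi = net / total_cost
    return net, roi

# e.g. development + training costs vs. three years of projected savings
net, roi = cost_benefit(costs=[50_000, 10_000], benefits=[30_000, 30_000, 30_000])
print(net, roi)  # net benefit of 30000 and an ROI of 0.5
```

A fuller analysis would discount future benefits to present value; this sketch deliberately leaves that out to keep the shape of the comparison visible.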
Systems analysis, requirements definition: Define project goals into defined functions and
operations of the intended application. This involves the process of gathering and interpreting facts,
diagnosing problems, and recommending improvements to the system. Project goals will be further
aided by the analysis of end-user information needs and the removal of any inconsistencies and
incompleteness in these requirements.
Scrutiny of the existing system: Identify the pros and cons of the current system in place, so as to
carry forward the pros and avoid the cons in the new system.
Analysis of the proposed system: Find solutions to the shortcomings described in step two and
prepare the specifications using any specific user proposals.
Systems design: At this step, desired features and operations are described in detail, including
screen layouts, business rules, process diagrams, pseudocode, and other documentation.
Roles in system design include the client, UX designer, project manager, business analyst,
software developer, and QA specialist.
Integration and testing: All the modules are brought together into a special testing environment,
then checked for errors, bugs, and interoperability.
Acceptance, installation, deployment: This is the final stage of initial development, where the
software is put into production and runs actual business.
Maintenance: During the maintenance stage of the SDLC, the system is assessed/evaluated to
ensure it does not become obsolete. This is also where changes are made to the initial software.
Evaluation: Some companies do not view this as an official stage of the SDLC, while others consider it
to be an extension of the maintenance stage, and may be referred to in some circles as post-
implementation review. This is where the system that was developed, as well as the entire process,
is evaluated. Some of the questions that need to be answered include if the newly implemented
system meets the initial business requirements and objectives, if the system is reliable and fault-
tolerant, and if it functions according to the approved functional requirements. In addition to
evaluating the software that was released, it is important to assess the effectiveness of the
development process. If there are any aspects of the entire process (or certain stages) that
management is not satisfied with, this is the time to improve.
Disposal: In this phase, plans are developed for discontinuing the use of system information,
hardware, and software and making the transition to a new system. The purpose here is to properly
move, archive, discard, or destroy information, hardware, and software that is being replaced, in a
manner that prevents any possibility of unauthorized disclosure of sensitive data. The disposal
activities ensure proper migration to a new system. Particular emphasis is given to the proper
preservation and archiving of data processed by the previous system. All of this should be done in
accordance with the organization's security requirements.
These stages of the systems development life cycle are sometimes divided into as many as ten
steps, from definition to the creation and modification of IT work products.
Not every project will require that the phases be sequentially executed; however, the phases are
interdependent. Depending upon the size and complexity of the project, phases may be combined or
may overlap.
The sequential phases in Waterfall model are −
Requirement Gathering and analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.
System Design − The requirement specifications from first phase are studied
in this phase and the system design is prepared. This system design helps in
specifying hardware and system requirements and helps in defining the overall
system architecture.
Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is referred
to as Unit Testing.
Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post integration
the entire system is tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is
done, the product is deployed in the customer environment or released into
the market.
Maintenance − Some issues come up in the client environment. To fix those
issues, patches are released. Also, to enhance the product, better versions are
released. Maintenance is done to deliver these changes in the customer
environment.
All these phases are cascaded, with progress seen as flowing steadily
downwards (like a waterfall) through the phases. The next phase starts only
after the defined set of goals for the previous phase has been achieved and
signed off – hence the name "Waterfall Model". In this model, phases do not
overlap.
Feasibility Study
A feasibility study is a comprehensive evaluation of a proposed project that examines all factors
critical to its success in order to assess its likelihood of success. Business success can be defined
primarily in terms of return on investment (ROI) – the profit that the project will generate.
In a feasibility study, a proposed plan or project is evaluated for its practicality and viability in
order to determine whether it will be successful.
As the name implies, a feasibility analysis is used to determine the viability of an idea, such as
ensuring a project is legally and technically feasible as well as economically justifiable. It tells us
whether a project is worth the investment—in some cases, a project may not be doable. There can
be many reasons for this, including requiring too many resources, which not only prevents those
resources from performing other tasks but also may cost more than an organization would earn back
by taking on a project that isn’t profitable.
A well-designed study should offer a historical background of the business or project, such as a
description of the product or service, accounting statements, details of operations and management,
marketing research and policies, financial data, legal requirements, and tax obligations. Generally,
such studies precede technical development and project implementation.
It can be thrilling to start a complex, large-scale project that has a significant impact on your
company. You are creating real change. Failure can be scary. This article will help you get
started if you have never done a feasibility study in project management.
1. Technical Feasibility
This assessment focuses on the technical resources available to the organization. It helps
organizations determine whether the technical resources meet capacity and whether the technical
team is capable of converting the ideas into working systems. Technical feasibility also involves the
evaluation of the hardware, software, and other technical requirements of the proposed system. As
an exaggerated example, an organization wouldn’t want to try to put Star Trek’s transporters in their
building—currently, this project is not technically feasible.
2. Economic Feasibility
This assessment typically involves a cost/benefit analysis of the project, helping organizations
determine the viability, cost, and benefits associated with a project before financial resources are
allocated. It also serves as an independent project assessment and enhances project credibility—
helping decision-makers determine the positive economic benefits to the organization that the
proposed project will provide.
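As a rough illustration of the cost/benefit analysis described above, two common measures are the payback period and a simple ROI. The sketch below uses hypothetical figures, not taken from any project in the text:

```python
# Minimal cost/benefit sketch for an economic feasibility check.
# All figures are hypothetical.

def payback_period(initial_cost, annual_benefit):
    """Years needed for cumulative benefits to cover the initial cost."""
    return initial_cost / annual_benefit

def simple_roi(total_benefit, total_cost):
    """(benefit - cost) / cost, expressed as a fraction."""
    return (total_benefit - total_cost) / total_cost

initial_cost = 500_000       # one-time project cost
annual_benefit = 200_000     # expected yearly benefit
years = 5

print(payback_period(initial_cost, annual_benefit))      # 2.5 (years)
print(simple_roi(annual_benefit * years, initial_cost))  # 1.0 (100% over 5 years)
```

A real study would also discount future benefits (net present value); this sketch deliberately keeps only the arithmetic the section mentions.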
3. Legal Feasibility
This assessment investigates whether any aspect of the proposed project conflicts with legal
requirements like zoning laws, data protection acts or social media laws. Let’s say an organization
wants to construct a new office building in a specific location. A feasibility study might reveal the
organization’s ideal location isn’t zoned for that type of business. That organization has just saved
considerable time and effort by learning that their project was not feasible right from the beginning.
4. Operational Feasibility
This assessment involves undertaking a study to analyze and determine whether—and how well—
the organization’s needs can be met by completing the project. Operational feasibility studies also
examine how a project plan satisfies the requirements identified in the requirements analysis phase
of system development.
5. Scheduling Feasibility
This assessment is the most important for project success; after all, a project will fail if not
completed on time. In scheduling feasibility, an organization estimates how much time the project
will take to complete.
When these areas have all been examined, the feasibility analysis helps identify any constraints the
proposed project may face. Apart from the approaches to feasibility study listed above, some projects
also require other constraints to be analysed.
Introduction
Data is a collection of facts and figures that can be processed to produce information. The name of a
student, her age, class and subjects can be counted as data for recording purposes. Mostly, data
represents recordable facts, and it aids in producing information based on those facts. For
example, if we have data about the marks obtained by all students, we can then draw conclusions
about toppers, average marks and so on.
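The marks example can be sketched in a few lines of Python; the student names and marks below are made up:

```python
# Raw data: marks obtained by students (hypothetical values).
marks = {"Asha": 78, "Ravi": 91, "Meena": 85}

# Processing the data yields information: the class average and the topper.
average = sum(marks.values()) / len(marks)
topper = max(marks, key=marks.get)

print(round(average, 2))  # 84.67
print(topper)             # Ravi
```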
A database is a collection of data, typically describing the activities of one or more related
organizations. A database management system stores data in a way that makes it easy to
retrieve and manipulate and that helps to produce information. In other words, a database is a
collection of related data with an implicit meaning.
The primary goal of a DBMS is to provide a way to store and retrieve database information that
is both convenient and efficient. Database systems are designed to manage large bodies of
information. Management of data involves both defining structures for storage of information
and providing mechanisms for the manipulation of information. In addition, the database system
must ensure the safety of the information stored, despite system crashes or attempts at
unauthorized access. If data are to be shared among several users, the system must avoid possible
anomalous results.
Real-world entity: A modern DBMS is more realistic and uses real-world
entities to design its architecture, along with their behavior and attributes. For
example, an employee database may use employee as an entity and designation
as one of its attributes.
Relation-based tables: A DBMS allows entities and the relations among them to be formed
as tables. This simplifies the concept of data storage: a user can understand the
architecture of the database just by looking at the table names.
Consistency: A DBMS always maintains a state of consistency, which earlier
forms of data-storing applications, such as file processing, did not guarantee.
Consistency is a state in which every relation in the database remains consistent;
methods and techniques exist that can detect any attempt to leave the database in
an inconsistent state.
ACID Properties: A DBMS follows the ACID properties: Atomicity, Consistency,
Isolation and Durability. These concepts are applied to transactions, which
manipulate data in the database. The ACID properties keep the database in a
healthy state in a multi-transactional environment and in case of failure.
Multiple views: A DBMS offers multiple views for different users. A user in the
sales department will have a different view of the database than a person working
in the production department. This enables each user to have a concentrated view
of the database according to their requirements.
Security: Features like multiple views offer security to some extent, since users
are unable to access the data of other users and departments. A DBMS offers methods
to impose constraints while entering data into the database and while retrieving it at a
later stage. It also offers many different levels of security features, enabling
multiple users to have different views with different capabilities. For
example, not only can a user in the sales department be prevented from seeing the
data of the purchase department, but how much of the sales department's data he
can see can also be managed.
File Oriented Approach: The earliest business computer systems were used to process business
records and produce information. They were generally faster and more accurate than equivalent
manual systems. These systems stored groups of records in separate files, and so they were
called file processing systems. In a typical file processing system, each department has its own
files, designed specifically for its applications. The department itself, working with the data-
processing staff, sets policies or standards for the format and maintenance of its files. Programs
are dependent on the files and vice versa; that is, when the physical format of a file is changed,
the program also has to be changed. Although the traditional file-oriented approach to
information processing is still widely used, it does have some very important disadvantages.
Consider, for example, a savings bank: system programmers wrote its application programs to
meet the needs of the bank, and new application programs are added to the system as the need arises.
For example, suppose that the savings bank decides to offer checking accounts. As a result, the
bank creates new permanent files that contain information about all the checking accounts
maintained in the bank, and it may have to write new application programs to deal with
situations that do not arise in savings accounts, such as overdrafts. Thus, as time goes by, the
system acquires more files and more application programs.
The typical file-processing system is supported by a conventional operating system. The system
stores permanent records in various files, and it needs different application programs to extract
records from, and add records to, the appropriate files. Before database management systems
(DBMSs) came along, organizations usually stored information in such systems.
Difficulty in accessing data: Suppose that one of the bank officers needs to find out the names
of all customers who live within a particular postal-code area. The officer asks the data-
processing department to generate such a list. Because the designers of the original system did
not anticipate this request, there is no application program on hand to meet it. There is, however,
an application program to generate the list of all customers. The bank officer now has two
choices: either obtain the list of all customers and extract the needed information manually, or
ask a system programmer to write the necessary application program. Both alternatives are
obviously unsatisfactory. Conventional file-processing environments do not allow needed data to
be retrieved in a convenient and efficient manner. More responsive data-retrieval systems are
required for general use.
Data isolation: Because data are scattered in various files and files may be in different formats,
writing new application programs to retrieve the appropriate data is difficult.
Integrity problems: The data values stored in the database must satisfy certain types of
consistency constraints. For example, the balance of a bank account may never fall below a
prescribed amount. Developers enforce these constraints in the system by adding appropriate
code in the various application programs. However, when new constraints are added, it is
difficult to change the programs to enforce them.
Atomicity problems: A computer system, like any other mechanical or electrical device, is
subject to failure. In many applications, it is crucial that, if a failure occurs, the data be restored
to the consistent state that existed prior to the failure. Consider a program to transfer Rs.5000
from account A to account B. If a system failure occurs during the execution of the program, it is
possible that the Rs.5000 was removed from account A but was not credited to account B,
resulting in an inconsistent database state. Clearly, it is essential to database consistency that
either both the credit and debit occur, or that neither occur. That is, the funds transfer must be
atomic—it must happen in its entirety or not at all. It is difficult to ensure atomicity in a
conventional file-processing system.
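The funds-transfer example can be demonstrated with SQLite, whose transactions give exactly this all-or-nothing behavior; the table and account names below are made up for the illustration:

```python
import sqlite3

# Two accounts with starting balances (hypothetical figures).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO account VALUES (?, ?)", [("A", 10000), ("B", 2000)])
con.commit()

def transfer(amount, fail_midway=False):
    # "with con" opens a transaction: it commits on success and rolls
    # back everything if an exception escapes the block.
    with con:
        con.execute("UPDATE account SET balance = balance - ? WHERE name = 'A'",
                    (amount,))
        if fail_midway:
            raise RuntimeError("simulated crash between debit and credit")
        con.execute("UPDATE account SET balance = balance + ? WHERE name = 'B'",
                    (amount,))

try:
    transfer(5000, fail_midway=True)   # crash after the debit
except RuntimeError:
    pass

balances = dict(con.execute("SELECT name, balance FROM account"))
print(balances)  # {'A': 10000, 'B': 2000}: the debit was rolled back
```

Calling `transfer(5000)` without the simulated failure commits both updates, leaving A with 5000 and B with 7000.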
Concurrent-access anomalies: For the sake of overall performance of the system and faster
response, many systems allow multiple users to update the data simultaneously. In such an
environment, interaction of concurrent updates may result in inconsistent data. Consider bank
account A, containing Rs.5000. If two customers withdraw funds (say Rs.500 and Rs.1000
respectively) from account A at about the same time, the result of the concurrent executions may
leave the account in an incorrect (or inconsistent) state.
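The anomaly is easy to replay deterministically: if both withdrawals read the balance before either one writes its result back, one update is lost. A sketch:

```python
# Deterministic replay of the lost-update anomaly: both withdrawals read
# the balance before either one writes its result back.
balance = 5000

read1 = balance           # customer 1 reads 5000
read2 = balance           # customer 2 also reads 5000
balance = read1 - 500     # customer 1 writes back 4500
balance = read2 - 1000    # customer 2 overwrites it with 4000

print(balance)  # 4000, although the correct result is 3500
```

A DBMS prevents this interleaving with concurrency control (for example, locking), so the final balance would be Rs.3500.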
Security problems: Not every user of the database system should be able to access all the data.
For example, in a banking system, payroll personnel need to see only that part of the database
that has information about the various bank employees. They do not need access to information
about customer accounts. But, since application programs are added to the system in an ad hoc
manner, enforcing such security constraints is difficult.
Efficient data access: A DBMS utilizes a variety of sophisticated techniques to store and
retrieve data efficiently. This feature is especially important if the data is stored on
external storage devices.
Data integrity and security: If data is always accessed through the DBMS, the DBMS
can enforce integrity constraints on the data. For example, before inserting salary
information for an employee, the DBMS can check that the department budget is not
exceeded. Also, the DBMS can enforce access controls that govern what data is visible to
different classes of users.
Data administration: When several users share the data, centralizing the administration
of data can offer significant improvements. Experienced professionals who understand
the nature of the data being managed, and how different groups of users use it, can be
responsible for organizing the data representation to minimize redundancy.
Reduced application development time: Clearly, the DBMS supports many important
functions that are common to many applications accessing data stored in the DBMS.
This, in conjunction with the high-level interface to the data, facilitates quick
development of applications. Such applications are also likely to be more robust than
applications developed from scratch because many important tasks are handled by the
DBMS instead of being implemented by the application.
Disadvantages of DBMS:
Danger of Overkill: For small and simple single-user applications, a database
system is often not advisable.
Costs: The use of a database system generates new costs, not only for the system itself
but also for additional hardware and the more complex handling of the system.
Lower Efficiency: A database system is multi-purpose software, which is often less efficient
than specialized software produced and optimized for exactly one problem.
Database Users: A DBMS is used by various users for various purposes. Some are involved in
retrieving data, some in backing it up. A few of these users are described as follows:
Administrators: A group of users who maintain the DBMS and are responsible for
administrating the database. They look after its usage and decide by whom it
should be used. They create user accounts, control access, and apply limitations in
order to maintain isolation and enforce security. Administrators also look after DBMS
resources such as the system license, the required software applications and tools,
and other hardware-related maintenance.
Designers: This is the group of people who actually work on the design of the database.
The actual database work starts with requirement analysis, followed by a good design
process. These people keep a close watch on what data should be kept, and in what
format. They identify and design the whole set of entities, relations, constraints and
views.
End Users: This group contains the people who actually take advantage of the database
system. End users can be simple viewers who pay attention to logs or market rates, or
they can be as sophisticated as business analysts who make the most of the system.
Databases change over time as information is inserted and deleted. The collection of information
stored in the database at a particular moment is called an instance of the database. The overall
design of the database is called the database schema. Schemas are changed infrequently, if at all.
Database systems have several schemas, partitioned according to the levels of abstraction. The
physical schema describes the database design at the physical level, while the logical schema
describes the database design at the logical level. A database may also have several schemas at
the view level, sometimes called sub schemas that describe different views of the database.
A data definition language (DDL) is used to define the external and conceptual schemas.
Conceptual Schema: The conceptual schema (sometimes called the logical schema)
describes the stored data in terms of the data model of the DBMS. In a relational DBMS,
the conceptual schema describes all relations that are stored in the database. In a
university database, these relations contain information about entities, such as students
and faculty, and about relationships, such as students’ enrollment in courses. All student
entities can be described using records in a Students relation. In fact, each collection of
entities and each collection of relationships can be described as a relation, leading to the
following conceptual schema:
o Students(sid: string, name: string, login: string, age: integer)
o Faculty(fid: string, fname: string, sal: real)
o Courses(cid: string, cname: string, credits: integer)
o Rooms(rno: integer, address: string, capacity: integer)
o Enrolled(sid: string, cid: string, grade: string)
o Teaches(fid: string, cid: string)
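Under the assumption of a relational DBMS, the conceptual schema above translates directly into SQL DDL (types adapted to SQLite):

```python
import sqlite3

# The six relations of the university conceptual schema as SQL DDL.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Students (sid TEXT PRIMARY KEY, name TEXT, login TEXT, age INTEGER);
CREATE TABLE Faculty  (fid TEXT PRIMARY KEY, fname TEXT, sal REAL);
CREATE TABLE Courses  (cid TEXT PRIMARY KEY, cname TEXT, credits INTEGER);
CREATE TABLE Rooms    (rno INTEGER PRIMARY KEY, address TEXT, capacity INTEGER);
CREATE TABLE Enrolled (sid TEXT, cid TEXT, grade TEXT);
CREATE TABLE Teaches  (fid TEXT, cid TEXT);
""")

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['Courses', 'Enrolled', 'Faculty', 'Rooms', 'Students', 'Teaches']
```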
The physical schema: This schema specifies additional storage details. Essentially, the
physical schema summarizes how the relations described in the conceptual schema are
actually stored on secondary storage devices such as disks and tapes. A sample physical
schema for the university database follows:
o Create indexes on the first column of the Students, Faculty, and Courses relations,
the sal column of Faculty, and the capacity column of Rooms.
Decisions about the physical schema are based on an understanding of how the data
is typically accessed. The process of arriving at a good physical schema is called
physical database design.
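A sketch of part of that physical schema as SQL, using hypothetical index names (only two of the tables are repeated here to keep the example short):

```python
import sqlite3

# Physical schema sketch: explicit indexes on columns named in the text.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Faculty (fid TEXT, fname TEXT, sal REAL);
CREATE TABLE Rooms   (rno INTEGER, address TEXT, capacity INTEGER);
CREATE INDEX idx_faculty_fid    ON Faculty(fid);   -- first column of Faculty
CREATE INDEX idx_faculty_sal    ON Faculty(sal);
CREATE INDEX idx_rooms_capacity ON Rooms(capacity);
""")

indexes = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='index' ORDER BY name")]
print(indexes)  # ['idx_faculty_fid', 'idx_faculty_sal', 'idx_rooms_capacity']
```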
External Schema: External schemas, which usually are also in terms of the data model
of the DBMS, allow data access to be customized (and authorized) at the level of
individual users or groups of users. Any given database has exactly one conceptual
schema and one physical schema because it has just one set of stored relations, but it may
have several external schemas, each tailored to a particular group of users. Each external
schema consists of a collection of one or more views and relations from the conceptual
schema. A view is conceptually a relation, but the records in a view are not stored in the
DBMS. Rather, they are computed using a definition for the view, in terms of relations
stored in the DBMS.
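A minimal sketch of a view used as part of an external schema; the view name and sample rows below are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Students (sid TEXT, name TEXT, login TEXT, age INTEGER)")
con.executemany("INSERT INTO Students VALUES (?,?,?,?)",
                [("53666", "Jones", "jones@cs", 18),
                 ("53688", "Smith", "smith@ee", 19)])

# The view's records are not stored; they are computed from Students.
con.execute("CREATE VIEW YoungStudents AS "
            "SELECT sid, name FROM Students WHERE age < 19")

young = list(con.execute("SELECT * FROM YoungStudents"))
print(young)  # [('53666', 'Jones')]
```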
Fig 4.1: Three level architecture
External schema: It describes a subset of the database that a particular user group is interested
in, in the format that user group wants, and hides the rest; it may contain virtual data
that is derived from the files but is not explicitly stored.
Conceptual schema: It hides the details of physical storage structures and concentrates on
describing entities, data types, relationships, operations, and constraints.
Internal schema: It describes the physical storage structure of the DB and uses a low-level
(physical) data model to describe the complete details of data storage and access paths.
Data and meta-data: The three schemas are only meta-data (descriptions of data); the data
itself actually exists only at the physical level.
Mapping: DBMS must transform a request specified on an external schema into a request
against the conceptual schema, and then into the internal schema.
Logical data independence: The capacity to change the conceptual schema without having to
change external schema or application programs is called logical data independence.
Example: record of an employee is defined as below.
Employee (E#, Name, Address, Salary)
A view including only E# and Name is not affected by changes in any other attributes.
Logical data independence is the capacity to change the conceptual schema without having to
change external schemas or application programs. We may change the conceptual schema to
expand the database (by adding a record type or data item), or to reduce the database (by
removing a record type or data item). In the latter case, external schemas that refer only to the
remaining data should not be affected. Only the view definition and the mappings need be
changed in a DBMS that supports logical data independence.
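The Employee example can be demonstrated concretely: a view over E# and Name keeps working when the conceptual schema is expanded. Column names are adapted for SQL and the sample row is made up:

```python
import sqlite3

# Logical data independence sketch: a view over E# and Name (renamed
# eno/name for SQL) is unaffected when the schema is expanded.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (eno INTEGER, name TEXT, address TEXT, salary REAL)")
con.execute("INSERT INTO Employee VALUES (1, 'Rohan', 'Delhi', 50000)")
con.execute("CREATE VIEW EmpView AS SELECT eno, name FROM Employee")

before = list(con.execute("SELECT * FROM EmpView"))
con.execute("ALTER TABLE Employee ADD COLUMN dept TEXT")  # expand the schema
after = list(con.execute("SELECT * FROM EmpView"))

print(before == after)  # True: the external view did not change
```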
Physical data independence: The capacity to change the internal schema without having to
change the conceptual (or external) schema.
The internal schema may change to improve performance (e.g., by creating an additional
access structure). Physical data independence is easier to achieve than logical data
independence, because application programs are heavily dependent on the logical
structure of the data they access.
Overview of database design: The database design process can be divided into six steps, and
the ER model is most relevant to the first three. The first two steps are requirements analysis
(understanding what data is to be stored and how it will be used) and conceptual database
design (developing a high-level, ER-style description of the data); the remaining steps are
described below.
Logical Database Design: We must choose a DBMS to implement our database design,
and convert the conceptual database design into a database schema in the data model of
the chosen DBMS. We will only consider relational DBMSs, and therefore, the task in
the logical design step is to convert an ER schema into a relational database schema.
Schema Refinement: The fourth step in database design is to analyze the collection of
relations in our relational database schema to identify potential problems, and to refine it.
In contrast to the requirements analysis and conceptual design steps, which are essentially
subjective, schema refinement can be guided by some elegant and powerful theory like
the theory of normalizing relations—restructuring them to ensure some desirable
properties.
Physical Database Design: In this step we must consider typical expected workloads
that our database must support and further refine the database design to ensure that it
meets desired performance criteria.
Security Design: In this step, we identify different user groups and different roles played
by various users (e.g., the development team for a product, the customer support
representatives, the product manager). For each role and user group, we must identify the
parts of the database that they must be able to access and the parts of the database that
they should not be allowed to access, and take steps to ensure that they can access only
the necessary parts.
[Fig.: ER notation for relationship, weak relationship, attribute and multi-valued attribute]
Entity with attributes: Let employee be an entity. It can have several attributes, such as
employee code (E-No.), employee name (E-NAME), address (ADD), phone number (Ph. No.),
department number (DEPT No.) and designation.
[Fig.: EMPLOYEE entity with attributes E-No., E-NAME, ADD, DESIGNATION, Ph. No. and DEPT No.]
Several types of attributes occur in the ER model: simple versus composite; single-valued versus
multivalued; and stored versus derived.
Composite Versus Simple (Atomic) Attributes: Composite attributes can be divided into
smaller subparts, which represent more basic attributes with independent meanings. For example,
the Address attribute of the employee entity can be sub-divided into Street Address, City, State,
and PIN code. Attributes that are not divisible are called simple or atomic attributes. The value of
a composite attribute is the concatenation of the values of its constituent simple attributes.
[Fig.: composite attribute Address divided into House no., Street no., City, State and PIN]
Single-valued Versus Multi-valued Attributes: Most attributes have a single value for a
particular entity; such attributes are called single-valued. For example, Age is a single-valued
attribute of person. In some cases an attribute can have a set of values for the same entity—for
example, phone no. attribute for an employee, or a College Degrees attribute for a person. One
person may not have a college degree, another person may have one, and a third person may
have two or more degrees; so different persons can have different numbers of values for the
College Degrees attribute. Such attributes are called multi-valued. A multi-valued attribute may
have lower and upper bounds on the number of values allowed for each individual entity.
[Fig.: multi-valued attribute College Degrees shown in a double oval]
Stored Versus Derived Attributes: In some cases two (or more) attribute values are
related—for example, the Age and Birth Date attributes of a person. For a particular person
entity, the value of Age can be determined from the current (today’s) date and the value of that
person’s Birth Date. The Age attribute is hence called a derived attribute and is said to be
derivable from the Birth Date attribute, which is called a stored attribute. Some attribute values
can be derived from related entities; for example, an attribute Number of Employees of a
department entity can be derived by counting the number of employees related to (working for)
that department.
Entity Types and Entity Sets: A database usually contains groups of entities that are similar.
For example, a company employing hundreds of employees may want to store similar
information concerning each of the employees. These employee entities share the same
attributes, but each entity has its own value(s) for each attribute. An entity type defines a
collection (or set) of entities that have the same attributes. Each entity type in the database is
described by its name and attributes. The following figure shows two entity types, named
EMPLOYEE and DEPARTMENT, and a list of attributes for each. The collection of all entities
of a particular entity type in the database at any point in time is called an entity set; the entity set
is usually referred to using the same name as the entity type. For example, EMPLOYEE refers to
both a type of entity as well as the current set of all employee entities in the database. An entity
type is represented in ER diagrams as a rectangular box enclosing the entity type name. Attribute
names are enclosed in ovals and are attached to their entity type by straight lines. Composite
attributes are attached to their component attributes by straight lines. Multi-valued attributes are
displayed in double ovals.
[Fig.: entity types EMPLOYEE (E-No., E-NAME, ADD, DESIGNATION, Ph. No., DEPT No.) and DEPARTMENT (D-No., D-NAME, ADD, PH. No.) with their attributes]
Key Attributes of an Entity Type: An important constraint on the entities of an entity type is
the key or uniqueness constraint on attributes. An entity type usually has an attribute whose
values are distinct for each individual entity in the collection. Such an attribute is called a key
attribute, and its values can be used to identify each entity uniquely. For example E-No
(employee code ) is unique for EMPLOYEE so this is the primary key. Sometimes, several
attributes together form a key, meaning that the combination of the attribute values must be
distinct for each entity. If a set of attributes possesses this property, we can define a composite
attribute that becomes a key attribute of the entity type. Notice that a composite key must be
minimal; that is, all component attributes must be included in the composite attribute to have the
uniqueness property. In ER diagrammatic notation, each key attribute has its name underlined
inside the oval. Specifying that an attribute is a key of an entity type means that the preceding
uniqueness property must hold for every extension of the entity type. Hence, it is a constraint that
prohibits any two entities from having the same value for the key attribute at the same time.
Relationships may be binary (between two entity types), recursive (between an entity type and
itself), or ternary (among three entity types). The cardinality of a relationship can be:
One-to-One: each entity is associated with at most one entity of the other type, and vice versa.
One-to-Many: one entity is associated with many entities of the other type, while each of those
entities is associated with only one entity of the first type.
Many-to-Many: entities on each side may be associated with many entities of the other side.
For example, consider a company database. There will be employees; each employee works for
one department but may work on several projects. We keep track of the number of hours per week
that an employee currently works on each project, and of the direct supervisor of each employee.
Each employee may have a number of DEPENDENTs; for each dependent, we keep track of name,
sex, birth date, and relationship to the employee.
Domain Integrity: Domain integrity means the definition of a valid set of values for an attribute.
The definition should specify:
- the data type,
- the length or size,
- whether a null value is allowed,
- whether the value must be unique for the attribute.
The default value, the range (values in between) and/or specific values for the attribute may also
be defined. Some DBMSs allow defining the output format and/or an input mask for the
attribute. These definitions ensure that a specific attribute will have a right and proper value in
the database.
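The domain-integrity rules listed above map directly onto SQL column definitions. A sketch with a hypothetical table and values:

```python
import sqlite3

# Domain integrity as column definitions: data type, NOT NULL, UNIQUE,
# a DEFAULT value, and a CHECK on the allowed range.
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE Student (
    roll_no INTEGER NOT NULL UNIQUE,
    name    TEXT    NOT NULL,
    age     INTEGER DEFAULT 18 CHECK (age BETWEEN 15 AND 60)
)""")
con.execute("INSERT INTO Student (roll_no, name) VALUES (1, 'Asha')")

try:
    con.execute("INSERT INTO Student VALUES (2, 'Ravi', 99)")  # violates CHECK
except sqlite3.IntegrityError:
    print("rejected: age out of range")

rows = list(con.execute("SELECT * FROM Student"))
print(rows)  # [(1, 'Asha', 18)]: the DEFAULT filled in the age
```

The out-of-range insert is rejected, while the omitted age falls back to the DEFAULT.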
Entity Integrity Constraint: The entity integrity constraint states that primary keys can't be
null. There must be a proper value in the primary key field. This is because the primary key
value is used to identify individual rows in a table. If there were null values for primary keys, it
would mean that we could not identify those rows.
On the other hand, fields other than the primary key may contain null values. A null value means
that one does not know the value for that field; it is different from a zero value or a space.
Referential Integrity Constraint: The referential integrity constraint is specified between two
tables and it is used to maintain the consistency among rows between the two tables.
The rules are:
1. A record from a primary table cannot be deleted if matching records exist in a related table.
2. A primary key value in the primary table cannot be changed if that record has related records.
3. A value cannot be entered in the foreign key field of the related table that doesn't exist in the
primary key of the primary table.
4. A Null value can be entered in the foreign key, specifying that the records are unrelated.
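Rules 1, 3 and 4 above can be observed directly with a FOREIGN KEY declaration (rule 2 behaves analogously); the table names are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only with this on
con.executescript("""
CREATE TABLE Dept (dept_no INTEGER PRIMARY KEY, dname TEXT);
CREATE TABLE Emp  (emp_no INTEGER PRIMARY KEY,
                   dept_no INTEGER REFERENCES Dept(dept_no));
INSERT INTO Dept VALUES (10, 'Accounting');
INSERT INTO Emp  VALUES (1, 10);
INSERT INTO Emp  VALUES (2, NULL);        -- rule 4: NULL means "unrelated"
""")

rejected = []
try:
    con.execute("INSERT INTO Emp VALUES (3, 99)")       # rule 3: no Dept 99
except sqlite3.IntegrityError:
    rejected.append("insert")
try:
    con.execute("DELETE FROM Dept WHERE dept_no = 10")  # rule 1: has matches
except sqlite3.IntegrityError:
    rejected.append("delete")

print(rejected)  # ['insert', 'delete']
```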
Foreign Key Integrity Constraint: There are two foreign key integrity constraints: cascade
update related fields and cascade delete related rows. These constraints affect the referential
integrity constraint.
Cascade Update Related Fields: Any time the primary key of a row in the primary table is
changed, the foreign key values are updated in the matching rows in the related table. This
constraint overrules rule 2 in the referential integrity constraints.
Cascade Delete Related Rows: Any time you delete a row in the primary table, the matching
rows are automatically deleted in the related table. This constraint overrules rule 1 in the
referential integrity constraints.
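Both cascade constraints can be declared on the foreign key itself; a sketch with hypothetical tables:

```python
import sqlite3

# Cascade rules: ON UPDATE CASCADE and ON DELETE CASCADE on the FK.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE Dept (dept_no INTEGER PRIMARY KEY, dname TEXT);
CREATE TABLE Emp  (emp_no INTEGER PRIMARY KEY,
                   dept_no INTEGER REFERENCES Dept(dept_no)
                           ON UPDATE CASCADE ON DELETE CASCADE);
INSERT INTO Dept VALUES (10, 'Accounting'), (20, 'Marketing');
INSERT INTO Emp  VALUES (1, 10), (2, 20);
""")

con.execute("UPDATE Dept SET dept_no = 30 WHERE dept_no = 10")  # cascades to Emp 1
con.execute("DELETE FROM Dept WHERE dept_no = 20")              # deletes Emp 2

emps = list(con.execute("SELECT emp_no, dept_no FROM Emp ORDER BY emp_no"))
print(emps)  # [(1, 30)]
```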
Database normalization:
Un-normalized data exists in flat files. Normalization is the process of moving data into related
tables and removing redundant data from them, in order to improve storage efficiency, data
integrity, and scalability.
In the relational model, methods exist for quantifying how efficient a database is. These
classifications are called normal forms (or NF), and there are algorithms for converting a given
database between them.
Normalization generally involves splitting existing tables into multiple ones, which must be re-
joined or linked each time a query is issued.
Edgar F. Codd first proposed the process of normalization and what came to be known as the 1st
normal form. In his paper “A Relational Model of Data for Large Shared Data Banks”, Codd
stated: “There is, in fact, a very simple elimination procedure which we shall call
normalization. Through decomposition non simple domains are replaced by ‘domains
whose elements are atomic values.’”
Example of Normalization:
Table: Departments

DeptNo  DeptName
10      Accounting
20      Marketing
50      Shipping

Table: Employees

106   10    11/02/2013
107   500   11/02/2013
108   700   11/02/2013
115   10    11/09/2013
116   700   11/09/2013
Let's start by adding a couple of books written by Luke Welling and Laura Thomson. Because
this book has two authors, we are going to need to accommodate both in our table.
First, this table is not very efficient with storage. Let’s imagine for a second that Luke and Laura
were extremely busy writers and managed to produce 500 books for our database. The
combination of their two names is 25 characters long, and since we will repeat their two names
in 500 rows we are wasting 25 × 500 = 12,500 bytes of storage space unnecessarily. This creates
data redundancy.
Second, this design does not protect data integrity. Let’s once again imagine that Luke and Laura
have written 500 books. Someone has had to type their names into the database 500 times, and it
is very likely that one of their names will be misspelled at least once (e.g., Thompson instead of
Thomson). Our data is now corrupt, and anyone searching for books by author name will find
some of the results missing. The same thing could happen with the publisher name. Sams publishes
hundreds of titles and if the publisher's name were misspelled even once the list of books by
publisher would be missing titles.
Third, this table does not scale well. First of all, we have limited ourselves to only two authors,
yet some books are written by over a dozen people.
Example 2:
Insertion Problem: To insert the record of a student who has taken new admission but has not
yet opted for any subject, we have to leave the subject-opted column null for that student.
Deletion Problem: Suppose student 402 has opted out temporarily from the subject
Mathematics; then we have to delete that row, but doing so will delete the whole student record.
First Normal Form: The normalization process involves getting our data to conform to three
progressive normal forms, and a higher level of normalization cannot be achieved until the
previous levels have been achieved (there are actually five normal forms). The First Normal
Form (or 1NF) involves removal of redundant data from horizontal rows. We want to ensure that
there is no duplication of data in a given row, and that every column stores the least amount of
information possible (making the field atomic).
In our Example 1 table above we have two violations of First Normal Form: first, we have more
than one author field; second, our subject field contains more than one piece of information. With
more than one value in a single field, it would be very difficult to search for all books on a given
subject. In addition, with two author fields we have two fields to search in order to look for a
book by a specific author. We could get around these problems by modifying our table to have
only one author field, with two entries for a book with two authors, as in the following table:
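This one-author-per-row fix can be sketched as a simple flattening step; the book title and author names below are sample data invented for illustration.

```python
# Multi-valued author field (violates 1NF): two authors in one string.
books = [("Database Design", "Luke Thomson, Laura Thomson")]

# Flatten to one (title, author) row per author, as 1NF requires.
rows_1nf = [
    (title, author.strip())
    for title, authors in books
    for author in authors.split(",")
]
print(rows_1nf)
# [('Database Design', 'Luke Thomson'), ('Database Design', 'Laura Thomson')]
```

With one atomic author value per row, a search for a single author needs to examine only one column.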
Example 2: We want to create a table of user information, storing each user's Name, Company, Company Address, and some personal URLs. You might start by defining a table structure like this:
This table is in un-normalized form because none of the rules of normalization have been applied yet.
Notice how we are breaking the first rule by repeating the url1 and url2 fields? Now the table is said to be in First Normal Form. We have solved the url field problem, but in doing so we have created new ones. Every time we insert a new record into the users table, we have to duplicate all of the company and user name data. Not only will our database grow much larger than we would ever want it to, but we could easily begin corrupting our data by misspelling some of that redundant information.
Second Normal Form: No column may be partially dependent on the primary key. For a table with a concatenated (composite) primary key, every column that is not part of the key must depend on the entire concatenated key for its existence. If any column depends on only one part of the concatenated key, the table is not in Second Normal Form.
In the above table the concatenation of C-id and Order-id is the primary key, so the table is in First Normal Form. But it is not in Second Normal Form, because there is partial dependency: C-name depends only on C-id, and Order-name depends only on Order-id. The table will be decomposed to bring it into Second Normal Form.
Customer Details
C-id C-name
101 Rohan
102 Ravi
103 Ranjan
Order Details
C-id Order-id Order-name
102 12 Sale 3
103 13 Sale 4
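The decomposition into the Customer and Order tables can be sketched as follows. The combined rows below are reconstructed from the two result tables above, so they are illustrative only.

```python
# Combined 1NF rows: (C-id, C-name, Order-id, Order-name).
# C-name depends only on C-id; Order-name depends only on Order-id.
combined = [
    (102, "Ravi", 12, "Sale 3"),
    (103, "Ranjan", 13, "Sale 4"),
]

# 2NF decomposition: split out the partial dependencies into two tables.
customers = sorted({(cid, cname) for cid, cname, _, _ in combined})
orders = sorted({(cid, oid, oname) for cid, _, oid, oname in combined})
print(customers)  # [(102, 'Ravi'), (103, 'Ranjan')]
print(orders)     # [(102, 12, 'Sale 3'), (103, 13, 'Sale 4')]
```

Each customer name is now stored exactly once, however many orders that customer places.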
Third Normal Form: The conditions for Third Normal Form are that the table must already be in Second Normal Form and that no non-key column depends transitively on the primary key, i.e., no non-key column depends on another non-key column.
Example 1
The table is not in First Normal Form because there are multiple values in the colour field. Row 1 and row 3 are the same, so there are duplicate records, and there is no primary key. So it first needs to be converted into First Normal Form.
1NF
The above table is not in Second Normal Form, as price and tax depend on the item but not on the colour.
Item Table
Item Colour
T-Shirt Red
Polo Red
T-Shirt Blue
Sweat-shirt Blue
Polo Yellow
Sweat-shirt Black
Price table
The tables are not in Third Normal Form, as tax depends on price but both are non-key fields.
Item table
Item Colour
T-Shirt Red
Polo Red
T-Shirt Blue
Sweat-shirt Blue
Polo Yellow
Sweat-shirt Black
Price table
Item Price
T-Shirt 240.00
Polo 240.00
Tax table
Price Tax
240.00 0.60
500.00 1.25
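The 3NF step above can be sketched in the same way: tax depends on price, a non-key column, so it is moved into its own table keyed by price. The rows below use the prices and tax rates from the example.

```python
# 2NF price rows: (item, price, tax). Tax depends on price, not on the key.
price_rows = [
    ("T-Shirt", 240.00, 0.60),
    ("Polo", 240.00, 0.60),
]

# 3NF decomposition: prices keep (item, price); taxes map each price once.
prices = [(item, price) for item, price, _ in price_rows]
taxes = sorted({(price, tax) for _, price, tax in price_rows})
print(prices)  # [('T-Shirt', 240.0), ('Polo', 240.0)]
print(taxes)   # [(240.0, 0.6)]
```

If the tax rate for 240.00 changes, it now has to be updated in exactly one row.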
Example 2: Un-normalized Table:
Applying First Normal Form (no repeating fields, data in its smallest parts) gives:
Table: Students
Student-ID Class-ID
1022 101-07
1022 143-01
1022 159-02
4123 201-01
4123 211-02
4123 214-01
Advisor table:
Student table:
1022 Rohan 10
4123 Rakesh 12
Table: Registration
Student-ID Class-ID
1022 101-07
1022 143-01
1022 159-02
4123 201-01
4123 211-02
4123 214-01
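Once the Registration table holds one (student, class) row per enrolment, queries against it become straightforward. A small sketch using the rows from the example:

```python
# Registration rows from the example: one (student_id, class_id) per enrolment.
registrations = [
    (1022, "101-07"), (1022, "143-01"), (1022, "159-02"),
    (4123, "201-01"), (4123, "211-02"), (4123, "214-01"),
]

# All classes taken by student 1022: a simple filter, no repeated fields.
classes_for_1022 = [cls for sid, cls in registrations if sid == 1022]
print(classes_for_1022)  # ['101-07', '143-01', '159-02']
```

Adding a fourth class for a student is now just one new row; no table redesign is needed, which is exactly the scaling problem the un-normalized repeating fields could not handle.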