Backup and recovery: A DBMS provides mechanisms for backing up and recovering the
data in the event of a system failure.
File-based systems were an early attempt to computerize the manual system. This is also called
the traditional approach, in which a decentralized strategy was taken: each department stored
and controlled its own data with the help of a data processing specialist. The main role of the
data processing specialist was to create the necessary computer file structures, manage the data
within those structures, and design application programs that produce reports from the file
data.
Consider an example of a student file system. The student file contains information about the
student (e.g. roll no, student name, course). Similarly, there is a subject file that contains
information about each subject, and a result file that contains information about results.
Some fields are duplicated in more than one file, which leads to data redundancy. To
overcome this problem, we need a centralized system, i.e. the DBMS approach.
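The redundancy problem can be sketched in a few lines of Python. The files, field names, and values below are illustrative, not from any real system; the point is that the same facts live in two places, so an update to one file silently leaves the other stale.

```python
# Sketch of a file-based system: each department keeps its own file,
# so the student's name is duplicated in every file that mentions it.
# All names and fields here are made up for illustration.

student_file = [
    {"roll_no": 1, "name": "Asha", "course": "BSc"},
]
result_file = [
    # roll_no and name repeated here -> data redundancy
    {"roll_no": 1, "name": "Asha", "subject": "DBMS", "marks": 78},
]

# An update applied to one file but not the other causes inconsistency:
student_file[0]["name"] = "Asha Rao"
inconsistent = student_file[0]["name"] != result_file[0]["name"]
print(inconsistent)  # True: the two files now disagree
```

A centralized DBMS avoids this by storing the student's name once and letting every application reference it.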
DBMS:
The database approach is a well-organized collection of data that are related in a meaningful
way, which can be accessed by different users but is stored only once in the system. The main
operations performed by a DBMS are insertion, deletion, selection, sorting, etc.
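The operations listed above can be demonstrated against a small centralized store. The sketch below uses Python's built-in sqlite3 module purely as a stand-in DBMS; the table and column names are illustrative.

```python
import sqlite3

# A minimal sketch of the DBMS operations named in the text:
# insertion, deletion, selection, and sorting, all against one
# centralized table instead of per-department files.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE student (roll_no INTEGER PRIMARY KEY, name TEXT)")

# Insertion
cur.executemany("INSERT INTO student VALUES (?, ?)",
                [(2, "Ravi"), (1, "Asha"), (3, "Meena")])

# Deletion
cur.execute("DELETE FROM student WHERE roll_no = 3")

# Selection combined with sorting
cur.execute("SELECT name FROM student ORDER BY name")
names = [row[0] for row in cur.fetchall()]
print(names)  # ['Asha', 'Ravi']
conn.close()
```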
2. Network Model
The network model was formalized by the Database Task Group (DBTG) in the late 1960s.
This model is a generalization of the hierarchical model: a record can have multiple parent
segments, and the segments are grouped into levels, but a logical association can exist
between segments belonging to any levels. Typically, there is a many-to-many logical
association between any two segments.
Simple Architecture: 1-Tier Architecture is the simplest architecture to set up, as only a
single machine is required to maintain it.
Cost-Effective: No additional hardware is required for implementing 1-Tier Architecture,
which makes it cost-effective.
Easy to Implement: 1-Tier Architecture can be easily deployed, and hence it is mostly
used in small projects.
2-Tier Architecture
The 2-tier architecture is similar to a basic client-server model. The application at the client
end directly communicates with the database on the server side. APIs like ODBC and JDBC
are used for this interaction. The server side is responsible for providing query processing and
transaction management functionalities. On the client side, the user interfaces and application
programs are run. The application on the client side establishes a connection with the server
side to communicate with the DBMS.
The advantages of this type are that it is easier to maintain and understand, and it is
compatible with existing systems. However, this model performs poorly when there are a
large number of users.
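The client/server split described above can be sketched in code. In Python, the DB-API connection plays the role that ODBC and JDBC play in the text: the client program opens a connection and sends SQL, while the server side performs the query processing. SQLite is used here only as a stand-in DBMS, and the function name is illustrative.

```python
import sqlite3  # Python's DB-API, analogous to the ODBC/JDBC layer in the text


def client_app(db_path):
    """Client side of a 2-tier setup: open a connection, send a query,
    and let the DBMS engine (the 'server side') do the query processing."""
    conn = sqlite3.connect(db_path)  # connection established by the client
    try:
        cur = conn.execute("SELECT 1 + 1")  # processed by the DBMS engine
        return cur.fetchone()[0]
    finally:
        conn.close()  # each client holds its own connection


result = client_app(":memory:")
print(result)  # 2
```

Because every client holds its own direct connection to the DBMS, the number of open connections grows with the number of users, which is why this model scales poorly.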
Data Independence
A major objective for three-level architecture is to provide data independence, which means
that upper levels are unaffected by changes in lower levels.
Logical data independence indicates that the conceptual schema can be changed without
affecting the existing external schemas. The change would be absorbed by the mapping
between the external and conceptual levels. Logical data independence also insulates
application programs from operations such as combining two records into one or splitting an
existing record into two or more records. This would require a change in the
external/conceptual mapping so that the external view remains unchanged.
Physical data independence indicates that the physical storage structures or devices could be
changed without affecting conceptual schema. The change would be absorbed by the mapping
between the conceptual and internal levels.
Logical data independence is more difficult to achieve than physical data independence, as it
requires flexibility in the design of the database, and the programmer should foresee future
requirements or modifications of the design.
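Logical data independence can be illustrated with a view, which acts as an external schema over a base table (the conceptual schema). In the sketch below, the conceptual schema changes (a column is added) while the external view is unaffected; SQLite and all object names are illustrative.

```python
import sqlite3

# Sketch of logical data independence: an external view keeps its shape
# while the conceptual schema underneath gains a new column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER, name TEXT)")
conn.execute("CREATE VIEW student_names AS SELECT name FROM student")
conn.execute("INSERT INTO student VALUES (1, 'Asha')")

before = conn.execute("SELECT * FROM student_names").fetchall()

# Change the conceptual schema: add a column to the base table.
conn.execute("ALTER TABLE student ADD COLUMN course TEXT")

after = conn.execute("SELECT * FROM student_names").fetchall()
print(before == after)  # True: the external view is unaffected
conn.close()
```

The mapping between the external and conceptual levels (here, the view definition) absorbs the change, exactly as the text describes.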
Responsibilities of DBA
Database design
Performance issues
Database accessibility
Capacity issues
Data replication
Table maintenance
Data Dictionary
Metadata (also called the data dictionary) is data about the data. It is the self-describing
part of the database that provides program-data independence. It is also called the system
catalog. For each data element in the database, it normally holds the following
information:
– Name
– Type
– Range of values
– Source
– Access authorization
– Which application programs use the data, so that when a change to a data structure is
contemplated, a list of the affected programs can be generated.
The data dictionary is used to control the database operation, data integrity, and accuracy.
Metadata is used by developers to develop the programs, queries, controls, and procedures to
manage and manipulate the data.
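Most DBMSs expose the data dictionary as ordinary queryable tables. As a small illustration, SQLite records each object's name, type, and defining SQL in its `sqlite_master` catalog; the table created below is made up for the example.

```python
import sqlite3

# The "data about data" idea in miniature: SQLite's system catalog
# (sqlite_master) describes every object in the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subject (code TEXT, title TEXT)")

rows = conn.execute(
    "SELECT type, name, sql FROM sqlite_master WHERE name = 'subject'"
).fetchall()
obj_type, obj_name, obj_sql = rows[0]
print(obj_type, obj_name)  # table subject
conn.close()
```

Querying the catalog like this is exactly how tools and developers discover names, types, and structures without reading application code.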
Active and Passive Data Dictionaries
The data dictionary may be either active or passive. An active data dictionary (also called an
integrated data dictionary) is managed automatically by the database management software,
so it is always consistent with the current structure and definition of the database. Most
relational database management systems contain active data dictionaries that can be derived
from their system catalog.
A passive data dictionary (also called a non-integrated data dictionary) is one used only for
documentation purposes. Data about fields, files, people, and so on in the data processing
environment are entered into the dictionary and cross-referenced.
DDL is short for Data Definition Language, which deals with database schemas and
descriptions of how the data should reside in the database.
CREATE – create the database and its objects (tables, indexes, views, stored procedures,
functions, and triggers)
ALTER – alter the structure of an existing database object
DROP – delete objects from the database
TRUNCATE – remove all records from a table, including the space allocated for them
COMMENT – add comments to the data dictionary
RENAME – rename an object
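A few of these DDL statements can be exercised directly. The sketch below uses SQLite through Python (SQLite supports CREATE, ALTER, and DROP, though not TRUNCATE); table and column names are illustrative.

```python
import sqlite3

# DDL in action: define, alter, and drop a schema object.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE result (roll_no INTEGER, marks INTEGER)")
conn.execute("ALTER TABLE result ADD COLUMN grade TEXT")  # alter the structure

# PRAGMA table_info reads the catalog; row[1] is the column name.
cols = [row[1] for row in conn.execute("PRAGMA table_info(result)")]
print(cols)  # ['roll_no', 'marks', 'grade']

conn.execute("DROP TABLE result")  # delete the object from the database
conn.close()
```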
DML is short for Data Manipulation Language, which deals with data manipulation and
includes the most common SQL statements such as SELECT, INSERT, UPDATE, and
DELETE. It is used to store, modify, retrieve, and delete data in the database.
SELECT – retrieve data from the database
INSERT – insert data into a table
UPDATE – update existing data within a table
DELETE – delete records from a table (all rows, or those matching a condition)
MERGE – UPSERT operation (insert or update)
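The DML statements above can be run end to end against SQLite from Python. SQLite has no MERGE statement, so its `INSERT ... ON CONFLICT` upsert form stands in for it here; the table and values are illustrative.

```python
import sqlite3

# DML in action: INSERT, UPDATE, an upsert (MERGE stand-in), and DELETE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marks (roll_no INTEGER PRIMARY KEY, score INTEGER)")
conn.execute("INSERT INTO marks VALUES (1, 60)")
conn.execute("UPDATE marks SET score = 75 WHERE roll_no = 1")

# Upsert: insert a new row, or update the score if roll_no already exists.
conn.execute("INSERT INTO marks VALUES (1, 90) "
             "ON CONFLICT(roll_no) DO UPDATE SET score = excluded.score")

score = conn.execute("SELECT score FROM marks WHERE roll_no = 1").fetchone()[0]
print(score)  # 90

conn.execute("DELETE FROM marks WHERE roll_no = 1")
count = conn.execute("SELECT COUNT(*) FROM marks").fetchone()[0]
print(count)  # 0
conn.close()
```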
The term cloud refers to a network or the internet. It is a technology that uses remote servers on
the internet to store, manage, and access data online rather than local drives. The data can be
anything such as files, images, documents, audio, video, and more.
There are the following operations that we can do using cloud computing:
o Developing new applications and services
o Storage, back up, and recovery of data
o Hosting blogs and websites
o Delivery of software on demand
o Analysis of data
o Streaming videos and audios
Types of Cloud
There are the following five types of cloud that you can deploy according to the
organization's needs:
o Public Cloud
o Private Cloud
o Hybrid Cloud
o Community Cloud
o Multi Cloud
Public cloud is open to all to store and access information via the Internet using the pay-per-
usage method.
In public cloud, computing resources are managed and operated by the Cloud Service Provider
(CSP). The CSP looks after the supporting infrastructure and ensures that the resources are
accessible to and scalable for the users.
Due to its open architecture, anyone with an internet connection may use the public cloud,
regardless of location or company size. Users can use the CSP's numerous services, store their
data, and run apps. By using a pay-per-usage strategy, customers can be assured that they will
only be charged for the resources they actually use, which is a smart financial choice.
Private cloud is also known as an internal cloud or corporate cloud. It is used by
organizations to build and manage their own data centers, either internally or through a third
party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.
Hybrid cloud is a combination of the public cloud and the private cloud. We can say:
Hybrid cloud is partially secure because the services which are running on the public cloud can
be accessed by anyone, while the services which are running on a private cloud can be accessed
only by the organization's users. In a hybrid cloud setup, organizations can leverage the benefits
of both public and private clouds to create a flexible and scalable computing environment. The
public cloud portion allows using cloud services provided by third-party providers, accessible
over the Internet.
In a community cloud setup, the participating organizations, which can be from the same
industry, government sector, or any other community, collaborate to establish a shared cloud
infrastructure. This infrastructure allows them to access shared services, applications, and data
relevant to their community.
Multi-cloud is a strategy in cloud computing where companies utilize more than one cloud
service provider or platform to meet their computing needs. It involves distributing
workloads, applications, and data across multiple cloud environments, including public,
private, and hybrid clouds.
Adopting a multi-cloud approach allows businesses to have the ability to select and leverage the
most appropriate cloud services from different providers based on their specific necessities.
This allows them to harness each provider's distinctive capabilities and services, mitigating
the risk of relying solely on one vendor while benefiting from competitive pricing models.
Cloud computing service & deployment models
Software as a Service (SaaS). The capability provided to the consumer is to use the
provider's applications running on a cloud infrastructure. The applications are accessible
from various client devices through either a thin client interface, such as a web browser
(e.g., web-based email), or a program interface.
Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming
languages, libraries, services, and tools supported by the provider.