
The most commonly used SQL Queries, with examples

September 8, 2018

A. Data Definition Language (DDL)


1. CREATE
CREATE  queries are used to create a new database or table.

CREATE TABLE table_name (

column_1 datatype_1,

column_2 datatype_2

);

2. ALTER
ALTER  queries are used to modify the structure of a database or table, such as adding a
new column, changing a column's data type, or dropping or renaming an existing column.

ALTER TABLE table_name

ADD column_name datatype;

3. DROP
DROP  queries are used to delete a database or table. Be careful when using this type of
query: it removes everything, including the table definition along with all the data,
indexes, triggers, constraints and permission specifications for that table.

DROP TABLE table_name;

4. TRUNCATE
TRUNCATE  queries are used to empty a table, removing all existing records while keeping
the table itself.

TRUNCATE TABLE table_name;
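The four DDL statements above can be sketched end-to-end with Python's built-in sqlite3 module. The table and column names below are invented for the illustration, and note one assumption made explicit in the comments: SQLite has no TRUNCATE statement, so an unqualified DELETE plays that role.

```python
import sqlite3

# In-memory database for illustration; names are hypothetical examples.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE: define a new table.
cur.execute("CREATE TABLE students (full_name TEXT, student_id INTEGER)")

# ALTER: add a column to the existing table.
cur.execute("ALTER TABLE students ADD COLUMN state_code TEXT")
cols = [row[1] for row in cur.execute("PRAGMA table_info(students)")]

# SQLite has no TRUNCATE statement; an unqualified DELETE empties the
# table while keeping its definition, which is the same idea.
cur.execute("DELETE FROM students")

# DROP: remove the table definition along with all its data.
cur.execute("DROP TABLE students")
remaining = cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
conn.close()
```

After the ALTER, the table has all three columns; after the DROP, no tables remain in the database.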

B. Data Manipulation Language (DML): managing data within table objects.


1. SELECT
 SELECT … FROM …  is the most basic and commonly used query in SQL. It’s used for
retrieving data from a table.

A common  SELECT  query is broken down into four main parts:

 SELECT
 FROM
 WHERE
 ORDER BY

Let’s look at each part in turn.

 To see data of an entire table:

SELECT * FROM table_name;

 To see data in some specific columns:

SELECT column_name(s) FROM table_name;

 To see data from your table based on some conditions, use  WHERE :

SELECT column_name(s)

FROM table_name

WHERE condition(s);

By using  WHERE  in a  SELECT  query, we add one or more conditions and restrict the number
of records affected by the query.
In other words, it acts as a filter, returning only the records that match the conditions.
Example:

SELECT * FROM students

WHERE state_code = 'CA';

This query returns every record from the students table whose state_code is 'CA'.
 ORDER BY  is a clause that indicates you want to sort the result set by a particular
column either alphabetically or numerically.

SELECT column_name

FROM table_name

ORDER BY column_name ASC | DESC;
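As a concrete illustration, the four parts can be combined against a small sample table using Python's built-in sqlite3 module. The rows are invented for this sketch; only the shape of the query matters.

```python
import sqlite3

# Sample data invented for the example.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE students (full_name TEXT, student_id INTEGER, state_code TEXT)"
)
cur.executemany(
    "INSERT INTO students VALUES (?, ?, ?)",
    [("Alex Jonas", 234, "CA"), ("Bella Kim", 235, "NY"), ("Cara Diaz", 236, "CA")],
)

# SELECT ... FROM ... WHERE ... ORDER BY, all in one statement:
# WHERE filters the rows, ORDER BY sorts what is left.
rows = cur.execute(
    "SELECT full_name FROM students "
    "WHERE state_code = 'CA' "
    "ORDER BY full_name ASC"
).fetchall()
print(rows)  # [('Alex Jonas',), ('Cara Diaz',)]
conn.close()
```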

2. INSERT
INSERT INTO  queries are used to insert one or more rows of data (new records) into an
existing table.

INSERT INTO table_name (column_1, column_2, column_3, ...)

VALUES (value_1, value_2, value_3, ...);

Example:

INSERT INTO students (full_name, student_id, state_code)

VALUES ('Alex Jonas', 234, 'CA');

3. UPDATE
UPDATE  queries are used to modify an existing table and update it with new data based on
some conditions.

UPDATE table_name

SET column_1 = value_1, column_2 = value_2, ...

WHERE condition;

4. DELETE
DELETE FROM  queries are used to remove records from a table based on some conditions.
Unlike  TRUNCATE , a  DELETE FROM  with a  WHERE  clause affects only the rows that match
the conditions (without a  WHERE  clause, it deletes every row).

DELETE FROM table_name

WHERE condition;
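A short sketch of UPDATE and DELETE together, again with sqlite3 and invented rows, shows how the WHERE condition limits each statement to the matching rows only.

```python
import sqlite3

# Hypothetical rows, invented for this sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (full_name TEXT, state_code TEXT)")
cur.executemany(
    "INSERT INTO students VALUES (?, ?)",
    [("Alex Jonas", "CA"), ("Bella Kim", "NY")],
)

# UPDATE changes only the rows matched by the WHERE condition.
cur.execute("UPDATE students SET state_code = 'WA' WHERE full_name = 'Bella Kim'")
updated = cur.rowcount  # one row changed

# DELETE removes only the matching rows; the table itself remains.
cur.execute("DELETE FROM students WHERE state_code = 'CA'")
remaining = cur.execute("SELECT full_name, state_code FROM students").fetchall()
print(remaining)  # [('Bella Kim', 'WA')]
conn.close()
```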

C. Aggregate Functions
 AVG()  returns the average value for a numeric column.

SELECT AVG(column_name)

FROM table_name;

 SUM()  returns the sum of all the values in a column.

SELECT SUM(column_name)

FROM table_name;

 ROUND()  rounds the values in the column to the number of decimal places specified
by the integer.

SELECT ROUND(column_name, integer)

FROM table_name;

 MAX()  returns the largest value in a column.

SELECT MAX(column_name)

FROM table_name;

 MIN()  returns the smallest value in a column.

SELECT MIN(column_name)

FROM table_name;

 COUNT()  counts the number of rows where the column is not NULL.

SELECT COUNT(column_name)

FROM table_name;
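All six aggregate functions can be demonstrated in a single query. The scores below are invented for the sketch; each value in the result tuple corresponds to one aggregate.

```python
import sqlite3

# Invented scores to show each aggregate at work.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE scores (student TEXT, score REAL)")
cur.executemany(
    "INSERT INTO scores VALUES (?, ?)",
    [("Akon", 70.0), ("Bkon", 80.0), ("Ckon", 90.0)],
)

# AVG, SUM, ROUND, MAX, MIN and COUNT, all in one pass over the table.
row = cur.execute(
    "SELECT AVG(score), SUM(score), ROUND(AVG(score), 1), "
    "MAX(score), MIN(score), COUNT(score) FROM scores"
).fetchone()
print(row)  # (80.0, 240.0, 80.0, 90.0, 70.0, 3)
conn.close()
```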

D. Additional clauses and functions


 You can use  AS  to temporarily rename a column or table with an alias in the result
set.

SELECT column_name AS 'Alias'

FROM table_name;

 The  BETWEEN  operator is used to select values within a given range (inclusive).

SELECT column_name(s)

FROM table_name

WHERE column_name BETWEEN value_1 AND value_2;

 GROUP BY  is a clause in SQL that is typically used with aggregate functions (COUNT,
MAX, MIN, SUM, AVG). It is used in collaboration with the SELECT statement to
arrange identical data into groups.

SELECT column_name, COUNT(*)

FROM table_name

GROUP BY column_name;

 HAVING  is used where  WHERE  cannot be: to filter on aggregate values. The  WHERE
clause introduces a condition on individual rows; the  HAVING  clause introduces a
condition on aggregations.

HAVING  is typically used with  GROUP BY .

SELECT column_name, COUNT(*)

FROM table_name

GROUP BY column_name

HAVING COUNT(*) > value;
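The GROUP BY / HAVING template above can be run against a small invented table to make the two-stage filtering visible: grouping happens first, then HAVING discards whole groups.

```python
import sqlite3

# Invented rows: three 'CA' students and one 'NY' student.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (full_name TEXT, state_code TEXT)")
cur.executemany(
    "INSERT INTO students VALUES (?, ?)",
    [("Alex", "CA"), ("Bella", "CA"), ("Cara", "NY"), ("Dave", "CA")],
)

# GROUP BY collapses rows per state; HAVING then filters the groups.
groups = cur.execute(
    "SELECT state_code, COUNT(*) FROM students "
    "GROUP BY state_code "
    "HAVING COUNT(*) > 1"
).fetchall()
print(groups)  # [('CA', 3)] -- 'NY' has only one row, so its group is dropped
conn.close()
```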

 IS NULL  and  IS NOT NULL  are used to test whether a column value is NULL (missing) or not.

SELECT column_name(s)
FROM table_name

WHERE column_name IS NULL;

 LIKE  is a special operator used with the WHERE clause to search for a specific
pattern in a column.

SELECT column_name(s)

FROM table_name

WHERE column_name LIKE pattern;
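In LIKE patterns, '%' matches any run of characters and '_' matches exactly one. A quick sketch with invented names:

```python
import sqlite3

# Invented names; the pattern 'Al%' matches anything starting with 'Al'.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (full_name TEXT)")
cur.executemany(
    "INSERT INTO students VALUES (?)",
    [("Alex",), ("Alice",), ("Bob",)],
)

rows = cur.execute(
    "SELECT full_name FROM students "
    "WHERE full_name LIKE 'Al%' "
    "ORDER BY full_name"
).fetchall()
print(rows)  # [('Alex',), ('Alice',)]
conn.close()
```

Note one SQLite-specific detail: its LIKE is case-insensitive for ASCII characters by default, which may differ from other databases.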

 You can use  LIMIT  to specify the maximum number of records you want to show in
a result set.

SELECT *

FROM table_name

LIMIT number;

 OR  is used to combine two or more conditions in a  WHERE  clause. The results have to
match at least one of the conditions specified.

SELECT *

FROM table_name

WHERE condition_1

OR condition_2;

 SELECT DISTINCT  returns unique values in the specified column(s).

SELECT DISTINCT column_name

FROM table_name;

 A  LEFT OUTER JOIN  will combine rows from different tables even if the join condition is
not met. Every row in the left table is returned in the result set, and if the join condition
is not met, then NULL values are used to fill in the columns from the right table.

SELECT column_name(s)

FROM table_1

LEFT JOIN table_2

ON table_1.column_name = table_2.column_name;

 An  INNER JOIN  will combine rows from different tables if the join condition is true.

SELECT column_name(s)

FROM table_1

JOIN table_2

ON table_1.column_name = table_2.column_name;
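The difference between the two joins is easiest to see side by side. In this sketch (tables and rows invented for the example), one student references a branch that does not exist, so the INNER JOIN drops him while the LEFT JOIN keeps him with a NULL.

```python
import sqlite3

# Invented tables: 'Bkon' points at a branch that does not exist,
# to show the difference between the two join types.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (name TEXT, branch_id INTEGER)")
cur.execute("CREATE TABLE branches (branch_id INTEGER, branch TEXT)")
cur.executemany("INSERT INTO students VALUES (?, ?)", [("Akon", 1), ("Bkon", 2)])
cur.execute("INSERT INTO branches VALUES (1, 'CSE')")

# INNER JOIN keeps only rows where the condition is met.
inner = cur.execute(
    "SELECT s.name, b.branch FROM students s "
    "JOIN branches b ON s.branch_id = b.branch_id"
).fetchall()
print(inner)  # [('Akon', 'CSE')]

# LEFT JOIN keeps every left-table row, padding misses with NULL (None).
left = cur.execute(
    "SELECT s.name, b.branch FROM students s "
    "LEFT JOIN branches b ON s.branch_id = b.branch_id "
    "ORDER BY s.name"
).fetchall()
print(left)  # [('Akon', 'CSE'), ('Bkon', None)]
conn.close()
```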
5 Must Haves In A Data Backup Strategy: A Disaster Recovery Report

March 5, 2020 by Siobhan Climer and Eric White

“Data is the new oil, some say the gold, of the 21st century”, announced Joe
Kaeser, Siemens CEO at a 2018 tech forum in Stockholm. For businesses, as
well as individuals, data is more valuable than ever before.

Protecting that data is essential. A robust data backup strategy can help you
do just that. In the event of a disaster – such as ransomware, flood, or power
outage – data backups can help you get up and running as soon as possible.

It’s a great time to review your data backup strategy. World Backup Day, which
falls on March 31st every year, is fast approaching. The day is meant to
raise awareness about the importance of protecting your data and
acknowledging its value.

In honor of the upcoming holiday, we took a look at the top 5 must-haves in
a data backup strategy. Prepare today so you’re ready for whatever tomorrow
brings.
 

Must Haves In A Data Backup Strategy

 Onsite Backups: When a server crashes or fails, it is helpful to have data backups
on hand for easy restoration. It’s a cliché, but time is indeed money. Onsite backups are
often faster to restore than cloud backups and almost always faster than
offsite tape backups.
 
 Offsite Backups: Onsite backups are valuable, but they cannot be
counted on alone. Should something disastrous happen to the data center,
it could also damage any backups you have in the building. For that
reason, it is always wise to have copies of your backups offsite where they
can be accessed manually or through the cloud.
 

 Optimized Backup Schedule: Backups are not a one-and-done process.
Key data in your data center must be regularly and consistently backed up
according to a clear and organized schedule. Check out our blog article on
just a few backup rotation schemes for more information.
 

 Backup Testing: Backups need to be tested, and tested regularly. In
addition, the IT staff must be trained on how to access and restore their
data backups as quickly as possible. A backup that fails, or a team that is
unable to restore the backup quickly, undermines the company’s investment
in a backup solution in the first place.
 

 Organized Storage System: Applying mostly to tape-based backup
solutions, the storage repository and labeling system for backups must be
clear and organized. The team cannot afford to spend extra time digging
through box after box of tape looking for a specific backup from a specific
date several years ago.
 

A 3-2-1 Backup Strategy


 

The 3-2-1 backup strategy is well-known across the industry. Despite drastic
changes to the technology powering backups and even calls for – wait for it –
3-1-2, 3-2-2, and 3-2-3 configurations, the 3-2-1 backup strategy provides a
baseline rule by which companies can protect the data on which they rely.

The 3-2-1 backup strategy states that you should keep:

1. At least THREE copies of your data;

2. Backed-up data on TWO different storage types;

3. At least ONE copy of the data offsite.

Speed Is The Key
 

Central to all of these backup must-haves is speed. Backups not only need
to be reliable and accessible, but the company needs to be able to restore
the data quickly. When assessing possible data backup strategies in your
environment, do not lose sight of this metric.

Mindsight would like to wish everyone a (preemptive) happy World Backup
Day. Protect your data, and protect your business.

 
Normalization of Database
Database normalization is a technique for organizing the data in a
database. Normalization is a systematic approach to decomposing
tables to eliminate data redundancy (repetition) and undesirable
characteristics like Insertion, Update and Deletion Anomalies. It is a
multi-step process that puts data into tabular form and removes
duplicated data from the relation tables.

Normalization is mainly used for two purposes:

 Eliminating redundant (useless) data.

 Ensuring data dependencies make sense, i.e., that data is logically
stored.

Problems Without Normalization


If a table is not properly normalized and has data redundancy, then it
will not only eat up extra memory space but will also make it difficult
to handle and update the database without facing data loss. Insertion,
Updation and Deletion Anomalies are very frequent if the database is not
normalized. To understand these anomalies, let us take the example of
a Student table.

rollno  name  branch  hod    office_tel

401     Akon  CSE     Mr. X  53337

402     Bkon  CSE     Mr. X  53337

403     Ckon  CSE     Mr. X  53337

404     Dkon  CSE     Mr. X  53337

In the table above, we have the data of 4 Computer Science students. As we
can see, data for the fields branch, hod (Head of Department)
and office_tel is repeated for the students who are in the same branch
in the college; this is Data Redundancy.
Insertion Anomaly

Suppose for a new admission, until and unless a student opts for a
branch, data of the student cannot be inserted, or else we will have to
set the branch information as NULL.

Also, if we have to insert the data of 100 students of the same branch, then
the branch information will be repeated for all those 100 students.

These scenarios are nothing but Insertion anomalies.

Updation Anomaly

What if Mr. X leaves the college, or is no longer the HOD of the computer
science department? In that case, all the student records will have to
be updated, and if by mistake we miss any record, it will lead to data
inconsistency. This is the Updation anomaly.

Deletion Anomaly

In our Student table, two different kinds of information are kept together:
Student information and Branch information. Hence, at the end of the
academic year, if student records are deleted, we will also lose the
branch information. This is the Deletion anomaly.
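All three anomalies stem from storing branch facts inside student rows, and they disappear once the table is decomposed. Here is a minimal sketch of that decomposition using Python's sqlite3 module; the split into a student and a branch table follows the example above, while the code itself is illustrative.

```python
import sqlite3

# Decomposition sketch: branch details are stored exactly once in their
# own table, and student rows merely reference the branch by name.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE branch (branch TEXT PRIMARY KEY, hod TEXT, office_tel TEXT)"
)
cur.execute(
    "CREATE TABLE student (rollno INTEGER PRIMARY KEY, name TEXT, "
    "branch TEXT REFERENCES branch(branch))"
)
cur.execute("INSERT INTO branch VALUES ('CSE', 'Mr. X', '53337')")
cur.executemany(
    "INSERT INTO student VALUES (?, ?, ?)",
    [(401, "Akon", "CSE"), (402, "Bkon", "CSE"),
     (403, "Ckon", "CSE"), (404, "Dkon", "CSE")],
)

# If Mr. X leaves, the update touches exactly one row -- no chance of
# missing a student record and causing inconsistency.
cur.execute("UPDATE branch SET hod = 'Mr. Y' WHERE branch = 'CSE'")
hods = cur.execute(
    "SELECT DISTINCT b.hod FROM student s JOIN branch b ON s.branch = b.branch"
).fetchall()
print(hods)  # [('Mr. Y',)]
conn.close()
```

Deleting all student rows now leaves the branch table (and Mr. Y's details) untouched, which removes the deletion anomaly as well.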

Normalization Rule
Normalization rules are divided into the following normal forms:

1. First Normal Form

2. Second Normal Form

3. Third Normal Form

4. BCNF

5. Fourth Normal Form

First Normal Form (1NF)

For a table to be in the First Normal Form, it must satisfy the following
four rules:

1. It should only have single(atomic) valued attributes/columns.

2. Values stored in a column should be of the same domain

3. All the columns in a table should have unique names.

4. And the order in which data is stored does not matter.

In the next tutorial, we will discuss the First Normal Form in detail.

Second Normal Form (2NF)

For a table to be in the Second Normal Form,

1. It should be in the First Normal form.


2. And, it should not have Partial Dependency.

To understand what Partial Dependency is and how to normalize a table to the
2nd Normal Form, jump to the Second Normal Form tutorial.

Third Normal Form (3NF)

A table is said to be in the Third Normal Form when,

1. It is in the Second Normal form.

2. And, it doesn't have Transitive Dependency.

Here is the Third Normal Form tutorial. But we suggest you first study the
Second Normal Form and then head over to the Third Normal Form.

Boyce and Codd Normal Form (BCNF)

Boyce and Codd Normal Form is a higher version of the Third
Normal Form. This form deals with a certain type of anomaly that is not
handled by 3NF. A 3NF table which does not have multiple
overlapping candidate keys is said to be in BCNF. For a table R to be in
BCNF, the following conditions must be satisfied:

 R must be in 3rd Normal Form

 and, for each functional dependency ( X → Y ), X should be a super key.

To learn about BCNF in detail with a very easy-to-understand example,
head to the Boyce-Codd Normal Form tutorial.
Fourth Normal Form (4NF)

A table is said to be in the Fourth Normal Form when,

1. It is in the Boyce-Codd Normal Form.

2. And, it doesn't have Multi-Valued Dependency.

Here is the Fourth Normal Form tutorial. But we suggest you understand the
other normal forms before you head over to the Fourth Normal Form.

Types of backup and five backup mistakes to avoid
What are the main types of backup operations and how can you
avoid the sinking feeling that comes with the realization that
you may not get your data back?
Daniel Cunha Barbosa

10 May 2019 - 12:30PM


As humanity’s use of all kinds of technology has grown, terms like backup are
no longer unfamiliar to the majority of people. Of course, the concept of a
backup existed long before it came to be named as such. Whenever any
important document or information was copied and stored in a place separate
from the original for the purpose of ensuring the information would not be
lost, the process of backing up was taking place. This way, if the original
became damaged, it was possible to recover the information it contained by
referring to the copy, which was kept in a different, safe location. When this
notion was adopted by people and companies within a technological context,
its original characteristics did not change – simply, new resources became
available to make the backup process easier and faster.

In this article, we will look at the main types of backup operations, as well as at
some of the most common mistakes that many of us may make while backing
up our data. In short, there are three main types of backup: full, incremental,
and differential.
Full backup

As the name suggests, this refers to the process of copying everything that is
considered important and that must not be lost. This type of backup is the first
copy and generally the most reliable copy, as it can normally be made without
any need for additional tools.

Incremental backup

This process requires much more care to be taken over the different phases of
the backup, as it involves making copies of the files by taking into account the
changes made in them since the previous backup. For example, imagine you
have done a full backup. Once you’ve finished, you decide that going forward
you will do incremental backups, and you then create two new files. The
incremental backup will detect that all the files in the full backup remain the
same, and will only make backup copies of the two newly created files. As
such, the incremental backup saves time and space, as there will always be
fewer files to be backed up than if you were to do a full backup. We
recommend that you do not try to employ this type of backup strategy using
manual means.
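The change-detection logic described above can be sketched in a few lines of Python. Everything here is an illustrative assumption: real backup tools rely on snapshots, archive bits, or checksums rather than this simplistic timestamp-and-size comparison, and this sketch ignores subdirectories entirely.

```python
import os
import shutil

def incremental_backup(src_dir, dest_dir):
    """Copy only files that are new or changed since the last run,
    judged by modification time and size (a deliberate simplification)."""
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        dest = os.path.join(dest_dir, name)
        if not os.path.isfile(src):
            continue  # this sketch ignores subdirectories
        src_stat = os.stat(src)
        if os.path.exists(dest):
            dest_stat = os.stat(dest)
            if (dest_stat.st_mtime >= src_stat.st_mtime
                    and dest_stat.st_size == src_stat.st_size):
                continue  # unchanged since the previous backup
        shutil.copy2(src, dest)  # copy2 preserves the timestamp
        copied.append(name)
    return copied
```

Run twice over an unchanged directory, the second call copies nothing, which is exactly the time and space saving incremental backups are meant to provide.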

Differential backup
A differential backup has the same basic structure as an incremental backup—
in other words, it involves making copies only of new files or of files that
underwent some kind of change. However, with this backup model, all the files
created since the original full backup will always be copied again. For the same
reasons as with incremental backups, we recommend that differential backups
are also not carried out manually.

Where to store the backup

Once you have decided which type of backup is best suited to your needs, it is
important to consider carefully where to store it. The types of media most
commonly used for storing data have changed over the years. Backups have
been variously done on punch card, floppy disk, optical media like CD, DVD
and Blu-Ray, tape, external hard disk, cloud-based storage services, and more.
One of the questions you need to consider when deciding where to save your
backup copy is: How long am I going to need to keep this backup? Knowing
the answer to that will make it easier to figure out which medium to store your
files on.

The following table contains estimated lifespans of various storage media:


Medium           Date of invention  Lifespan          Capacity

HD               1956               5–10 years        From GB to TB

Floppy disk      1971               3–5 years         Hundreds of KB

CD/CD-ROM        1979               25–50 years       80 minutes of audio

MD (Mini Disc)   1991               25–50 years       60 minutes of audio

DVD              1994/1995          25–50 years       4.7 GB

SD card          1994               10 years or more  A few MB to TB

USB flash drive  2000               10 years or more  A few MB to TB

SSD              1970–1990          10 years or more  From GB to TB

Source: showmetch.com.br

So now we have some information that will help us to establish and maintain a
stable and successful backup routine, but some people might still be
wondering whether it is really necessary to do it and why it’s considered so
important.

To answer that question properly, it would be necessary to know the specific
needs of each individual business or home, so instead let’s look at two
fictitious scenarios which will serve as examples of ways in which a backup can
be of great value.

 For businesses

The year is 2017 and the company ‘Fictitious Corp.’ starts its business day at 8
a.m. as usual. At around 11 a.m., one of the IT managers hears a strange sound
coming from a nearby area. Just after hearing the noise, his phone rings and
he answers it. After finishing the call, he realizes that the workstation is totally
paralyzed and reads a message on the screen saying all the data are now
encrypted. The same message is displayed on some of the other machines
located in this and other areas of the business. Then he discovers that the
company’s file server has crashed, caused by the same problem:
the WannaCryptor ransomworm.

In this example, the company, which was dependent on its file server in order
to be able to operate, could have easily avoided its systems being paralyzed by
the ransomware attack if it had maintained a full, offline and current backup of
its file server.
 A home-based example

Mr. Easygoing was watching TV from the comfort of his sofa at home when he
suddenly felt a surge of nostalgia and got the urge to look at some photos of
his wedding and his son’s birth. Just as he was opening the photos a
downpour started. Once he finished looking through them, Mr. Easygoing
went to the kitchen to fix something to eat, leaving the computer plugged in.
Suddenly he heard the crash of a bolt of lightning, and the electricity went off.
The next day, when the power was back on, he discovered that the computer’s
hard disk was fried and that all the photos capturing his memories were lost.

Here, the incident occurred due to a power surge, but there are a great many
other potential causes for data loss, and all of them can be protected against,
at least to a great extent, by making regular backups. If you have any
information you wouldn’t want to lose, a backup is an effective way to
help prevent data loss.
Common mistakes made while doing a backup

Now that we have looked at some of the issues around the importance of
backups, let’s continue with some recommendations as well as some common
mistakes made during the process.

 Not doing a backup

This is without a doubt the most common mistake. Very often a backup was
not done either because no one got around to it or because the information
was thought unimportant—until it was lost.

 Saving the backup copies on the same hardware as the original files

The idea of a backup is to make a copy for safekeeping. That copy must be
stored in a location different from where the original files are kept. If they are
stored on the same hardware and that hardware is damaged, the backup
copies might be lost along with the originals.

 Not testing the backup

Making a backup involves a series of processes. It isn’t enough to just create a
copy – you also need to check the files to verify that the data you saved is
actually accessible in case you need it. Indeed, testing your backups is just as
important as backing up itself. Depending on the form of the backup, which is
often a compressed file, it could become corrupted, in which case a new
backup needs to be done.

 Not running the backup regularly and sufficiently frequently

It is important to make backup copies regularly, especially if the information is
frequently updated. Imagine, for example, that you are writing a book in a
word processing document and you only make a backup copy on the first of
each month. If the file is lost on the 15th of the month, you will only have a
copy dating back two weeks and you will have lost all the work you did
in the interim.

 Not labeling the backup files

After running your backups, keep a record of which archive is from which
hardware. In case you need to recover the data, it will be essential to do so on
the right equipment.

Conclusion

A data loss event can cost any of us dearly, and it goes without saying that
backups should be part of everybody’s cyber-hygiene. In a way, backups are
intended to protect the investment we make into the data, so let’s think ahead
so that we don’t lose that investment.

Do you want to learn more? We have previously covered the issue of backup
from several angles, including in a digestible white paper, ‘Options for backing
up your computer’, which mainly dealt with the most common hardware and
software resources involved in backup operations. We encourage you to give
it a read.
