
1.

Differential backups are a way of reducing the number of log files that would need to be
restored in the case of an emergency. Using a differential backup can reduce the number of
backup files to restore.

In this particular situation, if a disaster caused data loss at 18:15, we'd need to restore 17 backup files
(1 full backup and 16 log backups), plus the tail log backup.

This is a somewhat dangerous situation since, as we have discussed, the more backup files we
have to take, store, and manage, the greater the chance of one of those files being unusable.
This can occur for reasons ranging from disk corruption to backup failure. Also, if any of these
transaction log backup files is not usable, we cannot restore past that point in the database's
history.

If, instead, our strategy included an additional differential backup at midday each day, then we'd
only need to restore eight files: the full backup, the differential backup, and six transaction log
backups, plus a tail log backup.
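As a sketch, the restore sequence for this differential strategy might look like the following T-SQL. The database name SalesDB and all file paths are hypothetical placeholders:

```sql
-- 1. Capture the tail of the log first (leaves the database in RESTORING state).
BACKUP LOG SalesDB TO DISK = 'D:\Backup\SalesDB_tail.trn'
WITH NORECOVERY;

-- 2. Restore the base full backup, without recovering.
RESTORE DATABASE SalesDB FROM DISK = 'D:\Backup\SalesDB_full.bak'
WITH NORECOVERY;

-- 3. Restore the midday differential; all log backups taken before it can be skipped.
RESTORE DATABASE SalesDB FROM DISK = 'D:\Backup\SalesDB_diff.bak'
WITH NORECOVERY;

-- 4. Restore each log backup taken after the differential, in sequence.
RESTORE LOG SalesDB FROM DISK = 'D:\Backup\SalesDB_1230.trn' WITH NORECOVERY;
-- ...repeat for each subsequent log backup...

-- 5. Finish with the tail log backup and bring the database online.
RESTORE LOG SalesDB FROM DISK = 'D:\Backup\SalesDB_tail.trn'
WITH RECOVERY;
```

The differential replaces the entire chain of log backups taken between the full backup and midday, which is where the reduction in file count comes from.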


We would also be safe in the event of a corrupted differential backup because we would still
have all of the log backups since the full backup was taken.

In any situation that requires a quick turnaround time for restoration, a differential backup is
our friend. The more files there are to process, the more time it will take to set up the
restore scripts; and the more files we have to work with, the more complex the restore
operation will be, and so (potentially) the longer the database will be down.

In this particular situation, the savings might not be too dramatic, but for mission-critical
systems, transaction log backups can be taken every 15 minutes. If we're able to take one or two
differential backups during the day, it can dramatically cut down the number of files involved in
any restore process.
Step-by-Step Data Restore

Step 1: Right-click your database and select the following items from the drop-down
menus: Tasks >> Restore >> Database
Step 2: Click the “Timeline” button.
Step 3: Select “Specific date and time” and enter your desired date and time in the
boxes below. You can also click in the green color bar or use the slider to set the
time. The example shows restoring to the 16:45 backup.

Step 4: Click “OK.” As you can see, the full backup, the 12:00pm differential backup,
six 30-minute transaction log backups, plus a tail log backup
will get us to the 16:44:59 data that we want.

Step 5: Click “OK” to start the restore. You will see the progress indicator in the
upper left. First it will count through the full backup and then each of the transaction
logs before it finishes.
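The same point-in-time restore can be scripted in T-SQL using WITH STOPAT. The database name, file paths, and timestamp below are hypothetical placeholders:

```sql
-- Restore the base full backup and the midday differential without recovering.
RESTORE DATABASE SalesDB FROM DISK = 'D:\Backup\SalesDB_full.bak'
WITH NORECOVERY, REPLACE;
RESTORE DATABASE SalesDB FROM DISK = 'D:\Backup\SalesDB_diff.bak'
WITH NORECOVERY;

-- Restore each subsequent log backup in sequence...
RESTORE LOG SalesDB FROM DISK = 'D:\Backup\SalesDB_1630.trn' WITH NORECOVERY;

-- ...and stop in the final (tail) log backup just before 16:45.
RESTORE LOG SalesDB FROM DISK = 'D:\Backup\SalesDB_tail.trn'
WITH STOPAT = '2023-06-01 16:44:59', RECOVERY;
```

STOPAT discards any transactions in the final log backup that committed after the specified time, which is exactly what the GUI timeline slider does behind the scenes.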

Steps to find who deleted a user database in SQL Server using the Schema Changes
History report

1. Open SQL Server Management Studio and Connect to the SQL Server Instance.
2. Right-click the SQL Server instance and select Reports -> Standard Reports -> Schema Changes
History.
3. This will open the Schema Changes History report, which contains the details of who
deleted the SQL Server database, along with the timestamp of when the database was deleted.
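The Schema Changes History report is driven by the default trace. If the report is unavailable, a query along these lines can pull the same information directly; this is a sketch, and the exact columns populated can vary by SQL Server version:

```sql
-- Find the current default trace file and read object-deletion events from it.
DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = path
FROM   sys.traces
WHERE  is_default = 1;

SELECT  te.name  AS EventName,
        t.DatabaseName,
        t.ObjectName,
        t.LoginName,
        t.HostName,
        t.StartTime
FROM    fn_trace_gettable(@TracePath, DEFAULT) AS t
JOIN    sys.trace_events AS te
        ON t.EventClass = te.trace_event_id
WHERE   te.name = 'Object:Deleted'
ORDER BY t.StartTime DESC;
```

Note that the default trace rolls over, so these events are only available for a limited window after the deletion.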

Development server, VLDB, simple file architecture

Here, we have a development machine containing one VLDB. This database is not structurally
complex, containing only one data file and one log file. The developers are happy to accept data
loss of up to a day. All activity on this database takes place during the day, with very few
transactions happening after business hours. In this case, it might be appropriate to operate the
user database in SIMPLE recovery model, and implement a backup scheme such as the one below.

1. Perform full nightly database backups for the system databases.


2. Perform a full weekly database backup for the VLDB, for example on Sunday night.
3. Perform a differential database backup for the VLDB on the nights where you do not take the
full database backups. In this example, we would perform these backups on Monday through
Saturday night.
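In T-SQL, this weekly-full-plus-nightly-differential scheme for the VLDB boils down to two commands, typically scheduled as SQL Server Agent jobs. The database name and paths are placeholders:

```sql
-- Sunday night: weekly full backup of the VLDB.
BACKUP DATABASE VLDB TO DISK = 'D:\Backup\VLDB_full.bak'
WITH INIT;

-- Monday through Saturday nights: differential backup,
-- capturing only the extents changed since Sunday's full backup.
BACKUP DATABASE VLDB TO DISK = 'D:\Backup\VLDB_diff.bak'
WITH DIFFERENTIAL, INIT;
```

Because the database runs in SIMPLE recovery model, no log backups are taken, and the worst-case data loss is the interval back to the previous night's backup, which matches the developers' one-day tolerance.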

Production server, 3 databases, complex file architecture, 30 minutes' data loss

In this final scenario,

we have a production database system that contains three databases with complex data
structures. Each database comprises multiple data files split into two filegroups, one read-only
and one writable. The read-only filegroup is updated once per week with newly archived
records. The writable filegroups have an acceptable data loss of 30 minutes. Most database
activity on this server will take place during the day. With the database operating in FULL
recovery model, the backup scheme below might work well.

1. Perform nightly full database backups for all system databases.


2. Perform a weekly full file backup of the read-only filegroups on each user database, after the
archived data has been loaded.
3. Perform nightly full file backups of the writable filegroups on each user database.
4. Perform log backups every 30 minutes for each user database; the log backup schedule should
start after the nightly full file backups are complete, and finish 30 minutes before the full file
backup processes start again.
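Sketched in T-SQL, with hypothetical database, filegroup, and path names, the scheme might look like this:

```sql
-- Weekly (after the archive load): full file backup of the read-only filegroup.
BACKUP DATABASE ProdDB
FILEGROUP = 'ARCHIVE_RO'
TO DISK = 'D:\Backup\ProdDB_ro.bak'
WITH INIT;

-- Nightly: full file backup of the writable filegroup.
BACKUP DATABASE ProdDB
FILEGROUP = 'PRIMARY'
TO DISK = 'D:\Backup\ProdDB_rw.bak'
WITH INIT;

-- Every 30 minutes (scheduled via SQL Server Agent): transaction log backup.
-- In practice, generate a unique file name per run rather than overwriting.
BACKUP LOG ProdDB TO DISK = 'D:\Backup\ProdDB_log.trn';
```

The read-only filegroup only needs backing up once per weekly load, which is what makes file-level backups attractive here: the bulk of the data is protected without being re-read every night.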

Assuming this is a system that has limits on its tolerance for data loss, this question will
determine the need to take supplemental backups (transaction log, differential) in addition to
full database (or file) backups, and the frequency at which they need to be taken. Now, the
application owner needs to be reasonable here. If they state that they cannot tolerate any
downtime, and cannot lose any data at all, then this implies the need for a very high availability
solution for that database, and a very rigorous backup regime, both of which are going to cost a
lot of design, implementation, and administrative effort, as well as a lot of money. If they offer
more reasonable numbers, such as one hour's potential data loss, then this is something that
can be supported as part of a normal backup regime, taking hourly transaction log backups.
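An hourly log backup of this kind is typically scheduled as a SQL Server Agent job running a command such as the following. The database name and path are placeholders; the timestamped file name avoids overwriting earlier backups:

```sql
-- Build a unique, timestamped backup file name for this run,
-- e.g. D:\Backup\SalesDB_log_20230601_140000.trn (path is hypothetical).
DECLARE @File NVARCHAR(260) =
    N'D:\Backup\SalesDB_log_'
    + CONVERT(NVARCHAR(8), GETDATE(), 112)                     -- yyyymmdd
    + N'_'
    + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), ':', '')  -- hhmmss
    + N'.trn';

-- Back up the transaction log; worst-case data loss is one schedule interval.
BACKUP LOG SalesDB TO DISK = @File;
```

The backup interval directly sets the worst-case data loss, so the agreed figure (here, one hour) translates straight into the job schedule.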
Each user's permissions are stored in the database itself and are associated with a login on the
server, so they generally travel with the database when it is restored elsewhere. The only case
where this may not work is when the user is using SQL Server authentication and the internal
SID, a unique identifying value, doesn't match on the original and target servers. If
two SQL logins with the same name are created on different machines, the underlying SIDs will
be different. So, when we move a database from Server A to Server B, a SQL login that has
permission to access Server A will also be moved to Server B, but the underlying SID will be
invalid and the database user will be "orphaned." This database user will need to be
"de-orphaned" (see below) before the permissions will be valid. This will never happen for
matching Active Directory accounts since the SID is always the same across a domain.

We should:
• audit each and every login – never assume that if a user has certain permissions in
one environment they need the same in another;
• fix any internal user mappings for logins that exist on both servers, to ensure no one gets
elevated permissions;
• perform orphaned user maintenance – remove permissions for any users that do not have a
login on the server to which we are moving the database.

The sp_change_users_login stored procedure can help with this process, reporting all orphans,
linking a user to its correct login, or creating a new login to which to link:
• EXEC sp_change_users_login 'Report'
• EXEC sp_change_users_login 'Auto_Fix', 'user'
• EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'
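Note that sp_change_users_login is deprecated in recent SQL Server versions; the same orphan check and fix can be done with the catalog views and ALTER USER. The user and login names below are hypothetical:

```sql
-- List orphaned users: database users whose SID has no matching server login.
SELECT dp.name AS orphaned_user
FROM   sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
       ON dp.sid = sp.sid
WHERE  sp.sid IS NULL
  AND  dp.type = 'S'               -- SQL-authenticated users only
  AND  dp.authentication_type = 1; -- instance-level authentication

-- Re-map an orphaned user to the login it should belong to
-- (the modern equivalent of sp_change_users_login 'Auto_Fix').
ALTER USER [AppUser] WITH LOGIN = [AppUser];
```

ALTER USER ... WITH LOGIN rewrites the user's SID in the database to match the server login, which is precisely the mismatch that causes the orphaning in the first place.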

Don't let these issues dissuade you from performing full restores as and when necessary.
Diligence is a great trait in a DBA, especially in regard to security. If you apply this diligence,
keeping a keen eye out when restoring databases between mismatched environments, or when
dealing with highly sensitive data of any kind, then you'll be fine.

2.

A critical error is a serious fault that causes a program to abort. A critical error might occur in
an operating system, a running application, or other software. When it happens, the operation
that was being performed is aborted, and data may be lost; it may even result in freezing or
spontaneous rebooting of the computer. This is why we need full database backups: a full
database backup includes the complete permission set for the database. Each user's
permissions are stored in the database and are associated with the login that they use on that
server, so a full backup protects us from losing this critical data.

3.

A fragmented log file can dramatically slow down any operation that needs to read the log file.
For example, it can cause slow startup times (since SQL Server reads the log during the
database recovery process), slow RESTORE operations, and more. Log size and growth should
be planned and managed to avoid excessive numbers of growth events, which can lead to this
fragmentation. In the worst case, there is no more space within the log to write new records
and no further space on the disk to allow the log file to grow, and the database effectively
becomes read-only until the issue is resolved.

If the root cause of the log growth turns out to be missing log backups (or insufficiently frequent
ones), then perform one immediately. An even quicker way to make space in the log, assuming
you can get permission to do it, is to temporarily switch the database to SIMPLE recovery to
force a log truncation, then switch it back to FULL and perform a full backup.

Practice test restores for your critical databases on a regular schedule; whether you back up
daily or weekly, you need to be 100% sure that the backups are going to work. If the base backup
isn't refreshed regularly, typically on a weekly basis, the differentials will start to take longer to
process.
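The emergency switch to SIMPLE and back might be scripted as follows. The database name, logical log file name, and backup path are assumptions; remember that this breaks the log chain until the new full backup completes, so it is a last resort, not routine maintenance:

```sql
-- Temporarily switch to SIMPLE to force a log truncation (breaks the log chain!).
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Optionally shrink the now-truncated log file back to a sensible size.
-- 'SalesDB_log' is the assumed logical file name; target size is in MB.
DBCC SHRINKFILE (SalesDB_log, 1024);

-- Switch back to FULL and immediately take a full backup
-- to restart the log backup chain.
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP DATABASE SalesDB TO DISK = 'D:\Backup\SalesDB_full.bak' WITH INIT;
```

Until that closing full backup finishes, no point-in-time recovery is possible, which is why you need explicit permission before doing this on a production system.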
