
*****************
licensing models:
*****************
there are 2 types of licensing models with sql server.

they are the processor license and the client access license (cal).
-the processor model is a descendant of the internet connector license and allows you to have unlimited connections for each processor that is licensed.
-the cal model lets you license the client (not the server) and each connection.
-once we purchase the license we can access any number of instances, making it the least expensive method of licensing for a small to medium environment.
-when installing multiple instances of sql server on the same machine in the cal model, you may have to license each instance separately.
-if you are running enterprise edition you will not need additional server licenses, and we can install up to 16 instances per server and remain supported by microsoft.

there are 2 types of licensing modes:
1)processor licensing--the server's processors are licensed, and any number of systems can then connect through them
2)per seat licensing--each system (seat) that accesses the server is licensed separately

different components of sql 2000:
1)database engine
2)analysis services
3)english query

different components of sql 2005:
1)database engine
2)analysis services
3)integration services
4)notification services
5)reporting services

sql 2000 features:
1)log shipping
2)replication
3)supports clustering for a database
4)supports full text search
5)dts (etl tool)

sql 2005 features:
1)peer to peer replication
2)online indexing
3)piecemeal restore
4)database mirroring
5)database snapshots
6)supports clustering with analysis services
7)table partitioning

tools that get installed when we install sql 2000:
1)enterprise manager
2)query analyzer
3)client n/w utility
4)server n/w utility
5)dts import/export
6)services configuration manager

7)xml native support
8)books online

tools that get installed when we install sql 2005:
1)sql server management studio
2)sac (surface area configuration) tool
3)sql reporting services configuration tool
4)sql bi development studio
5)books online
6)sql configuration manager

different editions available of sql server
sql 2000 editions:
1)enterprise edition
2)standard edition
3)developer edition
4)personal edition
5)windows ce edition
sql 2005 editions:
1)enterprise edition
2)standard edition
3)developer edition
4)workgroup edition
5)express edition
-usually for a very big server the enterprise edition of sql 2000 is used, and in sql 2005 the enterprise edition supports database mirroring
-usually sql 2005 is used with the enterprise edition, the standard edition and the workgroup edition

software requirements:
-windows 2000 or windows 2003 server as the os
-sql 2005 enterprise edition works better on a windows 2003 server os
-by default sql server can use up to 3gb of memory; if we need more we enable awe and edit boot.ini, which raises the capacity to 32gb

hardware requirements:
-for sql 2000 we need a minimum of 180 mb free space for installation
-ram of 128 mb for 2000 and 512 mb for 2005
-the latest service pack for sql 2000 is sp4
-the latest service pack for sql 2005 is sp2
-the latest service pack for windows 2000 is sp4
-the latest service pack for windows 2003 is sp1
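the edition and service pack level noted above can be confirmed on a running instance with serverproperty (available in sql 2000 and later; the comment values are only typical examples):

```sql
-- check edition, version and service pack level of the instance
-- productlevel returns e.g. 'rtm', 'sp3', 'sp4'
select serverproperty('edition')        as edition,
       serverproperty('productversion') as version,
       serverproperty('productlevel')   as sp_level
```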

instance:
the term instance is typically used to describe a complete database environment, including the rdbms software, table structure, stored procedures and other functionality. it is most commonly used when administrators describe multiple instances of the same database.
there are 2 types of instances:
1)default instance
2)named instance
sql server 2000 supports 1 default instance and 15 named instances
sql server 2005 supports 1 default instance and 49 named instances

sql 7.0 supports 1 default instance and no named instances
by default sql server listens on port 1433

default system databases:
1)master --- the master database has all the system tables; it has 50 default tables
2)model
3)tempdb
4)msdb

default user databases:
1)northwind
2)pubs

when we install sql 2000 we get 3 services that get installed:
1)mssqlserver
2)sqlserveragent
3)msdtc (microsoft distributed transaction coordinator)

current activity gives information about the current processes, locks and blocks on the server

database maintenance plans:
-these are plans that are scheduled in order to maintain the database: taking backups, checking how the server is working at different times, and checking the internal allocation structures
-sql server 2000 is capable of dynamic memory management
-for a named instance we connect using hostname\instancename, and for the default instance we use just the hostname

use master
go
select @@version
gives the version of the server we are connected to, and
select serverproperty('instancename')
gives the instance name (null for the default instance)

********************
creating a database:
********************
syntax:
use master
go
create database nag
on (name='nag_data',filename='c:\sqldata\nag_data.mdf')
log on (name='nag_log',filename='d:\sqllog\nag_log.ldf')
go

as soon as the database is created we should execute the stored procedure
exec sp_changedbowner 'sa'

-master database has 50 system tables, out of which 19 also exist in user databases.
-in order to delete a database:

syntax:
drop database <database name>
go

we can rename a database by using
sp_renamedb 'old dbname','new dbname'

in order to find the databases that are available:
use master
go
select * from sysdatabases

in order to find the system objects in the master database:
use master
go
select * from sysobjects

in order to find the files of the master database:
use master
go
select * from sysfiles

in order to know whether a database exists, and to create it if not:
use master
go
if not exists(select name from sysdatabases where name='nag')
create database nag

if a database exists and we have to drop it:
use master
go
if exists(select name from sysdatabases where name='nag')
drop database nag

in order to create a table in a database:
use <database name>
go
create table <tablename>
(<column1> <datatype>[size],
<column2> <datatype>[size])

there are different datatypes which we will be using when creating tables.

the t-sql commands fall into different categories:
1)ddl (data definition language)
-create
-drop
-alter
-truncate
2)dml (data manipulation language)

-insert
-update
-delete
-select
3)dcl (data control language)
-grant
-revoke--withdraws access privileges given with the grant command
-deny--denies permissions to a user, preventing access to the privileges of the database
4)tcl (transactional control language)
-commit
-rollback

tables:
-in order to drop a table: drop table <table name>
-insert into <tablename> values(value,value,...)
-select * into #<table name> from <table name> will create a temporary table (so if # is present then it is a temporary table)
-in order to generate drop statements for all user tables: select 'drop table '+name from sysobjects where xtype='u'
-in order to change a column heading in output: select <old column name> as <new column name> from <table name>

*********
security:
*********
security is basically divided into 3 parts:
1)authentication
2)authorization
3)accounting/auditing

1)authentication:
-for sql server there are 2 modes of authentication:
--windows authentication and
--mixed mode authentication
windows authentication:
-it has no separate id and password
-only windows clients can log in
-it is sso (single sign on); it asks for the id and password only the first time we log in
-trusted connection
mixed mode authentication:
-this can be used by both windows clients and sql server logins
-we will have to create the id and password to log in to the server

2)authorization:
-roles and permissions
roles are classified into 3 types:
-fixed server roles
-fixed database roles
-application roles
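for mixed mode authentication, a sql login and matching database user can be created with the sql 2000 system procedures (the login name, password and database below are illustrative):

```sql
-- create a sql login at the server level
exec sp_addlogin 'nag_user', 'StrongP@ssw0rd'
-- grant it access inside a database and assign a fixed database role
use nag
go
exec sp_grantdbaccess 'nag_user'
exec sp_addrolemember 'db_datareader', 'nag_user'
```

in sql 2005 the same is done with create login / create user.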

fixed server roles: (server level permissions)
--sysadmin--can perform any activity on the server
--dbcreator--can create databases
--diskadmin--can manage disk files
--processadmin--can work with (and kill) process ids
--serveradmin--can configure server wide settings
--setupadmin--can set up features such as linked servers
--securityadmin--can create ids and passwords (logins) in order to grant access

fixed database roles: (database level permissions)
--db_owner--is the owner of the database and can perform any activity on the database
--db_accessadmin--can decide who gets access to the database
--db_ddladmin--can create any database objects
--db_securityadmin--can assign object level permissions
--db_backupoperator--can take backups of the database
--db_datareader--can issue select statements
--db_datawriter--can insert, update and delete data
--db_denydatareader--denies read access
--db_denydatawriter--denies write access

application roles:
this is a temporary role: we can create such a role in the database, and its context ends as soon as the application's session with the database ends

permissions:
permissions are classified into 2 types:
-statement level permissions (ddl)
-object level permissions (dml)
statement level permissions (ddl):
-these are available at the database level
-control who can issue create/alter/drop statements in the database
object level permissions (dml):
-permissions are granted on individual objects in the database

*******
backup:
*******
sql server 2000 supports 4 types of backups:
1)full backup
2)differential backup
3)transaction log backup
4)file group backup

1)full backup:
-it is a complete backup
syntax:
backup database <database name> to disk='c:\sqlbackup\nag_db_20070411.bak' with init,stats=10
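a full backup as per the syntax above, with a verification step afterwards (database name and path are illustrative):

```sql
-- full backup of database nag; init overwrites the file,
-- stats=10 reports progress every 10 percent
backup database nag
to disk = 'c:\sqlbackup\nag_db_20070411.bak'
with init, stats = 10

-- confirm the backup file is readable before relying on it
restore verifyonly from disk = 'c:\sqlbackup\nag_db_20070411.bak'
```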

2)differential backup:
-it is the backup of the changes that happened after the last full backup.
syntax:
backup database <database name> to disk='d:\sqlbackup\nag_diff_20070411.bak' with init,stats=10,differential

3)transaction log backup:
-it is the backup of only the ldf (log) file of the database
syntax:
backup log <database name> to disk='e:\sqlbackup\nag_db_20070411.trn' with init,stats=10

4)file group backup:
-if the database is big then we use file groups for convenience; it is easy to store the data in file groups, and the backup of a complete file group is taken in a similar way

log sequence number:
a log sequence number (lsn) identifies each record in the transaction log; each transaction log backup covers a range of lsns, which is how the backup chain is tracked

*********
recovery:
*********
sql server 2000 supports 3 recovery models:
1)full recovery
2)bulk logged
3)simple

1)full recovery model:
-backup types in this model are full, differential and transactional backups
-so if the database is in this model then, when any disaster happens, we can restore practically everything up to the last transaction log backup

2)bulk logged recovery model:
-backup types in this model are full, differential and transactional backups
-so if the database is in this model then it can be recovered up to the point before the last bulk operation

3)simple recovery model:
-backup types in this model are full and differential
-in this model we cannot restore a transaction log backup, as no transaction log backups can be taken in this model
-so if the database is in this model then we can recover only up to the last full or differential backup

in order to set the recovery model to full from simple:
syntax:
alter database <database name> set recovery full

point in time recovery:
sql 2000 has the feature of point in time recovery.
-a point in time recovery is restoring a database to a specified date and time. when you have completed a point in time recovery, the database will be in the state it was at the specific date and time we identified when restoring the database. a point in time recovery is a method to recover the database to any point

in time since the last database backup. if the transaction log covering that point has been truncated (the log backups for it are missing), then we cannot restore to that point; all the backups up to that point must be restored in sequence. this is called point in time recovery.

orphan user:
an orphan user is a user in a database with a security id (sid) that does not exist in the syslogins table in the master database (the database user exists but there is no matching server login)

why does an ldf file grow and how can we control the growth?
the ldf file grows as transactions are logged, and it keeps growing when the log is not truncated, for example because of long running uncommitted transactions or missing log backups. we can control the growth by committing transactions promptly and taking regular transaction log backups.

restore:
-in order to restore in sql 2000 we will have to follow a certain procedure
-we will have to give the following commands:
syntax:
restore verifyonly from disk='c:\sqlbackup\nag.bak'
-this is to verify whether the backup file is valid or not
restore headeronly from disk='c:\sqlbackup\nag.bak'
-this command gives information about who took the backup and when the backup was taken
restore database <database name> from disk='c:\sqlbackup\nag.bak'
with move 'mdf logical filename' to 'new physical location',
move 'ldf logical filename' to 'new physical location',
[replace],[norecovery],stats=10,[standby]
-with [replace] we can replace the existing database
-with [norecovery] the database will not be up and running until everything is restored; the last restore command is given without [norecovery]
-with [standby] we can read the database but cannot write to it
restore filelistonly from disk='c:\sqlbackup\nag.bak'
-this command gives all the logical and physical filenames in the backup
to restore a transaction log backup:
restore log <dbname> from disk='c:\sqlbackup\nag.trn' with [norecovery]
the commands used to forcibly reset a database that is still loading and is
in the suspect mode:
sp_resetstatus <dbname>
dbcc dbrecover(<dbname>)

the syntax in order to restore a database:
use master
go
alter database nag set offline with rollback immediate
alter database nag set online
restore database nag from disk='c:\sqlbackup\nag_db_20070411.bak'
with move 'nag_data' to 'd:\testsql\nag_dat.mdf',
move 'nag_log' to 'e:\testsql\nag_log.ldf',

replace,stats=10

******
index:
******
-indexes are user defined data structures which provide fast access to data when the data is searched by the value of the index key
-the query optimizer determines which indexes to use
-indexes store information using standard b-trees (balanced trees)
-b-trees are managed and balanced, hence finding any record requires about the same amount of resources and retrieval speed is consistent
-1 data page=8kb; 1 extent=8 pages=64kb
-if an extent is used by a single object it is a uniform extent; if an extent is filled by multiple objects it is a mixed extent
-the number of levels in an index will vary depending on the number of rows and the size of the key column of the index
-in every index the leaf level contains every key value in key sequence

clustered index:
-the leaf level contains the data pages, not just the index key; the data itself is part of the clustered index
-when the index is scanned to the leaf level it is the actual data that is retrieved, not a pointer to it; hence the clustered index is always preferred
-there can be only one clustered index per table (as the data can be sorted on only one column)
-a clustered index allows especially fast access for queries requiring a range of values because it arranges the data in sequential order
-the bookmark is the clustered index key for the corresponding data row
-no need to scan the entire table because the data page is exactly identified
-by default the primary key creates a clustered index

non clustered index:
-the leaf level contains a bookmark that tells sql server where to find the data row corresponding to the key in the index
-there can be a total of 249 non clustered indexes per table
-index levels are more for a non clustered index than for a clustered index
-a non clustered index is preferred only when the query is highly selective
-a non clustered index uses the clustered index key as its bookmark
-clustered index (bookmark=data), non clustered index on a clustered table (bookmark=clustered key), non clustered index on a heap (bookmark=row id)

syntax:
create [unique] [clustered|nonclustered] index <index name>
on <table name>(<column name>)
with [fillfactor= ],[pad_index],
[ignore_dup_key],[drop_existing],
[statistics_norecompute],[sort_in_tempdb]
--unique--no duplicate key values are allowed
--clustered/nonclustered--the type of index to build
--[fillfactor=]--amount of free space left in the leaf level index pages
--[pad_index]--applies the fill factor to the intermediate level index pages as well
--[ignore_dup_key]--ignore duplicate key errors on insert

--[drop_existing]--drops and recreates an existing index of the same name in a single statement
--[sort_in_tempdb]--the intermediate sort results used while building the index are stored in tempdb rather than in the user database

-when we drop an index we use the syntax: drop index <table name>.<index name>
-but we can drop the index this way only if we created the index explicitly
-if the index was created through a primary key constraint then we cannot drop it like this; we will have to alter the table and drop the constraint first
-each index has a row in the sysindexes table; a clustered index has an indid value of 1
-for non clustered indexes the indid values are 2 to 250, so on the whole we can have 249 non clustered indexes
-indid 255 is for lob (large object) data
-in the sysindexes table we have various columns:
-rowcnt--number of data rows in the table
-dpages--data pages (number of pages across which the data is spread)
-for a clustered index, dpages counts the actual data pages, whereas for a non clustered index dpages counts the index pages
-sp_spaceused gives the total size used by a table

managing an index:
-when rows are added they are automatically inserted into the correct position in the table.
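the create index syntax above can be exercised as follows (the emp table, column names and index names are illustrative):

```sql
-- clustered index on the key column, leaving 10% free space in leaf pages
create clustered index ix_emp_empid
on emp(empid)
with fillfactor = 90

-- nonclustered index for selective lookups on salary
create nonclustered index ix_emp_salary
on emp(salary)

-- rebuild the clustered index in place without dropping it first
create clustered index ix_emp_empid
on emp(empid)
with drop_existing, fillfactor = 90
```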
***********************
types of fragmentation:
***********************
1)internal:
-occurs when free space is available within the index pages
2)external:
-occurs when the logical order of the pages does not match the physical order
-extent scan fragmentation=[no. of gaps/no. of extents]=[no. of switches/no. of extents]
-logical scan fragmentation=[no. of out of order pages/no. of pages in the table]

detecting fragmentation:
command: dbcc showcontig
syntax:
use <dbname>
go
dbcc showcontig
--if the fragmentation value is <5% then no action need be taken
--if the fragmentation value is between 5% and 30% then we will have to reorganize the index
--if the fragmentation value is >30% then we will have to rebuild the index
-an index seek is used for the clustered index and an index scan is used for the non

clustered index.

removing fragmentation:
method(1):
-rebuilding or recreating the index
-in this we give commands like create index (with drop_existing) or drop index followed by create index, and we can also give the command dbcc dbreindex(<table name>)
-the table will not be available while the index is rebuilt
method(2):
-reorganize or defragment the index
-we defragment the fragmented index with
dbcc indexdefrag(<dbname>,<table name>,<index name>)
-the table is available while these operations are being performed
-it does an in-place reordering, similar to a bubble sort
-it compacts the pages of the index depending upon the fill factor

************************
sql server architecture:
************************
-each database is a collection of 8kb pages. these pages are of 8 types:
-data pages--all data except text and image data is stored in data pages
-text/image pages--all text and image data is stored in these pages
-index pages--store the index rows
-page free space pages (pfs)--these pages have information about which pages are free and how much free space they have
-global allocation map pages (gam)--like a map that has information about which extents are allocated
-index allocation map pages (iam)--map the extents used by a table or index
-bulk changed map pages (bcm)--track the extents changed by bulk operations
-differential changed map pages (dcm)--track the extents that have changed since the last full backup
-an extent is a unit of 8 pages, 64kb
-an extent is again subdivided into 2 categories: 1)uniform 2)mixed

****************************
database and database files:
****************************
-the different system databases that are present are:
1)master database:
-system catalogs
-info about disk space
-file allocations
-system wide configuration
-login accounts
-existence of the other databases
2)model database:
-it is a template for new databases

3)tempdb:
-it is a workspace used by applications and system processes
-it is recreated at every restart and cannot be recovered
4)msdb:
-used by the sql server agent service

-the thread that performs auto shrink has spid 6 and by default shrinks at a 30 min interval
--the tables sysloginfo, sysperfinfo, syscurconfigs and sysprocesses are dynamically built
--the master database has 26 views

-if we want to update statistics:
update statistics <table name> --for a single table
sp_updatestats --for the complete database
-this is used when a bulk operation has been performed

database options:
1)state options:
-alter database <db name> set ...
-we can set the database to single user, multiple user or restricted user
-offline/online; by default it is online
-read only
2)cursor options:
-local/global cursor
-a cursor is a variable that keeps fetching rows one at a time
-cursor close on commit--this closes the cursor on commit so that the space is not occupied
3)recovery options:
-full, bulk logged and simple recovery models
4)auto options:
-auto shrink/auto create statistics
5)sql options:
-ansi sql
-set commands

declare @id int
begin
set @id=100
select * from emp where empid=@id
end

dbcc shrinkdatabase('nag',10)
-by this command we leave about 10% free space in the database so that further tables or values can be inserted

raid configurations:
1)raid 0
-data striping
-no fault tolerance (1 read + 1 write)
-good speed

2)raid 1
-disk mirroring
-(1 read + 2 writes)
-it has 2 drives
-recommended for transaction logs
3)raid 5
-it has more than 2 drives
-it adds parity whenever it writes
-data striping with parity
-good speed
-good fault tolerance
4)raid 10
-data striping plus mirroring
-faster data reads and writes as it does not need to manage parity
-best option for fault tolerance

**********
migration:
**********
-migration can be done in 2 different methods. they are:
-inplace migration and
-side by side migration

the steps that need to be implemented in order to do a migration:
1)inplace migration:
-upgrading the existing instance on the same hardware (installing sql 2005 on top of an existing instance)
2)side by side migration:
-installing a sql 2005 instance on new hardware
-after we install it on the new system, we detach the database files from the old system, move them to the new system and attach them there, and then the databases are up and running
-usually a side by side migration is preferred for a production system

3)run the sql 2005 upgrade advisor tool on the existing sql 2000 instance
-we can download it from www.microsoft.com/sql/solutions/upgrade/default.mspx
4)address all the compatibility issues and re-run the tool to ensure zero compatibility issues
5)when the hardware is available and the down time is decided, detach the databases from the existing sql instance using the sp_detach_db procedure
-here we will have to stop replication, if any, before the detach
6)move the detached database file(s) and log file(s) to the destination location
7)attach the copied files to the new sql 2005 instance by using the create database statement with for attach or for attach_rebuild_log
-create database aircheck_dev on
(filename='g:\mssql\data\aircheck_dev_data.mdf'),

(filename='f:\blobdata\aircheck2_dev.ndf'),
(filename='e:\logs\aircheck_dev_log.ldf')
for attach;
go
8)once the database is attached, change the compatibility mode to 90
9)run the following commands:
-dbcc checkdb--we can run dbcc checkdb while users are using the database because of a change in the type of locks that dbcc checkdb holds on the database tables as it checks each one
-dbcc checkalloc--checks the consistency of disk space allocation structures for a specified database
-dbcc updateusage(<dbname>)--reports and corrects inaccuracies in the sysindexes table, which may result in incorrect space usage reports by the sp_spaceused system stored procedure
10)transfer:
-transfer logins--generate the script for users/logins using sp_help_revlogin and execute it on the destination (download from http://support.microsoft.com/kb/246133)
-be sure to disable the "enforce password policy" option after the logins are created
11)transfer all the old dts packages using the ssis dts import tool
12)transfer all the jobs and db maintenance plans using ssis tasks. while moving the jobs, the databases on which the jobs are scheduled must exist on the destination server

******
locks:
******
-there are 3 main types of locks:
1)shared lock (read)(s)
2)exclusive lock (write)(x) and
3)update lock (update)(u)
1)shared lock:
-acquired automatically when data is read
-if we issue a select statement then a shared lock is taken
-a process cannot acquire an exclusive lock on data while other processes hold shared locks on it
-when a shared lock is placed, another shared lock can also be placed
-to avoid shared locks on a table while accessing it we can give the [nolock] hint
2)exclusive lock:
-acquired automatically on data when it is modified by an insert, update or delete operation
-only one process at a time can hold an exclusive lock on a particular data resource.
-whenever any modification is going on, an exclusive lock is placed
-while an exclusive lock is held, other processes can place neither a shared lock nor another exclusive lock on that resource
-exclusive locks are held until the end of the transaction
3)update lock:

-acquired during data modification when sql server first needs to search the table to find the resource that needs to be modified (intent-to-update)
-provides compatibility with readers while searching
-converts the update lock to an exclusive lock for the actual modification

lock compatibility matrix (o=compatible, x=incompatible):

     s   u   x
s    o   o   x
u    o   x   x
x    x   x   x
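the shared/exclusive incompatibility in the matrix above can be observed with two sessions (the emp table is illustrative):

```sql
-- session 1: take an exclusive lock by updating inside an open transaction
begin tran
update emp set salary = salary * 1.1 where empid = 100
-- transaction left open: the x lock is held until commit/rollback

-- session 2: this select waits, because s is incompatible with x
select * from emp where empid = 100

-- session 2 alternative: nolock skips the s lock and reads the
-- uncommitted (dirty) value instead of waiting
select * from emp with (nolock) where empid = 100

-- session 1: release the lock
commit
```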

-intent locks: there are 3 types of intent locks
1)intent shared lock (is)
2)intent exclusive lock (ix)
3)intent update lock (iu)
-special lock modes:
--schema stability lock (sch-s)
--schema modification lock (sch-m)
--bulk update lock (bu)
--six lock
--six lock: holds a shared lock on a resource when an exclusive lock will later be needed on part of it

key locks:
-sql server tries to lock the actual index keys accessed while processing the query
-select * from emp where salary between 3000 and 5000

blocks:
-blocks occur when multiple spids are waiting to access the same object or resource

dead lock:
-a dead lock occurs when 2 processes (spids) are each waiting for a resource and neither process can advance because the other prevents it from getting the resource
-without intervention neither process can ever progress
there are 2 types of dead locks:
1)cycle dead lock:
-results from 2 processes each holding a resource needed by the other
2)conversion dead lock:
-results from 2 processes wanting to promote their locks from shared to exclusive on the same resource within a transaction
-when a dead lock occurs one of the spids is automatically killed; it is considered the victim of the dead lock and receives error message 1205
-in order to trace dead locks we will have to give the commands
dbcc traceon(1204,3605,-1)
go
dbcc tracestatus(-1)
go

(or)
-we can enable 1204 and 3605 as startup parameters (-T1204 -T3605)
-then we can trace the dead lock with the trace on

other options to avoid a dead lock scenario:
-set deadlock priorities (set deadlock_priority)
-keep transactions as short as possible
-set transaction isolation levels

isolation levels:
-read uncommitted (dirty read, can read anything)
-read committed (default)
-repeatable read
-serializable
-the default isolation level is read committed
-a set command stays in effect for the duration of the session in which it was issued

query tuning/query optimization:
-query tuning has 3 parts:
1)indexes
2)joins--a join is used to combine rows from multiple tables. a join is performed whenever two or more tables are listed in the from clause of an sql statement.
-inner join--inner joins return all rows from multiple tables where the join condition is met. for example,
select suppliers.supplier_id, suppliers.supplier_name, orders.order_date
from suppliers
inner join orders on suppliers.supplier_id = orders.supplier_id;
this sql statement would return all rows from the suppliers and orders tables where there is a matching supplier_id value in both the suppliers and orders tables.
-outer join--this type of join returns all rows from one table and only those rows from a secondary table where the joined fields are equal (join condition is met). for example,
select suppliers.supplier_id, suppliers.supplier_name, orders.order_date
from suppliers
left outer join orders on suppliers.supplier_id = orders.supplier_id;
this sql statement would return all rows from the suppliers table and only the matching rows from the orders table. if a supplier_id value in the suppliers table does not exist in the orders table, all fields from the orders table will display as <null> in the result set.

3)cursors/temp tables
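a temp table is often the simpler alternative to a row-by-row cursor for query tuning; a minimal sketch, reusing the suppliers/orders tables from the join examples (the status column is illustrative):

```sql
-- materialize an intermediate result once in tempdb instead of
-- re-reading the base table row by row with a cursor
select supplier_id, supplier_name
into #active_suppliers
from suppliers
where status = 'active'

-- join against the small temp table
select o.order_date, s.supplier_name
from orders o
inner join #active_suppliers s on s.supplier_id = o.supplier_id

drop table #active_suppliers
```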

performance tuning:
-hardware level
-sql configuration
-i/o subsystems
-application code

sql configuration:
-a 32 bit operating system lets a process address only about 3gb of ram
-so in order to use more ram we will have to enable awe in boot.ini and then increase the size of the ram available to sql server
the command used for this is:
sp_configure 'awe enabled',1
reconfigure
-the affinity mask setting controls which processors sql server runs on
-the query governor cost limit value is set to avoid long running queries

******************
high availability:
******************
sql server has advantages like high availability and disaster recovery.
-there are 2 types of high availability models. they are:
1)active-passive:
-in this there are 2 servers
-when the first server goes down then the second server takes over; the second server is taken as the backup server, and the data from the first server is constantly written so the second server can take over
2)active-active:
-in this there are 2 servers
-in this some databases are on the first server and the other databases are on the second server
-so in this there will be load balancing
-for this type we require 5 ip addresses
-the servers keep talking to each other over a dedicated connection, which is called the heartbeat

features of high availability:
-failover clustering
-database mirroring
-log shipping
-replication

*******************
database mirroring:
*******************
in database mirroring we have roles like: principal -> mirror -> witness
-database mirroring is fully supported in the enterprise edition; the standard edition supports only safety full mode.

-database mirroring is a dual write concept.
it has 3 modes of operation:
1)high availability:
-in this mode it writes on the principal (p) and then writes on the mirror (m), then commits on (m) and then commits on (p)
-so in this we will not have any data loss; it is a zero data loss mode
-this mode has automatic failover (it requires a witness)
2)high safety:
-it writes data synchronously on both (p) and (m), like high availability, but without a witness, so failover is manual
3)high performance:
-(p) writes and commits without waiting; (m) applies the changes afterwards
-it is an asynchronous process
--in a synchronous process both (p) and (m) write, and (p) waits for confirmation from (m) before committing the transaction

************
replication:
************
sql server replication allows database administrators to distribute data to various servers throughout an organization. you may wish to implement replication in your organization for a number of reasons, such as:
-load balancing. replication allows you to disseminate your data to a number of servers and then distribute the query load among those servers.
-offline processing. you may wish to manipulate data from your database on a machine that is not always connected to the network.
-redundancy. replication allows you to build a fail-over database server that's ready to pick up the processing load at a moment's notice.
in any replication scenario, there are two main components:
-publishers have data to offer to other servers. any given replication scheme may have one or more publishers.
-subscribers are database servers that wish to receive updates from the publisher when data is modified.
there's nothing preventing a single system from acting in both of these capacities. in fact, this is often done in large-scale distributed database systems.
microsoft sql server supports three types of database replication. they are:
-snapshot replication: it acts in the manner its name implies.
the publisher simply takes a snapshot of the entire replicated database and shares it with the subscribers. of course, this is a very time- and resource-intensive process. for this reason, most administrators don't use snapshot replication on a recurring basis for databases that change frequently. there are two scenarios where snapshot replication is commonly used. first, it is used for databases that rarely change. second, it is used to set a baseline to establish replication between systems while future updates are propagated using transactional or merge replication.
-transactional replication:

it offers a more flexible solution for databases that change on a regular basis. with transactional replication, the replication agent monitors the publisher for changes to the database and transmits those changes to the subscribers. this transmission can take place immediately or on a periodic basis.
-merge replication:
it allows the publisher and subscriber to independently make changes to the database. both entities can work without an active network connection. when they are reconnected, the merge replication agent checks for changes on both sets of data and modifies each database accordingly. if changes conflict with each other, it uses a predefined conflict resolution algorithm to determine the appropriate data. merge replication is commonly used by laptop users and others who cannot be constantly connected to the publisher.
each one of these replication techniques serves a useful purpose and is well-suited to particular database scenarios. if you're working with sql server 2005, you'll need to choose your edition based upon your replication needs. each edition has differing capabilities:
-express edition has extremely limited replication capabilities. it's able to act as a replication client only.
-workgroup edition adds limited publishing capabilities. it's able to serve five clients using transactional replication and up to 25 clients using merge replication. it can also act as a replication client.
-standard edition has full, unlimited replication capabilities with other sql server databases.
-enterprise edition adds a powerful tool for those operating in mixed database environments -- it's capable of replication with oracle databases.
as you've undoubtedly recognized by this point, sql server's replication capabilities offer database administrators a powerful tool for managing and scaling databases in an enterprise environment.
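a quick way to see which databases on a sql server 2005 instance take part in replication is the sys.databases catalog view; the query below is an illustrative sketch to be run on a live instance:

```sql
-- which databases are published, subscribed, or merge-published
SELECT name, is_published, is_subscribed, is_merge_published
FROM sys.databases;
```

a value of 1 in any of the flag columns means the database participates in that replication role.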
**********************************************
how to start sql server in a single user mode:
**********************************************
to start sql server in single user mode:
1.start a command session (start > run > "cmd" [return]).
2.to ensure that sql server is not running, type:
net stop mssqlserver
this may say that this will also stop another service (typically sqlserveragent), in which case allow it to continue. it will also indicate if the service is already stopped.
3.to start sql server in single user mode, type:
sqlservr -m
if this is not recognised then cd to 'c:\program files\microsoft sql server\mssql\binn' and try again.
note:
-single user mode will allow only a single user to connect to the database.
-while sql server is in single user mode you should still be able to connect from sql server query analyzer if you connect using the "sa" account.
-you can set a single database into single user mode using an alternative procedure (how to set a single database into single user mode), but that process cannot be used to set the master database into single user mode; for

that you must start sql server in single user mode.
to shut down the server when it is in single user mode, simply type control-c in the command window where sqlservr is running. it will then ask you if you wish to shut down sql server. you can then restart sql server by restarting the sql server process:
net start mssqlserver
be aware that this will not start any other processes which may have been stopped, such as sqlserveragent. so if any other processes were stopped then you will need to manually restart them (or reboot if that is easier - assuming the necessary services are set to start automatically).
-failover is a concept wherein, when the principal server stops working, the mirror server takes up the databases; this is called failover.
-read-ahead mechanism: the data is pulled out of buffer memory first, and is read from the mdf file only if it is not in the buffer memory.
-we can find the instance name with select @@servername (or serverproperty('instancename') for just the instance part)
-a named instance is identified by a service name of the form mssql$<instance name>
************************************
how do we restore a master database?
************************************
-if changes have been made to master since a backup was created, those changes are lost when the backup is restored. you must re-create those changes by executing the statements that re-create the missing changes. for example, if any sql server logins have been created since the backup was performed, the logins are lost when master is restored. re-create the logins by using sql server management studio or by using the original scripts with which the logins were created.
-you can restore the master database only from a backup that is created on an instance of sql server 2005.
note: any database users that were previously associated with lost logins are orphaned, that is, they cannot access the database. for more information, see troubleshooting orphaned users.
-after you restore master, the instance of sql server is stopped automatically.
if you have to make additional repairs and want to prevent more than a single connection to the server, restart the server in single-user mode. otherwise, the server can be restarted regularly. if you decide to restart the server in single-user mode, first stop all sql server services, except the server instance itself, and stop all sql server utilities, such as sql server agent. by stopping the services and utilities, you prevent them from trying to access the server instance.
reconstructing changes that are made after the backup was created:
if a user database was created after the restored backup of master, that user database is inaccessible until one of the following occurs:
-the database is attached. we recommend this method. attaching a database requires that all of the database files are available and usable. we recommend specifying the log files, and also the data files, instead of having the attach operation try to rebuild the log file or files. for information about how to attach a database, see how to: attach a database

(sql server management studio) or create database (transact-sql).
-the database is restored from one or more backups. restore the database only if its data files or transaction log files no longer exist or are unusable.
attaching or restoring a database re-creates the necessary system table entries, and the database becomes available in the same state as before the master database was restored. if any objects, logins, or databases have been deleted after master is backed up, you must delete those objects, logins, and databases from master.
important: if any databases no longer exist but are referenced in a backup of master that is restored, sql server may report errors when it starts, because it can no longer find those databases. those databases should be dropped after the backup is restored. when master has been restored and any changes have been reapplied, back up master immediately.
difference between deny and revoke:
deny:
-it explicitly denies the user access to the privileges of the database, and it overrides any grant
revoke:
-withdraws the access privileges previously given with the grant (or deny) command
difference between truncate and delete:
truncate:
-if truncate is performed then all the rows are deleted permanently; the operation is minimally logged and the rows cannot be rolled back once the transaction is committed
delete:
-if delete is performed then the rows are deleted but the operation is fully logged, so it can be rolled back if we need the rows back
what is an orphan user?
-an orphan user is a user in a database whose sid does not exist in the syslogins table in the master database
-an orphan user has a database user name but no matching server login
why do we need file groups?
-file groups are created in the database in order to arrange the data in a definite order, so that it is easy to take a file group backup and do a file group restore rather than a full backup and restore. we can create objects on a specific file group; the default file group is the primary file group, and we can change the default file group.
what if an ldf file size is growing continuously, how can we control this?
-the ldf (transaction log) file grows as transactions are logged, and a long-running uncommitted transaction prevents the log from being truncated. so in order to control the growth of the ldf file we should commit (or roll back) long-running transactions and, in the full recovery model, take regular transaction log backups so the log space can be reused.
what is the difference between recovery and restore?
-recovery is something that sql server does automatically (rolling forward committed transactions and rolling back uncommitted ones at startup or at the end of a restore)
-restoring a database is something that we do manually from a backup.
what is the difference between backup and replication?
backup:
-in this everything gets backed up; we cannot back up just small pieces such as individual tables
replication:
-in this we distribute the data between different servers, and it is limited to objects such as tables, user defined functions, triggers, etc.,

-in this we can replicate only a few columns (a subset of the data).
what is the difference between where and having clause?
where:
-filters individual table rows before any grouping takes place
having:
-filters groups based on summary (aggregate) values, after group by.
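the where/having distinction can be sketched with a hypothetical employees table (the table and column names here are made-up examples, not from the original notes):

```sql
-- WHERE filters rows before grouping; HAVING filters groups after aggregation
SELECT department, COUNT(*) AS headcount
FROM employees                -- hypothetical table
WHERE active = 1              -- row-level filter, applied before GROUP BY
GROUP BY department
HAVING COUNT(*) > 10;         -- group-level filter on the aggregated result
```

the where clause cannot reference the aggregate count(*); only having can, because having runs after the groups are formed.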

migration:
-i have done migration. i created automation scripts to transfer the logins and users as they are. i think automation is the best way, because the gui tends to hang, and the gui also gave some problems while shifting the logins and users.
schema:
beginning in sql server 2005, each object belongs to a database schema. a database schema is a distinct namespace that is separate from a database user. you can think of a schema as a container of objects. schemas can be created and altered in a database, and users can be granted access to a schema. a schema can be owned by any user, and schema ownership is transferable.
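the schema behaviour described above can be sketched as follows (the schema name sales and the user some_user are made-up examples for illustration):

```sql
-- create a schema owned by dbo and put an object in it (SQL Server 2005)
CREATE SCHEMA sales AUTHORIZATION dbo;
GO
CREATE TABLE sales.orders (order_id INT PRIMARY KEY);
GO
-- schema ownership is transferable to another database user
ALTER AUTHORIZATION ON SCHEMA::sales TO some_user;
```

because the schema, not the user, owns the table, dropping or renaming the user later does not force the objects to be renamed, which was the main pain point with the pre-2005 owner-based naming.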