Administration Tips
Copyright © Howard Rogers 2001 (17 October 2001)
Choosing a Block Size
The answer to the question "what is a good block size?" is: "It depends entirely on your operating and file system".

This, of course, is not what Oracle itself teaches on its courses, nor what the Oracle Press books say, nor what the usual 'DBA Folklore' suggests. All these sources tend to say "It depends on the type of application you are running". They go on to say that for OLTP environments, you should go for small blocks (say, 2K or 4K), but for a data warehouse, you should go for big blocks (say, 8K, 16K or even 32K).

This is, of course, complete rubbish, since it rather overlooks one tiny, but crucial, fact: databases have a file system to contend with, since data files don't live in a vacuum, but on a disk that has been formatted with a file system. And on Unix, the file system has a buffer of its own that needs to be filled precisely. That buffer is usually 8K in size, so a choice of any other block size will result in additional I/O operations, and hence degraded performance. There's no magic about this: it's just a question of physics.

On NT, there is no file system buffer to worry about: it uses what is known as 'Direct I/O'. So, incidentally, do Unix systems talking to 'raw partitions'. So for these environments, you can basically pick any block size you want, but bigger is still better, so my usual recommendation is to go for 16K blocks.

So why did that hoary old myth about "OLTP=SMALL, DSS=BIG" ever develop, then? Because OLTP systems generally have lots of Users concurrently doing transactions, and DSS systems usually involve running mammoth reports that look at a billion and one rows before coming to a conclusion. If blocks are small, they will tend to contain fewer rows. If big, more.

So, when 1000 Users all simultaneously fire off a new transaction, if you have big blocks, there is a good chance that many will find the rows they want to update within the same block, and thus will begin the mother of all contention battles, as each User tries to squirm his or her way into an already extremely popular block. Block contention in an OLTP environment *is* a big performance issue, and it certainly needs monitoring, and fixing if it happens. To suggest that you should fix it, however, by adopting a small block size is tantamount to proclaiming the cure for headaches to be "decapitation". It will certainly work, but it's a tad drastic, and it brings along one or two side effects (such as the inability to perform adequately!) which people might notice. There are better cures for block contention: setting a higher PCTFREE springs to mind. Or increasing INITRANS and MAXTRANS. All three have the effect of reducing the amount of space within a block which can be used to actually store real data, which is what picking a small block size would do. But none of them have the side-effect of inducing additional I/O by simply ignoring what the File System Buffer needs to do its job.
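As a practical starting point, here is a minimal sketch (assuming SQL*Plus against a 9i-era instance) for confirming the block size your database is actually using before you compare it with the file system's buffer size. DB_BLOCK_SIZE is fixed when the database is created, so this is a check, not something you can change on a running instance.

    -- In SQL*Plus: show the block size the instance was created with.
    SHOW PARAMETER db_block_size

    -- The same value straight from the data dictionary:
    SELECT value
      FROM v$parameter
     WHERE name = 'db_block_size';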
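And for the contention cures mentioned above, a hedged illustration rather than a prescription (the table name ORDERS is hypothetical, standing in for any hot OLTP segment):

    -- Raising PCTFREE leaves more free space per block, so fewer rows land in
    -- each one; raising INITRANS/MAXTRANS pre-allocates more transaction slots
    -- in the block header. Both relieve block contention without shrinking the
    -- block size itself.
    ALTER TABLE orders PCTFREE 20 INITRANS 10 MAXTRANS 50;

Note that these settings only take effect for blocks formatted after the change; blocks already in use keep their existing header settings until the segment is rebuilt (for example, with ALTER TABLE orders MOVE).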
