Choosing a Block Size
The answer to the question "what is a good block size?" is: "it depends entirely on your operating system and file system". That, of course, is not what Oracle itself teaches on its courses, nor what the Oracle Press books say, nor what the usual 'DBA Folklore' suggests. All these sources tend to say "it depends on the type of application you are running". They go on to say that for OLTP environments you should go for small blocks (say, 2K or 4K), but for a data warehouse you should go for big blocks (say, 8K, 16K or even 32K).

This is, of course, complete rubbish, since it rather overlooks one tiny but crucial fact: databases have a file system to contend with. Data files don't live in a vacuum, but on a disk that has been formatted with a file system. And on Unix, the file system has a buffer of its own that needs to be filled precisely. That buffer is usually 8K in size, so a choice of any other block size will result in additional I/O operations, and hence degraded performance. There's no magic about this: it's just a question of physics.

On NT, there is no file system buffer to worry about - it uses what is known as 'Direct I/O'. So, incidentally, do Unix systems talking to 'raw partitions'. For these environments you can basically pick any block size you want - but bigger is still better, so my usual recommendation is to go for 16K blocks.

So why did that hoary old myth about "OLTP=SMALL, DSS=BIG" ever develop, then? Because OLTP systems generally have lots of Users concurrently doing transactions, and DSS systems usually involve running mammoth reports that look at a billion and one rows before coming to a conclusion. If blocks are small, they will tend to contain fewer rows; if big, more.
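The arithmetic behind the "match the file system buffer" argument can be sketched quickly. This is a deliberately crude model, not Oracle internals: it assumes blocks are aligned to file-system buffer boundaries and that the OS always transfers whole buffers (the 8K figure is the one quoted above).

```python
import math

FS_BUFFER = 8192  # typical Unix file-system buffer size, per the article


def fs_transfers(db_block):
    """File-system buffers the OS must handle to read one aligned DB block."""
    return math.ceil(db_block / FS_BUFFER)


def wasted_bytes(db_block):
    """Bytes the OS reads beyond what the DB block actually needed."""
    return fs_transfers(db_block) * FS_BUFFER - db_block


for db_block in (2048, 4096, 8192, 16384, 32768):
    print(f"{db_block:>6}: {fs_transfers(db_block)} transfer(s), "
          f"{wasted_bytes(db_block)} wasted bytes")
```

Under these (assumed) rules, only an 8K block gives one transfer with no waste: smaller blocks drag in bytes they never use, and bigger blocks multiply the transfers.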
So, when 1000 Users all simultaneously fire off a new transaction, if you have big blocks there is a good chance that many will find the rows they want to update within the same block - and thus will begin the mother of all contention battles, as each User tries to squirm his or her way into an already extremely popular block. Block contention in an OLTP environment *is* a big performance issue, and it certainly needs monitoring, and fixing if it happens. To suggest that you should fix it by adopting a small block size, however, is tantamount to proclaiming the cure for headaches to be "decapitation". It will certainly work, but it's a tad drastic, and it brings along one or two side effects (such as the inability to perform adequately!) which people might notice.

There are better cures for block contention: setting a higher PCTFREE springs to mind, or increasing INITRANS and MAXTRANS. All three reduce the amount of space within a block that can be used to actually store real data - which is exactly what picking a small block size would do. But none of them has the side-effect of inducing additional I/O by simply ignoring what the File System Buffer needs to do its job.
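The trade-off between PCTFREE/INITRANS and block size can be put into back-of-envelope numbers. The per-slot and overhead figures below are illustrative assumptions for the sketch, not documented Oracle constants:

```python
# Assumed figures for illustration only - not Oracle's documented internals.
BLOCK_OVERHEAD = 100   # fixed header/footer bytes per block (assumption)
ITL_SLOT_BYTES = 24    # bytes consumed per INITRANS transaction slot (assumption)


def usable_bytes(block_size, pctfree=10, initrans=1):
    """Bytes left in a block for row data after PCTFREE and INITRANS."""
    after_overhead = block_size - BLOCK_OVERHEAD - initrans * ITL_SLOT_BYTES
    return int(after_overhead * (1 - pctfree / 100))


# Even a 'contention-tuned' 8K block (generous PCTFREE, many ITL slots)
# stores far more row data per fetch than a 2K block at the defaults:
print(usable_bytes(8192, pctfree=30, initrans=5))
print(usable_bytes(2048))
```

Both knobs sacrifice in-block space, but only the small block also sacrifices I/O efficiency - which is the point of the paragraph above.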
Copyright © Howard Rogers 2001
The converse applies to DSS systems: if a big block contains more rows than a small one, then any huge report will benefit from big blocks, because that way the *number* of blocks needing to be fetched off disk to satisfy a given query will be reduced. True enough: but don't go berserk, because a mammoth block size that ignores the File System Buffer will still induce additional I/Os.

At this point, the sceptics will usually pipe up and say "But big blocks will waste space in the Buffer Cache! All I wanted was 3 rows, and you've made me pull an additional 7.5K of stuff into the Cache which I shall never use!!". To which I respond: what do you think the LRU List is for? It's there to age out duff information that nobody wants. Provided the fetching of that data was efficient in the first place, do you care that stuff in addition to the stuff you actually wanted happened to hitch a ride into memory? If some of the other 7.5K turns out, after all, to be useful to another User, the buffer will remain at the MRU end of the list, and clearly it was worth the effort. If it doesn't, who cares? Oracle has an ageing mechanism that will deal with it.

"No!" cry the die-hards. "I could be making better use of that memory, which is - after all - a finite resource!" To which I respond: have you checked the price of RAM lately? Memory should not be considered a finite resource in the first place. Oracle is a serious database, and demands serious resourcing - otherwise, why bother? An additional allocation of RAM is not going to kill you (and if it is, you should be running Access anyway). In any case, whether you have 1000 8K buffers or 2000 4K buffers, you still have 8MB of data in cache. Who cares how it's organised, provided no contention issues arise?

The fact of the matter is that a large block size makes it possible to gain enormous efficiencies when reading indexes: traversing an index is much easier when the leaf nodes are large.
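The index point can be made concrete with a rough B-tree height estimate. Bigger blocks hold more index entries, so the fan-out rises and the tree gets shallower. The 20-byte average entry size is an assumption for illustration, not a measured Oracle figure:

```python
import math


def btree_height(n_keys, block_size, bytes_per_entry=20):
    """Rough B-tree height: entries per block set the fan-out.
    bytes_per_entry is an assumed average (key + rowid + overhead)."""
    fanout = max(2, block_size // bytes_per_entry)
    return max(1, math.ceil(math.log(n_keys, fanout)))


# Levels to index ten million keys at various block sizes:
for bs in (2048, 8192, 16384):
    print(bs, btree_height(10_000_000, bs))
```

One level fewer means one block read fewer on every index lookup - which is why large blocks help indexed access, whatever the workload.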
That's good for DSS systems, of course. But how is most access achieved in an OLTP environment? Er, that's right... via indexes. So it's good for them, too.

So, in summary: for Unix systems with cooked file systems, always go for 8K blocks (unless you have a weird Unix that has a different File System Buffer size). And for NT systems, or Unix using raw devices, go for 16K. Of course (and here's the Weasel Out clause), your specific application demands might suggest something different - that's what tuning is all about. So I don't want to be *completely* dogmatic about it: Your Mileage Might Vary, as they say. But I'd bet a hefty sum that it won't vary that much.

For a rather more technical justification of the "8K or bigger" argument, you might care to visit http://www.ixora.com.au/tips/creation/block_size.htm. Steve Adams puts it better than I ever could.