
Formula for db_file_multiblock_read_count

Just sent this to someone in an email, and it seems like something that could be useful for everyone. They were asking how to determine the best setting for db_file_multiblock_read_count. First off, you will probably need assistance from a system admin or SAN admin if you’re running on a SAN. And contrary to one popular belief, it is very possible that setting this parameter too high can cause performance degradation. [Max I/O chunk size] / [db_block_size] is not necessarily the best value for the MBRC. Yes, Steve Adams has a script to determine the maximum possible size for your system – but the maximum your system supports is not necessarily the value that performs best.

Also, note that gathering system statistics is far more important than the MBRC value. It’s really critical that you get good system stats in place so that Oracle costs read operations far more accurately. For a good overview of system statistics, check out Jonathan Lewis’ article on OTN, Understanding System Statistics.

Calculating the Value

Historically, the best theoretical value for the MBRC was dependent on your storage configuration and basically followed this formula: [stripe width] x [stripe size] / [db block size]. This is because Oracle’s optimizer (without system statistics) is based on the assumption that a single block disk request and a multi-block disk request have about the same cost. Once you gather system statistics the game changes a little; Oracle uses real (measured) values to cost the reads. At that point you just want the optimal read size.
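As a worked example (the storage numbers here are made up for illustration): on an 8-way stripe with a 64k stripe size and an 8k database block size, the formula works out like this:

```shell
# Hypothetical storage layout: 8-way stripe, 64k stripe size, 8k db blocks
STRIPE_WIDTH=8
STRIPE_SIZE=65536      # 64k
DB_BLOCK_SIZE=8192     # 8k
echo $(( STRIPE_WIDTH * STRIPE_SIZE / DB_BLOCK_SIZE ))   # prints 64
```

With that setting, one multiblock read covers a full stripe – every disk in the stripe services exactly one 64k request.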

Here’s a simple “script” to determine a good value:

# time dd bs=32768 if=/largest/busiest/oracle/datafile.dbf of=/dev/null
# time dd bs=65536 if=/largest/busiest/oracle/datafile.dbf of=/dev/null
# time dd bs=131072 if=/largest/busiest/oracle/datafile.dbf of=/dev/null
# time dd bs=262144 if=/largest/busiest/oracle/datafile.dbf of=/dev/null
# time dd bs=524288 if=/largest/busiest/oracle/datafile.dbf of=/dev/null
# time dd bs=1048576 if=/largest/busiest/oracle/datafile.dbf of=/dev/null

Make sure to do this while no databases are running and there is no other activity on the system. Run it two or three times before you start taking measurements, to warm up the various caches in the read path. Then take the value of “bs” with the best performance and divide by the database block size – use that for your MBRC. Each platform has a maximum allowed value; on Solaris I think it’s 128. It’s theoretically based on the largest size the OS supports for a single I/O.
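If you want to script the runs above, here’s a rough sketch – the function name and the 8k default block size are my assumptions, so adjust for your database:

```shell
# Sketch: time a sequential scan of a file at each candidate dd block size
# and print the MBRC that each size would correspond to. The function name
# and the 8k default block size are assumptions; point it at your largest,
# busiest datafile.
scan_bs_candidates() {
  file=$1
  db_block_size=${2:-8192}
  for bs in 32768 65536 131072 262144 524288 1048576; do
    printf 'bs=%s  candidate MBRC=%s\n' "$bs" $(( bs / db_block_size ))
    time dd bs="$bs" if="$file" of=/dev/null 2>/dev/null
  done
}

# Usage (as root, with the databases shut down):
# scan_bs_candidates /largest/busiest/oracle/datafile.dbf 8192
```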

Actually, an even better way to get this value might be Oracle’s ORION tool (get there from the Links page) – but I haven’t tried using it for this yet. I’ll have to post something after I try it out.

One last note worth mentioning: in 10g you can leave this parameter unset and Oracle will try to choose the best value for it – apparently based on your SGA size and number of sessions. However, I still think it’s best to set it explicitly.


Why It Matters

Having the right setting for the MBRC is critical. There are two primary impacts of this parameter on the DB: cost and memory management.

I mentioned the cost component earlier: without system statistics Oracle assumes that a multiblock read has about the same cost as a single block read. With accurate system statistics in place the impact isn’t as big, because Oracle measures how long reads actually take and uses those numbers to calculate costs. However, this can still be dangerous: Oracle can easily record misleading values if, for example, its measurements happen to sample multiblock reads against a table that’s cached in the storage system. It can be tricky to get the best values for “mreadtim” and “sreadtim”, and I think that a few people such as Jonathan Lewis approve of manually setting these values [second post about it] based on your knowledge of the storage subsystem.

As for memory management, you will remember that multiblock reads always go to the cold end of the LRU list. Julian Dyke has some great slides about Logical IO that illustrate this very clearly. Your setting for MBRC determines how much buffer cache memory Oracle uses for multiblock reads. On a heavily transactional system with many concurrent users you don’t want too much memory being used for multiblock reads!
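To put some numbers on that (all assumed values, for illustration only): with 8k blocks and an MBRC of 128, each multiblock read touches 1MB of buffer cache, so even a few dozen concurrent full scans churn a meaningful chunk of the cache’s cold end:

```shell
# Back-of-envelope sketch with assumed values: buffer cache memory touched
# by concurrent multiblock reads at a given MBRC.
MBRC=128
DB_BLOCK_SIZE=8192      # 8k blocks
CONCURRENT_SCANS=50     # assumed number of simultaneous full scans

PER_READ=$(( MBRC * DB_BLOCK_SIZE ))
echo "$(( PER_READ / 1024 )) KB per multiblock read"                # 1024 KB
echo "$(( PER_READ * CONCURRENT_SCANS / 1048576 )) MB in flight"    # 50 MB
```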

More Resources

For more info, read Jonathan Lewis’ articles on this topic, the related thread from the oracle-l list, and another interesting paper with research on performance in several different situations – illustrating one system where performance actually tanked after the MBRC was set larger than a certain optimal value.

About Jeremy

Building and running reliable data platforms that scale and perform. about.me/jeremy_schneider







