
Data Cache Size Recommendation – Why?

source link: https://www.cubecoder.com/data-cache-size-recommendation-why/

April 1, 2017 by TimG

A quick post prompted by a recent OTN thread. The Essbase Database Administrator's Guide (DBAG) tells us that the recommended size of the Data Cache is 1/8 the size of the Data File Cache. Even granting the usual caveats about these recommendations, I think this one is really odd.

The first headscratcher is that if you use Buffered I/O (as almost everyone does), there is no Data File Cache at all. I suppose you can take the DBAG sizing recommendation for the Data File Cache – which is based on total .pag file size – as your starting point, even if you don't actually use the cache.
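The DBAG arithmetic itself is easy enough to sketch. Here is a minimal illustration in Python, assuming the usual rules of thumb (Data File Cache = combined size of all essn.pag files, Data Cache = 1/8 of that figure); the directory-scanning helper is my own hypothetical, not anything Essbase provides:

```python
import os

def dbag_cache_starting_points(pag_dir):
    """Return (data_file_cache, data_cache) sizes in bytes per the DBAG rule of thumb."""
    total_pag = sum(
        os.path.getsize(os.path.join(pag_dir, f))
        for f in os.listdir(pag_dir)
        if f.lower().endswith(".pag")      # the database's essn.pag files
    )
    data_file_cache = total_pag            # DBAG: combined size of all .pag files
    data_cache = total_pag // 8            # DBAG: 1/8 of the Data File Cache figure
    return data_file_cache, data_cache
```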

The second more serious headscratcher is this: Per the DBAG, the .pag files and the Data File Cache (if used) contain compressed blocks. The Data Cache contains uncompressed blocks. Using bitmap compression, for ‘normal’ block size ranges, the ratio of compressed to uncompressed block size approximately follows block density, which is a data-dependent statistic that varies between real-life cubes by several orders of magnitude.
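To make that concrete, here is a rough sketch of the bitmap arithmetic, assuming 8 bytes per stored cell, a 1-bit-per-cell bitmap, and ignoring the per-block header. At very low densities the bitmap itself puts a floor of 1/64 under the ratio; above a couple of percent density, the ratio tracks density almost exactly:

```python
def compression_ratio(cells_per_block, density):
    """Approximate compressed/uncompressed size ratio for a bitmap-compressed block."""
    uncompressed = cells_per_block * 8                 # 8 bytes per cell, expanded
    bitmap = cells_per_block / 8                       # 1 bit per cell
    stored_values = density * cells_per_block * 8      # only non-#Missing cells stored
    return (bitmap + stored_values) / uncompressed

for d in (0.0001, 0.01, 0.5):
    print(f"density {d:.2%}: compressed/uncompressed ~ {compression_ratio(10_000, d):.4f}")
# density 0.01%: ~0.0157   density 1.00%: ~0.0256   density 50.00%: ~0.5156
```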

If the size of the Data Cache has an effect on performance, it (surely?) must be down to how many of your data blocks fit into it. But if you size a cache that contains uncompressed blocks based on the size of compressed blocks as the DBAG effectively advises, you wind up with a data cache able to hold a number of actual data blocks that varies just as wildly (i.e. several orders of magnitude) as block density. 1/8 of the Data File Cache might mean almost anything: perhaps .001%, .1% or 10% of your actual blocks.
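Under the same bitmap assumptions, you can estimate what fraction of a cube's blocks the recommended Data Cache would actually hold. The .pag file size cancels out of the calculation entirely, leaving a fraction that is roughly density divided by eight – a sketch of the effect, not an official formula:

```python
def fraction_of_blocks_held(cells_per_block, density):
    uncompressed = cells_per_block * 8                           # expanded block, bytes
    compressed = cells_per_block / 8 + density * uncompressed    # bitmap + stored cells
    # Data Cache = pag_size / 8 and block count = pag_size / compressed, so the
    # pag_size cancels: the resident fraction is compressed / (8 * uncompressed),
    # i.e. roughly density / 8 once density clears the bitmap floor.
    return compressed / (8 * uncompressed)

for d in (0.0005, 0.02, 0.9):
    print(f"density {d:.2%}: cache holds ~{fraction_of_blocks_held(10_000, d):.3%} of blocks")
# density 0.05%: ~0.202%   density 2.00%: ~0.445%   density 90.00%: ~11.445%
```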

If some proportion of uncompressed blocks is considered optimal for the Data Cache in the same way that a proportion of compressed blocks is considered optimal for the Data File Cache, a better recommendation would (surely?) start from block size and actual block count and not from the .pag file size.
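For illustration only, such a recommendation might look like the sketch below; the 5% resident-block target is an arbitrary placeholder of mine, not a documented Essbase guideline:

```python
def data_cache_from_blocks(cells_per_block, existing_blocks, resident_fraction=0.05):
    """Size the Data Cache to hold a chosen fraction of actual (uncompressed) blocks."""
    uncompressed_block = cells_per_block * 8         # bytes per expanded block
    return int(uncompressed_block * existing_blocks * resident_fraction)

# e.g. 10,000-cell blocks, 2 million existing blocks, keep 5% of them resident:
print(data_cache_from_blocks(10_000, 2_000_000))     # 8_000_000_000 bytes (~8 GB)
```

However you pick the target fraction, a size derived this way at least holds a predictable number of blocks, which the 1/8 rule does not.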

I think most people who have been around Essbase for a while know that these recommendations are to be taken only as starting points for experimentation, but this one in particular now makes no sense to me at all.

Would love to hear comments.

Posted in: BSO, Essbase | Tagged: BSO, Data Cache, Essbase, Size
