
Binary Trees are optimal… except when they’re not.

The best-case depth for a search tree is O(\log_b n), where b is the arity (or branching factor) of the tree. Intuitively, we know that if we increase b, the depth goes down, but is that all there is to it? And if not, how do we choose the optimal branching b?


While it's true that an optimal search tree will have depth O(\log_b n), we can't just increase b indefinitely, because we also increase the number of comparisons we must do in each node. A b-ary tree node holds (in the worst case) b-1 keys, and for each we must test for both less-than and equality; the rightmost child is reached without any comparison, as the "else" of all the others. For example, a node of a 4-ary tree holds up to three keys and four children. We must first understand how these comparisons affect search time.

If we use a naïve comparison scheme, each key asks for two (potentially expensive) comparisons: one to test whether the sought value v is smaller than the key k, and one to test whether they are equal. If neither test is true, we move on to the next key. If none of the tests are true, we land in the "else" case and go down the rightmost branch. That scheme does way more work than necessary: it asks for 2(b-1) comparisons per node (because there are b-1 keys per node).

The main problem with this straightforward approach is that comparisons can be very expensive. While comparing two integral types (int and the like) is often thought of as "free", comparing strings is expensive. So you clearly do not want to compare two strings twice, once for < and once for =. The much better approach is to use a three-way comparison, also known as the "spaceship operator".

The spaceship operator, <=>, is C++20's version of C's old-timey qsort comparison function (it is in fact much better, because it also automagically provides all the comparison operators). Basically, a<=>b returns something less than zero if a<b, zero if they're equal, and something greater than zero if a>b. We can therefore store the result of the one expensive comparison, then do inexpensive tests on that result. That reduces the number of expensive comparisons to b-1 per node.
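As a concrete illustration, here is a minimal sketch (the function name and node layout are my assumptions, not the post's code) of searching the keys of one b-ary node with a single three-way comparison per key:

    #include <compare>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Decide which branch of a b-ary node to follow for value v.
    // The node's (at most b-1) keys are sorted. Each key costs exactly
    // one expensive comparison; its stored result is then tested cheaply.
    std::size_t branch_for(const std::vector<std::string> &keys,
                           const std::string &v,
                           bool &found)
    {
        for (std::size_t i = 0; i < keys.size(); i++)
        {
            auto c = v <=> keys[i];                    // one expensive comparison
            if (c < 0)  { found = false; return i; }   // v goes left of key i
            if (c == 0) { found = true;  return i; }   // v is key i
        }
        found = false;
        return keys.size();                            // the "else": rightmost child
    }

With the naïve scheme, the loop body would perform two full string comparisons, v < keys[i] and v == keys[i]; storing the result of <=> halves the expensive work.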

The search complexity, counting the number of comparisons in the worst case for the best-case tree, is

\displaystyle (b-1)\log_b n=(b-1)\frac{\ln n}{\ln b}=\frac{b-1}{\ln b}\ln n,

a strictly increasing function of b for b>1 (as we can treat \ln n as a constant, since we're not optimizing for n):

[Figure: search-cost, the factor \frac{b-1}{\ln b} as a function of b]
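Plugging in a few values makes the trend concrete: \frac{b-1}{\ln b} is about 1.44 for b=2, 1.82 for b=3, and 2.16 for b=4, so any arity above two already costs more comparisons in the worst case.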

We therefore conclude that b=2 is the optimal solution.

Except when it isn’t

We have neglected, or rather abstracted away, something important: the cost of accessing a node. In main memory, it's very convenient to suppose that reading a node is free, that is, that it incurs no cost. That's of course false, because even reading from cache incurs a delay, fortunately a very small one. It is even more false if we read the node from disk (yes, even NVMe). A classical spinning-rust disk reading a block will cause a delay of a few milliseconds, which may be really, really expensive relative to the comparison of two keys once they're in memory.

The new cost function to optimize will be

\displaystyle ((b-1)c_1+c_2)\log_b n=\frac{(b-1)c_1+c_2}{\ln b}\ln n,

with c_1 the comparison cost, and c_2 the cost of accessing the node. We can set c_1 to 1 and express c_2 relative to c_1, say something like c_2=100. Now, luckily for us, this function is convex in b, so we can solve for the optimal b given c_1 and c_2. To do so, we take the derivative and find where it is zero, that is, we solve

\displaystyle \frac{\partial}{\partial b}\frac{(b-1)c_1+c_2}{\ln b}=\frac{b(\ln b-1)c_1+c_1-c_2}{b(\ln b)^2}=0

for b. Since the denominator is strictly positive for b>1, we must solve for a zero numerator, which rearranges to b(\ln b-1)c_1=c_2-c_1. However, there's a slight complication:

\displaystyle b(\ln b-1)=\frac{c_2}{c_1}-1

isn't algebraic. We must use a numerical solution through the Lambert W function, defined by W(x)e^{W(x)}=x. It gives us, with u=\frac{c_2}{c_1}-1,

\displaystyle b^*=\frac{u}{W\left(\frac{u}{e}\right)}.
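The post gives no code for this step, so here is a minimal numerical sketch (the function names are mine; W is computed by plain Newton iteration on we^w=x, which is adequate for the arguments arising here):

    #include <cmath>
    #include <cstdio>

    // Principal branch of the Lambert W function, W(x) e^{W(x)} = x,
    // for x >= 0, by Newton's method.
    double lambert_w(double x)
    {
        double w = std::log1p(x);                         // decent initial guess
        for (int i = 0; i < 100; i++)
        {
            double ew = std::exp(w);
            double next = w - (w * ew - x) / (ew * (w + 1));
            if (std::fabs(next - w) < 1e-12)
                return next;
            w = next;
        }
        return w;
    }

    // Optimal branching b* = u / W(u/e), with u = c2/c1 - 1.
    double optimal_branching(double c1, double c2)
    {
        double u = c2 / c1 - 1;
        return u / lambert_w(u / std::exp(1.0));
    }

    int main()
    {
        // Comparison cost 1, node access cost 100: prints b* of about 38.
        std::printf("b* = %g\n", optimal_branching(1, 100));
    }

For c_1=1 and c_2=100, the factor \frac{(b-1)c_1+c_2}{\ln b} is about 145.7 at b=2, bottoms out around 37.7 near b^*\approx 38, and climbs back to about 56.4 at b=200, as convexity promises.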

The following graph shows the cost function's surface with c_1=5 and c_2=200, one axis being b, the block size, and the other being n, the number of items in the tree. The green plane shows the optimal b found with the Lambert W function.

[Figure: search-cost-with-blocks, the cost surface over b and n, with the optimal b shown as a plane]

To conclude, binary trees are optimal when we ignore the cost of accessing a node, but they aren't when that cost is high. When we access nodes on disk, at great cost, it becomes interesting to bundle many keys into a node, and we gave the optimal solution above. In practice, however, the problem is often solved more directly: we just fit as many keys as possible into an I/O block. Disks operate on small blocks of (typically) 512 bytes, but file systems operate on somewhat larger, fixed-size blocks, something like 4 or 8 KB. A good strategy is therefore to fit as many keys as possible into such a block, since even a large number of comparisons on keys already in memory is still faster than bringing another block in from disk.
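As a back-of-the-envelope illustration (the sizes are my assumptions, not the post's): with 4 KB blocks, 8-byte keys, and 8-byte child pointers, a node holds roughly 4096/16=256 keys, and a tree over a billion keys is then only about \log_{256}10^9\approx 4 levels deep, four block reads per search.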
