European Championship 2024 (EUC) Editorial

source link: https://codeforces.com/blog/entry/127514

By Petr, history, 4 days ago,

1949A - Grove

Let . Note that all trees need to be planted at least away from the boundary of the lawn, so their coordinates have to satisfy .

For integers , denote by any configuration that maximizes the number of trees while only using locations such that . Our task is to compute . We compute values of recursively, and store all results using memoization.

Let , and let be such that . Note that, if , then because any tree would be too close to the left boundary of the lawn. Assuming that , we try to plant or not plant a tree at the location .

If we do not plant a tree at , an optimal configuration is given by , where if and .

If we plant a tree at , an optimal configuration is given by , where is the largest integer such that locations and are at distance .

The number of recursive calls increases as decreases. For this reason, it can be beneficial to solve by hand the cases where is small. For instance, if , then we can plant trees at all integer coordinates in the interior of the lawn; if , we can plant trees at all integer coordinates in the interior of the lawn such that is even.

If the implementation of the above algorithm is not fast enough, it is also possible to pre-compute all optimal configurations offline. Indeed, for every , there is a finite number of intervals such that the valid configurations of trees are the same for all . The boundaries of these intervals are of the form for integers . For instance, the first two intervals are always and , which we have already encountered above.

Finally, it is also possible to solve this problem using a generic max-clique algorithm, especially in combination with the above idea to pre-compute optimal configurations offline.

1949B - Charming Meals

Let's sort spicinesses, assume , .

Lemma: Let be some nondecreasing arrays of length . If there exists some permutation , such that for all , then for all .

Proof: for each , there can be at most s smaller than : , as for any we have .

Consider some optimal pairing, assume we paired with , and got a minimum charm of . Let's call pairs with small, and pairs with large. Consider small pairs, assume there are of them. Note that, by the lemma, we can assume that s and s in these pairs are nondecreasing. Similarly, they are nondecreasing in large pairs.

But then we can just pair smallest s with largest s, and largest s with smallest s, and we will still have a charm of at least . More formally:

  • For , pair with ;
  • For , pair the with .

So, it's enough to check such a pairing for each possible , and to pick the best one!

Total runtime is O(n^2).
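
A minimal sketch of this check (assuming, as in the statement, that the charm of a pair is the absolute difference of the two spicinesses; function and variable names are illustrative):

#include <bits/stdc++.h>
using namespace std;

// For every split point k, pair the k smallest a's with the k largest b's and
// the remaining a's with the smallest b's; keep the best minimum charm.
long long bestMinCharm(vector<long long> a, vector<long long> b) {
    int n = a.size();
    sort(a.begin(), a.end());
    sort(b.begin(), b.end());
    long long best = 0;
    for (int k = 0; k <= n; ++k) {
        long long cur = LLONG_MAX;
        for (int i = 0; i < k; ++i)      // a_1..a_k paired with the k largest b's
            cur = min(cur, abs(a[i] - b[n - k + i]));
        for (int i = k; i < n; ++i)      // the remaining a's paired with b_1..b_{n-k}
            cur = min(cur, abs(a[i] - b[i - k]));
        best = max(best, cur);
    }
    return best;
}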

1949C - Annual Ants' Gathering

First Solution

Since ants cannot move to an empty house, the region of non-empty homes always needs to stay connected. Therefore, only ants on a leaf of the tree (of non-empty homes) can ever move, and they can only move in one direction. Further, notice that if the smallest group of ants on a leaf cannot move, the solution is impossible (all other leaf groups are larger and can therefore also never be merged into the parent group of the smallest group). We can simply simulate this process.
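
A sketch of this simulation with a priority queue of leaves; it assumes the movement rule from the statement that a group may only move into an adjacent house currently holding at least as many ants, and takes the ant counts as input (names are illustrative):

#include <bits/stdc++.h>
using namespace std;

// Repeatedly take the smallest leaf group of the tree of non-empty houses.
// If its unique remaining neighbour holds fewer ants, that leaf can never move
// (assumed movement rule), so gathering is impossible; otherwise merge it in.
bool canGather(int n, const vector<vector<int>>& adj, vector<long long> cnt) {
    vector<int> deg(n);
    vector<bool> removed(n, false);
    priority_queue<pair<long long, int>, vector<pair<long long, int>>,
                   greater<pair<long long, int>>> leaves;
    for (int v = 0; v < n; ++v) {
        deg[v] = adj[v].size();
        if (deg[v] <= 1) leaves.push({cnt[v], v});
    }
    for (int done = 0; done + 1 < n; ) {
        if (leaves.empty()) return false;            // should not happen
        auto [sz, v] = leaves.top();
        leaves.pop();
        if (removed[v] || sz != cnt[v]) continue;    // stale heap entry
        int parent = -1;
        for (int u : adj[v]) if (!removed[u]) parent = u;
        if (cnt[parent] < cnt[v]) return false;      // smallest leaf cannot move
        cnt[parent] += cnt[v];
        removed[v] = true;
        ++done;
        if (--deg[parent] == 1) leaves.push({cnt[parent], parent});
    }
    return true;
}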

Second Solution

Notice that the centroids of the tree are the only homes where all ants can gather. Fixing each centroid in turn as the root, we can greedily simulate the process with a DFS.

1949D - Funny or Scary?

In graph terms, this problem can be formulated as follows: you are given a complete undirected graph on vertices. Some of the edges of the graph are already colored with two colors. You need to color all remaining edges with two colors in such a way that the graph does not have long monochromatic simple paths.

First of all, what happens when no edges are already colored? It turns out this exact question is well-studied in mathematics, and the definitive answer was given 57 years ago in the following paper: L. Gerencsér, A. Gyárfás, On Ramsey-type problems, Ann. Univ. Sci. Budapest Eötvös Sect. Math. 10 (1967), pp. 167-170. They prove that for any such graph on vertices there always exists a monochromatic path with at least edges, and provide an example of such a graph where the longest monochromatic path has exactly edges.

What does their example look like? The key idea is to use a bipartite graph with unbalanced parts. More specifically, we split all vertices into two groups, the first one of size roughly , and the second one of size roughly . We color the edges between the two groups red, and the edges within each group blue. Now each blue path will have to stay within one of the groups, and therefore will not be longer than . The red edges form a bipartite graph, so each red path will have to alternate groups, which means that it will have to go to the smaller group every second step, and since the smaller group has only vertices, the path also cannot be longer than .
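
For illustration, here is a sketch of this base construction when no edges are pre-colored (the choice of roughly n/3 for the smaller group and the letters 'R'/'B' are illustrative; the full solution below additionally has to respect the pre-colored edges):

#include <bits/stdc++.h>
using namespace std;

// Vertices 0..small-1 form the smaller group, the rest form the bigger one.
// Edges between the groups are colored 'R', edges inside a group 'B'.
// A red path alternates groups, so it has at most 2*small edges; a blue path
// stays inside one group, so it has at most (n - small) - 1 edges.
vector<vector<char>> baseColoring(int n) {
    int small = n / 3;                               // roughly n/3
    vector<vector<char>> color(n, vector<char>(n, '.'));
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            bool crossing = (i < small) != (j < small);
            color[i][j] = color[j][i] = crossing ? 'R' : 'B';
        }
    return color;
}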

Sadly, not everyone will have read this seminal paper from 1967, so how can one come up with this construction during the contest? Since every edge in the complete graph will have one of the two colors, at least one of the colors will be used by a lot of edges. So we need to come up with examples of graphs with lots of edges, but no long paths. Disconnected graphs are a natural way to achieve this, and then we can consider what edges the complement of a disconnected graph must have, and arrive at the above construction. One other thing one can do when stuck is experimentation: try generating random complete graphs colored with two colors until you find some examples with no long monochromatic paths, for example where there is no monochromatic path with edges, and then try to generalize their structure. In fact, the answer to the second sample was generated in exactly this fashion: we repeatedly tried to color each edge randomly and independently until we got a graph with 12 vertices and no monochromatic path with 10 edges. It actually took a few hours to find just one such graph. Studying this answer could be another way to discover the above construction, as the funny edges do form an almost-bipartite structure with nodes 5 and 11 in one part, and the rest in the other part (for example row 5 of the matrix is ).

We still need to deal with edges that are already colored, but we also still have some reserves: our paths do not exceed , while this problem allows paths up to .

How bad is a wrongly colored edge for the above construction? If an edge that is supposed to be red is actually blue, it is very bad: now the two blue groups are connected, and therefore it is easy to create a blue path through all vertices. However, if an edge that is supposed to be blue is actually red, it is not so bad: even when this edge is used in a red path, all "originally" red edges still have to have one of their endpoints in the smaller group, and therefore the number of originally red edges in a red path still does not exceed two times the size of the smaller group, so changing one blue edge to red increases the bound on the red path length by at most 1.

We have the freedom to choose which vertex belongs to which group, so we need to use this freedom to make sure that all edges that are already blue connect vertices within one group, as even one incorrectly blue edge is bad. This means that we need to find the connected components using the preexisting blue edges, and then take some of those components to the bigger group and the rest to the smaller group.

The edges that are already colored red can each increase the bound on the red path length by 1. If all already colored edges are red, this would increase the path length by , which is a lot. However, we also have the freedom to decide which one out of funny and scary corresponds to red. Let us choose the one that has fewer edges, and therefore the number of already colored red edges will not exceed .

Because the length of the red path can be increased by through the already colored edges, we need to make the smaller group even smaller: since the red path must not exceed , we can have at most vertices in the smaller group, which is roughly , and more precisely it is .

If the smaller group has vertices, the bigger group has vertices, which means that blue paths have at most edges. Notice how all pieces of the puzzle come together now: this is good enough for this problem, but only barely, by 1 edge. So we can allow the smaller group to have either or vertices.

The only remaining question is: since we add vertices to groups not one by one, but an entire preexisting blue edge component at a time, can we actually achieve the required size of the smaller group? It turns out that we can, using the following simple approach: sort the components in increasing order of size, and take them in this order to the smaller group until the size of the smaller group is or .

The only way this could fail is if we jump from at most to at least , meaning that the component we have added has at least 3 vertices. But since we add components in increasing order of size, it means that the remaining components also all have at least 3 vertices, which means that components with at least 3 vertices cover at least vertices of the graph. Since we need 2 edges for a component with 3 vertices, and with bigger components we still need at least edges for a component with vertices, we need at least edges to get into this situation, and . So we need more than edges to get into this situation, which is impossible under the constraints of the problem, therefore the greedy approach of taking already blue components in increasing order of size always works.
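
A sketch of this greedy step; the components of the pre-colored blue edges are assumed to be given (e.g. from a DSU or BFS), and the target size of the smaller group, derived from the bound discussed above, is passed in as a parameter (names are illustrative):

#include <bits/stdc++.h>
using namespace std;

// Put whole blue components into the smaller group, smallest components first,
// until its size reaches target (the argument above shows that under the problem
// constraints it then ends up being exactly target or target + 1).
vector<int> buildSmallerGroup(vector<vector<int>> components, int target) {
    sort(components.begin(), components.end(),
         [](const vector<int>& a, const vector<int>& b) { return a.size() < b.size(); });
    vector<int> smaller;
    for (const auto& comp : components) {
        if ((int)smaller.size() >= target) break;
        smaller.insert(smaller.end(), comp.begin(), comp.end());
    }
    return smaller;
}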

The solution is complete now, and it runs in time linear in the input size, in other words . Why does the problem have the very low limit of in this case? The reason is that we also need to implement the checker that will read your output and check if there are no long monochromatic paths. For that we use a relatively standard dynamic programming approach that runs in , sped up using bitmasks to do only operations with integers. It takes a few seconds for so we could not go much higher. We have explored using various heuristic approaches to find the longest monochromatic paths for larger values of , but it turns out that even though those heuristic approaches work quite well on random graphs, they actually cannot find the longest path reliably in the type of graph output by the solution above, namely an unbalanced bipartite graph with a few additional edges.

The low value of allows us to simplify the implementation of the above solution in the following manner: we can skip finding connected components of preexisting blue edges, instead just iterating over all subsets of vertices as candidates for the smaller group, and then checking if the size of the smaller group is appropriate and that there are no preexisting blue edges going from the smaller group to its complement.

1949E - Damage per Second

Once the sugar-coating is removed, the problem boils down to:

Let , and . Find the minimum of .

Sort the values so that . Define . From now on, we will assume that is even, so that is a valid choice (the case odd is absolutely equivalent, just more annoying to write down).

Let us begin with two simple observations.

Observation 1. What are the "most interesting" values of ? If we ignore the ceiling in the definition of , we get whose minimum is achieved by . Unless the values are well-crafted, we expect the minimum of to be achieved for an very close to .

Observation 2. Speeding up the computation of . Let us describe a strategy to compute . Iterate from to until . Say that such condition is satisfied for and fails for (for notational clarity we will omit the dependence of on ). We compute the first part of , that is , naively in . For the quantity takes at most different values. Therefore the second part of , that is , can be computed in by using binary search.

So, we have described a way to compute in where satisfies .
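
As a generic illustration of this trick, here is a sketch that sums ceil(h_i / x) over a sorted array by grouping indices with equal quotient and locating each group with binary search (names are illustrative; the actual solution applies this only to the second part of the sum, as described above):

#include <bits/stdc++.h>
using namespace std;

// Sum of ceil(h[i] / x) over a sorted array h, for x >= 1. Indices with the same
// quotient q are exactly those with h[i] in ((q - 1) * x, q * x], a contiguous
// range found by binary search, so the cost is (#distinct quotients) * log n.
long long ceilSum(const vector<long long>& h, long long x) {
    long long total = 0;
    size_t pos = 0;
    while (pos < h.size()) {
        long long q = (h[pos] + x - 1) / x;                    // quotient of this block
        size_t end = upper_bound(h.begin() + pos, h.end(), q * x) - h.begin();
        total += q * (long long)(end - pos);
        pos = end;
    }
    return total;
}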

Description of the algorithm. These two observations suggest the following algorithm.

  • Maintain the running minimum of , initializing it as .
  • Iterate over all , starting from those closest to and moving further away.
  • For each , if then skip this choice of (as it cannot be a minimizer in view of the first observation).
  • Otherwise compute in as described in observation 2 and update accordingly.

This algorithm produces the correct answer, but is it provably fast enough? Unexpectedly yes! We will see that this algorithm has the remarkable running time .

Running time analysis. Let us begin by making one further remark about the second observation. By definition of , we have So, we are able to compute in time. This is awful if is huge, but can be potentially very good if is controlled and is comparable to . We will see that the opposite happens for observation 1: it is very powerful if is huge and very weak if is small. So there is a tradeoff: for large the first observation is powerful, for small the second one is powerful. This will be enough to provide a good running time for the whole algorithm.

Let us now discuss the first observation. For any , We want to use this inequality to bound the values of that we need to check.

We have and . Therefore, if we don't have to check as it cannot yield the minimum value for . In particular, we care only about the values of that satisfy As we anticipated, when is huge this bound is great, while when is small it is not particularly useful.

Conclusion. Now we have all the tools and we can conclude by considering two cases (which, morally, correspond to large and small).

If , then the interesting values of are comparable to . Thus can be computed in and we shall do such computation times. So the overall complexity is .

On the other hand, if (which implies ) then we use only the second observation to bound the complexity. Let us assume that we care only about values of of magnitude comparable to , the other cases are easy to handle. Then, the computation of requires . To compute for all , we need

1949F - Dating

After formalizing the statement, we get the following:

Given n sets of activities A_1, A_2, ..., A_n, find a pair (i, j) such that the three sets A_i ∩ A_j, A_i \ A_j, and A_j \ A_i are all non-empty.

In essence, this means that for a pair (i, j) to not be good, A_i and A_j must be either disjoint or one must be included in the other. Consequently, if there are no good pairs, then the sets must induce a tree structure! In this structure, vertex i is an ancestor of vertex j if and only if A_j ⊆ A_i.

The solution now can be summarized by the following idea: try to construct the tree (assuming there are no good pairs), and check for errors along the process (this will, in fact, expose a good pair). There are multiple ways of achieving that, both top-down and bottom-up (and they are more or less equivalent). We will describe a solution based on a top-down approach.

Top down approach. Let's start by sorting the activity sets in decreasing order of lengths. For each activity set in this order, we will create a parent-child relationship with the last processed index that contains some activity (or , if there is no such index). The activity based on which we choose is not relevant, for reasons that will become apparent below. After finding such (or deciding that ), we have some cases to consider:

  1. If and , then is a good pair, as cannot possibly include due to the length of being at least as much as .
  2. If some activity is also in some other set where , then is a good pair.
  3. Otherwise, the activity sets still form a tree, therefore no good pairs exist (yet).

Complexity is or , where is the total number of activities, depending on whether one uses ordered sets or hash maps. It is possible to implement this solution in using just arrays, but this is not required.

Alternative approach. One may solve this problem without leveraging the tree structure of the activity sets. It stems from the observation that any good pair must have a common element . Let's fix all such , and sort the sets having as an element in the order of length. Then, similarly to the first solution, one may prove that it is sufficient that a set with a smaller length has an element not present in some set with a larger length. We can solve this by simply keeping an array which keeps track of existing elements present in the sets, and iterating from right to left.

This approach does not quite work yet, as it has quadratic complexity; however, by considering the sets with a large size (i.e., larger than ) separately from the sets with a small size (i.e., smaller than ), one may achieve complexity using this approach.

1949G - Scooter

First and foremost, let's notice that we can ignore professors that are already in a proper building, as well as empty buildings (in essence, all positions such that ).

Moreover, if there are no positions such that then we can simply answer without doing any operations.

From this point on, we will assume that for all , we have .

General idea. Intuitively, we would want to iteratively pick up the professor from some building and drive them to a building where a corresponding class is held. However, we must make sure to not visit the same place twice. Multiple greedy solutions might solve this, but we also have to take into account implementation effort and the number of edge cases to consider. In this sense, we advise the reader to not rush into writing an implementation just yet, and instead try to simplify the solution as much as possible on paper.

An easier subproblem. The issue with the problem currently is that there is much room for making choices and, consequently, much room for making an erroneous decision. Let's consider the easier subproblem where there are exactly as many professors of a given class type as there are classes of that type. Even though one might say that this makes the problem tighter, it actually paradoxically makes it easier! This is mainly because, as tight as it is, there is not much room for bad decisions.

A key observation is that this subproblem (and, as it turns out, the original problem as well) is impossible if there are no buildings where no class is being held, as there is simply no way of returning to the starting position to drop off the corresponding professor (note, again, that we are ignoring positions with ).

Therefore, a greedy strategy that works for this subproblem is simply repeatedly choosing a professor from a building where no class is being held and driving them to one of the buildings where their class is being held. If one prioritizes buildings where there are professors before those where there are no professors, the key condition never becomes unsatisfied, thus the problem is solved.

Solving the original problem. Going back, let's solve the original problem. We can do that by simply reducing it to the easier version, by removing "excess" professors from some of the buildings. At the same time, note that we have to make sure the key observation above remains valid throughout the removal process. One easy and correct way of doing that is, yet again, the greedy approach of simply prioritizing removing professors from buildings where there are classes over buildings with no classes.

Final complexity of the solution is .

1949H - Division Avoidance

Looking at the solution for the sample, the most difficulty seems to come from the condition in bold: A division of a cell can only happen if the cells and are not yet part of the organism. It forces us to clean up space, potentially recursively, before a division that we want to happen can actually happen.

What happens if we remove this condition, instead allowing multiple copies of a cell with the same coordinates in the organism? Then of course it is possible to avoid any finite set of forbidden cells, so this version of the problem is not very useful. However, consider the following modification instead: we allow multiple copies of a cell with the same coordinates temporarily, but the final state of the organism must have at most one copy of any cell, and of course zero copies of the forbidden cells. It turns out that this modification is equivalent to the original problem!

To see why, consider a sequence of operations that starts from and ends in a valid state as described above, but has multiple copies of some cells in its intermediate states. We claim that it is possible to reorder its sequence of operations in such a way that all operations stay valid, but we never have multiple copies of the same cell. Notice that reordering the operations does not change the end state, because each operation can be seen as subtracting 1 from the number of occurrences of the cell and adding 1 to the numbers of occurrences of the cells and , and such subtractions and additions clearly commute.

To avoid getting multiple copies of the same cell, we can repeatedly do the following to produce a reordering: from all operations that are yet to be done in the reference sequence of operations, we choose the operation such that is part of the organism and is maximized. Maximizing guarantees that we are not going to have two copies of the same cell: when we decide to divide , we cannot have in the organism, as otherwise would also be one of the operations yet to be done (since the final state has at most one cell at ), so we should have chosen as the operation to do over since , which is a contradiction.

Now we know that we can focus on what operations to do, and no longer care about the order in which we do them. It means that we can choose the order that suits us best, and as long as we do only necessary operations, we will always arrive at exactly the same result. It is natural to go by diagonals: first consider all operations with , then all operations with , then all operations with , and so on. The operations with only create new cells with , so those operations are independent within each value of , and we never have to return to smaller values of .

This leads to the following solution: we go in increasing order of , and before processing a given value of we know how many copies of each cell with we have, as an array . Now for each forbidden cell with we have to apply division to it times, which means that we have to add to and to . Similarly, for each non-forbidden cell we have to apply division to it times. We continue while the array has at least one non-zero value. If this process terminates, the answer is . If it runs infinitely, the answer is .

For example, in the first sample case we get the following arrays:

However, this solution is still not practical: how do we check if the process runs infinitely? Looking at a few examples can help spot a pattern: if we ever get a 3, then the process never stops. Having identified this pattern, it is relatively easy to prove. Since we always add the same number to two adjacent elements of , getting a 3 in it means that one of its adjacent numbers will be at least 2. And then, even if there are no forbidden cells at all anymore, becomes on the next diagonal, so we have a 3 adjacent to a 2 again, and this will repeat forever.

What if we never get a 3? Then it turns out that as soon as we pass all forbidden cells, we will start going down towards all zeros and eventually get there. With no forbidden cells, a 1 does not propagate to the next diagonal at all, and a 2 propagates two adjacent +1s to the next diagonal. So if we have a sequence that looks like with 2s, on the next diagonal we will get with 2s, so after diagonals the 2s will disappear.

This argument also demonstrates that the process will converge after steps, since every forbidden cell will create a constant amount of additional 2s, and without forbidden cells the number of 2s decreases by at least 1 per step. Therefore we finally have a complete solution, albeit one that runs in which is too slow for .
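
A sketch of this quadratic simulation; it assumes the organism starts as a single cell at the origin (the setting of the pebbling paper cited at the end) and uses the observation that a 3 never disappears to detect the infinite case (names are illustrative):

#include <bits/stdc++.h>
using namespace std;

// forbidden[d] holds the x-coordinates of the forbidden cells on diagonal x + y = d.
// am[x] is the number of copies of cell (x, d - x) on the current diagonal d.
// Returns true if the forced process terminates, i.e. the forbidden cells can be avoided.
bool avoidable(const map<long long, set<long long>>& forbidden) {
    map<long long, long long> am = {{0, 1}};         // the organism starts as one cell at (0, 0)
    for (long long d = 0; !am.empty(); ++d) {
        map<long long, long long> nam;               // copies on diagonal d + 1
        auto it = forbidden.find(d);
        for (auto [x, copies] : am) {
            if (copies >= 3) return false;           // a 3 never goes away: infinite process
            bool bad = (it != forbidden.end()) && it->second.count(x);
            long long divisions = bad ? copies : copies - 1;  // clear forbidden cells fully, keep one copy otherwise
            if (divisions > 0) {                     // each division feeds cells (x, y + 1) and (x + 1, y)
                nam[x] += divisions;
                nam[x + 1] += divisions;
            }
        }
        am = move(nam);
    }
    return true;
}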

In order to make it faster, we have to take a closer look at the changes of the array . Suppose the array looks like for a certain diagonal. How is it going to look for the next diagonal depending on the locations of the forbidden cells on this diagonal?

  • If a cell with a 0 is forbidden, this changes nothing.
  • If a cell with a 2 is forbidden, then we are going to get a 3 on the next diagonal, and the answer will be .
  • So the only interesting question is whether the two cells with 1 are forbidden:
  • If none of the 1s are forbidden, then we get the same structure on the next diagonal, but with the left 1 moving one position to the right (and therefore the number of 2s decreases by one).
  • If only the left 1 is forbidden, then we still get the same pattern on the next diagonal, but the left 1 stays in the same position (and therefore the number of 2s is unchanged).
  • If only the right 1 is forbidden, then it moves one position to the right on the next diagonal (and therefore the number of 2s is also unchanged).
  • Finally, if both 1s are forbidden, then we still get the same pattern on the next diagonal, but with the number of 2s actually increasing by one.

This suggests the following idea: when there are no 3s or higher on a diagonal, and when not all numbers are 0s, the array for it always has the form : exactly two 1s, zero or more 2s in between the 1s, and 0s on the outside. This is not entirely correct: in the sample discussed above there is also the case of : exactly two 2s surrounded by all 0s. This pattern always appears after when the 2 is forbidden, and it is actually the only situation where a 2 can be forbidden without getting a 3. It turns out that this is the only special case, and we can easily prove by induction that we always have one of the patterns mentioned above.

Therefore as we go in increasing order of , for the diagonal we can simply store two integers and a boolean: the position of the left 1, the position of the right 1, and whether we have the special case. We need to find all forbidden cells on the given diagonal, which we can achieve by sorting the forbidden cells by the sum of the coordinates and keeping a pointer into the sorted array as we increase . After that we can determine the state of the next diagonal in , so the total running time of this solution is if we use normal sorting, or if we use counting/radix sorting (we can use counting sorting since we only care about the first diagonals).

The time limit in this problem was pretty tight, so even the solution had to be implemented with care. The reason for this was that the solution mentioned above lends itself perfectly to optimizations. The only quadratic part is a couple of small inner loops looking roughly like this:

// combine contributions from columns i - 1 and i of the current diagonal (am)
// into column i of the next diagonal (nam)
for (int i = left; i <= right; ++i) {
    nam[i] = am[i - 1] + am[i];
}

The C++ compilers are very good at vectorizing such code using SIMD instructions (which you can enable using some pragmas). But we can also help the compiler because we know that the values in our arrays are always only 0, 1 or 2, so we can actually represent our array as two bitsets. We could not get the compiler to efficiently vectorize std::bitset, but it turns out that gcc supports a relatively new syntax

typedef uint64 fast_vec __attribute__ (( vector_size(32) ));  // uint64 = an unsigned 64-bit type, e.g. uint64_t from <cstdint>

This defines a new type that works like an array of 4 uint64's, but supports element-wise operations including bitwise operations that automatically use vectorization. Using this trick made the solution mentioned above run in under 7 seconds for , therefore to make sure this solution does not pass we had to set the tight time limit.

One other potentially relevant remark for implementing the solution: we have mentioned above that we only care about the first diagonals, but what is the constant hidden in the -notation? Initially one might think that a case with the following forbidden cells is the worst: . The final state of the organism in this case is a big rectangle (so it also catches solutions that assume we only do a linear number of divisions), and we have nonzero diagonals in this case.

However, the exception from the solution above allows us to construct a testcase that is twice as nasty: . In this case the array constantly changes from to and back, and we get nonzero diagonals. So if your solution hardcodes the number of diagonals to process, you need to go as high as .

For more context about the origins of this problem and related topics you can check out the following paper: F. Chung, R. Graham, J. Morrison, A. Odlyzko, Pebbling a Chessboard, Amer. Math. Monthly 102 (1995), pp. 113–123.

1949I - Disks

To start, we build the graph that describes the tangency relation between the disks: each disk is represented by a node in the graph, and an edge connects two nodes if and only if the corresponding disks are tangent. Constructing this graph can be done in time by checking all pairs of disks. In fact, it can be done in faster than quadratic time, but this is not necessary to solve the problem with the given constraints.

Suppose that you change the radii of the disks by real numbers . In order to maintain tangency, we must have whenever the -th and -th disks are initially tangent. In fact, this condition is also sufficient, provided that are small enough in absolute value (to avoid overlaps and to keep all the radii positive).

Fix a connected component of the tangency graph, and fix any node in this component. The value determines the values of for all in the same connected component, and each has to be equal to either or . If the connected component is not bipartite, it contains a cycle of odd length, and such a cycle shows that and so . On the other hand, if the connected component is bipartite, then it is possible to assign any real value to . Color the nodes of such a bipartite connected component black and white, so that every edge connects a white node and a black node; use white to color the node . The sum of the radii in this connected component changes by , where is the difference between the number of white nodes and the number of black nodes. In order to strictly decrease the sum of the radii, we need a different number of black and white nodes (); if this happens, we can pick to be any sufficiently small real number such that .

To summarize, the answer is YES if and only if there is at least one connected component which is bipartite and has a different number of white and black nodes. This condition can be checked in time .
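
A minimal sketch of this check, assuming integer inputs and external tangency (the distance between two centers equals the sum of the radii); the signature is illustrative:

#include <bits/stdc++.h>
using namespace std;

// YES iff some connected component of the tangency graph is bipartite and has
// unequally many vertices in its two color classes (its radii can then be
// perturbed by +eps / -eps according to the coloring so that the total strictly decreases).
bool canDecrease(const vector<long long>& x, const vector<long long>& y,
                 const vector<long long>& r) {
    int n = x.size();
    auto tangent = [&](int i, int j) {
        long long dx = x[i] - x[j], dy = y[i] - y[j], s = r[i] + r[j];
        return dx * dx + dy * dy == s * s;           // externally tangent disks
    };
    vector<int> color(n, -1);
    for (int start = 0; start < n; ++start) {
        if (color[start] != -1) continue;
        vector<int> comp = {start};                  // BFS over one component
        color[start] = 0;
        long long cnt[2] = {1, 0};
        bool bipartite = true;
        for (size_t head = 0; head < comp.size(); ++head) {
            int v = comp[head];
            for (int u = 0; u < n; ++u) {
                if (u == v || !tangent(v, u)) continue;
                if (color[u] == -1) {
                    color[u] = color[v] ^ 1;
                    ++cnt[color[u]];
                    comp.push_back(u);
                } else if (color[u] == color[v]) {
                    bipartite = false;
                }
            }
        }
        if (bipartite && cnt[0] != cnt[1]) return true;
    }
    return false;
}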

1949J - Amanda the Amoeba

To solve this problem, we first find a path connecting two pixels and such that

  • Pixel is a part of the initial position.
  • Pixel is a part of the final position.
  • All inner pixels of (let us denote their sequence as ) are free pixels, neither inside the initial position nor the final one.

Note that it is generally allowed to select two neighbouring pixels or even having , in which case will be empty. For the solution to work, it is not important how such a path is found; one possible way is to find the shortest path between the initial and final position.

If no such path exists, the problem has no solution, as it means the initial and final position are completely separated by blocked cells. Otherwise, the final position is reachable and the problem has a solution, which will be proven by describing its construction.

As the next step, we will construct a tree rooted in pixel and containing all pixels of the initial position (and no other pixels), and also a tree rooted in pixel and containing all pixels of the final position (and only them). Again, it is not important what the trees look like, as long as they contain all appropriate pixels. It is easy to construct them using either DFS or BFS.

The solution then consecutively removes pixels of bottom-up, starting from leaves and proceeding to parents when they become leaves. Such removed pixels are first added to fill the sequence , and then to tree top-down, starting with and going to lower nodes after their parents were added. In the end, pixels of need to be removed (and added to the bottom of ). After that, the amoeba will occupy the exact final position.

There is one more important issue to cope with. The trees and are not necessarily disjoint, which means that some (or even many) pixels may appear in both. While removing pixels from and adding them to , the following situations may occur:

  • Some pixel should be removed from but we know it is also part of and has not been added yet. In this case, the algorithm proceeds just normally, removing the pixel. It will later be added again to the body, which adds to the total number of moves, but there is no requirement to minimize that number.
  • Some pixel should be added to but it is still part of the body, as it was not yet removed from . In such a case, the pixel is skipped (and stays a part of the body to keep it connected) and it has to be marked as such. When the algorithm later proceeds to that pixel as part of , it must not be removed anymore, and the algorithm proceeds to other pixels in .

These steps keep the amoeba connected at all times, and in the end, all pixels of are occupied, which means reaching the final position. Although the algorithm may remove some pixels and then add the same pixels again, this happens at most once for each of the pixels. In fact, each pixel is removed at most once and also added at most once, which means the maximal number of steps produced by the algorithm is limited by the total number of pixels in the picture, , which is way below the required limit.

1949K - Make Triangle

We will denote these groups as correspondingly.

Wlog , and . Let . We just want the sum in each group to be smaller than . Let's note some obvious conditions for construction to be possible:

  • The largest group is not too large: ;
  • The largest item is not too large: .

It turns out that these conditions are sufficient! We will prove an even stronger statement.

Lemma. Assume we already placed some numbers, which are than any remaining numbers. Assume current sum in group is , the number of empty spots in group is , and there are numbers remaining, . Then, it's possible to distribute the remaining numbers if and only if the following conditions hold:

  • No group is too large: for any group ,
  • The largest item is not too large: there exists a group with , such that

Proof. These conditions are obviously necessary; let's show that they are sufficient. Forget about for now. Wlog we can put the largest element in group , so . Put the remaining numbers in the other two groups arbitrarily. If both have sum less than , we are good! Otherwise, wlog, the sum in group is at least .

Now, start swapping free elements in and one by one. If at some point sums in both and were smaller than , we are good. Note that it's not possible that at one moment, the sum in is , and at next the sum in is : the sum in currently is at most , and it will decrease by at most and increase by at least after the swap, so it will still be at most . So, the only possibility is that the sum in is still .

Now, note that if we swap free elements in and , the sum in is still . Also, note that if , that is, there are no free elements in remaining, then we swap free elements in and , and we will either get a valid partition, or the sum in will still be , as it's not possible for sum in to get after one swap, for the same reason.

So, here is the idea: we will start swapping elements, so that the smallest end up in . We will get a contradiction, since we know that .

This lemma also gives us a way to find a construction: add elements from largest to smallest. When deciding where to put an element, put it in any group, such that the conditions will hold after we put it there. We can check these conditions in after we sorted s and precomputed their prefix sums.

Runtime .

