
JOI Spring Training 2024 Online Contest

source link: https://codeforces.com/blog/entry/127315

JOI Spring Training 2024 Online Contest

3 days ago, # |

Contest Day4 is Over

The contest Day4 (final day) is finished. Thank you for your participation, and let's discuss the problems.

The overall ranking is as follows.

Place Username Day1 Day2 Day3 Day4 Total
1st tricyzhkx 300 215 271 300 1086
2nd TheContingency 296 250 200 300 1046
3rd PersistentLife 296 176 211 300 983
4th SuddeNDeath 291 182 197 300 970
5th dXqwq 239 210 211 300 960
6th JoesSR 254 205 211 272 942
7th zhoukangyang 200 182 257 300 939
8th ChenyaoZhong 268 167 200 300 935
9th qiuzx 223 210 201 300 934
9th smwtcat 292 99 271 272 934
9th zh0uhuanyi 293 130 211 300 934
12th sjc061031 159 220 211 300 900
13th tute7627 200 210 271 204 885
14th Little09 200 136 211 300 847
15th Alan_Zhao 208 230 130 272 840
16th AFewSuns 242 117 131 300 790
17th map 200 217 100 272 789
18th goujingyu 186 136 179 277 778
19th zhouhuanyi 243 130 100 300 773
20th abcsadeveryhour 185 116 163 300 764
20th koosaga 295 168 136 165 764

3 days ago, # |

Will the problems be available for upsolving?

If so, on which website?

3 days ago, # |

I've decided to share my solutions to the problems I've managed to fully solve. If you can add solutions to the rest of the problems, provide a better solution, or have a question, please feel free to do so.

I'll begin with a little transformation of the statement: instead of adding or , I will subtract from the numbers and aim to achieve subarrays of zeroes. My first observation was that one's goal should be to use operation to sort the numbers in the queried range and then continue with operation , as you can achieve any sorted sequence with it. We will answer the queries offline. If we want to answer the queries ending at some position , we should find the optimal usage of operations to sort the sequence from to ; then, if is the number of usages of operation on , the answer for the query from to will be . This is true because adding a number to the left of an interval can only add operation -s to it.

Imagine we've found for some and now we want to see how the -s change after appending . Let's denote the changed heights by . You can see that after sorting, the sequence is split into blocks of "dominoes": subarrays for which, if you decrease by , then will also decrease by , as if you push the last domino, every other one falls as well. The blocks of dominoes are separated by "barriers" between the end and the beginning of such blocks, which, if broken, result in the merge of the two ranges. Each of these barriers takes some number of -s before breaking.

Now imagine the following two scenarios: and . If , then it's easy: and we only possibly add a barrier between and . The other case is much more interesting, as you then must decrease the height of . Let be the required number of times you need to perform operation on . Then every element in the domino-range of will decrease by . When you meet the barrier between the current and previous range, you should check whether you should remove the barrier and decrease , or stop the procedure. Let's say that you need to decrease by in order to merge the ranges. If , then every element in the previous range should be decreased by units. You repeat the procedure until you run out of "decreases" or of barriers.

If implemented with the correct data structure for range updates, this algorithm will be fast, as with every addition of an element you add at most barrier, remove some number of barriers, then stop at barrier. This results in an amortized traversal of barriers. For the range updates, you can keep a segment tree with lazy propagation. The answer is, as mentioned, the sum of -s, and the check whether an interval is valid is to see if . The final complexity is .
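The range-update machinery described above can be sketched as a segment tree with lazy propagation supporting range add and range minimum. The exact update pattern for barriers and the validity check depend on the problem's specifics stripped from this post, so this is only a generic sketch:

```python
class LazySegTree:
    """Segment tree with lazy propagation: range add, range min query."""

    def __init__(self, data):
        self.n = len(data)
        self.min_ = [0] * (4 * self.n)
        self.lazy = [0] * (4 * self.n)
        self._build(1, 0, self.n - 1, data)

    def _build(self, node, lo, hi, data):
        if lo == hi:
            self.min_[node] = data[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, data)
        self._build(2 * node + 1, mid + 1, hi, data)
        self.min_[node] = min(self.min_[2 * node], self.min_[2 * node + 1])

    def _push(self, node):
        # Propagate the pending addition to both children.
        for child in (2 * node, 2 * node + 1):
            self.min_[child] += self.lazy[node]
            self.lazy[child] += self.lazy[node]
        self.lazy[node] = 0

    def add(self, l, r, v, node=1, lo=0, hi=None):
        """Add v to every element in [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.min_[node] += v
            self.lazy[node] += v
            return
        self._push(node)
        mid = (lo + hi) // 2
        self.add(l, r, v, 2 * node, lo, mid)
        self.add(l, r, v, 2 * node + 1, mid + 1, hi)
        self.min_[node] = min(self.min_[2 * node], self.min_[2 * node + 1])

    def query_min(self, l, r, node=1, lo=0, hi=None):
        """Return min of elements in [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return float('inf')
        if l <= lo and hi <= r:
            return self.min_[node]
        self._push(node)
        mid = (lo + hi) // 2
        return min(self.query_min(l, r, 2 * node, lo, mid),
                   self.query_min(l, r, 2 * node + 1, mid + 1, hi))
```

Both operations run in O(log n), so together with the amortized barrier traversal the whole pass stays near-linear-logarithmic.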

As it gets messy very quickly, I will only mention the key ideas of my solution. The main idea is to use a centroid decomposition of the tree and handle the updates online. By keeping a centroid decomposition, our goal is to calculate the number of triplets in each centroid's subtree whose path passes through the current centroid. Let's denote -s centroid subtree, where is the root, by . The "subtrees" in are the subtrees of the children of in . The paths are split into the following categories:

Triplets, including the centroid

• If , their number is the count of paths of the type in each subtree.

• If , their number is the ways of choosing and from separate subtrees in .

• If , their number is the count of paths of the type in each subtree.

Paths, not including the centroid

• , equal to the number of ways of selecting paths of the type and from different subtrees.

• , equal to the number of ways of selecting paths of the type and from different subtrees.

The whole solution is based on maintaining the mentioned pairwise products in a sane way. You should be able to track how an update changes the count of and paths for every subtree. For every node, you keep, for each of its parent centroids, which subtree it is situated in, and keep three Fenwicks on the dfs order for each , one for , , and updates, which helps calculate the change in the count of triplets. Instead of keeping separate Fenwicks, I glued them together for better memory and time management. With a very careful implementation of lines, I managed to get a solution with a big constant, and it passed in seconds out of the provided .
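For reference, here is a minimal sketch of building the centroid decomposition itself; the Fenwick bookkeeping and subtree labels described above are omitted, and `build_centroid_decomposition` is a hypothetical helper name, not the author's code:

```python
from collections import defaultdict


def build_centroid_decomposition(n, edges):
    """Return parent[] in the centroid tree (-1 for the root centroid)."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    removed = [False] * n
    size = [0] * n
    parent = [-1] * n

    def calc_size(root):
        # Iterative dfs: collect preorder, then fill sizes bottom-up.
        order, st = [], [(root, -1)]
        while st:
            v, p = st.pop()
            order.append((v, p))
            for to in adj[v]:
                if to != p and not removed[to]:
                    st.append((to, v))
        for v, p in reversed(order):
            size[v] = 1 + sum(size[to] for to in adj[v]
                              if to != p and not removed[to])

    def find_centroid(root, tree_size):
        v, p = root, -1
        while True:
            nxt = None
            for to in adj[v]:
                if to != p and not removed[to] and size[to] * 2 > tree_size:
                    nxt = to
                    break
            if nxt is None:
                return v
            # Re-root the sizes as we walk toward the heavy child.
            size[v] = tree_size - size[nxt]
            size[nxt] = tree_size
            p, v = v, nxt

    stack = [(0, -1)]
    while stack:
        root, cpar = stack.pop()
        calc_size(root)
        c = find_centroid(root, size[root])
        parent[c] = cpar
        removed[c] = True  # split off the components around c
        for to in adj[c]:
            if not removed[to]:
                stack.append((to, c))
    return parent
```

Each node appears in O(log n) centroid subtrees, which is what keeps the per-update work polylogarithmic.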

The first thing I did was to solve the third subtask. It only requires being able to determine whether you can reach a certain position or not. It could be solved by finding . Then you sort the remaining intervals and keep a vector with the reachable positions. Imagine you have determined the reachable positions of the first intervals and now you seek to find the available positions of the -th interval, which is . If you can reach the -th position , then you can reach every position in the range . Now we need to find that position . As the intervals are non-intersecting, we can't reach the interval with a single step, so we need to find the farthest reachable position below, from which we can jump to the current interval. This can be done with a binary search, because what you are really searching for is the first interval which intersects with . To answer the queries you can do a binary search on the intervals of available positions.

After solving the third subtask, we can notice that the optimal path among all paths to a certain is one using the least or the most -jumps possible. Finding the least number of -jumps to a certain position is easy: the count is equal to the minimum -jumps to reach the interval as a whole, so if the number of jumps for every interval is , and the interval which first intersects with has index , then . The more interesting part is to count the path with the most jumps. I tried the following greedy strategy: if you want to find the path with the most jumps which ends at position , check whether position is reachable, and if it is, jump to it. If it isn't, go to . It passed the tests for the first subtask, so I decided to optimize it, hoping for the subtask, by doing the following:

1) Have a memoization, so I don't end up calculating the same answer twice.

2) Instead of always checking whether is reachable, find the interval you're in and do all the jumps possible at once, so you stay in the interval.

3) If is outside the current interval, check if it's in another. If it is, directly jump to it.

4) Otherwise find the first interval, before . Directly jump to its right border.

By doing the described optimizations, the solution passed for points. I haven't proved it yet, so if you could provide a proof or an antitest, it would be much appreciated. Still, I believe that the tests are strong enough to not allow such a solution to pass if it has bad complexity.
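The query-answering step above boils down to a membership test on sorted disjoint intervals. A small sketch of that subroutine (a hypothetical helper; how the intervals themselves are built depends on the jump parameters stripped from this post):

```python
import bisect


def make_reachability_checker(intervals):
    """intervals: sorted, pairwise disjoint (l, r) pairs of reachable positions.

    Returns a function answering "is x reachable?" in O(log n) via binary
    search on the left endpoints.
    """
    lefts = [l for l, _ in intervals]

    def reachable(x):
        # Last interval whose left endpoint is <= x, if any.
        i = bisect.bisect_right(lefts, x) - 1
        return i >= 0 and intervals[i][0] <= x <= intervals[i][1]

    return reachable
```

The same `bisect` pattern also finds "the first interval intersecting a given range", which is what the construction phase needs.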

We can follow this greedy strategy: for every query, catch the flight which arrives at the next destination the earliest. You can choose which flight you'll be taking first and then stick to the greedy. By doing this, you will score points.

Then, instead of doing this procedure over and over again, you can precalculate for every flight the transferring one from the next position, where and follow some kind of numeration. After doing this, you can build a tree where the edges are between and . Every edge has fixed costs: one when transferring and one when ending your journey with the current flight. Every path between can be broken down into flights of the first type and one of the second type. Now every query is depicted as follows: for a set of nodes (the flights coming from ), find the path with edge count which has the lowest length. For the subtasks with , this can be implemented directly with binary lifting for a complexity of , but there is a better approach. Instead of using binary lifting, you can notice that the set of the mentioned -flights is all the nodes at a certain depth in the tree, and all the destination nodes (-flights) are at another depth . You answer the aforementioned queries offline by doing a dfs traversal of the tree, writing down the ancestors at all depths; when the dfs reaches a node, you can access any ancestor you want in complexity. This removes the from the complexity, giving you , which still gives you points.
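The log-removing trick from the last paragraph can be sketched on its own: during the dfs, the current root-to-node path is kept in a stack indexed by depth, so the ancestor at any depth is a single array lookup instead of a binary-lifting walk. This is a hypothetical standalone function (it assumes a rooted tree given by children lists, with the root at depth 0):

```python
def ancestors_at_depth(children, root, queries):
    """queries: list of (node, depth); returns the ancestor of each node
    at that depth (None if the node is shallower than the asked depth).

    Offline dfs: path[d] is always the ancestor of the current node at
    depth d, so each query attached to a node is answered in O(1).
    """
    by_node = {}
    for qi, (v, d) in enumerate(queries):
        by_node.setdefault(v, []).append((qi, d))
    ans = [None] * len(queries)
    path = []                      # path[d] = current ancestor at depth d
    stack = [(root, False)]        # (node, leaving?) for iterative dfs
    while stack:
        v, leaving = stack.pop()
        if leaving:
            path.pop()
            continue
        path.append(v)
        for qi, d in by_node.get(v, []):
            ans[qi] = path[d] if d < len(path) else None
        stack.append((v, True))    # pop v from the path on the way back
        for c in children.get(v, []):
            stack.append((c, False))
    return ans
```

Batching all queries of one node this way is exactly what turns the per-query cost into O(1) after the single dfs.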

Now the only thing that remains is what to do with the big -s. Let's split the -s into two types: big and small. The small -s will be those with and the big ones those with . We can answer the queries with the small -s with a complexity of , which is fast enough. What remains is to find a way to answer all the queries for the big -s. One can notice that there will be at most big -s, and if we answer the queries for a big with a complexity of , our solution is finished. For this, we can do a dfs similar to the one mentioned before, where we keep the best children at the depth of -s flights for every node and chkmin the answer for its . We now have a solution with complexity , which passes for points.

This analysis of mine isn't meant to be professional and I'm sure I've made mistakes in it. I hope it is still useful for solving the tasks. I very much hope to receive solutions for the rest of the tasks as comments. Also, if you have any questions, again, feel free to ask :). And sorry if my English isn't the best, but I think it's understandable enough :).

  • 8 hours ago, # ^ |

    I haven't proved it yet, so if you could provide proof/antitest to it it wouldn't be left unappreciated.

    Really depends on what "Have a memoization" means, but if I am not mistaken, it seems that what you are effectively doing is precalculating the maximum jumps for each right end, and seeing which of those the current query rounds down to? Or, on second thought, I feel that a test case where the available segments are something akin to might fail your code (because memoization might fail on cases where you successively apply step 3). Could you link your code somewhere so that maybe I can figure out a counter-testcase :)?

    Maybe a more «sure» solution which looks to be the same?
