
Add #[track_caller] to track callers when initializing poisoned Once (by reez12g)

source link: https://github.com/rust-lang/rust/pull/94236

Conversation

reez12g (Contributor) commented 18 days ago

This PR is for issue #87707.

With this fix, we expect to be able to track the caller when a poisoned Once is initialized.

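The mechanism being applied here can be sketched as follows (hypothetical function name; this illustrates what `#[track_caller]` does in general, it is not the PR's actual diff): without the attribute, `Location::caller()` resolves to a line inside the annotated function's own source; with it, the reported location is the user's call site, which is what a poison panic should point at.

```rust
use std::panic::Location;

// Illustration only (not the PR's diff): `#[track_caller]` makes
// `Location::caller()` report the *call site* of this function
// rather than a location inside its body.
#[track_caller]
fn record_caller() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    // The reported location is this line in "user code".
    let loc = record_caller();
    println!("called from {}:{}", loc.file(), loc.line());
}
```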

rust-highfive (Collaborator) commented 18 days ago

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @yaahc (or someone else) soon.

Please see the contribution instructions for more information.

yaahc (Member) commented 17 days ago

Hey, thank you for taking on this issue :D. Let's start with a perf test.

@bors try
@rust-timer queue

rust-timer (Collaborator) commented 17 days ago

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

bors (Contributor) commented 17 days ago

⌛ Trying commit f457601 with merge d383a74...

bors (Contributor) commented 17 days ago

☀️ Try build successful - checks-actions
Build commit: d383a74 (d383a74d639169718e7de27d4754aebb019b5d4a)


rust-timer (Collaborator) commented 17 days ago

Finished benchmarking commit (d383a74): comparison url.

Summary: This benchmark run shows 4 relevant regressions 😿 to instruction counts.

  • Average relevant regression: 0.8%
  • Largest regression in instruction counts: 0.8% on incr-unchanged builds of externs debug

If you disagree with this performance assessment, please file an issue in rust-lang/rustc-perf.

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR led to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: +S-waiting-on-review -S-waiting-on-perf +perf-regression

reez12g (Contributor, Author) commented 17 days ago

@yaahc
Thanks for running the perf test!
Looking at the results, it looks like there is a regression, so I would like to discuss how to deal with it:

  1. ignore the regression
  2. resolve the regression (but I haven't figured out how to do that yet)
  3. close this issue (since the methods given #[track_caller] are cold ones)

Which is better?

yaahc (Member) commented 16 days ago

> Which is better?

I wasn't sure at first, since I'm not super familiar with the rustc-perf test suite, so I asked some other team members on Zulip, and it seems that the regressions on the externs crate stress test are small enough that we can ignore them.

For next steps, can you add a UI test to make sure we've added #[track_caller] to all the functions we need in order to get the correct panic location in user code? I suspect that in the process of setting up the UI test we will discover that Mark was right and that the location where the Once was accessed won't be nearly as useful as the location where it was poisoned. Though let's see what kind of error messages we get first before we consider more significant changes to track poisoning locations.

nnethercote (Contributor) commented 16 days ago

@rustbot label: +perf-regression-triaged

Yeah, externs is a stress test and a tiny regression is not an issue.

reez12g (Contributor, Author) commented 16 days ago

OK, thanks.
I'll add a UI test.

Mark-Simulacrum (Member) commented 16 days ago

It's worth noting that perf is probably a pretty bad way to measure the effects of a patch like this. The compiler barely uses std::sync::Once at all, so this is really at best measuring the extra codegen time, if any of the benchmarks in the compiler suite use it. Constructing proper benchmarks for something like Once is probably pretty difficult, so in their absence I would likely not block on perf concerns.

We definitely do want a test, though: I suspect the current impl is actually broken; it looks like the track_caller is not put on the call_once method, so this is just moving the caller tracking one level of indirection up, but still pointing to a spot in std.

reez12g (Contributor, Author) commented 15 days ago

> it looks like the track_caller is not put on the call_once method

Oh, I'm sure you're right.
I'll amend it, thanks.

yaahc (Member) commented 14 days ago

> > it looks like the track_caller is not put on the call_once method
>
> Oh, I'm sure you're right. I'll amend it, thanks.

Just checked your force-push diff; I think you'll want to keep the #[track_caller] annotation on both the inner and outer functions to get the location to propagate all the way in from user code.
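The point about annotating both layers can be sketched like this (hypothetical function names, not the actual `Once` internals): the caller's location only propagates through a chain of frames that all carry `#[track_caller]`; a single unannotated frame in the middle resets the reported location to that frame.

```rust
use std::panic::Location;

// Hypothetical sketch, not the real `Once` internals: the user's
// location flows through `outer` into `inner` only because *both*
// functions carry `#[track_caller]`.
#[track_caller]
fn outer() -> &'static Location<'static> {
    inner()
}

#[track_caller]
fn inner() -> &'static Location<'static> {
    Location::caller()
}

// No attribute here, so the chain breaks: `inner` reports the line
// of the `inner()` call below, not the user's call site.
fn unannotated() -> &'static Location<'static> {
    inner()
}

fn main() {
    let propagated = outer();   // reports this line in "user code"
    let broken = unannotated(); // reports a line inside `unannotated`
    assert_ne!(propagated.line(), broken.line());
}
```

This is the same reason the PR needs the attribute on the public `call_once` as well as on the internal function it delegates to.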

reez12g (Contributor, Author) commented 14 days ago

Thanks for checking.
You're right, I fixed it.

yaahc (Member) commented 11 days ago

> Thanks for checking. You're right, I fixed it.

Awesome, looks perfect. Now we just need to add that UI test and then we should be good to go.

reez12g (Contributor, Author) commented 8 days ago

I have started writing tests, but am a little stuck.

src/test/ui/issues/issue-87707.rs

```rust
// test for #87707
// edition:2018

use std::sync::Once;
use std::panic;

fn main() {
    let o = Once::new();
    let _ = panic::catch_unwind(|| {
        o.call_once(|| panic!("Here Once instance is poisoned."));
    });
    o.call_once(|| {});
}
```

That is, I am having trouble getting the standard error output I expect.
It produces output on the Playground, but when I run the test locally it outputs nothing, as shown below.

```text
$ ./x.py test src/test/ui/issues/issue-87707.rs --bless --stage 1

~~~snip~~~

running 1 test
F
failures:

---- [ui] ui/issues/issue-87707.rs stdout ----

error: ui test compiled successfully!
status: exit status: 0
command: "/Users/reez12g/development/rust/build/aarch64-apple-darwin/stage1/bin/rustc" "/Users/reez12g/development/rust/src/test/ui/issues/issue-87707.rs" "-Zthreads=1" "--target=aarch64-apple-darwin" "--error-format" "json" "--json" "future-incompat" "-Ccodegen-units=1" "-Zui-testing" "-Zdeduplicate-diagnostics=no" "--emit" "metadata" "-C" "prefer-dynamic" "--out-dir" "/Users/reez12g/development/rust/build/aarch64-apple-darwin/test/ui/issues/issue-87707" "-A" "unused" "-Crpath" "-O" "-Cdebuginfo=0" "-Lnative=/Users/reez12g/development/rust/build/aarch64-apple-darwin/native/rust-test-helpers" "--edition=2018" "-L" "/Users/reez12g/development/rust/build/aarch64-apple-darwin/test/ui/issues/issue-87707/auxiliary"
stdout:
------------------------------------------

------------------------------------------
stderr:
------------------------------------------

------------------------------------------

failures:
    [ui] ui/issues/issue-87707.rs

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 12657 filtered out; finished in 1.42s

Some tests failed in compiletest suite=ui mode=ui host=aarch64-apple-darwin target=aarch64-apple-darwin
Build completed unsuccessfully in 0:00:08
```

Could you please tell me what I am doing wrong?
(Please point out if it would be more appropriate to ask on Zulip (#t-lib?).)

yaahc (Member) commented 8 days ago

> Could you please tell me what I am doing wrong?
> (Please point out if it would be more appropriate to ask on Zulip (#t-lib?).)

I think it's best to ask in the issue, but it's also okay to ask in both places to try to get a response sooner. My only concern is that I don't want the discussion to get lost because it's split across platforms. To answer your question, though: I'm happy to help debug the issue. Can you go ahead and push your copy of the test so I can take a look at it?

reez12g (Contributor, Author) commented 8 days ago

> I think it's best to ask in the issue

I will do so next time.
I greatly appreciate it.

d71a4ab
↑ I pushed the current code.
My expectation is that this test will produce some kind of standard error output, but currently it outputs nothing when I run it locally.


yaahc (Member) commented 8 days ago

> d71a4ab ↑ I pushed the current code. My expectation is that this test code will output some kind of standard error, but the current situation is that it doesn't output anything when I run it at hand.

Awesome, ty. Taking a look right now.

yaahc (Member) commented 8 days ago

Looks like you were just missing some header commands to tell the test suite to check the output of the test.

I went ahead and pushed the fix since I'd already done it locally when testing. Once that passes CI I'll go ahead and approve the PR. Thank you for the help! ^_^
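For reference, compiletest "header commands" are the special comments at the top of a UI test file that control how it is built and checked. A run-fail test whose runtime output is compared against a blessed snapshot needs headers along these lines (the exact set here is an assumption on my part; the pushed commit has the authoritative list):

```rust
// run-fail            // the compiled test is expected to panic at runtime
// check-run-results   // compare runtime stdout/stderr against blessed files
// exec-env:RUST_BACKTRACE=0  // pin the env so backtraces don't vary the output
// needs-unwind        // the test relies on catch_unwind / panic recovery
```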

yaahc (Member) commented 7 days ago

@bors r+

bors (Contributor) commented 7 days ago

📌 Commit 9aaf9f1 has been approved by yaahc

Mark-Simulacrum (Member) commented 7 days ago

Squashed the commits down into one. I suspect the header commands may not be enough if the RUST_BACKTRACE env var is set, though; that will fail the llvm-12 builder in CI if so.

@bors r=yaahc rollup=iffy

bors (Contributor) commented 7 days ago

📌 Commit bca67fe has been approved by yaahc

bors (Contributor) commented 7 days ago

⌛ Testing commit bca67fe with merge 1b14fd3...

bors merged commit 904c6ca into rust-lang:master 7 days ago (10 of 11 checks passed)

rustbot added this to the 1.61.0 milestone 7 days ago

reez12g deleted the add_track_caller_87707 branch 7 days ago

reez12g (Contributor, Author) commented 7 days ago

Thanks for the review & the help!
I appreciate it.
