gh-135953: Implement sampling tool under profile.sample #135998
Changes from 1 commit
```diff
@@ -1,9 +1,11 @@
 import argparse
+import _remote_debugging
+import os
 import pstats
 import statistics
 import time
 import sys
 import sysconfig
 import time
 from collections import deque
 from _colorize import ANSIColors
```
```diff
@@ -47,8 +49,14 @@ def sample(self, collector, duration_sec=10):
             try:
                 stack_frames = self.unwinder.get_stack_trace()
                 collector.collect(stack_frames)
-            except (RuntimeError, UnicodeDecodeError, OSError):
+            except ProcessLookupError:
+                break
+            except (RuntimeError, UnicodeDecodeError, MemoryError):
                 errors += 1
+            except Exception as e:
+                if not self._is_process_running():
+                    break
+                raise e from None

             # Track actual sampling intervals for real-time stats
             if num_samples > 0:
```

**Member** (on the removed `except (RuntimeError, UnicodeDecodeError, OSError)` line): This is wrong: the profiler can and will fail naturally with `OSError`.

**Member**: I had to modify the Windows part of remote debugging in dbe2c0a to not raise a generic …
```diff
@@ -89,6 +97,22 @@ def sample(self, collector, duration_sec=10):
                 f"({(expected_samples - num_samples) / expected_samples * 100:.2f}%)"
             )

+    def _is_process_running(self):
+        if sys.platform == "linux" or sys.platform == "darwin":
+            try:
+                os.kill(self.pid, 0)
+                return True
+            except ProcessLookupError:
+                return False
+        elif sys.platform == "win32":
+            try:
+                _remote_debugging.RemoteUnwinder(self.pid)
+            except Exception:
+                return False
+            return True
+        else:
+            raise ValueError(f"Unsupported platform: {sys.platform}")
+
     def _print_realtime_stats(self):
         """Print real-time sampling statistics."""
         if len(self.sample_intervals) < 2:
```
---
Apologies if these comments are not 100% accurate, but I have only had a quick scan of this source and of the implementation of the unwinder. My initial reaction is that it is not clear what the collected data is actually representing. Clearly this is not a CPU-time profile, because we don't know whether the collected stacks were on CPU or not (the profiler is just counting stacks, although with a rough conversion factor that depends on the sampling rate one could turn those counts into physical-time estimates, if one really wants to).

In the build of CPython with the GIL, it looks like we're only sampling the thread that is holding the GIL. One may assume that a thread holding the GIL is on CPU, but this need not be the case (indeed the application might have a "bug" whereby it holds the GIL while it could actually release it), so the profiles one gets are on-GIL profiles, which in general are neither wall-time nor CPU-time profiles.

I think it would be beneficial if the unwinder returned extra information, such as whether the stack was (likely) on CPU and whether its thread was (likely) holding the GIL. And if one wants a wall-time mode for the profiler, sampling just the on-GIL thread won't provide an accurate picture of where each thread is spending its wall-time.
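To make the "holding the GIL but off-CPU" case concrete, here is a minimal sketch of my own (not code from this PR; POSIX-only, since it calls into libc):

```python
import ctypes
import ctypes.util

# PyDLL, unlike CDLL, does NOT release the GIL around foreign calls.
libc = ctypes.PyDLL(ctypes.util.find_library("c"))

# The thread sleeps inside libc for a second while still holding the GIL:
# it is off-CPU the whole time, yet an on-GIL sampler keeps attributing
# samples to it as if it were running.
libc.sleep(1)
```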
---
Thanks for the detailed feedback; you raise very important points about the semantic clarity of what we're actually measuring here. You're absolutely right that the current state produces "on-GIL profiles" rather than true CPU-time or wall-time profiles, and that holding the GIL doesn't guarantee CPU execution.

The missing piece here is that the profiler is not finished yet and there are plenty of things we still need to finalize (we are also waiting for the PEP to be approved before moving it into its final namespace). Right now we're focused on getting the base infrastructure in place and working reliably across platforms (we don't even have the mode to run scripts or modules yet, only attaching).

What we had in mind is close to what you're hinting at: in GIL mode, we want to avoid re-sampling threads that aren't actually moving (i.e., would produce identical stack traces), so the idea is that the profiler only samples the thread holding the GIL and signals to the frontend that the other stacks are unchanged. The frontend then uses the last samples to compute the stats. For now we don't have the signaling infrastructure for that, but it's on the roadmap.

I'm particularly intrigued by your point about CPU detection. Do you have a concrete plan in mind for what you propose? Unless I am missing something, there's no good portable way to reliably determine whether a thread is actually on-CPU from a remote process without sacrificing significant performance. On Linux we could theoretically examine kernel stacks from /proc or check the `stat` pseudo-file, but that's racy, likely slow, and doesn't work on macOS or Windows. And I don't look forward to calling NtQuerySystemInformation all over the place.
Do you have thoughts on practical approaches for this? What's your take on the best path forward for providing more semantic clarity about what the samples actually represent?

Also, PRs welcome! 😉
---
Ah, fair enough.

That's an interesting idea, but I struggle to see how this could work 🤔 If a thread is essentially always off-GIL, then it will never be sampled. Or it could switch between idle functions, but the stack would never get re-sampled by the profiler. For example, consider a case like the sketch below.
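(A minimal illustration, assuming `b` spends most of its time off-GIL, e.g. sleeping:)

```python
import time

def b():
    time.sleep(1)   # blocking call: the GIL is released while sleeping

def a():
    while True:
        b()         # this thread is off-GIL almost all of the time
```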
The profiler will likely see `a` on the stack, but might miss the samples where `b` is on the stack, because the thread will be off-GIL. Then in wall-time profiles it will look as if `a` was the only function running on that thread.

Well, this approach is already racy by nature, since threads are not stopped before taking samples, so there could be all sorts of inconsistencies already. In Austin we have platform-dependent implementations of `py_thread__is_idle` that determine whether a thread is idle or not (and yes, it uses `NtQuerySystemInformation` on Windows). Surely it is racy, but I don't know of other ways of finding out the thread status. In all my experiments the accuracy is pretty good, and the overhead not too bad. With simple stacks Austin can still sample at over 100 kHz, with the main overhead coming from the remote reads of data stack chunks.

I think samples would have to include the CPU state of threads at the very least, to provide both wall- and CPU-time modes, which are pretty common for profilers. The GIL state might be an added bonus, for e.g. GIL-contention analysis, figuring out whether a lot of idle time is spent with the GIL held, etc.

I'm currently short on bandwidth, so I don't think I'll be able to contribute much in the short term, but I'm more than happy to review PRs if needed and to share more of my experience with developing Austin. Also, I wonder if there isn't a better place to take this discussion, so as to have all the details in one place?
---
No, the condition would look something like the sketch below.
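(A hypothetical illustration only; the names are invented for the sketch and this is not the PR's actual code. It keys each thread on the remote address of its top interpreter frame together with that frame's code object, which is also the mitigation discussed further down.)

```python
from typing import NamedTuple

class FrameKey(NamedTuple):
    frame_addr: int  # remote address of the thread's top interpreter frame
    code_addr: int   # remote address of that frame's code object

last_seen: dict[int, FrameKey] = {}  # thread id -> key from the previous sample

def needs_resample(tid: int, key: FrameKey) -> bool:
    """Re-unwind only when the top frame (or its code object) has changed."""
    if last_seen.get(tid) == key:
        return False  # very likely the same stack: reuse the previous sample
    last_seen[tid] = key
    return True
```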
Technically it's possible at low sample rates that if you have A -> B -> C, the stack may have moved to A -> B2 -> C where C somehow has the same pointer. I think for this we can introduce some monotonically increasing version numbers somewhere, but I still need to explore this. In any case, the chances of that happening are very low.

Yes, but this is worse. It's one thing that we are sampling different threads at different times; that's not too bad as long as we can identify when a single sample is consistent or likely consistent. But if we sample the "on-CPU" info separately from the actual stack, that's worse, because it's possible that the thread has moved, or the syscall is exiting, or whatever. I think I need to benchmark a bit to know how bad this is, but I will trust your experience if you say it's not too bad.
Well, if the overhead is not too bad maybe we could make it opt-in? So the user can somehow switch between on-CPU and wall time?
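Something like this hypothetical CLI sketch could expose that choice (the flag name and choices are invented for illustration; this is not the PR's actual interface):

```python
import argparse

parser = argparse.ArgumentParser(prog="profile.sample")
parser.add_argument(
    "--mode",
    choices=["wall", "cpu"],
    default="wall",
    help="wall: attribute every sample to elapsed time; "
         "cpu: only attribute samples to threads judged on-CPU "
         "(pays the extra per-sample probe cost)",
)

args = parser.parse_args(["--mode", "cpu"])  # example invocation
print(args.mode)  # -> cpu
```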
OK, I am happy to explore this once we finish the rest of the stuff we have pending (which is still a lot).

Fair enough. For design we are mostly communicating over chat or email, so as not to saturate the issues, because that tends not to work fantastically. If you want, I could add you to the chain; alternatively, you can wait until we can go on an issue-by-issue basis, but that won't happen until the PEP is accepted and we finish the other parts.
---
Ah, I see, but this still feels risky, even more so considering that interpreter frames live in data stack chunks, which foster slot reuse. Maybe checking the executable object together with the frame address could mitigate the problem a bit, but the risk is probably still there 🤔

Very true, but the same applies to, say, getting the top frame from PyThreadInfo and then unwinding from it: by the time you unwind the frame, the thread could have moved to a different top frame. Unfortunately, that's the nature of the process being implemented. Unless the stacks vary very wildly, I think this approach is still OK (this paper from Emery Berger has some accuracy figures in this regard, so you don't just have to take my word for it 🙂).
Yes I think it would be very useful to have an on-CPU switch.
Sure, I'm happy to wait 👍
---
Yeah, that's where having frame version numbers can help. I need to think about how we can have those without impacting the runtime, though... In any case, the reuse is not that bad, since the data stack chunks also contain the locals, so unless you have EXACTLY the same locals and exactly the same layout, it's super unlikely you will get reuse.

I just ran an experiment on the entire CPython test suite, and I got 1e-4 % of frames that reused addresses with the code object being the same. Pretty sure you can craft a custom example that triggers a much higher percentage, but at least we know this is not super common.

Yes, but this is an order of magnitude less likely, because you cache the entire frame storage, so within the sample you get as much atomicity as possible. This doesn't apply to generators, though. In any case the argument does matter, because saying "well, everything is racy" doesn't change the fact that some things are more racy than others, and adding more racy stuff is worse than having less racy stuff. Indeed, in my measurements accessing /proc/PID/stat adds 1-3 µs of latency, which is not prohibitive but is still a lot. Maybe it's fine when there aren't a lot of threads...
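That kind of number can be reproduced with a rough micro-benchmark like this sketch (Linux-only, reading /proc/self/stat for simplicity; results vary by kernel and hardware):

```python
import time

def bench_proc_stat(path: str = "/proc/self/stat", n: int = 100_000) -> float:
    """Return the mean latency of one open+read of a /proc stat file, in µs."""
    start = time.perf_counter()
    for _ in range(n):
        with open(path, "rb") as f:
            f.read()
    return (time.perf_counter() - start) / n * 1e6

print(f"{bench_proc_stat():.2f} µs per read")
```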
I think we can add both modes and coordinate with the docs working group to explain this to users in a way that doesn't become a "too many knobs" situation.
---
This sounds like an interesting idea to try from our end too, but then you'd need to keep track of a mapping from Python thread id to the last frame pointer, right? As Gab noted above, this could be risky in case the frame goes out of scope, even though you can check for such cases.

It depends on how often you query /proc/PID/stat and for how many threads, but it could be quite expensive. For a continuous profiler, it was too much, using a few seconds of CPU time per minute. It might be OK for ad-hoc use cases.

Are you implying that users or the docs WG are generally against having too many knobs?
---
I can't see why it's risky: it's comparing a memory address, not dereferencing the pointer.

I am saying that, normally, users tend to be more confused the more knobs there are, and that will make the docs harder, so I want the docs WG to recommend how to approach whatever we decide to do in the best way for users, or even to recommend having fewer knobs.