Merged

41 commits
82092dd
Move core profiling module into a package
lkollar Jun 4, 2025
cd5c814
Add sampling profiler
lkollar Jul 3, 2025
b26d1fa
Add pstats.SampledStats to display sample results
lkollar Jun 22, 2025
2e8e0a9
Add collapsed sample profiler output format
lkollar Jun 22, 2025
bf9e3fa
Add tests for sampling profiler
lkollar Jun 19, 2025
aeca768
fixup! Add tests for sampling profiler
lkollar Jul 3, 2025
7a76f68
Improve CLI
pablogsal Jul 6, 2025
97ba97e
Add more tests
pablogsal Jul 6, 2025
543b13d
Format files
pablogsal Jul 6, 2025
a8f1bdd
Formatting improvementts
pablogsal Jul 6, 2025
0440856
Fix small issues
pablogsal Jul 6, 2025
65d60e9
Add news entry
pablogsal Jul 6, 2025
bf15570
Moar fixes
pablogsal Jul 6, 2025
219670e
Use the new gil sampling
pablogsal Jul 6, 2025
f3dc377
Correct NEWS entry
pablogsal Jul 6, 2025
3002ab8
Add docs
pablogsal Jul 6, 2025
59137e4
Add what's new
pablogsal Jul 6, 2025
fe12677
Add what's new 2
pablogsal Jul 6, 2025
c28b0e0
Add what's new 2
pablogsal Jul 6, 2025
0c7b1f1
fixup! Add docs
lkollar Jul 6, 2025
d7706e6
fixup! fixup! Add docs
lkollar Jul 6, 2025
5c8d939
fixup! fixup! fixup! Add docs
lkollar Jul 6, 2025
b75aee1
fixup! Add what's new 2
lkollar Jul 6, 2025
acace5b
Fix free threading
pablogsal Jul 7, 2025
f50cfd7
Add sample.profile module reference
lkollar Jul 7, 2025
8d5dc18
Merge branch 'main' into sampling-profiler
lkollar Jul 7, 2025
d534f1e
Fix whatsnew
lkollar Jul 9, 2025
0360a72
Change profile.stack_collectors to singular
lkollar Jul 9, 2025
4fa0832
Remove redundant len check
lkollar Jul 9, 2025
b42812e
Handle process exiting during sampling
lkollar Jul 9, 2025
8d9cbfe
fixup! Handle process exiting during sampling
lkollar Jul 9, 2025
260d934
Protect against permission errors
pablogsal Jul 9, 2025
f0de45a
Sync processes in test_sample_profiler
pablogsal Jul 9, 2025
dbe2c0a
More fixes
pablogsal Jul 9, 2025
bc43ec7
Skip unsupported platforms in profiler tests
lkollar Jul 10, 2025
85f12d0
Add profile directory to Makefile.pre.in
lkollar Jul 10, 2025
a33d166
Raise on unsupported platforms in GetPyRuntimeAddress
lkollar Jul 10, 2025
0235127
fixup! Skip unsupported platforms in profiler tests
lkollar Jul 10, 2025
90260a6
Require subprocess support
pablogsal Jul 10, 2025
5683b76
fixup! fixup! Skip unsupported platforms in profiler tests
lkollar Jul 10, 2025
5a83439
Skip test on Android
lkollar Jul 10, 2025
Handle process exiting during sampling
lkollar committed Jul 9, 2025
commit b42812e80e8e85e59dbf633c08f8ea99c475c12e
28 changes: 26 additions & 2 deletions Lib/profile/sample.py
Contributor

Apologies if these comments are not 100% accurate, but I just had a quick scan of this source and the implementation of the unwinder. My initial reaction is that it is not clear what the data being collected actually represents. Clearly this is not a CPU-time profile (the profiler is just counting stacks, though with a rough conversion factor that depends on the sampling rate one could turn those into physical-time estimates, if one really wants to), because we don't know whether the collected stacks were on CPU or not.

In the build of CPython with the GIL, it looks like we're only sampling the thread that is holding the GIL. One may assume that a thread holding the GIL is on CPU, but this need not be the case (indeed the application might have a "bug" whereby it holds the GIL when it could actually release it), so the profiles one gets are on-GIL profiles, which in general are neither wall-time nor CPU-time profiles.

I think it would be beneficial if the unwinder returned extra information, such as whether the stack was (likely) on CPU and whether its thread was (likely) holding the GIL. And if one wants a wall-time mode for the profiler, sampling just the on-GIL thread won't provide an accurate picture of where each thread is spending its wall time.

Member

@pablogsal pablogsal Aug 3, 2025

Thanks for the detailed feedback - you raise very important points about the semantic clarity of what we’re actually measuring here. You’re absolutely right that the current state creates “on-GIL profiles” rather than true CPU-time or wall-time profiles, and that holding the GIL doesn’t guarantee CPU execution.

The missing point here is that the profiler is not finished yet and there are plenty of things we still need to finalize (we are also waiting for the PEP to be approved to put it into its final namespace). Right now we're focused on getting the base infrastructure in place and working reliably across platforms (we don't even have the mode to run scripts or modules yet, only attaching).

What we had in mind here is something close to what you're hinting at: in GIL mode, we want to avoid re-sampling threads that aren't actually moving (i.e., would produce identical stack traces), so the idea is that the profiler only samples the thread with the GIL and signals to the frontend that the other stacks are the same. The frontend will then use the last samples to calculate the stats. For now we don't have the signaling infrastructure for that, but it's on the roadmap.

I'm particularly intrigued by your point about CPU detection. Do you have any concrete plan in mind for what you propose? Unless I am missing something, there's no good portable way to reliably determine if a thread is actually on-CPU from a remote process without sacrificing significant performance. On Linux we could theoretically examine kernel stacks from /proc or check the stat pseudo-file, but that's racy, likely slow, and doesn't work on macOS or Windows. And I don't look forward to starting to call NtQuerySystemInformation all over the place.

Do you have thoughts on practical approaches for this? What’s your take on the best path forward for providing more semantic clarity about what the samples actually represent?

Also, PRs welcomed!​​​​​​​​​​​​​​​​ 😉

Contributor

The missing point here is that the profiler is not finished yet and there are plenty of things we still need to finalize (we are also waiting for the PEP to be approved to put it into its final namespace).

Ah fair enough.

in GIL mode, we want to avoid re-sampling threads that aren’t actually moving

That's an interesting idea, but I struggle to see how this could work 🤔 If a thread is essentially always off-GIL then it will never be sampled. Or it could switch between idle functions but the stack would never get re-sampled by the profiler. For example, consider this case

from threading import Thread

def foo():
    a()  # on-CPU
    b()  # I/O-bound, off-GIL

thread = Thread(target=foo)

The profiler will likely see a on the stack, but might miss the samples where b is on the stack because the thread will be off-GIL. Then in wall-time profiles it will look as if a was the only function running on that thread.

I'm particularly intrigued by your point about CPU detection. Do you have any concrete plan in mind for what you propose? Unless I am missing something, there's no good portable way to reliably determine if a thread is actually on-CPU from a remote process without sacrificing significant performance. On Linux we could theoretically examine kernel stacks from /proc or check the stat pseudo-file, but that's racy, likely slow, and doesn't work on macOS or Windows. And I don't look forward to starting to call NtQuerySystemInformation all over the place.

Well, this approach is already racy by nature, since threads are not stopped before taking samples, so there could be all sorts of inconsistencies already. In Austin we have platform-dependent implementations of py_thread__is_idle that determine whether a thread is idle or not (and yes, it uses NtQuerySystemInformation on Windows). Surely it is racy, but I don't know of other ways of finding out about the thread status. In all my experiments the accuracy is pretty good, and the overhead not too bad. With simple stacks Austin can still sample at over 100 kHz, with the main overhead coming from the remote reads of datastack chunks.
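For illustration, the Linux flavour of such an idleness check can be sketched by reading the state field from each thread's procfs stat file. The helper names here are made up; Austin's actual py_thread__is_idle is platform-specific C, and this is just the racy-but-cheap approach being discussed:

```python
import os

def thread_state(pid, tid):
    """Return the kernel state character (R, S, D, ...) for one thread.

    Field 3 of /proc/<pid>/task/<tid>/stat is the state; the comm field
    before it may contain spaces or parentheses, so parse from the last ')'.
    """
    with open(f"/proc/{pid}/task/{tid}/stat", "rb") as f:
        data = f.read()
    return chr(data[data.rindex(b")") + 2])

def is_idle(pid, tid):
    # Anything other than "R" (running/runnable) counts as idle.
    return thread_state(pid, tid) != "R"
```

This is the "racy" part of the trade-off: the state is read at a different instant than the stack, so the two can disagree within a single sample.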

Do you have thoughts on practical approaches for this? What’s your take on the best path forward for providing more semantic clarity about what the samples actually represent?

I think samples would have to include the CPU state of threads at the very least, to provide both wall- and CPU-time modes, which are pretty common for profilers. The GIL state might be an added bonus for e.g. GIL contention analysis (figuring out if there is a lot of idle time spent with the GIL held, etc.).

Also, PRs welcomed!​​​​​​​​​​​​​​​​ 😉

I'm currently short on bandwidth, so I don't think I'll be able to contribute much in the short term, but I'm more than happy to review PRs if needed and share more of my experience with developing Austin. Also, I wonder if there isn't a better place to take this discussion, so as to have all the details in one place?

Member

The profiler will likely see a on the stack, but might miss the samples where b is on the stack because the thread will be off-GIL. Then in wall-time profiles it will look as if a was the only function running on that thread.

No, the condition would look like:

  • Go over all the threads
  • For the threads that don't have the GIL check the current frame
  • If the current frame is the same (as in 'the same pointer' to the frame) then assume the stack hasn't changed.
  • Return something that signals that the stack is the same instead of resolving the full stack.

Technically it is possible that at low sample rates, if you have A -> B -> C, the stack may have moved to A -> B2 -> C where C somehow has the same pointer. I think for this we can introduce some monotonically increasing version numbers somewhere, but I still need to explore this. In any case the chances of that happening are very low.
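The four-step condition above can be sketched roughly like this. The unwinder and thread objects shown are hypothetical stand-ins for illustration, not the real _remote_debugging API:

```python
UNCHANGED = object()  # sentinel: "stack identical to the last sample"

def collect_sample(unwinder, last_top_frame):
    """One sampling pass over all threads of the target process.

    `last_top_frame` maps thread id -> frame pointer seen last time.
    Off-GIL threads whose top frame pointer is unchanged are not unwound;
    a sentinel is returned so the frontend can re-use its last stack.
    """
    samples = {}
    for thread in unwinder.threads():
        if (not thread.holds_gil
                and last_top_frame.get(thread.tid) == thread.current_frame_ptr):
            # Same frame pointer as last time: assume the stack is unchanged
            # and skip the (expensive) full remote unwind.
            samples[thread.tid] = UNCHANGED
        else:
            samples[thread.tid] = unwinder.get_stack_trace(thread)
            last_top_frame[thread.tid] = thread.current_frame_ptr
    return samples
```

The saving is that the full remote unwind is skipped for off-GIL threads that haven't moved; the frame-address-reuse risk discussed below is exactly the case where this comparison gives a false "unchanged".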

Well, this approach is already racy by nature, since threads are not stopped before taking samples, so there could be all sorts of inconsistencies already.

Yes, but this is worse. It is one thing that we sample different threads at different times, but that's not too bad as long as we can identify when a single sample is consistent or likely consistent. But if we sample the "on-CPU" info and the actual stack separately, that is worse, because it's possible that the thread has moved or the syscall is exiting or whatever. I think I need to benchmark a bit to know how bad this is, but I will trust your experience if you say it's not too bad.

In Austin we have platform-dependent implementations of py_thread__is_idle that determine whether a thread is idle or not (and yes, it uses NtQuerySystemInformation on Windows). Surely it is racy, but I don't know of other ways of finding out about the thread status. In all my experiments the accuracy is pretty good, and the overhead not too bad. With simple stacks Austin can still sample at over 100 kHz, with the main overhead coming from the remote reads of datastack chunks.

Well, if the overhead is not too bad maybe we could make it opt-in? So the user can somehow switch between on-CPU and wall time?

I think samples would have to include the CPU state of threads at the very least, to provide both wall- and CPU-time modes, which are pretty common for profilers. The GIL state might be an added bonus for e.g. GIL contention analysis (figuring out if there is a lot of idle time spent with the GIL held, etc.).

Ok, I am happy to explore this once we finish the rest of the stuff we have pending (which is still a lot).

I'm currently short on bandwidth, so I don't think I'll be able to contribute much in the short term, but I'm more than happy to review PRs if needed and share more of my experience with developing Austin. Also, I wonder if there isn't a better place to take this discussion, so as to have all the details in one place?

Fair enough. For design we are mostly communicating over chat or email, to not saturate the issues, because that tends not to work fantastically. If you want I could add you to the chain, or alternatively you can wait until we can go on an issue-by-issue basis, but that won't happen until the PEP is accepted and we finish the other parts.

Contributor


No, the condition would look like:

  • Go over all the threads
  • For the threads that don't have the GIL check the current frame
  • If the current frame is the same (as in 'the same pointer' to the frame) then assume the stack hasn't changed.
  • Return something that signals that the stack is the same instead of resolving the full stack.

Ah I see, but this still feels risky, even more so considering that interpreter frames live in datastack chunks, which foster slot re-use. Maybe checking the executable object and the frame address could mitigate the problem a bit, but the risk is probably still there 🤔

But if we sample separately the "on-cpu" info and the actual stack that is worse because is possible that the thread has moved

Very true, but the same applies to, say, getting the top frame from PyThreadInfo and then unwinding that. By the time you unwind the frame, the thread could have moved to a different top frame. Unfortunately that's the nature of the process that is being implemented. Unless the stacks vary very wildly I think this approach is still OK (this paper from Emery Berger has some accuracy figures in this regard, so you don't just have to take my word for it 🙂)

Well, if the overhead is not too bad maybe we could make it opt-in? So the user can somehow switch between on-CPU and wall time?

Yes I think it would be very useful to have an on-CPU switch.

If you want I could add you to the chain, or alternatively you can wait until we can go on an issue-by-issue basis, but that won't happen until the PEP is accepted and we finish the other parts.

Sure, I'm happy to wait 👍

Member

@pablogsal pablogsal Aug 4, 2025

Ah I see, but this still feels risky, even more so considering that interpreter frames live in datastack chunks, which foster slot re-use. Maybe checking the executable object and the frame address could mitigate the problem a bit, but the risk is probably still there 🤔

Yeah, that's where having frame version numbers can help. I need to think about how we can have those without impacting the runtime though... In any case the reuse is not that bad, since the datastack chunks also contain the locals, so unless you have EXACTLY the same locals and exactly the same layout it is super unlikely you will get reuse.

I just ran an experiment on the entire CPython test suite and I got 1e-4 % of frames that reused addresses with the code object being the same. Pretty sure you can craft a custom example that triggers a much higher percentage, but at least we know this is not super common.

Very true, but the same applies to, say, getting the top frame from PyThreadInfo, and then unwinding that. By the time you unwind the frame the thread could have moved to a different top frame. Unfortunately that's the nature of the process that is being implemented.

Yes, but this is an order of magnitude less likely, because you cache the entire frame storage, so within the sample you get as much atomicity as possible. This doesn't apply to generators though. But in any case the argument does matter, because saying "well, everything is racy" doesn't remove the fact that some things are more racy than others, and adding more racy stuff is worse than having less racy stuff. Indeed, in my measurements accessing /proc/PID/stat adds 1-3 us of latency, which is not prohibitive but is still a lot. Maybe it's fine when there aren't a lot of threads...

I think we can add both modes and coordinate with the docs working group to explain this to users in a way that doesn't become a "too many knobs" situation.

Contributor


No, the condition would look like:

Go over all the threads
For the threads that don't have the GIL check the current frame
If the current frame is the same (as in 'the same pointer' to the frame) then assume the stack hasn't changed.
Return something that signals that the stack is the same instead of resolving the full stack.

This sounds like an interesting idea to try from our end too, but then you'd need to keep track of a mapping from Python thread id to the last frame pointer, right? As Gab noted above, this could be risky in case the frame goes out of scope, even though you can check for such cases.

Yes, but this is an order of magnitude less likely, because you cache the entire frame storage, so within the sample you get as much atomicity as possible. This doesn't apply to generators though. But in any case the argument does matter, because saying "well, everything is racy" doesn't remove the fact that some things are more racy than others, and adding more racy stuff is worse than having less racy stuff. Indeed, in my measurements accessing /proc/PID/stat adds 1-3 us of latency, which is not prohibitive but is still a lot. Maybe it's fine when there aren't a lot of threads...

It depends on how often you query /proc/PID/stat and for how many threads, but it could be quite expensive. For a continuous profiler it was too much, using a few seconds of CPU time per minute. It might be OK for ad-hoc use cases.

I think we can add both modes and coordinate with the docs working group to explain this to users in a way that doesn't become a "too many knobs" situation.

Are you implying that users or docs WG are generally against having too many knobs?

Member


This sounds like an interesting idea to try from our end too, but then you'd need to keep track of a mapping from Python thread id to the last frame pointer, right? As Gab noted above, this could be risky in case the frame goes out of scope, even though you can check for such cases.

I can't see why it is risky. It is comparing a memory address, not dereferencing the pointer.

Are you implying that users or docs WG are generally against having too many knobs?

I am saying that users normally tend to be more confused the more knobs there are, and that will make the docs harder, so I want the docs WG to recommend how to approach whatever we decide to do in the best way for users, or even to recommend having fewer knobs.

@@ -1,9 +1,11 @@
import argparse
import _remote_debugging
import os
import pstats
import statistics
import time
import sys
import sysconfig
import time
from collections import deque
from _colorize import ANSIColors

@@ -47,8 +49,14 @@ def sample(self, collector, duration_sec=10):
try:
stack_frames = self.unwinder.get_stack_trace()
collector.collect(stack_frames)
except (RuntimeError, UnicodeDecodeError, OSError):
Member

This is wrong: the profiler can and will fail naturally with OSError

Member


I had to modify the windows part of remote debugging in dbe2c0a to not raise a generic OSError so we can catch it properly

except ProcessLookupError:
break
except (RuntimeError, UnicodeDecodeError, MemoryError):
errors += 1
except Exception as e:
if not self._is_process_running():
break
raise e from None

# Track actual sampling intervals for real-time stats
if num_samples > 0:
@@ -89,6 +97,22 @@ def sample(self, collector, duration_sec=10):
f"({(expected_samples - num_samples) / expected_samples * 100:.2f}%)"
)

def _is_process_running(self):
if sys.platform == "linux" or sys.platform == "darwin":
try:
os.kill(self.pid, 0)
return True
except ProcessLookupError:
return False
elif sys.platform == "win32":
try:
_remote_debugging.RemoteUnwinder(self.pid)
except Exception:
return False
return True
else:
raise ValueError(f"Unsupported platform: {sys.platform}")

def _print_realtime_stats(self):
"""Print real-time sampling statistics."""
if len(self.sample_intervals) < 2:
27 changes: 27 additions & 0 deletions Lib/test/test_sample_profiler.py
@@ -23,12 +23,15 @@

try:
from _remote_debugging import PROCESS_VM_READV_SUPPORTED
import _remote_debugging
except ImportError:
raise unittest.SkipTest(
"Test only runs when _remote_debugging is available"
)
else:
import profile.sample
from profile.sample import SampleProfiler



class MockFrameInfo:
@@ -1600,6 +1603,30 @@ def test_invalid_output_format_with_mocked_profiler(self):
"Invalid output format: unknown_format", str(cm.exception)
)

def test_is_process_running(self):
with (test_subprocess("import time; time.sleep(0.2)") as proc,
mock.patch("_remote_debugging.RemoteUnwinder") as mock_unwinder_class):
mock_unwinder_class.return_value = mock.MagicMock()
profiler = SampleProfiler(pid=proc.pid, sample_interval_usec=1000, all_threads=False)

self.assertTrue(profiler._is_process_running())
proc.wait()
self.assertFalse(profiler._is_process_running())

@unittest.skipUnless(sys.platform == "linux", "Only valid on Linux")
def test_esrch_signal_handling(self):
with test_subprocess("import time; time.sleep(0.1)") as proc:
unwinder = _remote_debugging.RemoteUnwinder(proc.pid)
initial_trace = unwinder.get_stack_trace()
self.assertIsNotNone(initial_trace)

# Wait for the process to die and try to get another trace
proc.wait()

with self.assertRaises(ProcessLookupError):
unwinder.get_stack_trace()



class TestSampleProfilerCLI(unittest.TestCase):
def test_cli_collapsed_format_validation(self):
3 changes: 3 additions & 0 deletions Python/remote_debug.h
@@ -982,6 +982,9 @@ _Py_RemoteDebug_ReadRemoteMemory(proc_handle_t *handle, uintptr_t remote_address
return read_remote_memory_fallback(handle, remote_address, len, dst);
}
PyErr_SetFromErrno(PyExc_OSError);
if (errno == ESRCH) {
return -1;
}
_set_debug_exception_cause(PyExc_OSError,
"process_vm_readv failed for PID %d at address 0x%lx "
"(size %zu, partial read %zd bytes): %s",