gh-120144: Make it possible to use sys.monitoring for bdb and make it default for pdb
#124533
base: main
Conversation
|
And of course we need documentation updates; I will do that later, once the feature is accepted and the interface is decided.
|
It's after midnight, so I will test much later today. If all is OK using the default backend, I will patch IDLE to pass 'monitoring'.
|
Ran fine with the default backend. Not fine with EDIT: The remote execution process crashes because of an unrecoverable exception in the rpc code. Monitoring does not seem to work across the socket connection. Some of the debugger code likely needs a change (as pdb does). (But IDLE does not have to use 'monitoring'.)
|
Right - the most important thing is I think at least part of the issue is multi-threading. For |
|
Hi @terryjreedy, I "fixed" the multi-threading issue. Well, by "fixed" I mean I made it behave the same as before. Let me know if you have some time to test it out :)
|
With |
The improvement you see here is entirely due to the changes in break_anywhere(self, frame) introduced in Python 3.14. If you try the released alpha versions of 3.14, you get results similar to yours. I have tried your new bdb code, based on sys.monitoring, in different cases and, unfortunately, I did not see any improvements. Quite the opposite!
#124553 was merged after this PR, so that's not possible (I did it, btw). But yes, for this specific case, the changes to

What case did you try for this implementation? Like I mentioned, this is not the perfect solution; the major problem it solves is when a line event is triggered multiple times. So, for the same code, if you put a breakpoint at

When you say
Yes it does! Massive improvement in this case!
Could you please elaborate on how this is achieved?
One script where I saw a degradation of performance was the following:

```python
from timeit import timeit

class A(object):
    def __init__(self):
        self.value = 1

class B(A):
    @staticmethod
    def create_property(name):
        return property(
            lambda self: getattr(self.srg, name),
            lambda self, v: setattr(self.srg, name, v),
        )

    value = create_property.__func__("value")

    def __init__(self):
        self.srg = __import__("threading").local()
        self.value = 2
        super().__init__()

b1 = B()
b2 = B()

print(timeit("b1.value = 4; c = b1.value", number=100000, globals=globals()))
print(timeit("b1.v = 4; c = b1.v", number=100000, globals=globals()))
```

With a breakpoint on the last line, the monitoring-based Bdb takes four times as long to reach it. But let me add some more details about how I am testing. I am using a Bdb-based debugger in PyScripter, which is adapted for multi-threaded debugging. I am not using your modified bdb.py directly. Instead, I created a subclass of Bdb integrating your code (see fastbdb.zip). Since I wanted to backport your code to Python 3.12 and 3.13, I replaced clear_tool_id with a custom function. Finally, since I am using the code for multi-threaded debugging, I removed the changes in 99ea70c.
|
Okay that's a completely different story. Unfortunately this is the CPython PR so I can't fully solve issues for your debugger, but I have some potential theories.
So, if you can reproduce the performance issue with pdb from this branch, I can investigate further. If you are experiencing issues in a debugger that's based on a customized version of this bdb, I'm afraid you are on your own, because it's highly possible that the issue is caused by your customization (otherwise you should be able to repro it with pure pdb).
|
I am testing on Python 3.14 and I am using the following clear_tool_id:

```python
def clear_tool_id(tool_id):
    import sys
    if sys.version_info >= (3, 14):
        sys.monitoring.clear_tool_id(tool_id)
    else:
        for event in sys.monitoring.events.__dict__.values():
            if isinstance(event, int) and event:  # Ensure it's an event constant
                sys.monitoring.register_callback(tool_id, event, None)
```

So, no difference here.
No. There is just one thread and it works exactly like your code. PyScripter allows you to debug multi-threaded python code, but for single-threaded scripts it just uses Bdb. And I am not asking you to solve my problems. I reported one script where monitoring degrades debugging performance. Why don't you try it on your side? |
direct pdb usage
The former is monitoring and the latter is settrace. Let me know if you get different results. Again, not results from PyScripter, please; only send results of pdb monitoring vs pdb settrace. (And you can use the latest PR, which merged in the 3.14 changes.)
Force-pushed from 950e032 to b595682
|
Using the attached script I get

So monitoring is about 25% slower in this example, consistent with your results.
|
Yes, the callback is actually slower because we have more checks. However,

Performance of a debugger is important, but 25% overhead is not unacceptable (it's a debugger, so overhead is expected). It's a tradeoff between how much we gain in the more common cases versus how much we lose in others. Eliminating repeated line events is a big gain, and we can polish the performance in the future. I think we can actually get much better with the ability to disable events. This is just a start.
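The per-line callback cost being discussed can be illustrated with a rough micro-benchmark of my own (not the attached script): the same workload timed under a bare sys.settrace line tracer and a bare sys.monitoring LINE callback. It assumes Python 3.12+ for the monitoring half and that sys.monitoring.DEBUGGER_ID is not already claimed:

```python
import sys
import time

def workload():
    total = 0
    for i in range(100_000):
        total += i
    return total

def time_settrace():
    # Global trace function that keeps local (line) tracing active.
    def tracer(frame, event, arg):
        return tracer
    t0 = time.perf_counter()
    sys.settrace(tracer)
    workload()
    sys.settrace(None)
    return time.perf_counter() - t0

def time_monitoring():
    mon = sys.monitoring
    tool = mon.DEBUGGER_ID

    def on_line(code, line_no):
        pass  # returning None keeps LINE events enabled

    mon.use_tool_id(tool, "bench")
    mon.register_callback(tool, mon.events.LINE, on_line)
    mon.set_events(tool, mon.events.LINE)
    t0 = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - t0
    mon.set_events(tool, mon.events.NO_EVENTS)
    mon.register_callback(tool, mon.events.LINE, None)
    mon.free_tool_id(tool)
    return elapsed

print(f"settrace:   {time_settrace():.4f}s")
if hasattr(sys, "monitoring"):
    print(f"monitoring: {time_monitoring():.4f}s")
```

Absolute numbers vary by machine; the point is only to compare the two hooks on an identical workload with callbacks that do no real work.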
|
Thanks! And by the way, I found why I got the large performance hit in my earlier tests. It looks good. |
|
@gaogaotiantian I would like to mention that using bdb for multi-threaded debugging, even as it stands today, is quite straightforward. What it takes is:
I am doing this in PyScripter, now using your monitoring version, which I have backported to 3.12 and 3.13. It also works with the free-threaded version of python. |
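For context, the classic way to get a Bdb-derived debugger tracing threads other than the main one (under the sys.settrace backend) is threading.settrace, which installs the trace function in each newly started thread. A minimal sketch of my own, not PyScripter's code; ThreadedDebugger is a made-up subclass:

```python
import bdb
import threading

seen = []

class ThreadedDebugger(bdb.Bdb):
    def user_line(self, frame):
        # Record which function we stopped in, then keep stepping.
        seen.append(frame.f_code.co_name)
        self.set_step()

db = ThreadedDebugger()
db.reset()  # initializes botframe/quitting before tracing starts

# sys.settrace only affects the current thread; threading.settrace
# arranges for threads started afterwards to use the same function.
threading.settrace(db.trace_dispatch)

def worker():
    x = 1 + 1
    return x

t = threading.Thread(target=worker)
t.start()
t.join()
threading.settrace(None)

print("worker" in seen)
```

A real debugger would of course do more in user_line than record the frame, but this is the whole mechanism for reaching non-main threads with the settrace backend.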
|
Well, it's definitely out of the scope of this PR, but from a debugger's point of view there are a lot of things to consider, more than "how to trigger a callback on an arbitrary thread". For example, will other threads be halted while a thread is being debugged? If so, how? If not, what if they are printing stuff? How do you switch between threads? I believe some of these issues were discussed in an issue dedicated to the matter. I do think there are solutions, but I don't think it'll be trivial. For now it's an item on the todo list :)
Of course. I was talking about Bdb. All Bdb does is allow descendant classes to implement user_line, user_call, etc. A multi-threaded bdb would just add one more, say user_thread, to notify about thread start and finish. How these are implemented depends on the debugger UI.

The difficult bit is to think through how, for example, Pdb would handle multiple threads. This becomes very tricky if the debugger is running in the same process as the UI, especially so in free-threaded Python. In an IDE context, it would work like Visual Studio does. In PyScripter, the debugger displays the running threads and their state, and the user can step over any thread, examine the locals, issue commands in the context of the selected frame, resume any thread or all of them, etc. For example, in this case, the MainThread is running while all other threads have hit a breakpoint and are paused. The call stack shown is for the selected (active) thread. You can activate any thread and frame to inspect them or show the respective source line in the editor.
|
By the way, I am in awe of the sys.monitoring.DISABLE magic, which appears to work on a per-line (code location) basis. Your little test:

```python
import time

def test():
    start_time = time.time()
    for i in range(1000):
        for j in range(1000):
            j + i
    cost = time.time() - start_time
    print("Elapsed time: ", cost)
    f(0)

def f(x):
    # Set breakpoint here
    x *= 2
    return x + 1

test()
```

results in just a few calls to dispatch_line, compared to over 1 million for the current bdb. Fantastic!
|
Hi @brandtbucher @markshannon, I'd like to know if we can move this forward. As you can tell, the changes to pdb are minimal (so other debuggers derived from bdb can either keep all the same code and only use sys.settrace, or change a little bit to enjoy both backends). We basically created another optional path to use

The documentation is not done yet, but it should be relatively simple - the interfaces are mostly kept (except for the new

I really want to land this for 3.14 - it will be enabled by default for
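Under the interface described here, opting a bdb-derived debugger into the new backend could look roughly like the sketch below. It assumes the backend parameter from this PR (which shipped in Python 3.14; earlier versions only have the sys.settrace path), and MyDebugger is a made-up subclass for illustration:

```python
import bdb
import sys

class MyDebugger(bdb.Bdb):
    """Minimal subclass; a real debugger would prompt the user here."""
    def user_line(self, frame):
        print("stopped at line", frame.f_lineno)
        self.set_continue()

# The backend keyword is the opt-in proposed by this PR; on older
# versions the class falls back to the sys.settrace behavior.
if sys.version_info >= (3, 14):
    db = MyDebugger(backend="monitoring")
else:
    db = MyDebugger()
```

Everything else (user_line, breakpoints, set_continue, and friends) keeps the same interface, which is the point of the "most conservative attempt" framing.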
|
@gaogaotiantian Any chance of providing an option to enable the tracing of other threads when using monitoring? It could be just an option in _MonitoringTracer.
|
Like I said, it's on the todo list. For CPython, it's not simply a matter of implementing it and moving on - we need to consider other potential impacts and make sure everything else works well with the changes. If

Also, this PR is not even approved at this point. We can discuss features based on this PR after it gets merged.
|
Maybe I did not make myself clear. I did not ask for full multi-threaded support in this PR. Just an option like this:

```python
def callback_wrapper(func):
    import functools
    @functools.wraps(func)
    def wrapper(self, *args):
        if self.single_thread and (self._tracing_thread != threading.current_thread()):
            return
        try:
```
|
|
Yes, I'm aware of what you are asking for - that's not a trivial feature. It's a small change code-wise, but it brings multi-thread support to bdb, and that's a promise. For now, I'm only focused on how to merge this PR, with as little difference from the existing implementation as possible.
|
I found a bug in the code. If a breakpoint has a condition or ignore > 0, the code line is disabled and the breakpoint is not triggered when it should be. Example:

```python
def main():
    sum = 0
    for i in range(1000):
        sum = sum + i
    print(sum)

if __name__ == '__main__':
    main()
```

If you set a conditional breakpoint on line 4, with a condition "sum > 0" or "i == 100", the breakpoint will never fire. The same happens if you set the ignore property of the breakpoint to a value > 0. The reason is that although break_here returns False the first time, it could return True at a later point. Suggested fix:

```python
def dispatch_line(self, frame):
    """Invoke user function and return trace function for line event.

    If the debugger stops on the current line, invoke
    self.user_line(). Raise BdbQuit if self.quitting is set.
    Return self.trace_dispatch to continue tracing in this scope.
    """
    if self.stop_here(frame) or self.break_here(frame):
        self.user_line(frame)
        if self.quitting:
            raise BdbQuit
    elif not self.get_break(frame.f_code.co_filename, frame.f_lineno):
        self.disable_current_event()
    return self.trace_dispatch
```
|
You are right, this is a bug. Fixed with regression tests. |



This is the most conservative attempt ever to utilize `sys.monitoring` in `bdb` and `pdb`.

Highlights:

- Passes `test_pdb` and `test_bdb` with the new backend.
- `bdb` will still default to `sys.settrace`, which keeps all the old behavior. Users can opt in to the new `sys.monitoring` backend, and the interface is still the same, even for `trace_dispatch` (that's how `test_bdb` passes).
- A new interface on `bdb` where the user can disable certain events to improve performance.
- `pdb.Pdb` will use `sys.settrace` by default too, and is configurable with `pdb.Pdb.DEFAULT_BACKEND`.
- The `pdb` CLI and `breakpoint()` use the `monitoring` backend, and there is no noticeable difference I can observe at this point.

Solution:

Basically, I mimicked the behavior of `sys.settrace` with `sys.monitoring` to keep the old behavior as much as possible. But I had the chance to use the new API of `sys.monitoring` to disable certain events.

Performance:

It's not as impressive as the original proposal, but for the following code:

On my laptop, without a debugger, it takes 0.025s. With the new pdb attached (`b f` then `c`), it takes the same amount of time; with Python 3.12 pdb, it takes 1.04s (4100%+ overhead). The performance improvement is significant, to say the least.

And as you can tell from the diff, the actual changes to `pdb` are minimal - just change `sys.settrace(tracefunc)` to `self.start_trace(tracefunc)` and `sys.settrace(None)` to `self.stop_trace()`. That's what debugger developers need to do to onboard.

`sys.monitoring` for pdb/bdb #120144