A new tail-calling interpreter for significantly better interpreter performance #128563
Comments
Can you show what a typical tail-calling sequence looks like? Does it combine tail calling with a computed goto, as in this example from the protobuf blog post? `MUSTTAIL return op_table[op](ARGS);`
Mark gives a pretty good example here: faster-cpython/ideas#642 (comment)
Neat, thank you!
How much performance is attributed to …?
@diegorusso I did some experiments on WASI and Emscripten (which do not support …)
FYI, from the Clang docs: …
@WolframAlph I think that doesn't matter because we're only using this on …
Anyway, I pinged the GCC team at Arm and a ticket has been created to implement ….

This is what I was expecting. The tail call by itself is not enough (actually I was expecting similar performance to computed goto); you need … as well. Have you also tested it on AArch64?
Not yet. I want to test it on the Faster CPython build bot for macOS (which has clang-19, so it's a fair comparison), but I do not have access to it. If you could run some benchmarks for this I would be really grateful! If you want a quick-and-dirty check that it's working, try just running the pystones benchmark. I got a 25% speedup with tail calls plus LTO and PGO (make sure you enable those, because they contribute about half the perf win for some reason). https://gist.github.com/Fidget-Spinner/e7bf204bf605680b0fc1540fe3777acf And pass …
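For reference, a PGO plus (Thin)LTO build of the sort described here can be configured with CPython's standard configure flags (the `clang-19` binary name is illustrative; use whatever your clang is called):

```shell
# PGO + ThinLTO build; both are reported above to contribute
# a large share of the tail-call speedup.
CC=clang-19 ./configure --enable-optimizations --with-lto=thin
make -j"$(nproc)"
```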
@Fidget-Spinner If I understand correctly, the whole trick is: …

Do I get it right?
Yeah. I also suspect most of the speedup is there because the current interpreter loop is too big to optimize properly, so all the pre-existing compilers don't perform well on it. For example, PGO gives this roughly another 10% speedup over just -O3, and LTO roughly another 10% over PGO and -O3. All else being equal, PGO and LTO should optimize the old and new interpreters similarly, but I guess the new interpreter is easier to optimize, so it produces better-quality code.
Makes sense. By splitting the cases into separate functions, the compiler can optimize each of them individually rather than one giant chunk, I assume. The same was mentioned in the protobuf article you linked.
Here are the results for cross-checking bm_pystones with @Fidget-Spinner on macOS AArch64:

Baseline: 2228e92
Tail-calling: https://github.com/Fidget-Spinner/cpython/tree/tail-call

cc @diegorusso
According to Donghee's comment above, Pystones is nearly 50% faster on macOS AArch64, versus 25% faster on my Ubuntu AMD64 machine. I suspect it might be because AArch64 has more registers. However, who knows at this point? :)
Reading https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118328: I have not tested on GCC trunk, so it is pure speculation on my part that perf is bad there. I can try GCC eventually after the PR lands and we can test it from there. However, testing with Clang with just `musttail` and no `preserve_none`, the performance was quite bad.




Feature or enhancement
Proposal
Experimental branch: main...Fidget-Spinner:cpython:tail-call
Prior discussion at: faster-cpython/ideas#642
I propose adding a tail-calling interpreter to CPython for significantly better performance on compilers that support it.
This idea is not new, and has been implemented by:
CPython currently has a few interpreters:
The tail-calling interpreter will be the 4th that coexists with the rest. This means no compatibility concerns.
Performance
My preliminary benchmarks suggest excellent performance improvements: a 10% geometric-mean speedup in pyperformance, with up to 40% speedups in Python-heavy benchmarks: https://gist.github.com/Fidget-Spinner/497c664eef389622d146d632990b0d21. These benchmarks were performed with clang-19 on both main and my branch, with ThinLTO and PGO, on AMD64 Ubuntu 22.04. PGO seems especially crucial for the speedups based on my testing. For those outside of CPython development: a 10% speedup is roughly equal to two minor CPython releases' worth of improvements. For example, CPython 3.12 sped up by roughly 5%.
The speedup is so significant that, if accepted, the new interpreter will be faster than the current JIT compiler.
Drawbacks
I will address maintainability by using the interpreter generator that was introduced as part of CPython 3.12. This generator will allow us to automatically generate most of the infrastructure needed for this change. Preliminary estimates suggest the new generator will be only about 200 lines of Python code, most of which is conceptually shared with the other generators.
Portability will fix itself over time (see the next section).
Portability and Precedent
At the moment, this is only supported by clang-19 for AArch64 and AMD64, with partial support on clang-18 and gcc-next, but likely bad performance on those. The reason is that we need both the `__attribute__((musttail))` and `__attribute__((preserve_none))` attributes for good performance. GCC only has `gnu::musttail` but not `preserve_none`. There is prior precedent for adding compiler-specific optimizations to CPython: see the original computed-goto issue by Antoine Pitrou, https://bugs.python.org/issue4753. At the time, computed gotos were a new feature only on GCC and not on Clang, but we added them anyway, and a few years later Clang introduced the feature too. The key point is that GCC will likely catch up eventually and add these attributes.
EDIT: Clarified that bad perf on GCC is only likely. Reading https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118328, I have not tested on GCC trunk; that perf is bad there is pure speculation. I can try GCC eventually after the PR lands and we can test it from there. However, testing with Clang with just `musttail` and no `preserve_none`, the performance was quite bad.

Implementation plan
Convert `_PyEval_EvalFrameDefault` to use function calls corresponding to the common existing labels. This will need careful benchmarking. E.g. … becomes …
Worries about new bugs
Computed goto is well-tested, so it is fair to worry that the new interpreter could be buggy.
I doubt logic bugs will be the primary concern, since we are using the interpreter generator: the base interpreter and the new one share common code, so if the new one has a logic bug, the base interpreter likely has it too.
The other concern is compiler bugs. However, to allay such fears, I point out that the GHC calling convention (the thing behind `preserve_none`) has been around for 5 years [1], and `musttail` has been around for almost 4 years [2].

cc @pitrou as the original implementer of computed gotos, and @markshannon
Future Use
Kumar Aditya pointed out this could be used in regex and pickle as well. Likewise, Neil Schemenauer pointed out marshal and pickle might benefit from this for faster Python startup.
Has this already been discussed elsewhere?
https://discuss.python.org/t/a-new-tail-calling-interpreter-for-significantly-better-interpreter-performance/76315