Description
Feature or enhancement
I propose to parallelize deepfreezing to speed up compile time for core devs.
Pitch
deepfreezing / deepfreeze.py is a choke point for building CPython. Deepfreezing happens very late in the build process and takes a considerable amount of time, especially for pydebug builds. Deepfreezing runs as a single process and does not benefit from multiple cores, and every build that touches a core file results in re-freezing. On my system, with ccache populated, make spends between 60 and 75% of its time in deepfreeze.py for debug builds. For regular builds it is about 25-30% of total build time.
$ ./configure -C --with-pydebug
$ make clean
$ time make
real 0m9,339s
user 0m13,630s
sys 0m2,657s
$ rm Python/deepfreeze/deepfreeze.c
$ time make Python/deepfreeze/deepfreeze.c
real 0m7,267s
user 0m7,033s
sys 0m0,204s
The process can be parallelized. One possible solution is to split the work: for each input module, create a make rule that outputs an .inc file, e.g. Python/deepfreeze/importlib._bootstrap.inc. Another rule then creates deepfreeze.c with #include "importlib._bootstrap.inc", etc. This keeps all code in one object file.
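As a rough sketch of the split-and-aggregate idea, the following shell fragment fans out one job per module and then generates an aggregator deepfreeze.c. The module list and the per-module "freeze" step are stand-ins, not the real deepfreeze.py invocation; in the actual build the parallelism would come from make rules and -j rather than shell job control.

```shell
set -e
outdir=$(mktemp -d)
# Hypothetical module list; the real list comes from the Makefile.
modules="importlib._bootstrap importlib._bootstrap_external zipimport"

for mod in $modules; do
    (
        # Placeholder for: python deepfreeze.py ... -o "$outdir/$mod.inc"
        echo "/* frozen code for $mod */" > "$outdir/$mod.inc"
    ) &   # one background job per module, so cores are used in parallel
done
wait      # all .inc files are ready before aggregation

# Aggregate into a single translation unit so everything still ends up
# in one object file, as proposed above.
{
    for mod in $modules; do
        printf '#include "%s.inc"\n' "$mod"
    done
} > "$outdir/deepfreeze.c"

cat "$outdir/deepfreeze.c"
```

In the Makefile version, each .inc would be its own target depending on its source module, so make -j schedules the freezing jobs concurrently and only re-freezes modules whose inputs changed.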

