Description
Bug report
As discussed in #94242, the CPython interpreter detects when WaitForMultipleObjects is called with too many handles and raises a ValueError.
The problem is that this detection comes too late. The exception is raised and a message is printed, but the worker processes remain hung. If output is not being displayed, the hang is inexplicable. Testing with Python 3.8.10 (which has a slightly different limit, but should otherwise behave the same) gives me this output before hanging:
Exception in thread Thread-1:
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
0
1
4
9
16
25
36
49
64
81
400
27028
[12596, 29932, 22420, 28684]
Traceback (most recent call last):
  File "c:\src\depot_tools\bootstrap-2@3_8_10_chromium_23_bin\python3\bin\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "c:\src\depot_tools\bootstrap-2@3_8_10_chromium_23_bin\python3\bin\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "c:\src\depot_tools\bootstrap-2@3_8_10_chromium_23_bin\python3\bin\lib\multiprocessing\pool.py", line 519, in _handle_workers
    cls._wait_for_updates(current_sentinels, change_notifier)
  File "c:\src\depot_tools\bootstrap-2@3_8_10_chromium_23_bin\python3\bin\lib\multiprocessing\pool.py", line 499, in _wait_for_updates
    wait(sentinels, timeout=timeout)
  File "c:\src\depot_tools\bootstrap-2@3_8_10_chromium_23_bin\python3\bin\lib\multiprocessing\connection.py", line 879, in wait
    ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
  File "c:\src\depot_tools\bootstrap-2@3_8_10_chromium_23_bin\python3\bin\lib\multiprocessing\connection.py", line 811, in _exhaustive_wait
    res = _winapi.WaitForMultipleObjects(L, False, timeout)
ValueError: need at most 63 handles, got a sequence of length 65
None
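For what it's worth, the 65 in the error message seems to add up as the 63 worker sentinels plus two internal pool handles (the result-queue reader and the change-notifier reader that pool.py adds to the sentinel list). This breakdown is an assumption from reading the 3.8 source, not something I have verified on every version:

```python
# Assumed breakdown of the 65 handles (from reading CPython's pool.py):
WORKER_SENTINELS = 63   # one sentinel handle per worker process
INTERNAL_HANDLES = 2    # outqueue reader + change-notifier reader
WAIT_LIMIT = 63         # per-call cap enforced by _winapi.WaitForMultipleObjects

total = WORKER_SENTINELS + INTERNAL_HANDLES
print(total)               # 65, matching the ValueError message
print(total > WAIT_LIMIT)  # True: the wait can never be issued in one call
```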
This is the test code that I used:
from multiprocessing import Pool, TimeoutError
import time
import os

def f(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=63)  # start 63 worker processes - enough to exceed the handle limit

    # print "[0, 1, 4,..., 81]"
    print(pool.map(f, range(10)))

    # print same numbers in arbitrary order
    for i in pool.imap_unordered(f, range(10)):
        print(i)

    # evaluate "f(20)" asynchronously
    res = pool.apply_async(f, (20,))  # runs in *only* one process
    print(res.get(timeout=1))  # prints "400"

    # evaluate "os.getpid()" asynchronously
    res = pool.apply_async(os.getpid, ())  # runs in *only* one process
    print(res.get(timeout=1))  # prints the PID of that process

    # launching multiple evaluations asynchronously *may* use more processes
    multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
    print([res.get(timeout=1) for res in multiple_results])

    # make a single worker sleep for 1 sec
    res = pool.apply_async(time.sleep, (1,))
    try:
        print(res.get(timeout=3))
    except TimeoutError:
        print("We lacked patience and got a multiprocessing.TimeoutError")
I think the Pool constructor should raise an exception when passed too large a process count (61, 62, or 63, depending on the Python version) so that the problem is understood immediately, instead of spawning many processes and then hanging.
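Until something like that lands in the constructor, here is a minimal sketch of the kind of up-front check I have in mind. The `safe_pool` helper and the cap of 61 are my own hypothetical names and values (64 `MAXIMUM_WAIT_OBJECTS` minus a margin for the handles the pool reserves internally), not anything in the stdlib:

```python
import sys
from multiprocessing import Pool

def f(x):
    return x * x

# Hypothetical guard (my own helper, not stdlib): raise *before* spawning
# workers when the count could overflow WaitForMultipleObjects on Windows.
# 61 is an assumed safe cap, leaving room for the pool's internal handles.
_WINDOWS_MAX_POOL_WORKERS = 61

def safe_pool(processes):
    if sys.platform == "win32" and processes > _WINDOWS_MAX_POOL_WORKERS:
        raise ValueError(
            f"processes={processes} exceeds the Windows wait-handle cap "
            f"of {_WINDOWS_MAX_POOL_WORKERS}; the pool would hang")
    return Pool(processes=processes)

if __name__ == "__main__":
    with safe_pool(4) as pool:  # fine on every platform
        print(pool.map(f, range(10)))
    # safe_pool(63) would raise immediately on Windows instead of hanging
```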
Your environment
Python 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)] on win32
Windows 10

