Is with concurrent.futures.Executor blocking?

Why

  • when using separate with statements, the 2nd executor is blocked until all tasks from the 1st executor are done
  • when using a single compound with statement, the 2nd executor can proceed while the 1st executor is still working

This is confusing because I thought executor.submit returns a future and does not block. It seems like the context manager is what blocks.

Is this true, and are there official references mentioning this behaviour of individual vs compound context managers?

Separate context managers

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def task(name, delay):
    print(f"{time.time():.3f} {name} started")
    time.sleep(delay)
    print(f"{time.time():.3f} {name} finished")

if __name__ == "__main__":
    print(f"{time.time():.3f} Main started")

    with ThreadPoolExecutor(max_workers=1) as thread_pool:
        thread_pool.submit(task, "Thread Task", 0.2)

    with ProcessPoolExecutor(max_workers=1) as process_pool:
        process_pool.submit(task, "Process Task", 0.1)

    print(f"{time.time():.3f} Main ended")

Output:

1743068624.365 Main started
1743068624.365 Thread Task started
1743068624.566 Thread Task finished
1743068624.571 Process Task started
1743068624.671 Process Task finished
1743068624.673 Main ended

Notice that Thread Task must finish before Process Task can start. I have run the code numerous times and never see the interleaved pattern shown below.

Single context manager

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def task(name, delay):
    print(f"{time.time():.3f} {name} started")
    time.sleep(delay)
    print(f"{time.time():.3f} {name} finished")

if __name__ == "__main__":
    print(f"{time.time():.3f} Main started")

    with ThreadPoolExecutor(max_workers=1) as thread_pool, ProcessPoolExecutor(max_workers=1) as process_pool:
        thread_pool.submit(task, "Thread Task", 0.2)
        process_pool.submit(task, "Process Task", 0.1)

    print(f"{time.time():.3f} Main ended")

Output:

1743068722.440 Main started
1743068722.441 Thread Task started
1743068722.443 Process Task started
1743068722.544 Process Task finished
1743068722.641 Thread Task finished
1743068722.641 Main ended

With a compound context manager, Process Task can start before Thread Task finishes, regardless of the delay ratio.

The delay of Process Task is deliberately shorter than that of Thread Task, to additionally show that with a compound context manager Process Task can even finish before Thread Task does.

Please point out if this interpretation is erroneous.

1 Answer

When you use a concurrent.futures.Executor instance as a context manager (e.g. with ThreadPoolExecutor() as executor:), exiting the block makes an implicit call to executor.shutdown(wait=True). This is documented under Executor.shutdown in the concurrent.futures documentation: using the with statement shuts down the executor as if shutdown were called with wait set to True.
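
In other words, each with block behaves roughly like the following (a simplified sketch; Executor.__exit__ just calls shutdown(wait=True)):

    from concurrent.futures import ThreadPoolExecutor

    thread_pool = ThreadPoolExecutor(max_workers=1)
    try:
        thread_pool.submit(task, "Thread Task", 0.2)  # returns a Future immediately
    finally:
        # __exit__ blocks here until every pending future has completed
        thread_pool.shutdown(wait=True)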

So when you have two with blocks that are not nested (i.e. they are coded one after the other), the second with block will not start executing until all tasks submitted to the first executor (its pending futures) have completed. In your second case, i.e.:

    with ThreadPoolExecutor(max_workers=1) as thread_pool, ProcessPoolExecutor(max_workers=1) as process_pool:
        thread_pool.submit(task, "Thread Task", 0.2)
        process_pool.submit(task, "Process Task", 0.1)

The above is more or less equivalent to:

    with ThreadPoolExecutor(max_workers=1) as thread_pool:
        with ProcessPoolExecutor(max_workers=1) as process_pool:
            thread_pool.submit(task, "Thread Task", 0.2)
            process_pool.submit(task, "Process Task", 0.1)

You are submitting tasks to both pools before either with block exits, so no implicit call to thread_pool.shutdown(wait=True) occurs before the call to process_pool.submit is made.
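
To confirm that submit itself returns immediately and that the wait happens at block exit, you can time the two steps separately (a minimal sketch):

    import time
    from concurrent.futures import ThreadPoolExecutor

    with ThreadPoolExecutor(max_workers=1) as pool:
        t0 = time.time()
        pool.submit(time.sleep, 1.0)                   # returns a Future at once
        print(f"submit took {time.time() - t0:.3f}s")  # ~0.000s
        t0 = time.time()
    # leaving the block implicitly called pool.shutdown(wait=True)
    print(f"block exit took {time.time() - t0:.3f}s")  # ~1.000s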
