I have a Python binding of a C++ library with a class that can only be initialized once per process (unfortunately, due to legacy C++ code).
To work around this, I created a subprocess wrapper around my class that runs it using multiprocessing.Process.
Simplified code example:
import multiprocessing
import pickle


class ProcessManager:
    def __init__(self):
        self._parent_conn, self._child_conn = multiprocessing.Pipe()
        self._ready_event = multiprocessing.Event()
        self._process = multiprocessing.Process(
            target=self.worker, args=(self._child_conn, self._ready_event)
        )
        self._process.start()
        self._ready_event.wait()

    def close(self):
        """
        Closes the sub-process and the communication pipes
        """
        if hasattr(self, '_process'):
            if self._process.is_alive():
                self._parent_conn.send('exit')
                self._process.join()
            self._parent_conn.close()
            self._child_conn.close()

    def __del__(self):
        self.close()

    @staticmethod
    def worker(pipe, ready_event):
        # MyClass can only be constructed once per process, so it is
        # imported and instantiated inside the child process only.
        from pyLib import MyClass
        obj = MyClass()
        ready_event.set()
        while True:
            try:
                msg = pipe.recv()
            except EOFError:
                break
            if msg == 'exit':
                break
            method, args, kwargs = msg
            try:
                result = getattr(obj, method)(*args, **kwargs)
                pipe.send(pickle.dumps(('ok', result)))
            except Exception as e:
                pipe.send(pickle.dumps(('error', e)))
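For reference, the parent side dispatches calls by sending (method, args, kwargs) tuples over the pipe; the call helper below is a minimal sketch of how that dispatch might look (it is an illustration of the protocol implied by the worker loop, not part of the original class):

    def call(self, method, *args, **kwargs):
        # Hypothetical parent-side helper: forward the call to the worker
        # process and unpack the pickled ('ok'/'error', payload) reply.
        self._parent_conn.send((method, args, kwargs))
        status, payload = pickle.loads(self._parent_conn.recv())
        if status == 'error':
            raise payload
        return payload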
This worked until I tested new functionality of the bound MyClass. This functionality calls code from an external .so.
As it turns out, that external .so relies on another .so, a different version of which is already loaded in the parent process because a different Python package uses it (this can't be avoided).
I read up on this and saw that multiprocessing.Process uses fork by default on Unix, so all the .so's loaded by the parent also exist in the child's memory space - which causes the .so I'm calling to look for a (mangled) symbol that doesn't exist in the version already loaded by the parent.
I realize now that I need this process to be isolated from the parent interpreter's environment, and I was wondering what the recommended approach is for solving this issue:
- Use subprocess.Popen instead and put the entire worker logic in a separate Python script? From what I know, this approach is more suited to non-Python programs.
- Use multiprocessing with set_start_method('spawn')? I have reservations here because, from my understanding, this method can only be called once per process, and the ProcessManager I'm working on will be part of a bigger package, so I don't want to impose that constraint (but see the per-context sketch below).
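As an aside on the second option: the call-once constraint applies to the global set_start_method(), but a start method can also be chosen per object via multiprocessing.get_context('spawn'), which leaves the global setting untouched. A minimal sketch of the constructor using such a context (assuming ProcessManager lives in an importable module, which spawn needs in order to pickle the worker reference):

import multiprocessing

class ProcessManager:
    def __init__(self):
        # The context is local to this object; it does not change the
        # global start method, so the rest of the package is unaffected.
        ctx = multiprocessing.get_context('spawn')
        self._parent_conn, self._child_conn = ctx.Pipe()
        self._ready_event = ctx.Event()
        self._process = ctx.Process(
            target=self.worker, args=(self._child_conn, self._ready_event)
        )
        self._process.start()
        self._ready_event.wait()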
Thanks.
asked Mar 4 at 8:32 by lielb

1 Answer
set_start_method('spawn') won't un-load any shared libraries that were already loaded; they are generally needed so that your child process won't crash immediately. It only closes the parent's file handles and sockets in the child process, so the child won't overwrite them unintentionally.

It looks like you should start a whole new Python interpreter process using subprocess.Popen, or create the child process before dynamically loading any of your .so libraries.
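To illustrate the Popen route the answer suggests, here is a minimal sketch (my illustration, not the answerer's code): the worker logic moves into its own script, worker.py (a name assumed here), and a fresh interpreter runs it, so no shared libraries from the parent are present in the child. Requests and replies are pickled over stdin/stdout (pickle streams are self-delimiting, but this assumes MyClass writes nothing to stdout itself).

# worker.py - runs in a brand-new interpreter, so it inherits no .so's
import pickle
import sys

from pyLib import MyClass

obj = MyClass()
out = sys.stdout.buffer
pickle.dump(('ready', None), out)
out.flush()

while True:
    try:
        msg = pickle.load(sys.stdin.buffer)
    except EOFError:
        break
    if msg == 'exit':
        break
    method, args, kwargs = msg
    try:
        pickle.dump(('ok', getattr(obj, method)(*args, **kwargs)), out)
    except Exception as e:
        pickle.dump(('error', e), out)
    out.flush()

# Parent side: launch the clean interpreter and talk to it over the pipes.
import pickle
import subprocess
import sys

proc = subprocess.Popen([sys.executable, 'worker.py'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
assert pickle.load(proc.stdout)[0] == 'ready'

def call(method, *args, **kwargs):
    pickle.dump((method, args, kwargs), proc.stdin)
    proc.stdin.flush()
    status, payload = pickle.load(proc.stdout)
    if status == 'error':
        raise payload
    return payload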
Comment from Dunes (Mar 4 at 10:29): ProcessManager does not require the import. But in both cases you should be able to use spawn. I cannot offer much more advice than that without an MRE.