Multiprocessing.Process with spawn vs Subprocess.Popen

I have a Python binding of a C++ library with a class that can only be initialized once per process (unfortunately, due to legacy C++ code).

To work around this, I created a wrapper that runs the class in a child process using multiprocessing.Process:

Simplified code example:

import multiprocessing
import pickle

class ProcessManager:
    def __init__(self):
        self._parent_conn, self._child_conn = multiprocessing.Pipe()
        self._ready_event = multiprocessing.Event()
        self._process = multiprocessing.Process(target=self.worker, args=(self._child_conn, self._ready_event))
        self._process.start()
        self._ready_event.wait()  # block until MyClass has been constructed in the child

    def close(self):
        """
        Closes the sub-process and the communication pipes
        """
        if hasattr(self, '_process'):
            if self._process.is_alive():
                self._parent_conn.send('exit')
                self._process.join()
            self._parent_conn.close()
            self._child_conn.close()

    def __del__(self):
        self.close()

    @staticmethod
    def worker(pipe, ready_event):
        from pyLib import MyClass
        obj = MyClass()
        ready_event.set()
        while True:
            try:
                msg = pipe.recv()
            except EOFError:
                break
            if msg == 'exit':
                break
            method, args, kwargs = msg
            try:
                result = getattr(obj, method)(*args, **kwargs)
                pipe.send(pickle.dumps(('ok', result)))
            except Exception as e:
                pipe.send(pickle.dumps(('error', e)))

This worked until I tested new functionality of the bound MyClass that calls code from an external .so.

As it turns out, the external .so depends on another .so, a different version of which is already loaded in the parent process because a different Python package uses it (this can't be avoided).

I read up on this and learned that multiprocessing.Process uses fork by default on Unix, so all the .so's loaded by the parent also exist in the child's memory space. As a result, the .so I'm calling looks for a (mangled) symbol that doesn't exist in the version of the dependency already loaded by the parent.
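A quick way to check this is to list the shared objects mapped into each process (a Linux-specific sketch; dump_loaded_libs is just an illustrative helper and assumes /proc/self/maps is readable):

import multiprocessing

def dump_loaded_libs(tag):
    # List the shared objects currently mapped into this process (Linux only)
    with open('/proc/self/maps') as f:
        libs = sorted({line.split()[-1] for line in f if '.so' in line})
    print(tag, *libs, sep='\n  ')

if __name__ == '__main__':
    dump_loaded_libs('parent:')
    # With the default fork start method, the child reports the same libraries as the parent
    p = multiprocessing.Process(target=dump_loaded_libs, args=('forked child:',))
    p.start()
    p.join()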

I realize now that I need this process to be isolated from the parent interpreter's environment, and I'm wondering what the recommended approach is for solving this:

  1. Use subprocess.Popen instead and put the entire worker logic in a separate Python script? From what I know, this approach is more geared towards running non-Python programs.
  2. Use multiprocessing with set_start_method('spawn')? I have reservations here because, as I understand it, this method can only be called once per process, and the ProcessManager I'm working on will be part of a bigger package, so I don't want to impose that constraint (a context-based variant is sketched below).
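For reference, multiprocessing.get_context('spawn') returns a context whose Process, Pipe and Event use the spawn start method without touching the process-wide default, so the once-per-process limitation of set_start_method does not apply. A minimal sketch of how the __init__ above could use it, assuming the module defining ProcessManager is importable in the spawned child and does not itself import the conflicting package at module level (whether that holds determines if this also solves the isolation problem):

import multiprocessing

class ProcessManager:
    def __init__(self):
        ctx = multiprocessing.get_context('spawn')  # local context; the global start method stays untouched
        self._parent_conn, self._child_conn = ctx.Pipe()
        self._ready_event = ctx.Event()
        self._process = ctx.Process(target=self.worker, args=(self._child_conn, self._ready_event))
        self._process.start()
        self._ready_event.wait()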

Thanks.

asked Mar 4 at 8:32 by lielb
  • How is this second external library loaded? Does the Python interpreter itself load it? Is it loaded because the classes you are pickling need it? – Dunes Commented Mar 4 at 9:12
  • It is loaded as soon as another package is imported (I can see it is present right after the import statement). – lielb Commented Mar 4 at 9:26
  • If anything from this package is part of the data you send to the child process, then you cannot use pickle. You'll have to use a different serialisation method (e.g. json). Otherwise, you can structure your code such that the ProcessManager does not require the import. But in both cases you should be able to use spawn. I cannot offer much more advice than that without an MRE. – Dunes Commented Mar 4 at 10:29
  • Yeah, it's not; it's a combination of strings and dicts. – lielb Commented Mar 4 at 12:25

1 Answer


set_start_method('spawn') won't keep the already-loaded shared libraries out of the picture: the spawned interpreter re-imports your __main__ module and whatever modules are needed to unpickle the process target, so the conflicting library is likely to end up loaded again (those libraries are generally needed so that your child process won't crash immediately). Compared to fork, the main difference is that the child no longer inherits the parent's memory, file handles and sockets.

It looks like you should either start a whole new Python interpreter process using subprocess.Popen, or create the child process before any of the conflicting .so libraries get loaded in the parent.
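A minimal sketch of the Popen approach, assuming a dedicated worker script (my_worker.py is an illustrative name) that imports only pyLib and exchanges JSON lines over stdin/stdout; since the script is started as a fresh interpreter, none of the parent's other packages (or their shared libraries) are loaded in it. Results have to be JSON-serializable, which fits the strings-and-dicts data mentioned in the comments.

# my_worker.py -- runs in its own interpreter and imports only what MyClass needs
import sys
import json
from pyLib import MyClass

def main():
    obj = MyClass()
    print(json.dumps({'status': 'ready'}), flush=True)  # readiness signal for the parent
    for line in sys.stdin:  # one JSON request per line
        req = json.loads(line)
        if req.get('cmd') == 'exit':
            break
        try:
            result = getattr(obj, req['method'])(*req.get('args', []), **req.get('kwargs', {}))
            print(json.dumps({'status': 'ok', 'result': result}), flush=True)
        except Exception as e:
            print(json.dumps({'status': 'error', 'message': str(e)}), flush=True)

if __name__ == '__main__':
    main()

On the parent side, subprocess.Popen replaces multiprocessing.Process ('do_work' below is an illustrative method name):

# parent side -- launches a clean interpreter, so nothing is inherited from the parent's memory
import sys
import json
import subprocess

proc = subprocess.Popen([sys.executable, 'my_worker.py'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
assert json.loads(proc.stdout.readline())['status'] == 'ready'

proc.stdin.write(json.dumps({'method': 'do_work', 'args': [1, 2]}) + '\n')
proc.stdin.flush()
print(json.loads(proc.stdout.readline()))

proc.stdin.write(json.dumps({'cmd': 'exit'}) + '\n')
proc.stdin.flush()
proc.wait()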
