
I have a worker that runs periodically and executes a function in Postgres. This function reads from my job table and gets the IDs of records that it needs to process in table foo. The order of processing does not matter. Recently I decided to increase the number of workers so that they run concurrently.

-- table whose rows need processing
create table foo (id integer);

-- source table of IDs that need to be processed
create table job (foo_id integer);
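
For illustration, the function is roughly shaped like this (a simplified sketch; the name and the batching parameter are made up, and the real processing is more involved):

-- simplified sketch of the current function (hypothetical name)
create or replace function get_jobs(batch_size integer default 100)
returns setof integer
language sql
as $$
    -- read a batch of IDs that still need processing;
    -- the worker then processes the corresponding rows in foo
    select foo_id from job limit batch_size;
$$;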

I'm concerned that this architecture is not going to work at all: if two workers call my Postgres function, those calls execute in distinct transactions, and any changes they make to the job table are not visible to the other call until commit. The problem is that functions cannot commit.

The first thing that came to mind for handling this is the following (a rough sketch follows the list):

  • Turn the functions into procedures.
  • Delete from the job table, returning the IDs to process into a temp table.
  • Commit.
  • Process the jobs as normal.
  • Commit.
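
Something along these lines (a rough, untested sketch; the procedure and temp-table names are hypothetical):

-- hypothetical sketch of the procedure approach
create or replace procedure claim_and_process_jobs(batch_size integer default 100)
language plpgsql
as $$
begin
    -- claim a batch: delete the rows from job and stash the IDs locally
    create temp table if not exists claimed_ids (foo_id integer);
    truncate claimed_ids;

    with deleted as (
        delete from job
        where foo_id in (select foo_id from job limit batch_size)
        returning foo_id
    )
    insert into claimed_ids (foo_id)
    select foo_id from deleted;

    commit;  -- make the claim visible to the other workers

    -- process the claimed rows in foo (placeholder for the real work)
    perform f.id from foo f join claimed_ids c on c.foo_id = f.id;

    commit;  -- finish the batch
end;
$$;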

One vulnerability I see here right off the bat is that if the worker crashes somewhere during processing, I lose those IDs and never process them again.

Would SELECT ... FOR UPDATE SKIP LOCKED be an acceptable alternative here to avoid collisions in the context of a Postgres function?
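
Concretely, I'm imagining something like this (hypothetical function name, untested): each worker deletes only rows it can lock, and rows already locked by another worker are skipped rather than blocked on, so if a worker crashes before commit its delete is rolled back and the rows stay in job.

-- sketch of the SKIP LOCKED idea (assumed pattern, not tested)
create or replace function claim_jobs(batch_size integer default 100)
returns setof integer
language sql
as $$
    delete from job
    where foo_id in (
        select foo_id
        from job
        limit batch_size
        for update skip locked
    )
    returning foo_id;
$$;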
