Simple and nice and blocking way to get an exclusive lock:
SELECT pg_advisory_lock(hashtext(lock_id));
Is there a way to fail if more than X attemptors are trying to get the lock? It would be a nice feature to just error out.
asked Mar 9 at 2:21 by Alexander Mills

Answers
"It should be enough to wrap pg_advisory_lock() with a query checking the current state of the queue in pg_locks."

This demonstrates that it's not, unless you lock the queue spots too.
As anticipated by @Laurenz Albe (this also happened to be one of the few parts of your now-removed, AI-generated idea that made sense), you need to add another layer of locking to avoid situations where multiple clients check the queue length at the same time:
demo at db<>fiddle
create function public.pg_advisory_lock_with_limit(bigint, smallint)
returns void as $f$
begin
  perform pg_advisory_lock(hashtext('meta_'||$1));
  if $2 <= (select count(*) from pg_locks
            where locktype='advisory'
              and $1 = ((classid::bigint << 32) | objid::bigint))
  then perform pg_advisory_unlock(hashtext('meta_'||$1));
       raise '%', format('Lock queue for key %L reached limit of %L.', $1, $2);
  else perform pg_advisory_unlock(hashtext('meta_'||$1)), pg_advisory_lock($1);
  --------------------------------------------------------^
  --That's where you get the race condition.
  --Unlocking the `meta_` lock immediately resumes the next agent in the queue.
  --That agent's queue-length check can complete faster than this one's
  --`pg_advisory_lock($1)`, making the queue look unchanged.
  --Even agents following the next one might not yet see the locks
  --that were just acquired by those that came earlier, leading to a violation.
  end if;
end $f$ language plpgsql;
To test this, you can use dblink to deploy concurrent clients requesting the same lock:
create extension dblink;
select dblink_connect('client_'||generate_series(1,3), '');
--note the ::smallint cast: a bare 3 is an integer, and integer does not
--implicitly cast to smallint during function overload resolution
select dblink_send_query(
        'client_'||n
      , format('/*client_%s*/
begin;
select pg_advisory_lock_with_limit(999888777, 3::smallint);', n))
from generate_series(1,3) as n;
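To see how far each client got, and to tidy up afterwards, dblink offers status and disconnect calls (a sketch; collecting the actual results via dblink_get_result is omitted here):

--which clients are still blocked inside their query?
select 'client_'||n as conn
     , dblink_is_busy('client_'||n) = 1 as still_running
from generate_series(1,3) as n;
--once done, close the extra connections:
select dblink_disconnect('client_'||n) from generate_series(1,3) as n;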
One gets the lock, the other two queue up (under more load, that's not necessarily the case):
select pid = pg_backend_pid() as is_me
     , (l.classid::bigint << 32) | l.objid::bigint as lock_key
     , l.granted
     , l.waitstart
     , pid
     , left(query, 12) as client
     , p.state
     , p.wait_event
     , l.mode
from pg_stat_activity as p
join pg_locks as l using(pid)
where l.locktype = 'advisory';
| is_me | lock_key  | granted | waitstart                     | pid | client       | state               | wait_event | mode          |
|-------|-----------|---------|-------------------------------|-----|--------------|---------------------|------------|---------------|
| f     | 999888777 | f       | 2025-03-10 19:23:10.316974+00 | 785 | /*client_1*/ | active              | advisory   | ExclusiveLock |
| f     | 999888777 | t       | null                          | 786 | /*client_2*/ | idle in transaction | ClientRead | ExclusiveLock |
| f     | 999888777 | f       | 2025-03-10 19:23:10.316985+00 | 787 | /*client_3*/ | active              | advisory   | ExclusiveLock |
Now if you try to be the 4th in that queue, it fails (as intended):
select pg_advisory_lock_with_limit(999888777, 3::smallint);

ERROR: Lock queue for key '999888777' reached limit of '3'.
CONTEXT: PL/pgSQL function pg_advisory_lock_with_limit(bigint,smallint) line 8 at RAISE
As demonstrated here, counting sessions awaiting a lock in pg_locks is tricky to do without leaving a window of time in which two concurrent agents may see the same count and both queue up, resulting in one of them exceeding the imposed limit.
The example below relies on lockable queue spots. Since the hashtext() you planned to use returns an int, not a bigint, you can use the two-integer pg_advisory_lock(int,int) variant instead of pg_advisory_lock(bigint). The first int can indicate which target key the queue is for, and the second the actual locked spot in the queue. For the main lock, cast the main key to ::bigint and use the single-argument variant.
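For reference, this is how the two variants surface in pg_locks (the mapping below is documented PostgreSQL behavior, shown here with the demo key 999888777):

select pg_advisory_lock(999888777::bigint); --classid=0, objid=999888777, objsubid=1
select pg_try_advisory_lock(999888777, 4);  --classid=999888777, objid=4, objsubid=2
select pg_advisory_unlock(999888777::bigint)
     , pg_advisory_unlock(999888777, 4);    --tidy up both locks again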
demo at db<>fiddle
create function pg_advisory_lock_with_limit(int, smallint)
returns void as $f$
declare
  queue_length int;
  selected_queue_spot int;
  did_get_queue_spot bool;
begin
  --mimic `pg_advisory_lock()`'s stacking of already held locks:
  if pg_advisory_lock_is_mine($1) then
    perform pg_advisory_lock($1::bigint);
    return;
  end if;
  --expect a single attempt but be prepared to retry during high traffic:
  loop
    with available_queue_spots as
      ( select generate_series(1,$2) as objid
        except select pg_advisory_lock_queue($1) )
    , queued_up as
      ( select $2-count(*)::int as queue_length
             , any_value(objid) as selected_queue_spot
        from available_queue_spots )
    select q.queue_length
         , q.selected_queue_spot
         , pg_try_advisory_lock($1, q.selected_queue_spot) as did_get_queue_spot
    from queued_up as q
    into queue_length
       , selected_queue_spot
       , did_get_queue_spot;
    if queue_length >= $2 then
      perform pg_advisory_unlock($1, selected_queue_spot);
      raise '%', format('Lock queue for key %L reached limit of %L.', $1, $2);
    elsif did_get_queue_spot then
      perform pg_advisory_lock($1::bigint);
      perform pg_advisory_unlock($1, selected_queue_spot);
      return;
    end if;
  end loop;
end $f$ language plpgsql;
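A quick single-session sanity check of the function above (this assumes PostgreSQL 16+, since the function uses any_value(); note the explicit smallint cast for overload resolution):

select pg_advisory_lock_with_limit(999888777, 3::smallint); --takes a spot, then the main lock
select pg_advisory_lock_with_limit(999888777, 3::smallint); --stacks: the key is already mine
select pg_advisory_unlock(999888777::bigint);               --pops one stacked hold
select pg_advisory_unlock(999888777::bigint);               --now fully released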
The two support functions:
create function pg_advisory_lock_is_mine(int) returns boolean return
  exists(select from pg_locks as l
         join pg_database as d on l.database=d.oid
         where d.datname=current_database()--ignore other dbs' locks
           and l.locktype='advisory'
           and granted and pid=pg_backend_pid()--means I'm holding it
           and (l.classid::bigint << 32) | l.objid::bigint = $1::bigint
           and objsubid<>2);--the main lock itself, not a queue spot
create function pg_advisory_lock_queue(int) returns setof int as $f$
  select objid::int as objid
  from pg_locks as l
  join pg_database as d on l.database=d.oid
  where d.datname=current_database()--no conflict with other dbs' locks
    and l.locktype='advisory' and objsubid=2 and classid=$1
    and pid<>pg_backend_pid();--hide already held, those must stack
$f$ language sql;
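Quick checks of the two helpers from a fresh session (illustrative calls, nothing held yet):

select pg_advisory_lock_is_mine(999888777);      --f: this backend holds nothing
select * from pg_advisory_lock_queue(999888777); --queue spots currently held by others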
pg_locks now shows both the lock queue and, separately, all occupied spots in it. Note that those are simply spot numbers; the actual order is still dictated by waitstart. Also, the second parameter dictates the queue length excluding the current lock holder.
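A snapshot like the one below can be produced with a query along these lines (a reconstruction, since the demo's exact query isn't shown; extracting the client label from the query text is an assumption):

select case when l.objsubid = 2 then l.classid::bigint
            else (l.classid::bigint << 32) | l.objid::bigint end as lock_key
     , case when l.objsubid = 2 then l.objid::int end as spot_in_queue
     , l.granted
     , l.waitstart
     , l.pid
     , substring(p.query from 'client_\d+') as client
     , p.wait_event
     , l.mode
from pg_locks as l
join pg_stat_activity as p using(pid)
where l.locktype = 'advisory'
order by spot_in_queue nulls first, l.waitstart;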
| lock_key  | spot_in_queue | granted | waitstart               | pid | client   | wait_event | mode          |
|-----------|---------------|---------|-------------------------|-----|----------|------------|---------------|
| 999888777 | null          | t       | null                    | 796 | client_0 | null       | ExclusiveLock |
| 999888777 | null          | f       | 2025-03-13 10:46:10.634 | 803 | client_7 | advisory   | ExclusiveLock |
| 999888777 | null          | f       | 2025-03-13 10:46:10.639 | 805 | client_9 | advisory   | ExclusiveLock |
| 999888777 | null          | f       | 2025-03-13 10:46:10.640 | 799 | client_3 | advisory   | ExclusiveLock |
| 999888777 | null          | f       | 2025-03-13 10:46:10.645 | 804 | client_8 | advisory   | ExclusiveLock |
| 999888777 | 1             | t       | null                    | 803 | client_7 | advisory   | ExclusiveLock |
| 999888777 | 2             | t       | null                    | 799 | client_3 | advisory   | ExclusiveLock |
| 999888777 | 3             | t       | null                    | 804 | client_8 | advisory   | ExclusiveLock |
| 999888777 | 4             | t       | null                    | 805 | client_9 | advisory   | ExclusiveLock |
An alternative would be to swap the arbitrary, generated spot numbers for the pid of the last session in the queue, forming an actual queue in the shape of a one-way linked list. While that sounds nice, I'd expect it to increase congestion: all concurrent requests would see the same tail, only one would manage to queue up behind it (lock it), and the rest would fail and have to keep re-checking the new tail and retrying to lock it.
The above instead makes them claim an arbitrary spot from the whole set of available ones, with no preference, in the hope of minimising overlapping requests for the same spot.
It's tempting to try to simplify this by maintaining lock queue lengths with upserts on a tally table. The problem is that this requires concurrent workers to commit their requests against such a queue table for others to be able to take them into account. That complicates usage and makes transaction-level locking plain impossible.
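To make the visibility problem concrete, here's a sketch of that rejected idea (the lock_queue_tally table and its columns are made up for illustration):

create table lock_queue_tally(key bigint primary key, waiting int not null);
--each worker would bump the tally while respecting a limit of 3:
insert into lock_queue_tally as t (key, waiting)
values (999888777, 1)
on conflict (key) do update set waiting = t.waiting + 1
where t.waiting < 3;
--others can only take this bump into account after it's committed,
--so the tally can't reflect locks held within still-open transactions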
Meanwhile, the example above takes its locks without the need to commit. Repurposing it for transaction-level locking just means swapping the underlying pg_advisory_lock() for pg_advisory_xact_lock() and pg_try_advisory_lock() for pg_try_advisory_xact_lock(). You can also do a similar swap to add shared locks (pg_advisory_lock_shared()), with one limit on how many sessions can share a lock at the same time, and another restricting how many can queue up once they reach the first limit.
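A minimal illustration of those transaction-scoped variants (documented behavior; note that transaction-level advisory locks cannot be released early, so a queue spot would stay held until commit):

begin;
select pg_try_advisory_xact_lock(999888777, 1);  --t: queue spot held by this transaction
select pg_advisory_xact_lock(999888777::bigint); --blocks until the main key is free
commit; --both locks are released here automatically; there is no xact-level unlock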