I have two Firebase Functions that run hourly and process the same list of users. They share a batch document (this.batchId has the format MM-dd-yyyy-HH, so it is unique for each hour) and use transactions to coordinate processing. Each instance gets a batch of users after a specific ID (lastProcessedId) and updates the batch document with:

  • A new lastProcessedId
  • A global totalCount increment
  • Instance-specific counts and processed IDs

The global totalCount is always correct; however, the instance-specific fields sometimes show that both instances processed some of the same IDs (usually a batch of two). This conflicts with how I understand optimistic locking to work: why aren't the transactions preventing this overlap in processing? Any advice or insight would be appreciated.
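
For reference, the batch document has roughly this shape (a simplified sketch of my BatchDocument type, reconstructed from the fields the transaction below reads and writes; the optional markers are approximate):

interface BatchDocument {
  lastProcessedId: string | null;                     // cursor: last user ID handed out
  complete: boolean;                                  // set once the user list is exhausted
  totalCount?: number;                                // global count across all instances
  instance?: { [instanceId: string]: number };        // per-instance processed counts
  processedIds?: { [instanceId: string]: string[] };  // per-instance processed user IDs
}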

private async getNextBatchTransaction(): Promise<{ userDocs: QueryDocumentSnapshot<DocumentData>[] | null, needsCleanup: boolean }> {
  return this.firestore.runTransaction(async (transaction) => {
    const batchRef = this.firestore.collection("batch_sequence").doc(this.batchId);
    const batchDoc = await transaction.get(batchRef);

    const data = (batchDoc.exists ? batchDoc.data() : {
      lastProcessedId: null,
      complete: false,
    }) as BatchDocument;

    // Bail out if another instance already marked this hour's batch complete
    if (data.complete) {
      return { userDocs: null, needsCleanup: false };
    }

    let query = this.firestore
      .collection("users")
      .orderBy("__name__")
      .limit(this.batchSize);

    if (data.lastProcessedId) {
      query = query.startAfter(data.lastProcessedId);
    }

    const userSnapshot = await transaction.get(query);

    if (userSnapshot.empty) {
      transaction.set(
        batchRef,
        { complete: true },
        { merge: true }
      );
      return { userDocs: null, needsCleanup: false };
    }

    const batchLength = userSnapshot.docs.length;
    const lastDoc = userSnapshot.docs[batchLength - 1];
    const processedIds = userSnapshot.docs.map(doc => doc.id);
    
    transaction.set(
      batchRef,
      {
        lastProcessedId: lastDoc.id,
        totalCount: FieldValue.increment(batchLength),
        [`instance.${this.instanceId}`]: FieldValue.increment(batchLength),
        [`processedIds.${this.instanceId}`]: FieldValue.arrayUnion(...processedIds),
      },
      { merge: true }
    );

    return { userDocs: userSnapshot.docs, needsCleanup: false };
  });
}

I would expect Thread 1 to commit the transaction, updating 'lastProcessedId'. Thread 2, which started simultaneously, would see that lastProcessedId has been updated (from firestore.collection("batch_sequence").doc(this.batchId)) and consequently fail. It would then retry, this time grabbing the 'lastProcessedId' that Thread 1 set, and process the next batch of users. This cycle would repeat until all users are processed.
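
To make that expectation concrete, here is a stripped-down sketch of two concurrent transactions advancing a shared cursor (the claimNext helper and the coordination_demo collection are invented just for this illustration). With the coordination I expect, two callers should never claim the same value:

import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

// Each call reads the cursor inside a transaction, then advances it.
// If a concurrent transaction commits first, this one should block (or
// retry) and re-read, so two callers can never claim the same value.
async function claimNext(instanceId: string): Promise<number> {
  return db.runTransaction(async (tx) => {
    const ref = db.collection("coordination_demo").doc("cursor");
    const snap = await tx.get(ref);
    const next = ((snap.data()?.value as number | undefined) ?? 0) + 1;
    tx.set(ref, { value: next, lastClaimedBy: instanceId }, { merge: true });
    return next;
  });
}

// Two "instances" racing: expected output is [1, 2] or [2, 1], never [1, 1].
Promise.all([claimNext("A"), claimNext("B")]).then(console.log);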

Considering the following notes on runTransaction, I'm surprised it doesn't act like the above.

    /**
     * Executes the given updateFunction and commits the changes applied within
     * the transaction.
     *
     * You can use the transaction object passed to 'updateFunction' to read and
     * modify Firestore documents under lock. You have to perform all reads
     * before you perform any write.
     *
     * Transactions can be performed as read-only or read-write transactions. By
     * default, transactions are executed in read-write mode.
     *
     * A read-write transaction obtains a pessimistic lock on all documents that
     * are read during the transaction. These locks block other transactions,
     * batched writes, and other non-transactional writes from changing that
     * document. Any writes in a read-write transaction are committed once
     * 'updateFunction' resolves, which also releases all locks.
     *
     * If a read-write transaction fails with contention, the transaction is
     * retried up to five times. The `updateFunction` is invoked once for each
     * attempt.
     *
     * Read-only transactions do not lock documents. They can be used to read
     * documents at a consistent snapshot in time, which may be up to 60 seconds
     * in the past. Read-only transactions are not retried.
     *
     * Transactions time out after 60 seconds if no documents are read.
     * Transactions that are not committed within 270 seconds are also
     * aborted. Any remaining locks are released when a transaction times out.
     *
     * @param updateFunction The function to execute within the transaction
     * context.
     * @param transactionOptions Transaction options.
     * @return If the transaction completed successfully or was explicitly
     * aborted (by the updateFunction returning a failed Promise), the Promise
     * returned by the updateFunction will be returned here. Else if the
     * transaction failed, a rejected Promise with the corresponding failure
     * error will be returned.
     */
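
As an aside, the read-only mode mentioned above is requested through the second argument to runTransaction. A minimal sketch, assuming the documented { readOnly: true } options shape (readTotalCount is a hypothetical helper, not part of my actual code):

import { getFirestore } from "firebase-admin/firestore";

// Read-only transactions don't lock documents and aren't retried; they
// read from a consistent snapshot, so this is a cheap way to inspect the
// batch document without contending with the read-write transactions.
async function readTotalCount(batchId: string): Promise<number> {
  const db = getFirestore();
  return db.runTransaction(
    async (tx) => {
      const snap = await tx.get(db.collection("batch_sequence").doc(batchId));
      return (snap.data()?.totalCount as number | undefined) ?? 0;
    },
    { readOnly: true }
  );
}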

asked Feb 3 at 19:04 by wipallen, edited Feb 4 at 3:26
  • 1 "This conflicts with how i understand optimistic locking to work - why aren't the transactions preventing this overlap in processing?" Can you edit your question to explain which part of the code/query you expect to trigger the second transaction to exclude (or retry) on the document that is already processed? – Frank van Puffelen Commented Feb 3 at 19:23
  • Also: please edit your question to explicitly show the Firebase SDK you import, as there's a big difference between the client-side and the server-side SDKs in how they handle transactions (client-side: optimistic, server-side: pessimistic). – Frank van Puffelen Commented Feb 3 at 19:24
  • Since I explicitly mentioned Firebase Functions and Node.js, isn't it implied that it's server-side - though perhaps I'm missing something? Regarding the request for clarification, the code I posted is a Firebase transaction. Before any update is made, if the underlying data has been changed, I would expect the full transaction to roll back. I updated the question to clarify this. – wipallen Commented Feb 3 at 20:29
  • Many devs use the term "Firebase functions" for their client-side functions, and node wasn't mentioned anywhere until I just added the tags. ¯\_(ツ)_/¯ ---- "the underlying data has been changed" What specific data? Are both transactions updating the same batchRef document? – Frank van Puffelen Commented Feb 3 at 20:52
  • Yes, both transactions read/update the same batchRef (this.batchId format: MM-dd-yyyy-HH) and run at the top of each hour. They're accessing batchRef.lastProcessedId - Thread 1 reads it, fetches users, and updates it. If Thread 2 reads simultaneously but updates slower, it should fail when lastProcessedId differs from its initial read. It would then retry with Thread 1's updated value and succeed. – wipallen Commented Feb 3 at 21:40

1 Answer


I was using firebase-admin version 11.8.0, which is two major releases behind the newest version, 13.0.2. After upgrading to 13.0.2, the code works as expected.

After test-running it 100 times, each instance processed unique IDs on every run.
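
If anyone else hits this, it's worth confirming which version is actually resolved in your project and bumping it (assuming an npm-based setup; adjust for yarn/pnpm):

npm ls firebase-admin               # check the version actually installed
npm install firebase-admin@^13.0.2  # upgrade to the release that fixed it for me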
