
I wonder whether a MongoDB aggregation pipeline is atomic across its stages.

My setup is as below:

Two collections, A and B. Collection A holds 100 resources and collection B is empty.

I have two processes. One of them updates resources in collection A and then applies the same updates to collection B. If a resource does not exist in collection B, the aggregation pipeline below is called.

pipeline := []bson.M{
    {"$match": bson.M{"uid": uid}}, // select the resource in collection A
    {"$merge": collB},              // write the result into collection B
}
_, err := store.database().Collection(collA).Aggregate(ctx, pipeline)
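For context, passing `$merge` a bare collection name (as `collB` above) makes it use its defaults. Spelled out in mongosh syntax, the same pipeline would read roughly as below; the collection names simply mirror the Go variables, and the field assumptions come from the snippet, not from any schema I can see:

```
// Same pipeline with the $merge defaults written out (mongosh syntax).
// "collA"/"collB" stand in for the Go variables of the same names.
db.collA.aggregate([
  { $match: { uid: uid } },
  { $merge: {
      into: "collB",             // target collection
      on: "_id",                 // default match key
      whenMatched: "merge",      // default: merge fields into the existing doc
      whenNotMatched: "insert"   // default: insert a new doc
  } }
])
```

Note that even in this expanded form, `$merge` writes each output document individually; the stage as a whole is not a single transaction over all matched documents.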

The other process calls the pipeline above for each resource in collection A. However, the resources in collection B all end up updated correctly.

What I expected is that some runs of the pipeline would be problematic: the pipeline might read stale data in the $match stage and then write it into collection B in the $merge stage. But that just doesn't happen.

A second version of this process queries and updates in separate operations. With that version, the race condition does occur, provided I sleep 100 milliseconds between the query and the update.

How can that be? Does the pipeline not work the way I describe? Or is there some special locking behavior in MongoDB that I should know about? Perhaps as the MongoDB workload grows, the probability of a race in the first version grows as well.

Tags: mongo, go. Title: Is MongoDB Aggregate Pipeline ($match or $merge) Atomic (Stack Overflow)