Is MongoDB Aggregate Pipeline ($match or $merge) Atomic?
I wonder whether a MongoDB aggregation pipeline is atomic across its stages.
My setup is as follows:
Two collections, A and B. Collection A contains 100 resources and collection B is empty.
I have two processes. One of them updates resources in collection A and then updates the same resources in collection B; if a resource does not exist in collection B, the aggregation pipeline below is called.
pipeline := []bson.M{
	{"$match": bson.M{"uid": uid}}, // select the resource in collection A by uid
	{"$merge": collB},              // write the matched document into collection B
}
_, err := store.database().Collection(collA).Aggregate(ctx, pipeline)
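For clarity, the shorthand $merge form above should be equivalent to the expanded stage with MongoDB's documented defaults (merge on _id, merge fields when a document matches, insert when it does not), roughly:

pipeline := []bson.M{
	{"$match": bson.M{"uid": uid}},
	{"$merge": bson.M{
		"into":           collB,    // target collection
		"on":             "_id",    // default merge key
		"whenMatched":    "merge",  // default behavior for existing documents
		"whenNotMatched": "insert", // default behavior for missing documents
	}},
}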
The other process calls the pipeline above for each resource in collection A. However, all the resources in collection B end up updated correctly.
What I expected is that some of the pipeline runs might be problematic: the pipeline could read stale data in the $match stage and then write it to collection B in the $merge stage. But that just doesn't happen.
Another version of the second process queries and updates separately. This time a race condition does happen, given a sleep of 100 milliseconds between the query and the update.
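For reference, that second variant looks roughly like this (a sketch reusing the same ctx, uid, collA, collB, and store identifiers as above; the FindOne/ReplaceOne calls are illustrative, not the exact production code):

var doc bson.M
if err := store.database().Collection(collA).FindOne(ctx, bson.M{"uid": uid}).Decode(&doc); err != nil {
	return err
}
time.Sleep(100 * time.Millisecond) // window in which the other process can modify collection A
_, err := store.database().Collection(collB).ReplaceOne(
	ctx,
	bson.M{"uid": uid},
	doc,
	options.Replace().SetUpsert(true), // insert into collection B if the resource is not there yet
)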
How can that be? Does the pipeline not work the way I described, or is there some special locking behavior in MongoDB that I should know about? Maybe the probability of a race condition in the first version also grows as the MongoDB workload grows.