I have a Cosmos DB MongoDB instance in which I've had large documents stored for some time. I've been able to update them fine until the beginning of this week (24/03/2025).
This is the error:
Mongo Server error (MongoWriteException): Write operation error on server test-db-uksouth.mongo.cosmos.azure:10255. Write error: WriteError{code=16, message='Error=16, Details='Response status code does not indicate success: RequestEntityTooLarge (413); Substatus: 0; ActivityId: b926cdda-ca85-4a5a-9880-cc79cfa957cb; Reason: (Message: {"Errors":["Request size is too large"]}
ActivityId: b926cdda-ca85-4a5a-9880-cc79cfa957cb, Request URI: /apps/c425cd55-b446-43d7-ae68-8fd6be6f658b/services/bb4327ba-6de5-4e62-971c-f0c6f97644f4/partitions/b0ec57aa-cebe-4683-9833-a75f8052ce1c/replicas/133873962017831188p/, RequestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK: Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0);', details={}}.
Now, Microsoft states the limit for documents at 2 MB. Our document DOES exceed this in its JSON UTF-8 length (at almost 4 MB); however, we were able to update the document just fine until 24/03 (and we'd assumed that, because it was working, the limit must be based on the BSON size, which our problem document is under, at 1.5 MB).
So it looks like the server is rejecting it because the request size is too big (413). I've now found this limit (as of 24/03) to be roughly 3 MB of JSON UTF-8 length. It appears that Microsoft has made the API server stricter about the length of the content it allows through.
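For anyone wanting to reproduce the measurement, here is a minimal stdlib-only sketch. The two thresholds are assumptions taken from this post (the documented 2 MB document limit and the ~3 MB request cutoff I observed); `json_utf8_size` is just a hypothetical helper name:

```python
import json

# Thresholds (assumptions based on the observations in this post):
DOCUMENTED_DOC_LIMIT = 2 * 1024 * 1024    # Microsoft's documented 2 MB document limit
OBSERVED_REQUEST_LIMIT = 3 * 1024 * 1024  # ~3 MB request cutoff observed as of 24/03

def json_utf8_size(doc: dict) -> int:
    """Size of the document serialized as compact JSON, in UTF-8 bytes."""
    return len(json.dumps(doc, separators=(",", ":")).encode("utf-8"))

# A document of roughly the size described above (~4 MB of payload):
big_doc = {"payload": "x" * (4 * 1024 * 1024)}
print(json_utf8_size(big_doc) > OBSERVED_REQUEST_LIMIT)  # True: would trip the 413
```

Note this measures the compact JSON length, which is what the request-size check appears to be based on; the BSON size of the same document can be quite different.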
I understand that a document of the size above is not optimal, but the point here is that this setting (which triggers the 413 error) appears to have been introduced silently and without warning. It should be reverted, and then introduced in a transparent manner with time for applications to be updated.
Has anyone else been experiencing this at all recently?
asked Mar 26 at 8:12 by DCEx1; edited Mar 26 at 22:10 by David Makogon

Comments:
- It's also worth noting that I cannot update the database to 16mb mode, as continuous backup has been enabled (which disables the ability to use 16mb mode). – DCEx1 Commented Mar 26 at 8:13
- Interestingly, I also found that when creating a blank Cosmos DB Mongo instance, I could create a collection prior to turning on 16mb mode, and after switching it over to 16mb, it would allow me to put the large doc into the old collection. This is something MS says should not happen. Again, this helps prove that the db's limit is different from the api's limit. MS needs to clarify all of this and revert this silent change. – DCEx1 Commented Mar 26 at 8:16
- FYI, there is a 2MB request size limit at the call level, regardless of document size, and that appears to be what you're exceeding. However, without more detail (e.g. your data shape, the actual operation causing the error, etc.), it's hard to tell. Please edit to provide a minimal reproducible example with relevant detail. – David Makogon Commented Mar 26 at 17:21
1 Answer
I don't think the issue you're facing is due to a recent change in the limit. This question on MS Q&A (from January 2022) describes the same problem, in more detail: they are able to insert a 4+ MB document without an error, while some other >2 MB document inserts fail.
The accepted answer on that post states:
Cosmos DB's API for MongoDB has a binary storage format that compresses the data. The amount of compression depends on the shape of data within your documents. Documents with deeper hierarchies tend to compress more than those which are flatter. As a result, you are able to store uncompressed data greater than the documented 2MB limit.
(emphasis mine)
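As a rough analogy (Cosmos DB's binary storage format is not zlib; this is only to illustrate that the shape of the data, not just its raw length, drives the compressed size):

```python
import json
import zlib

def compressed_size(doc) -> int:
    """zlib-compressed size of the compact JSON encoding, in bytes."""
    raw = json.dumps(doc, separators=(",", ":")).encode("utf-8")
    return len(zlib.compress(raw))

# Same leaf values, two shapes: flat unique keys vs. a deeper
# hierarchy with repeated structure.
flat = {f"user_{i}_name": f"name{i}" for i in range(1000)}
deep = {"users": [{"name": f"name{i}"} for i in range(1000)]}

print(compressed_size(flat), compressed_size(deep))
```

Two documents with the same logical content can land on opposite sides of a storage limit once compressed, which is consistent with the behaviour described in that answer.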
So when you mention updates no longer working,
Our document DOES exceed this in its JSON UTF-8 length (at almost 4 MB); however, we were able to update the document just fine until 24/03
It's possible that the update changed the document's shape enough that it can no longer compress down to 2 MB.
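Since the compressed size can't be computed client-side, one pragmatic mitigation is a conservative pre-write check against the documented limit. A stdlib-only sketch (`guard_size` and `DocumentTooLargeError` are hypothetical names, not part of any driver API):

```python
import json

class DocumentTooLargeError(ValueError):
    pass

DOC_LIMIT_BYTES = 2 * 1024 * 1024  # Cosmos DB's documented per-document limit

def guard_size(doc: dict, limit: int = DOC_LIMIT_BYTES) -> dict:
    """Fail fast client-side before sending a potentially oversized write.

    Cosmos DB compresses documents server-side, so this check is
    conservative: documents over the limit here may still be accepted
    after compression, but anything under it should be safe.
    """
    size = len(json.dumps(doc, separators=(",", ":")).encode("utf-8"))
    if size > limit:
        raise DocumentTooLargeError(f"document is {size} bytes (limit {limit})")
    return doc
```

Calling `guard_size(doc)` immediately before `replace_one`/`update_one` turns an opaque server-side 413 into a predictable client-side error.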
Wrt
but moreover, the point here is how this setting (that triggers the 413 error) appears to have been introduced silently and without warning. This should be returned back to what it was, and introduced in a transparent manner with time for applications to be updated.
That's feedback you should share with the CosmosDB team, or report it as a bug.
Title: Cosmosdb MongoDb API - 413 Request too large on documents under 2mb (bson) - Stack Overflow