Hi, at the company I work for we need to store a JSON document in millions of rows, fetched in groups of up to 5-10 rows per session.
e.g.:

id, session_id, json, created_date, meta_data_1, meta_data_2 // currently columnstore
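A minimal DDL sketch of that current layout, assuming SingleStore 7.3+ syntax; the table name, column types, and the choice of sort/shard keys are my assumptions, not stated in the question:

```sql
-- Hypothetical sketch of the current columnstore table.
-- Names, types, and keys are assumptions for illustration.
CREATE TABLE sessions_json (
    id BIGINT NOT NULL,
    session_id BIGINT NOT NULL,
    json JSON NOT NULL,            -- ~1 KB per row
    created_date DATETIME NOT NULL,
    meta_data_1 INT,
    meta_data_2 INT,
    SORT KEY (created_date),       -- SORT KEY makes the table a columnstore
    SHARD KEY (session_id)         -- co-locate one session's rows on a partition
);
```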
since
- each JSON value can be up to ~1 KB
- we won't index into the JSON fields or run any OLAP on them
- the columnstore layout causes out-of-memory issues: columnstore_segment_rows defaults to 1M rows, so at ~1 KB of JSON per row a segment is around 1 GB, and whole segments get fetched
- we sort on created date and filter on meta_data_1 and meta_data_2
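The access pattern described above would look roughly like this (table and column names assumed, matching the example schema):

```sql
-- Typical OLTP lookup: a handful of rows for one session,
-- filtered on the metadata columns and ordered by creation time.
SELECT id, session_id, json, created_date
FROM sessions_json
WHERE session_id = ?
  AND meta_data_1 = ?
  AND meta_data_2 = ?
ORDER BY created_date
LIMIT 10;
```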
I wonder whether a good solution is to make the main table a rowstore and extract the JSON into a separate columnstore join table with a small segment size:

id, session_id, json_id, created_date, meta_data_1, meta_data_2 // rowstore

json_id, json // columnstore with columnstore_segment_rows < 10000 (default is 1M)
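A sketch of that split in DDL. Names and keys are assumptions; note that SingleStore keeps every rowstore column in memory, which is the reason to move the JSON out. The WITH clause on the sort key is, to my understanding, how columnstore_segment_rows is set per table — verify against your SingleStore version, since otherwise it can be set as an engine variable:

```sql
-- Rowstore table with the hot metadata; all rowstore columns live
-- in memory, so the wide JSON payload is kept out of it.
CREATE ROWSTORE TABLE sessions_meta (
    id BIGINT NOT NULL,
    session_id BIGINT NOT NULL,
    json_id BIGINT NOT NULL,
    created_date DATETIME NOT NULL,
    meta_data_1 INT,
    meta_data_2 INT,
    PRIMARY KEY (id),
    KEY (session_id, created_date)   -- supports the session lookup + sort
);

-- Columnstore side table holding only the JSON payload, with a much
-- smaller segment size so a point lookup touches far less data.
CREATE TABLE session_payload (
    json_id BIGINT NOT NULL,
    json JSON NOT NULL,
    SORT KEY (json_id) WITH (columnstore_segment_rows = 10000),
    SHARD KEY (json_id)              -- point lookups resolve to one partition
);
```

The metadata query then joins to session_payload on json_id only for the 5-10 rows it actually needs, instead of pulling 1 GB segments of JSON.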
If not, what is a better alternative, other than provisioning ever-bigger machines with enough RAM to hold the JSON data and avoid the out-of-memory failures?
Title: best approach for SingleStore/Memsql ~1kb json in ~10 million rows and OLTP workload (Stack Overflow)