Hi, at the company I work for we need to store JSON in roughly 10 million rows in SingleStore (MemSQL), fetched in groups of up to 5-10 rows for the same session; the workload is OLTP.

e.g.:

id, session_id, json, created_date, meta_data_1, meta_data_2 // currently a columnstore
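
For concreteness, a sketch of what the current table might look like as SingleStore DDL; the column list is from above, while the table name, types, and keys are my assumptions:

    -- Hypothetical DDL for the current design; table name, types, and
    -- keys are assumptions, only the column list comes from the question.
    CREATE TABLE session_json (
        id BIGINT NOT NULL,
        session_id BIGINT NOT NULL,
        json JSON NOT NULL,
        created_date DATETIME NOT NULL,
        meta_data_1 INT,
        meta_data_2 INT,
        SHARD KEY (session_id),
        -- clustered columnstore: every fetch touches segments that also
        -- hold the ~1 KB JSON blobs
        KEY (created_date) USING CLUSTERED COLUMNSTORE
    );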

Since:

  1. each JSON document can be up to ~1 KB
  2. we won't be doing any indexing on JSON fields, nor any OLAP queries
  3. the columnstore causes out-of-memory issues due to columnstore_segment_rows (the default of 1M rows per segment means segments of roughly 1 GB get fetched)
  4. we sort on created_date and filter on meta_data_1 and meta_data_2 (see the query sketch after this list)
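
To illustrate the access pattern, a sketch of the typical fetch as described (5-10 rows per session, metadata filters, sorted by creation date); the table name and literal values are placeholders:

    -- Hypothetical fetch query; names and literals are placeholders.
    SELECT id, json, created_date
    FROM session_json
    WHERE session_id = 42
      AND meta_data_1 = 1
      AND meta_data_2 = 2
    ORDER BY created_date
    LIMIT 10;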

I wonder whether making the main table a rowstore like below and extracting the JSON into a separate join table with a low segment size is a good solution:

id, session_id, json_id, created_date, meta_data_1, meta_data_2 // rowstore

json_id, json // columnstore with columnstore_segment_rows < 10000 (default is 1M)
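
A minimal sketch of that split as SingleStore DDL, assuming the per-table WITH (COLUMNSTORE_SEGMENT_ROWS = ...) option on the columnstore key; table names, types, and keys are placeholders:

    -- Rowstore with the hot filter/sort columns; JSON moved out.
    -- CREATE ROWSTORE TABLE is needed on versions where columnstore is
    -- the default; plain CREATE TABLE otherwise. The shard key must be
    -- a subset of the primary key, hence PRIMARY KEY (session_id, id).
    CREATE ROWSTORE TABLE session_meta (
        id BIGINT NOT NULL,
        session_id BIGINT NOT NULL,
        json_id BIGINT NOT NULL,
        created_date DATETIME NOT NULL,
        meta_data_1 INT,
        meta_data_2 INT,
        PRIMARY KEY (session_id, id),
        SHARD KEY (session_id),
        KEY (created_date)
    );

    -- Columnstore holding only the JSON, with small segments so a
    -- lookup decompresses at most ~10k rows (~10 MB) instead of ~1 GB.
    CREATE TABLE session_json_blob (
        json_id BIGINT NOT NULL,
        json JSON NOT NULL,
        SHARD KEY (json_id),
        KEY (json_id) USING CLUSTERED COLUMNSTORE
            WITH (COLUMNSTORE_SEGMENT_ROWS = 10000)
    );

    -- The per-session fetch then filters/sorts on the rowstore and
    -- joins out only for the 5-10 JSON blobs it actually needs.
    SELECT m.id, b.json, m.created_date
    FROM session_meta m
    JOIN session_json_blob b ON b.json_id = m.json_id
    WHERE m.session_id = 42 AND m.meta_data_1 = 1
    ORDER BY m.created_date
    LIMIT 10;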

If not, what is a better alternative, other than provisioning ever bigger machines with huge amounts of RAM to hold the JSON data and avoid running out of memory?
