The TL;DR: In a high-churn Apache Derby database, if we never compress the database and only run update statistics, will that hurt performance in the long term?
We have an Apache Derby database running as a network server on a Linux box. It's a sort of microsystem: a GUI app running on a laptop hits its own processing box, where this "server" resides, so it's a one-to-one client-to-server environment. The database sees heavy churn: up to 20,000 new records a day, followed by up to 20,000 deletes of old data, totaling up to 4 GB coming and going daily. The system also does not run continuously, so we can't schedule maintenance during off hours.
If I run a compress on startup, it can naturally take a long time (I've seen up to 10 minutes), and it's not great to make the user wait 10 minutes before they can do anything. Compress also updates statistics, which matters for our query performance. So can we skip the compress entirely and only ever run update statistics on startup? Will query performance degrade over time if we never compress?
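For reference, this is a minimal sketch of how the two maintenance calls in question could be issued over JDBC at startup. The connection URL, schema (`APP`), and table name (`EVENTS`) are placeholders for illustration; a real startup routine would loop over the tables that actually churn.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyStartupMaintenance {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL: adjust host, port, and database name for the real server.
        String url = "jdbc:derby://localhost:1527/mydb";

        try (Connection conn = DriverManager.getConnection(url)) {
            // Option 1: refresh optimizer statistics only (fast).
            // Passing NULL for the index name updates statistics for all indexes on the table.
            try (CallableStatement cs = conn.prepareCall(
                    "CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS(?, ?, ?)")) {
                cs.setString(1, "APP");     // schema (placeholder)
                cs.setString(2, "EVENTS");  // table (placeholder)
                cs.setNull(3, java.sql.Types.VARCHAR);
                cs.execute();
            }

            // Option 2: full compress, which rebuilds the table and its indexes to
            // reclaim space (and refreshes statistics as a side effect), but can take
            // minutes on large tables.
            try (CallableStatement cs = conn.prepareCall(
                    "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)")) {
                cs.setString(1, "APP");
                cs.setString(2, "EVENTS");
                cs.setShort(3, (short) 1);  // non-zero = sequential rebuild (less memory, slower)
                cs.execute();
            }
        }
    }
}
```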