Hi, we have an on-prem Datomic system that uses DynamoDB for storage. It's on version 0.9.5786; we aren't renewing our license, and that's one of the most recent versions we can still use. This is a legacy system that I'm trying to trim down to as small a footprint as possible. We're currently using DynamoDB's on-demand pricing mode, because we haven't had any luck with autoscaling and we don't know how to provision throughput appropriately.
I'm seeing read capacity usage hover under 10 most of the time, with regular spikes around 100 (I believe these spikes are our periodic backups). Write capacity hovers under 20, but spikes to around 1200 every few hours (some log analysis points to Datomic flushing its indexes). Setting provisioned read capacity to 100 would be fine, and I think would save some money, but provisioning write capacity at 1200 is out of the question. Given these numbers, our app is clearly writing far more than it reads, which isn't ideal but may be hard for me to change.
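For a rough sense of the tradeoff, here's the back-of-envelope arithmetic I've been doing. This is only a sketch: the per-unit prices are placeholder assumptions (roughly historical us-east-1 rates from memory, not quotes), so check the current DynamoDB pricing page before trusting the absolute numbers. The shape of the result is the point: provisioning for the 1200 WCU spikes costs far more than paying on-demand rates for our modest average load.

```python
# Back-of-envelope comparison: DynamoDB on-demand vs. provisioned-for-peak.
# NOTE: all prices below are placeholder assumptions, not authoritative.

ON_DEMAND_WRITE_PER_MILLION = 1.25   # USD per million write request units (assumed)
ON_DEMAND_READ_PER_MILLION = 0.25    # USD per million read request units (assumed)
PROVISIONED_WCU_PER_HOUR = 0.00065   # USD per WCU-hour (assumed)
PROVISIONED_RCU_PER_HOUR = 0.00013   # USD per RCU-hour (assumed)

HOURS_PER_MONTH = 730

def on_demand_monthly(avg_writes_per_sec, avg_reads_per_sec):
    """Monthly cost if every request is billed at on-demand rates."""
    writes = avg_writes_per_sec * 3600 * HOURS_PER_MONTH
    reads = avg_reads_per_sec * 3600 * HOURS_PER_MONTH
    return (writes / 1e6) * ON_DEMAND_WRITE_PER_MILLION + \
           (reads / 1e6) * ON_DEMAND_READ_PER_MILLION

def provisioned_monthly(wcu, rcu):
    """Monthly cost of holding a fixed provisioned capacity all month."""
    return (wcu * PROVISIONED_WCU_PER_HOUR
            + rcu * PROVISIONED_RCU_PER_HOUR) * HOURS_PER_MONTH

# Our averages: writes hover under 20/s, reads under 10/s.
current = on_demand_monthly(20, 10)
# Provisioning flat at the spike levels (1200 WCU / 100 RCU) would cost:
peak = provisioned_monthly(1200, 100)
print(f"on-demand at average load:   ~${current:.0f}/mo")
print(f"provisioned at spike levels: ~${peak:.0f}/mo")
```

Under these assumed prices, flat-provisioning for the spikes is several times the on-demand bill, which is why I'm looking at flattening the spikes rather than provisioning around them.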
Is there a good way to configure Datomic to use a lighter-weight indexing process? Or is there a DynamoDB autoscaling configuration that can handle this load better? The last time I looked at migrating to Datomic Cloud, the pricing didn't work out (it would have cost the same as or more than our current system), and the migration path still isn't great.
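For context, the only knobs I've found so far are the memory-index settings in the transactor properties file. My understanding (which may be wrong, hence the question) is that a lower memory-index-threshold triggers smaller, more frequent index jobs, which might flatten the write spikes at the cost of indexing more often. The values below are guesses I'd experiment with, not recommendations:

```properties
# transactor.properties excerpt (sketch; values are experimental guesses)
memory-index-threshold=16m   # start indexing sooner -> smaller index jobs?
memory-index-max=256m        # back-pressure limit on the in-memory index
```

Is that the right lever, or is there something else I should be tuning?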
I realize this isn't a great way to ask a question, since we're no longer paying Datomic customers, but again: it's a legacy system that I want to keep running without it costing too much.