Valcache initialization

Are there any downsides or gotchas to “sharing” valcache between VMs? When we autoscale our VMs, the fresh instances take a while to warm up. Comparing performance profiles from fresh instances against steady state, a lot of the delay is attributable to DDB reads.

Generally I prefer valcache over memcached, and it works well for application restarts (e.g. during software deploys), but still suffers a cold start on fresh VMs.

Is there anything wrong with rsyncing the valcache directory from a warm/running host over to a fresh host with no Datomic process running yet? Do we risk introducing some corruption, e.g. if the warm host is in the middle of writing to valcache and we sync some partial segment?
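For concreteness, the kind of pre-seeding step I have in mind looks roughly like the sketch below. The valcache path and the warm host name are hypothetical placeholders, and this does nothing to guard against the warm host writing a segment mid-copy, which is exactly the risk I'm asking about.

```python
#!/usr/bin/env python3
"""Pre-seed valcache on a fresh host by copying it from a warm peer.

Sketch only: the valcache directory and the warm host name below are
made-up examples. It assumes rsync over SSH is available and that the
Datomic process on the fresh host has NOT been started yet.
"""
import subprocess
import sys

WARM_HOST = "warm-peer-1.internal"       # hypothetical warm/running instance
VALCACHE_DIR = "/opt/datomic/valcache/"  # hypothetical valcache path on both hosts


def preseed_valcache() -> None:
    # --delete keeps the local copy consistent with the source; --partial is
    # deliberately not used, so an interrupted transfer does not leave
    # half-written files behind on the destination.
    cmd = [
        "rsync",
        "-a",        # archive mode: preserve permissions, timestamps, etc.
        "--delete",
        f"{WARM_HOST}:{VALCACHE_DIR}",
        VALCACHE_DIR,
    ]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"rsync failed with exit code {result.returncode}")


if __name__ == "__main__":
    preseed_valcache()
```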



Hi @adam, this might result in corrupt valcache segments. We are assessing whether this approach will work and will get back to you with our findings.

Along the same line of questioning: we deploy to Kubernetes and could use a ReadWriteMany-mounted, SSD-backed Filestore instance. Would valcache be safe with multiple concurrent writers?