With the goal of decreasing load on the Datomic transactor, would it be advisable or helpful to use the datomic.api/with function to check whether a transaction would have any effect beyond re-asserting data that is already there with a new timestamp?
I have a vague notion this would increase work on the peer, at least in the on-prem model, but maybe it would put less work on the transactor?
What are your thoughts about this? What do you do, if anything, to decrease load on the datomic transactor?
With the goal of decreasing load on the datomic transactor
Load in what sense? Number of transactions? Costliness of individual transactions? What goal are you trying to reach/problem are you trying to solve?
would it be advisable or helpful to use the datomic.api/with function to check whether a transaction would have any effect beyond re-asserting data that is already there with a new timestamp?
This is a technique, but note that it's only an eventually-consistent check: it can only tell you what the supplied transaction operations would do when applied to the peer's latest db. The peer's latest db lags behind the transactor's latest db because of physics, and there could also be some other in-flight transaction, not yet applied but ahead of yours, that would change the result (i.e. a transaction race).
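As a rough illustration of the check, here is a minimal sketch using the on-prem peer API. It assumes a connected `conn` and relies on the fact that `d/with` always returns at least one datom (the transaction entity's `:db/txInstant`); the helper name `effective?` is made up for this example:

```clojure
(require '[datomic.api :as d])

(defn effective?
  "Speculatively applies tx-data against the peer's current db and
   returns true if it would produce any datoms beyond the transaction
   entity itself (:db/txInstant)."
  [conn tx-data]
  (let [db (d/db conn)
        result (d/with db tx-data)]
    (> (count (:tx-data result)) 1)))

;; Usage: only hit the transactor when the data is actually novel.
;; Remember the race: another in-flight transaction may change the
;; answer between this check and the real d/transact.
(when (effective? conn tx-data)
  @(d/transact conn tx-data))
```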
Under very particular circumstances this technique, or variations on it, may be helpful. For example, if the load problem on the transactor is caused by an expensive query or calculation that is cheap to revalidate, you could compute the result ahead of time on the peer and have a cheaper transaction function check whether the reads the calculation depended on are still valid; if not, abort the transaction and have the peer recalculate and retry.
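A sketch of that precompute-then-revalidate pattern, using a classic on-prem transaction function. The schema (`:account/balance`) and function name are hypothetical; the key idea is that the transactor only re-checks the cheap precondition, and throwing inside the function aborts the transaction so the peer can retry:

```clojure
(require '[datomic.api :as d])

;; Install a transaction function that revalidates the peer's read.
{:db/ident :account/debit-if-unchanged
 :db/fn (d/function
         '{:lang :clojure
           :params [db account expected-balance amount]
           :code
           (let [current (:account/balance (datomic.api/entity db account))]
             (if (= current expected-balance)
               ;; The read the peer based its calculation on still holds.
               [[:db/add account :account/balance (- current amount)]]
               ;; Abort; the peer should re-read, recalculate, and retry.
               (throw (ex-info "balance changed; retry"
                               {:expected expected-balance
                                :actual   current}))))})}

;; Peer side: do the expensive work locally, then submit the cheap check.
;; @(d/transact conn [[:account/debit-if-unchanged account-id
;;                     precomputed-balance amount]])
```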