When the transactor broadcasts novelty to all connected peers, does it really stream all new data to every peer, even if that data is not in a given peer's working set?
If one peer works on one set of app objects and another peer works on a different set (say, as a result of load balancing), neither is interested in the changes made by the other (unless one fails and the second starts receiving requests for both sets). Streaming all the data (is it pure datoms only, or can transaction functions be broadcast too?) would produce unnecessary network traffic and memory pressure on the peers: they need to keep in memory all the novel, not-yet-indexed datoms, even if those datoms are of no interest and were never touched by the application code running on that peer, right?
Or, if I connect many, many peers to take advantage of Datomic's unlimited read scalability, the transactor's network link will become a bottleneck if it needs to broadcast a copy of all new data to every peer. So write performance would degrade as the number of connected peers grows. Does that happen?
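To make the concern concrete, here is a back-of-envelope sketch (my own simplified model with made-up numbers, not Datomic's actual protocol, compression, or batching behavior) of what transactor fan-out bandwidth and per-peer memory pressure would look like if every transaction's novelty really were pushed to every connected peer:

```python
# Simplified model (my assumptions, not Datomic internals): the transactor
# fans every transaction's novelty out to all peers, and each peer buffers
# that novelty in memory until the next background indexing job runs.

def transactor_outbound_bytes_per_sec(tx_per_sec, novelty_bytes_per_tx, num_peers):
    """Outbound bytes/sec at the transactor if novelty is sent to all peers
    (ignoring compression, batching, and protocol overhead)."""
    return tx_per_sec * novelty_bytes_per_tx * num_peers

def peer_novelty_bytes(tx_per_sec, novelty_bytes_per_tx, secs_since_last_index):
    """Approximate unindexed novelty held in memory by ONE peer since the
    last indexing job, whether or not its app ever reads those datoms."""
    return tx_per_sec * novelty_bytes_per_tx * secs_since_last_index

# Example: 100 tx/s with 2 KB of novelty each.
for n_peers in (1, 10, 100):
    mb_s = transactor_outbound_bytes_per_sec(100, 2048, n_peers) / 1e6
    print(f"{n_peers:>3} peers -> transactor sends {mb_s:.1f} MB/s")

# And 5 minutes (300 s) since the last indexing job:
print(f"each peer buffers ~{peer_novelty_bytes(100, 2048, 300) / 1e6:.1f} MB")
```

The linear term in `num_peers` is exactly the write-side bottleneck I am asking about: under this model the transactor's outbound traffic scales with the peer count even when most peers never touch the broadcast datoms.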