I am designing an app that will ingest and persist many different datasets (e.g. CSV files or intermediate computational results). For each column, I create a new attribute. Is this approach feasible? What do I need to consider when the number of attributes grows very large?
Datomic limits the total number of schema elements to 2^20 (1,048,576), which includes schema attributes, value types, and partitions. Transactions that would push the count past 2^20 will fail.
Part of the reason for this limit is that all idents are stored in memory on every transactor and peer, so the more attributes you create, the higher the memory cost on each process.
As long as you stay within the 2^20 schema limit, you can use Datomic to dynamically create attributes with generated idents and support this use case.
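A dynamically generated per-column schema transaction might look like the following sketch; the dataset/column naming scheme (e.g. `:my-dataset.col/price`) is an illustrative assumption, not a Datomic convention:

```clojure
;; Hypothetical sketch: one attribute per ingested CSV column, with idents
;; generated from the dataset name and column name. Transact this vector
;; with d/transact like any other schema data.
[{:db/ident       :my-dataset.col/price
  :db/valueType   :db.type/double
  :db/cardinality :db.cardinality/one
  :db/doc         "Column \"price\" of dataset my-dataset"}
 {:db/ident       :my-dataset.col/label
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc         "Column \"label\" of dataset my-dataset"}]
```

Because schema is plain data, you can build such a vector programmatically from each file's header row and inferred column types, keeping a running count of created attributes to stay well under the 2^20 ceiling.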