Datomic Cloud is ready and we’re currently waiting on the AWS Marketplace process. We hope to ring in the new year with a Cloud announcement and we will continue to update Cloud’s status here.
Thanks,
Jaret
Thanks for the update…
Very much looking forward to trying this out.
Nice!
We have two projects (still under development, currently with Datomic Free) waiting for Datomic Cloud.
https://my.landrooter.com/
https://my.ubikare.io/
I hope the migration will be smooth.
Will the new cloud architecture address the current 10 billion datoms limit per database?
That’s rather low for a cloud-based SaaS application.
Thanks for the update. Can’t wait to play with this!
Datomic Cloud is now available. Here is a link to the announcement post:
We currently use the Peer library in the above-mentioned projects (Datomic Free). We are now trying to migrate our code to the Client library so we can try Datomic Cloud.
Is there anything else we need to consider before doing this?
Asier, you’ll want to check out this page in our docs: https://docs.datomic.com/on-prem/moving-to-cloud.html
What kind of data are you looking to store in Datomic?
We currently use local H2 storage and store schema attributes.
Geographical data is stored in an external PostGIS database (Amazon RDS).
Does this answer your question?
It’s a collaborative app related to machinery in factories. Machines can have a lifetime of decades, which is why Datomic’s historical features are so appealing. The data is relational, with strings limited to 1000 characters, and blobs stored elsewhere.
How many datoms/factory/decade? How many factories?
Worldwide, say there are 10k factories big enough to use this app, with maybe 20 users per factory. Plus a similar number of people who work for companies that supply machines and services to those factories. So maybe 400k users in total. Then if each user creates 1k entities per year (4 per working day), at 10 datoms each, that’s 4 billion datoms/year.
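A quick back-of-envelope check of the estimate above, using the numbers from the post (all figures are the poster's assumptions, not measurements):

```python
# Rough datom-volume estimate from the thread's assumptions.
users = 400_000                 # ~10k factories x 20 users, plus suppliers
entities_per_user_year = 1_000  # ~4 entities per working day
datoms_per_entity = 10

datoms_per_year = users * entities_per_user_year * datoms_per_entity
limit = 10_000_000_000          # Datomic's stated per-database datom limit

print(f"{datoms_per_year:,} datoms/year")          # 4,000,000,000 datoms/year
print(f"limit reached in {limit / datoms_per_year:.1f} years")  # 2.5 years
```

At that rate a single database would hit the 10-billion-datom limit in about two and a half years, which is what motivates the sharding discussion below.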
Is this data all in one database, or will it be sharded by geography? Note that the Datomic limits are per-database, not per-system.
The data size you describe might or might not be a good fit, depending on the distribution of data access and on the performance requirements.
I think it needs to be one logical database, as factories and suppliers can be in different geographic regions. For example, our company is from down under, but many of our projects are for customers in North & South America and Europe.
I’m not that familiar with Datomic’s sharding. I had considered separate regional databases and utilizing the peer library’s cross-database joins, but I think that would get quite messy. Also, if Datomic Cloud is based on the client library, I guess it doesn’t support those joins?
I would consider having two databases: one for the users, and another for the machinery. Then the latter database could be sharded by time if necessary for performance.
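Time-based sharding could be as simple as routing each machinery transaction to a database named after its time window. A minimal sketch, assuming a hypothetical naming scheme and a 5-year window (neither is a Datomic feature, just an illustration of the idea):

```python
# Hypothetical routing of machinery data to time-sharded databases,
# so each database stays under the per-database datom limit.
# The name format and window size are illustrative assumptions.

def machinery_db_name(year: int, window: int = 5) -> str:
    """Return the shard (database) name covering the given calendar year."""
    start = year - (year % window)
    return f"machinery-{start}-{start + window - 1}"

print(machinery_db_name(2018))  # machinery-2015-2019
print(machinery_db_name(2021))  # machinery-2020-2024
```

Queries spanning shard boundaries would then need to be issued against each relevant database and merged in application code, since the client API queries one database at a time.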
Thanks, Stu. But the user database will be fairly static, and most of the new datoms will be written to the other database. So although sharding would improve read performance, I don’t see how it would overcome the 10 billion datom limit for a single database?