I’m stepping right into @vvvvalvalval’s area here (maybe he wants to weigh in), but I’d say that the complexity of indexing the data correctly again depends on your use case.
For connecting to the cluster, we’re using spandex (https://github.com/mpenet/spandex), an Elasticsearch client for Clojure built on the new ES 8.x Java client.
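To give a feel for it, a minimal spandex session looks roughly like this (the host URL and index name are placeholders, not our actual setup; check the spandex README for the exact options your version supports):

```clojure
(require '[qbits.spandex :as s])

;; The host URL is a placeholder for your own cluster endpoint.
(def client (s/client {:hosts ["http://localhost:9200"]}))

;; Index a document (:url takes a vector of path segments)...
(s/request client {:url [:my-index :_doc :1]
                   :method :put
                   :body {:title "hello"}})

;; ...and ask for the cluster health.
(s/request client {:url [:_cluster :health]
                   :method :get})
```

The client is just a wrapped REST client, so everything maps closely onto the plain Elasticsearch HTTP API.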
The actual cluster setup is more my area. I wanted to align with Datomic here, so I ended up writing a CloudFormation template to set up all the resources. In hindsight, I wonder whether using https://www.terraform.io would have been better. (As of Terraform 0.12, they will have better JSON support, which I could then use as a compilation target for the super small Clojure DSL I wrote for the CloudFormation template.)
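To give a flavor of what I mean by a super small DSL, here is a hypothetical sketch (not the actual code): plain Clojure data goes in, and a CloudFormation template map comes out, ready to be serialized to JSON with a library like cheshire or clojure.data.json.

```clojure
;; Hypothetical micro-DSL: each resource is a one-entry map keyed by
;; its logical ID, and a template just merges resources together.
(defn resource [logical-id type properties]
  {logical-id {:Type type :Properties properties}})

(defn template [& resources]
  {:AWSTemplateFormatVersion "2010-09-09"
   :Resources (apply merge resources)})

;; Example: a single (placeholder) EC2 instance for an ES node.
(template
 (resource :EsNode "AWS::EC2::Instance"
           {:InstanceType "m5.large"
            :ImageId "ami-12345678"}))
```

Because the template is just data until the final serialization step, you get functions, `let`, and `map` for free where raw CloudFormation JSON forces you into copy-paste.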
For deployments, we’re using CodeDeploy, like Datomic. The general idea is to keep the infrastructure immutable: changes are made to the templates rather than directly to the running resources, which means that git functions as a log of the changes that have been made to the created resources.
Also, you should look into a strategy for doing rolling upgrades of the cluster nodes as a matter of routine, rather than once in a blue moon.
To quote Elastic on the matter,
> Elasticsearch releases new versions with bug fixes and performance enhancements at a very fast pace, and it is always a good idea to keep your cluster current. Upgrading should be a routine process, rather than a once-yearly fiasco that requires countless hours of precise planning.
I tried to do this using CodeDeploy and CloudFormation, but they impose hard limits on how long a stack update can take: it will fail after an hour, which may not be enough time depending on how much data you’re storing and need to shift between nodes as they are decommissioned and provisioned.
I ended up writing my own tool (Clojure, of course) which handles the logic of taking down and bringing up nodes, checking the health status of the cluster, and so on.
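The core of such a tool can be sketched as a polling loop (hypothetical names; the real tool does more, but this shows the shape). Here `health-fn` stands in for any zero-arg function returning the parsed `_cluster/health` response, e.g. one built on spandex:

```clojure
;; Hypothetical sketch: block until the cluster reports green before
;; decommissioning/provisioning the next node.
(defn wait-for-green!
  [health-fn {:keys [retries sleep-ms] :or {retries 60 sleep-ms 10000}}]
  (loop [n retries]
    (let [{:keys [status] :as health} (health-fn)]
      (cond
        (= "green" status) :green
        (zero? n)          (throw (ex-info "cluster never went green" health))
        :else              (do (Thread/sleep sleep-ms)
                               (recur (dec n)))))))

;; With spandex, health-fn could be something like:
;; #(:body (s/request client {:url [:_cluster :health] :method :get}))
```

Keeping the HTTP call behind a function makes the orchestration logic trivial to test with a stub, without a live cluster.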