I’m looking for some guidance or advice or even “here’s the RTFM link”.
We have a new AWS account which is being managed by AWS Control Tower. The AWS best practice guidance documents recommend that DEV and PROD workloads be deployed to different AWS accounts.
Is it possible/recommended to deploy datomic-cloud projects to different dev & prod accounts?
It is possible. The active credentials determine the target account for your deployment: either the ambient credentials or the profile named by the :creds-profile key in your deployment command.
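For example (a minimal sketch; the profile, group, and rev values below are placeholders for your own), the same project can target either account just by switching profiles:

```
clojure -A:ion-dev '{:op :push :creds-profile "dev"}'
clojure -A:ion-dev '{:op :deploy :group "my-dev-compute" :rev "<rev from push output>" :creds-profile "dev"}'
```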
How does the :app-name parameter value in the ion-config.edn impact this?
It does not. :app-name is the application name of your Datomic Cloud application, and this corresponds to a CodeDeploy application. Multiple Datomic Cloud systems may have the same :app-name, and the deploy target is determined by the compute group (:group) that’s specified when deploying.
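For instance, a minimal ion-config.edn might look like the sketch below (the names are hypothetical); note that nothing in it is environment-specific:

```clojure
;; Minimal ion-config.edn sketch; :app-name and the entry point are hypothetical.
;; :app-name names the CodeDeploy application and can be the same in every
;; account/system; the target is chosen by the :group passed at deploy time.
{:app-name "my-app"
 :allow    [my.app/handler]}
```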
Regarding SDLC, I would suggest asking on the #datomic channel on clojurians.slack.com. I believe some folks from vouch.io hang out there and may be able to answer questions about the GitHub Action that you linked.
Hi, we have the exact same setup, our “Landing Zone” approach follows the “account per env” advice. Datomic Cloud (like the honey badger ;)) doesn’t really care about the approach you take. You could do multiple envs in a single account or follow the AWS-recommended best practice.
We’ve been using DC since day 1 and yeah, we sorta got wrapped around the axle on :app-name initially as well, writing scripts that would hack an environment suffix into ion-config.edn prior to each push. Don’t do that lol. As @Robert-Randolph mentioned, there’s the :app-name/:compute-group distinction, but also, if you’re doing account-per-env, you probably don’t need to deal with that at all.
We’re using Jenkins and are experimenting with CodePipeline, but it should be straightforward with any similar tool like GH Actions. You can have your “unit test, etc./deploy to dev” stage be triggered automatically by git updates. Then any subsequent stages, again as Robert mentioned, run the same push/deploy but with the appropriate creds.
It’s pretty straightforward. We wrote some helpers to do things like have our deployment script sit in a loop, checking for the success or failure of the CodeDeploy job to pass/fail the stage accordingly.
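Something along these lines (a minimal sketch, not our actual helper): poll the documented ion-dev :deploy-status op by shelling out to the CLI, and throw on failure so the CI stage goes red. The :deploy-status key and its string values are assumptions; check them against your ion-dev output.

```clojure
(require '[clojure.java.shell :as shell]
         '[clojure.edn :as edn])

(defn fetch-deploy-status
  "Ask the ion-dev CLI for the status of a deploy execution."
  [execution-arn]
  (-> (shell/sh "clojure" "-A:ion-dev"
                (pr-str {:op :deploy-status :execution-arn execution-arn}))
      :out
      edn/read-string))

(defn await-deploy
  "Loop until the deploy finishes; throw on failure so the CI stage fails."
  [execution-arn]
  (loop []
    (let [{:keys [deploy-status] :as status} (fetch-deploy-status execution-arn)]
      (case deploy-status
        "SUCCEEDED" status
        "RUNNING"   (do (Thread/sleep 10000) (recur))
        (throw (ex-info "Deploy did not succeed" status))))))
```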
Also, DC doesn’t internally have anything like pre/post-deployment hooks. But it’s pretty easy to roll your own (again using CodeDeploy’s status), and you may end up needing them for things that are complementary to your DC code update, like setting up other AWS resources.
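A post-deploy hook can then be as simple as running your extra setup after the wait succeeds (configure-extra-resources! below is a hypothetical stand-in for your own code):

```clojure
;; Hypothetical stand-in for whatever complementary AWS setup you need.
(defn configure-extra-resources! []
  (println "set up complementary AWS resources here"))

(defn deploy-with-hooks! [execution-arn]
  (await-deploy execution-arn)      ; throws (fails the stage) on failure
  (configure-extra-resources!))
```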
You may also want to look at using Terraform/CDK to wrap/complement DC’s CloudFormation scripts. We’re currently using some in-house Terraform scripts that use outputs from the CloudFormation stack to deploy/configure other things. We’re also experimenting with CDK’s import feature (the CDK just fixed a bug that was preventing this) to do the same, perhaps more cleanly.
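If you’d rather do that from Clojure than Terraform, here’s a hedged sketch using Cognitect’s aws-api to read the stack outputs (the stack name is hypothetical; we do this from Terraform, not Clojure):

```clojure
(require '[cognitect.aws.client.api :as aws])

(def cloudformation (aws/client {:api :cloudformation}))

(defn stack-outputs
  "Return a stack's outputs as a map of OutputKey -> OutputValue."
  [stack-name]
  (->> (aws/invoke cloudformation {:op :DescribeStacks
                                   :request {:StackName stack-name}})
       :Stacks first :Outputs
       (into {} (map (juxt :OutputKey :OutputValue)))))

;; e.g. (stack-outputs "my-datomic-compute") ; hypothetical stack name
```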
I’m interested in your multi-account setup. Did you use Control Tower for the setup? I got quite far with provisioning a stack, and am now struggling with pushing the first code. I suspect that Control Tower has implemented some Service Control Policies that are preventing the {:op :push} from working. So I’m curious whether you also used Control Tower, and if so, how you mitigated this issue.
Hi, we did our setup before (or just as) Control Tower was becoming available; I can’t remember now. We brought in a consultancy that had their own AWS-best-practice-compliant Terraform scripts and set everything up per our specs (policies, AWS Config Rules, etc.). We’re still using those for new envs and what have you, but we are looking at what it would take to move over to Control Tower, specifically to take better advantage of Service Control Policies. So unfortunately I can’t speak to issues that might be related to that specific setup.
So what kind of issues are you having with the push? AFAIK (and someone on the Datomic team can confirm), AWS API-wise, the push is only doing something like an s3:PutObject. Are you getting unauthorized complaints, or something else?
I’ve deployed Datomic Cloud via Terraform several times. Each time it was to isolated AWS accounts across multiple organisations (which is what Control Tower packages up for you).
Deployment requires access to a few AWS services as you have to write to an S3 bucket and poke at CodeDeploy.
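If you’re chasing an authorisation failure, a rough pre-flight check along these lines (assuming Cognitect’s aws-api; the bucket name is a placeholder, since your system’s code bucket has a generated name) can confirm the active credentials before you push:

```clojure
(require '[cognitect.aws.client.api :as aws])

(def s3 (aws/client {:api :s3}))
(def codedeploy (aws/client {:api :codedeploy}))

;; Can we write to the (placeholder-named) code bucket?
(aws/invoke s3 {:op :PutObject
                :request {:Bucket "my-datomic-code-bucket"
                          :Key    "preflight/hello.txt"
                          :Body   (.getBytes "hello")}})

;; Can we see CodeDeploy at all?
(aws/invoke codedeploy {:op :ListApplications})
```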
CloudTrail can help you track down problems with authorisation, assuming you have access.
I’d be happy to help with getting everything up and running. Feel free to message me privately.
AWS Control Tower may allow an operator to create an account that can only access AWS resources in a single region. That would remove the ability to fetch Cognitect-supplied software artifacts, which reside in us-east-1, at certain crucial moments of the SDLC with ions/Datomic Cloud. There might be more to it than this terse summary.