Change EC2 instance type

I don’t have tons of AWS experience - this is probably more of an AWS question, but I thought I’d try asking here.

The docs I found for changing the instance type on my Datomic EC2 instance didn’t work - if I stop the instance, change its type, and start it again, it instead goes into the terminated state and a new instance is launched automatically. I’m guessing this is related to CloudFormation?

What is the correct way to switch to a larger EC2 instance?

While I’m asking, is it normal to be getting out-of-memory problems with the default t3.small? My app has very little activity right now. I often hit a state where I can’t even deploy (the step function fails), and I have to reboot the instance.

Thanks

Follow-up question (I live in hope!)

I’ve tracked down the relevant part of the CloudFormation template.

Given the t3.small has 2GB memory, why does the solo-compute template have

-XX:MaxDirectMemorySize=256m

and

"OsReserveMb": 256

?

Rather than going for a bigger instance, could I just bump these to, what, 1GB? 1.5?

Any docs on this stuff that I’ve overlooked?

Thanks

The Datomic Cloud Solo topology does not provide any options for the instance size.
With the Production topology you can choose from two instance sizes, and its base instance size is also larger than Solo’s.

The memory configurations for the individual instances listed in the CloudFormation template(s) of Datomic Cloud have been extensively tested and changing them is unsupported.

-Marshall

I got the impression the templates were just starting points, and that folks modify them according to their needs. Your reply makes it sound like that’s unsupported.

As mentioned in a parallel thread [1], both @jacob and I have needed to reboot the compute instance frequently in order to deploy, resulting in downtime. Maybe it’s just time to switch to the Production topology.

Good to know that there’s no need to mess with those memory settings, but could you explain why the JVM is given so little memory on a 2GB instance? Are there multiple JVM processes running?

Thanks,

[1] Rollback after failed deploy


You generally shouldn’t need to modify the Datomic Cloud templates, and doing so can result in problems when it comes to upgrades and support.

The JVM runs with more memory than that; the setting you mentioned limits direct buffer memory, not the heap size.
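To illustrate the distinction with a minimal standalone sketch (plain Java, nothing Datomic-specific): the heap ceiling is set by `-Xmx`, while `-XX:MaxDirectMemorySize` caps a separate off-heap pool used by NIO direct buffers.

```java
import java.nio.ByteBuffer;

public class MemLimits {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the heap ceiling (-Xmx), which is
        // independent of the -XX:MaxDirectMemorySize limit.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap (MB): " + maxHeapMb);

        // A direct buffer is allocated off-heap: it counts against the
        // direct-memory limit, not the heap.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println("direct buffer capacity (bytes): " + buf.capacity());
        System.out.println("is off-heap (direct)? " + buf.isDirect());
    }
}
```

So a 256m `MaxDirectMemorySize` says nothing about how big the heap is; the heap is configured separately.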

Hi Tom,

You are misunderstanding the settings – MaxDirectMemorySize does not control JVM RAM.

We would love to understand the specifics of your problem before making a recommendation. Once we collect some more information, we should be able to provide a clear explanation of what is going on, and whether a move to Production would be helpful.

Jaret will follow up on the support ticket, and we will summarize back here with anything that is generally applicable and useful for all.

Thanks!
Stu


Following up for the benefit of others hitting memory issues on Solo. The culprit seems to have been a large set of transitive deps: in my case, the amazonica library pulling in the entire set of AWS APIs. The amazonica readme has instructions for pulling in only the deps you actually need, which for now seems to have solved the problem.
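For anyone wanting to try the same trim, here’s a sketch of the kind of deps.edn the amazonica readme describes: exclude the all-in-one SDK, then add back just the services you use. (Versions below are placeholders; the service artifact shown is only an example.)

```clojure
;; deps.edn sketch -- check the amazonica readme for current versions.
{:deps {amazonica/amazonica
        {:mvn/version "x.y.z"
         ;; Exclude the monolithic SDK that pulls in every AWS API:
         :exclusions [com.amazonaws/aws-java-sdk]}
        ;; Add back only the SDK modules your app actually calls, e.g. S3:
        com.amazonaws/aws-java-sdk-s3 {:mvn/version "x.y.z"}}}
```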

(A better alternative, which I’ve not looked at yet, might be the “data driven” AWS library from Cognitect [1])

A helpful tip I learned along the way: you can download all your deps into a tmp dir (and print the dependency tree) like this:

clj -Stree -Sforce -Sdeps '{:mvn/local-repo "deps-tmp"}'

[1] https://github.com/cognitect-labs/aws-api