AWS Free Tier limit alert

I have a Datomic Ions production system without very much data in it. A few tens of thousands of datoms.

I keep receiving warnings from AWS about read capacity. I will attach two screenshots of the alarm.

I really don’t understand what could trigger this.

Hi @danbunea !

The alarm you are seeing is about exceeding DynamoDB Read Capacity Units. These alarms are thrown when you have exceeded the currently provisioned Read Capacity for the underlying DDB table. Datomic Cloud uses on-demand scaling and will scale to increase provisioning within limits. You can review your current provisioning level in the DDB console for the table named Datomic-Taia-stack.
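If you prefer the command line to the console, a hedged sketch of checking the table's provisioning with the AWS CLI (assuming your CLI is configured for the account and region hosting the stack):

```shell
# Show current provisioned read/write capacity for the underlying DDB table.
# For an on-demand (pay-per-request) table these values will be 0.
aws dynamodb describe-table \
  --table-name Datomic-Taia-stack \
  --query 'Table.ProvisionedThroughput.[ReadCapacityUnits,WriteCapacityUnits]'
```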

In general, you shouldn’t have to change these settings. But this is an indication that perhaps your application is performing more reads than expected, or the size of your DB is larger than expected.

https://docs.datomic.com/cloud/operation/growing-your-system.html#scaling-storage

A good next step would be to confirm you are exceeding DDB provisioning levels in your metrics (RCU and WCU) or under DDB console. Then we can go from there in terms of advice for either changing your provisioning or tracking down what is increasing your reads.
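If it helps, the consumed-reads metric can also be pulled with the AWS CLI (a hedged sketch; the table name and time window are illustrative):

```shell
# Sum of consumed RCUs for the table over the last hour, in 5-minute buckets.
# Note: the `date -d '1 hour ago'` syntax is GNU date; on macOS use `date -v-1H`.
aws cloudwatch get-metric-statistics \
  --namespace AWS/DynamoDB \
  --metric-name ConsumedReadCapacityUnits \
  --dimensions Name=TableName,Value=Datomic-Taia-stack \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Sum
```

Comparing those sums against the provisioned level from the table settings should confirm whether you are actually going over.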

If you would like to share info privately you can always log a case with Datomic support. https://support.cognitect.com/hc/en-us/requests/new

Cheers,
Jaret

Hi @jaret , thank you for your reply.

Datomic-Taia-stack table:

- Item count: 7,740
- Table size: 2.5 megabytes
- Average item size: 328.45 bytes

For such a small table I really don’t see why it would need to go over the provisioned read capacity.

On the other hand, when I open the app, I transfer a lot of data to the front end. To read it, I need to do a lot of pull queries (let’s say 10-15). I run all of the queries using pmap, passing the same db instance to all of them. Could this be the issue? That the db instance loads more data in the background, and because it’s using pmap and separate threads, it does so for all of them at once? I could switch the code to using core.async, but so far I haven’t had enough time.
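For context, a minimal sketch of the pattern I mean (the `:page/*` attributes, the entity id, and the pull patterns are hypothetical placeholders, not my real schema):

```clojure
;; assumes (require '[datomic.client.api :as d])
;; Each pull shares the same db value, but pmap runs them on separate
;; threads, so each thread may trigger its own segment fetches (and
;; therefore its own DDB reads) until the object cache is warm.
(def pulls
  [[:page/title :page/body]
   [:page/title :page/tags]
   ;; ... 10-15 patterns in total
   ])

(defn load-front-end-data [db eid]
  (vec (pmap (fn [pattern]
               (d/pull db pattern eid))
             pulls)))
```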

@danbunea even with very little data it is possible to create reads that exceed provisioning. Just to confirm that is what you are seeing, you can watch your consumed-reads metric and make a dashboard to see what is reported in conjunction with other Datomic metrics, like the examples we have in the docs:

https://docs.datomic.com/cloud/operation/monitoring.html#dashboards

Specifically you want to look at a dashboard like this to confirm you are going over your provisioned level:

If that matches expectations, the next thing to do would be to understand exactly what your query is doing and why it is creating so many DDB reads when your intuition says it shouldn’t. To do that I recommend getting the io-stats and query-stats for the query in question, and potentially pasting them here or sending them in to support:

https://docs.datomic.com/cloud/api/io-stats.html#query

https://docs.datomic.com/cloud/api/query-stats.html
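A hedged sketch of capturing both, per the docs linked above (the query and the `:app/front-page` context keyword are illustrative; check the linked pages for the exact arg-map shape your client version supports):

```clojure
;; assumes (require '[datomic.client.api :as d]
;;                  '[clojure.pprint :as pp])
;; With :io-context and :query-stats in the arg-map, the result is a map
;; containing :ret (the query result) plus the collected stats.
(let [{:keys [ret io-stats query-stats]}
      (d/q {:query       '[:find ?e :where [?e :page/title]]
            :args        [db]
            :io-context  :app/front-page   ; enables io-stats for this call
            :query-stats true})]          ; enables query-stats
  (pp/pprint io-stats)     ; e.g. reads per source, including DDB
  (pp/pprint query-stats)  ; per-clause rows-in/rows-out
  ret)
```

The io-stats output in particular should show how many DDB reads the single query generates, which you can then multiply by your 10-15 parallel pulls.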