Message Listeners in Ions

If I have an SQS message listener that I need to run for async processing, is there any way I can start this from within the nodes on a compute group, or do I always have to invoke a lambda from the outside to do such a thing?

I understand that it may not be a good idea to run asynchronous processes inside my compute group, but I’d like to get the benefit of “effective in-memory access to datoms” for the purposes of processing these messages.

You can certainly have a background thread that runs on your Datomic node(s) and handles async processing.
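As a rough illustration (all names here are hypothetical, not part of the Ions API), such a background consumer can be an ordinary daemon thread:

```clojure
;; Hypothetical sketch of a background consumer running on a Datomic node.
;; `poll-and-handle!` stands in for your own SQS-polling logic.
(ns my.app.worker)

(defn poll-and-handle! []
  ;; receive a batch of SQS messages and process them here,
  ;; with in-memory access to datoms on this node
  (Thread/sleep 1000))

(defn start-worker!
  "Start a daemon thread that loops forever; intended to be called once per node."
  []
  (doto (Thread. #(loop [] (poll-and-handle!) (recur)))
    (.setDaemon true)
    (.start)))
```

How to make sure something like `start-worker!` actually runs on every node after each deploy is the question the rest of this thread wrestles with.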

See Kafka Consumer as an Ion?

The main point I should make here is that your process should be coordinated via ion calls, but you should not make individual ion calls that are long-lived (i.e. don’t make run-my-whole-import-job a lambda-invoked ion).

Good to know. That still leaves me wondering how to ensure that the message listener is started after every redeploy.

I’m still not completely familiar with the internals of ions, so the crux of my question is really where in the code I should put the bootstrap of such listeners. Normally, I’d do something like this in an application’s “main” entry point and bootstrap the entire system (à la component, mount, integrant, etc.), but I couldn’t figure out how to declare such an entry point with an ion deployment.

Triggering something from the outside for each member of the ASG seems like it would be needlessly complicated.



I have a relevant question in the topic and would like to hear if you already found a solution.

My understanding is that it is okay to run background threads as long as they are coordinated through Lambda calls. However, this leaves me with another question.

Let’s say I have a Lambda ion that starts the background thread after a deployment. I can trigger that Lambda function via an EventBridge rule that watches the Datomic Ion deployments on CodeDeploy. That way, I can be certain the Lambda function runs after each successful deployment and the background thread is started. However, suppose I have a query group with, say, three nodes, and each node should start a background thread. If I understand correctly, the Lambda ion call will only be executed on one of the nodes in the query group rather than on all of them. Is there a specific Lambda ion type that executes on all of the nodes, or am I missing something else?


I’m curious why you haven’t just wired up your SQS handler to an ion lambda and then, in AWS, configured the SQS queue to be consumed using that lambda as the handler. This is the main way Ions was designed to connect easily with the rest of the AWS ecosystem. The lambda that is created is just a small invoker of your handler inside the ion, so you’re still getting in-memory access to datoms. In other words, your handler is still running in the compute group, whether it’s invoked via the lambda or you run your own async processor.
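A sketch of that wiring (the namespace, function names, and message-processing logic are made up; the `:input` key carrying the event as a JSON string is how Ion lambda entry points receive their payload):

```clojure
;; Hypothetical sketch of an SQS-triggered lambda ion entry point.
;; The lambda invoker forwards the SQS event to this fn as JSON under :input;
;; the body itself executes on a node in the compute group.
(ns my.app.sqs
  (:require [clojure.data.json :as json]))

(defn handle-sqs-event
  "Lambda ion entry point for an SQS event source."
  [{:keys [input]}]
  (doseq [record (get (json/read-str input) "Records")]
    ;; placeholder for your own handler, which runs in-process
    ;; with in-memory access to datoms
    (println "processing" (get record "body")))
  "ok")
```

The entry point still has to be declared under `:lambdas` in your `ion-config.edn`, and the queue attached to the generated lambda as an event source in AWS.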

Hi @jmshelby! You can check out this discussion in the datomic channel of the Clojurians Slack regarding this topic. Also refer to this blog post about the issues with using Lambda for SQS polling.

@marshall This. Is there a solution to this problem for Ion applications?


[I know this topic is old, but re-reading it more carefully, I find that @furkan3ayraktar’s (and @stuartrexking’s and @okocim’s) question could be expressed as “how can I ensure a startup process runs on each of N autoscaled Ions?”]

Technically, I don’t know of a way to solve the problem. But practically, I have been satisfied keeping all my entry points (http-direct and lambdas) in the same namespace and having them close over a single delay that initializes the system (which can include starting a “background worker thread”). The one “startup” delay is dereferenced in each of the entry-point handlers. It’s true that some stimulus must arrive on an entry point to start the thread, but if nothing ever arrives in the form of “real work” stimulus (e.g. an HTTP request or an SQS message or …), do you even care? I think in most scenarios the answer is no, but I can imagine cases where the goal is to bootstrap many ion instances to chew through a lot of pre-configured work with no “input”, and this approach won’t address that need.
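The pattern above can be sketched roughly like this (all names are hypothetical; the point is the one shared delay forced by every entry point):

```clojure
;; Hypothetical sketch of the "single startup delay" pattern.
(ns my.app.entry)

(defonce init
  (delay
    ;; one-time system bootstrap: build state, start worker thread(s), etc.
    (doto (Thread. (fn []
                     ;; background worker loop would go here
                     ))
      (.setDaemon true)
      (.start))
    :started))

(defn http-handler
  "http-direct entry point; forces init on the first request this node sees."
  [req]
  @init
  {:status 200 :body "ok"})

(defn lambda-handler
  "Lambda ion entry point; also forces the same init."
  [{:keys [input]}]
  @init
  "ok")
```

Because `delay` only runs its body once and `defonce` survives code reloading at the REPL, every entry point can cheaply deref `init` and the bootstrap happens at most once per node, on whichever stimulus arrives first.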