Execution error when importing from cloud with dev-local 0.9.225

Thank you @jaret and the rest of the team for increasing the limit on imported string lengths in the latest version of dev-local.

Using the latest release, we are experiencing the following exception when attempting to import-cloud from our database.

Datomic Cloud:         704-8957
dev-tools:             0.9.51
com.datomic/dev-local: 0.9.225
(dl/import-cloud
  {:source {:system      "the-system-name"
            :db-name     "the-db-name"
            :server-type :ion
            :region      "the-region"
            :endpoint    "http://entry.the-query-group-name.the-region.datomic.net:8182/"
            :proxy-port  8182}
   :dest   {:system      "the-system-name"
            :server-type :dev-local
            :db-name     "the-db-name"}})
Importing.........................................................Execution error (IllegalArgumentException) at org.fressian.handlers.WriteHandlerLookup/requireWriteHandler (WriteHandlerLookup.java:48).
Cannot write 90 as tag null
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Cannot write 90 as tag null
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.lang.IllegalArgumentException: Cannot write 90 as tag null
	at org.fressian.handlers.WriteHandlerLookup.requireWriteHandler(WriteHandlerLookup.java:48)
	at org.fressian.FressianWriter.writeAs(FressianWriter.java:458)
	at org.fressian.FressianWriter.writeAs(FressianWriter.java:464)
	at org.fressian.FressianWriter.writeObject(FressianWriter.java:472)
	at datomic.dev_local.tx$reify__19341.write(tx.clj:64)
	at org.fressian.FressianWriter.doWrite(FressianWriter.java:449)
	at org.fressian.FressianWriter.writeAs(FressianWriter.java:459)
	at org.fressian.FressianWriter.writeAs(FressianWriter.java:464)
	at org.fressian.FressianWriter.writeObject(FressianWriter.java:472)
	at datomic.dev_local.tx$write_fressian.invokeStatic(tx.clj:94)
	at datomic.dev_local.tx$write_fressian.invoke(tx.clj:91)
	at datomic.dev_local.tx$fressian_bbuf.invokeStatic(tx.clj:101)
	at datomic.dev_local.tx$fressian_bbuf.invoke(tx.clj:97)
	at datomic.dev_local.tx$sized_fressian_bbuf.invokeStatic(tx.clj:110)
	at datomic.dev_local.tx$sized_fressian_bbuf.invoke(tx.clj:104)
	at datomic.dev_local.tx$marshal_import_tx$fn__19370.invoke(tx.clj:209)
	at clojure.core$mapv$fn__8430.invoke(core.clj:6912)
	at clojure.lang.PersistentVector.reduce(PersistentVector.java:343)
	at clojure.core$reduce.invokeStatic(core.clj:6827)
	at clojure.core$mapv.invokeStatic(core.clj:6903)
	at clojure.core$mapv.invoke(core.clj:6903)
	at datomic.dev_local.tx$marshal_import_tx.invokeStatic(tx.clj:206)
	at datomic.dev_local.tx$marshal_import_tx.invoke(tx.clj:202)
	at clojure.core$pmap$fn__8447$fn__8448.invoke(core.clj:7022)
	at clojure.core$binding_conveyor_fn$fn__5739.invoke(core.clj:2030)
	at clojure.lang.AFn.call(AFn.java:18)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:832)

Any idea what could be going wrong here? Thanks!

Also, I can rule out oversized strings: we have no string values (at least as of the current t) that exceed the new 1,000,000 character limit :slight_smile:

(d/q '{:find  [?text-count]
       :in    [$]
       :where [[?ident :db/valueType :db.type/string]
               [?n ?ident ?text]
               [(count ?text) ?text-count]
               [(>= ?text-count 1000000)]]}
  db) 
=> []
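
A variant of the query above (a sketch, using the same db binding) that returns the longest string length found rather than filtering on the limit. Note that it scans every string value, so it can be slow on a large db:

(d/q '{:find  [(max ?text-count)]
       :in    [$]
       :where [[?ident :db/valueType :db.type/string]
               [?n ?ident ?text]
               [(count ?text) ?text-count]]}
  db)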

Hi @joshkh

Could you share your transitive deps tree? Either the output of the command below or a trace.edn (from clj -Strace) would work:

clj -Stree

Also, what do the first 10 transactions look like if you pull them with tx-range? (https://docs.datomic.com/cloud/time/log.html#tx-range)

Thanks,
Jaret

Can you also confirm that you are not passing any filter key to import-cloud? Even an empty filter?

And one more thought: when did you create this Cloud DB? Was this post- or pre-tuples release? Tuples were released 06/27/2019…

Hi Jaret,

Thanks for the response.

Can you also confirm that you are not passing any filter key to import-cloud? Even an empty filter?

The map passed to import-cloud is exactly the one shown at the top of the post. No filters, not even empty ones.

And one more thought: when did you create this Cloud DB? Was this post- or pre-tuples release? Tuples were released 06/27/2019…

This Cloud DB was created prior to the tuples release.

Also, what do the first 10 transactions look like if you pull them with tx-range?

(seq (d/tx-range conn {:start 0 :end 9}))

Starting at :t 6 they become custom attribute installations. Happy to share in a DM or a support ticket.
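
For anyone reading along, here is roughly how those transactions can be inspected (a sketch; conn is the connection used above, and note that tx-range's :end bound is exclusive, so {:start 0 :end 10} covers the first ten t values):

(require '[clojure.pprint :as pp])

;; Each element of tx-range is a map with :t and :data (the datoms).
(->> (d/tx-range conn {:start 0 :end 10})
     (map #(select-keys % [:t :data]))
     (run! pp/pprint))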

Could you share your transitive deps tree?

org.clojure/clojure 1.10.0
  org.clojure/core.specs.alpha 0.2.44
  org.clojure/spec.alpha 0.2.176
com.datomic/dev-local 0.9.225
  org.clojure/tools.analyzer 0.6.9
  com.google.errorprone/error_prone_annotations 2.3.4
  com.datomic/client-api 0.8.54
    org.clojure/core.async 0.5.527
      org.clojure/tools.analyzer.jvm 0.7.2
  com.cognitect/anomalies 0.1.12
  com.google.guava/listenablefuture 9999.0-empty-to-avoid-conflict-with-guava
  com.datomic/client 0.8.111
    com.cognitect/http-client 0.1.105
      org.eclipse.jetty/jetty-http 9.4.27.v20200227
        org.eclipse.jetty/jetty-io 9.4.27.v20200227
      org.eclipse.jetty/jetty-util 9.4.27.v20200227
      org.eclipse.jetty/jetty-client 9.4.27.v20200227
  org.checkerframework/checker-compat-qual 2.5.5
  com.google.guava/failureaccess 1.0.1
  com.google.guava/guava 28.2-android
  com.datomic/client-impl-shared 0.8.80
    com.cognitect/hmac-authn 0.1.195
      commons-codec/commons-codec 1.10
    com.cognitect/transit-clj 0.8.313
      com.cognitect/transit-java 0.8.337
        com.fasterxml.jackson.core/jackson-core 2.8.7
        org.msgpack/msgpack 0.6.12
          com.googlecode.json-simple/json-simple 1.1.1
          org.javassist/javassist 3.18.1-GA
        javax.xml.bind/jaxb-api 2.3.0
  com.google.j2objc/j2objc-annotations 1.3
  com.datomic/query-support 0.8.27
  org.fressian/fressian 0.6.5
  org.clojure/tools.reader 1.0.0-beta4
  org.clojure/core.memoize 0.5.9
  org.clojure/data.priority-map 0.0.7
  com.google.code.findbugs/jsr305 3.0.2
  org.clojure/core.cache 0.6.5
  org.ow2.asm/asm-all 4.2
org.clojure/test.check 0.9.0
com.datomic/client-cloud 0.8.102
  com.cognitect/s3-creds 0.1.22
  com.datomic/s3-access-keys 0.1.6
  com.amazonaws/aws-java-sdk-s3 1.11.210
    com.amazonaws/jmespath-java 1.11.210
      com.fasterxml.jackson.core/jackson-databind 2.6.7.1
        com.fasterxml.jackson.core/jackson-annotations 2.6.0
    com.amazonaws/aws-java-sdk-core 1.11.210
      joda-time/joda-time 2.8.1
      com.fasterxml.jackson.dataformat/jackson-dataformat-cbor 2.6.7
      software.amazon.ion/ion-java 1.0.2
      org.apache.httpcomponents/httpclient 4.5.2
        org.apache.httpcomponents/httpcore 4.4.4
        commons-logging/commons-logging 1.2
    com.amazonaws/aws-java-sdk-kms 1.11.210

Did you ever run administer-system on this DB to upgrade your tuple schema?

https://docs.datomic.com/cloud/operation/howto.html#upgrade-base-schema
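
For reference, the upgrade itself is a call to administer-system (a sketch, assuming client is bound to a client for your system; the db name below is illustrative):

(require '[datomic.client.api :as d])

;; Upgrades the database's base schema to the latest level.
(d/administer-system client {:db-name "the-db-name"
                             :action  :upgrade-schema})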

Did you ever run administer-system on this DB to upgrade your tuple schema?

I’m 99% sure that we ran this directly after the tuples release, because we needed tuples at the time and have been using them since. That being said, is it possible that we could be using tuples without running this?

Hi Jaret,

The db that I am trying to import is at schema level 4.

{:schema-level 4}

That being said, is it possible that we could be using tuples without running this?

No, you would have to run the upgrade.

A schema level of 4 means that you ran the upgrade. I think that rules out the theory I had. I will discuss this issue with the team this morning and review your deps more closely.

@joshkh would it be possible for you to DM or email (support@cognitect.com) the first 10 transactions?

No problem, Jaret. I have emailed the first 10 transactions to Cognitect support.

Thanks again for looking into this – we really appreciate it.

Hi Josh,

How long (wall clock) does the import proceed before encountering this error?

Hi Stu,

Re-running after the process has failed, the error is instantaneous.

Starting with a cleared :storage-dir directory, the process runs for just over 3 minutes before the error occurs, leaving a ~550 MB db.log file.
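
For context, dev-local reads :storage-dir from ~/.datomic/dev-local.edn, e.g. (illustrative path):

;; ~/.datomic/dev-local.edn
{:storage-dir "/full/path/to/dev-local-storage"}

"Clearing" here just means deleting the imported system's directory under that path before re-running the import.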

@joshkh to troubleshoot this better, we improved the error messaging on import-cloud in the latest release of dev-tools. Could you try importing with the latest dev-local, 0.9.229?
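
Assuming dev-local is brought in via deps.edn, picking up the new release is just a coordinate bump (sketch):

;; deps.edn
{:deps {com.datomic/dev-local {:mvn/version "0.9.229"}}}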

Thanks for the update, Jaret. I do see some more details in the error message. Since we’re approaching “real life” data, do you mind if we continue the discussion in a support email, and then update the forum post (for others) when we find a resolution?

@joshkh absolutely! I am sending you a follow-up ticket from our previous case where you can attach the info.