Kafka-backup: Failed to deserialize value for header 'kafka_replyPartition' on topic

Created on 11 Jun 2020  ·  15 comments  ·  Source: itadventurer/kafka-backup

Got another error on our QA cluster (Kafka 2.4.0, kafka-backup built from commit f30b9ad9).

[2020-06-11 08:39:55,585] WARN Failed to deserialize value for header 'kafka_replyPartition' on topic 'cosmos-cs-reads', so using byte array (org.apache.kafka.connect.storage.SimpleHeaderConverter:68)
java.lang.StringIndexOutOfBoundsException: String index out of range: 0
        at java.base/java.lang.StringLatin1.charAt(Unknown Source)
        at java.base/java.lang.String.charAt(Unknown Source)
        at org.apache.kafka.connect.data.Values.parse(Values.java:822)
        at org.apache.kafka.connect.data.Values.parseString(Values.java:378)
        at org.apache.kafka.connect.storage.SimpleHeaderConverter.toConnectHeader(SimpleHeaderConverter.java:64)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:516)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$3(WorkerSinkTask.java:491)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:491)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:465)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
[2020-06-11 08:39:56,295] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:559)
org.apache.kafka.connect.errors.DataException: cosmos-cs-reads error: Not a byte array! cosmos-cs-cmds
        at de.azapps.kafkabackup.common.AlreadyBytesConverter.fromConnectData(AlreadyBytesConverter.java:19)
        at de.azapps.kafkabackup.common.record.RecordSerde.write(RecordSerde.java:121)
        at de.azapps.kafkabackup.common.segment.SegmentWriter.append(SegmentWriter.java:75)
        at de.azapps.kafkabackup.common.partition.PartitionWriter.append(PartitionWriter.java:57)
        at de.azapps.kafkabackup.sink.BackupSinkTask.put(BackupSinkTask.java:68)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
[2020-06-11 08:39:56,353] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.kafka.connect.errors.DataException: cosmos-cs-reads error: Not a byte array! cosmos-cs-cmds
        at de.azapps.kafkabackup.common.AlreadyBytesConverter.fromConnectData(AlreadyBytesConverter.java:19)
        at de.azapps.kafkabackup.common.record.RecordSerde.write(RecordSerde.java:121)
        at de.azapps.kafkabackup.common.segment.SegmentWriter.append(SegmentWriter.java:75)
        at de.azapps.kafkabackup.common.partition.PartitionWriter.append(PartitionWriter.java:57)
        at de.azapps.kafkabackup.sink.BackupSinkTask.put(BackupSinkTask.java:68)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
        ... 10 more
[2020-06-11 08:39:56,354] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
[2020-06-11 08:39:56,382] INFO Stopped BackupSinkTask (de.azapps.kafkabackup.sink.BackupSinkTask:139)
bug

All 15 comments

JFYI, this topic is new to me; I believe it was only created recently.

JFYI 2, a restart does not help.

Here is the topic description:

Topic: cosmos-cs-reads  PartitionCount: 1       ReplicationFactor: 1    Configs: compression.type=producer,min.insync.replicas=1,cleanup.policy=delete,segment.bytes=1073741824,flush.messages=9223372036854775807,file.delete.delay.ms=60000,max.message.bytes=1000012,min.compaction.lag.ms=0,message.timestamp.type=CreateTime,preallocate=false,index.interval.bytes=4096,min.cleanable.dirty.ratio=0.5,unclean.leader.election.enable=true,retention.bytes=120000000000,delete.retention.ms=86400000,message.timestamp.difference.max.ms=9223372036854775807,segment.index.bytes=10485760
        Topic: cosmos-cs-reads  Partition: 0    Leader: 1       Replicas: 1     Isr: 1

I tried consuming the messages with kafka-console-consumer and they contain valid JSON; jq parses them without any problems. Some fields are empty strings ( "field1":"","field2":"" ).

It looks like there is a problem with the headers. Could you check what the headers look like?

Could you suggest how to do that?

The easiest way is to use kafkacat: https://stackoverflow.com/questions/55180620/how-to-view-kafka-headers

Yes.. just tried it with kafkacat, and now I see this in the headers:
kafka_replyTopic=cosmos-cs-cmds,kafka_replyPartition=,kafka_correlationId=���D����� ;X,__TypeId__=eu.chargetime.ocpp.model.core.BootNotificationRequest

It seems our developers did something wrong :(

UPD: They say it is test data and this is expected.. :(
UPD2: They told me the kafka_* headers are added by the Spring Cloud framework.
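The empty `kafka_replyPartition=` value above lines up with the first WARN in the log: Connect's `SimpleHeaderConverter` tries to infer a type by parsing the header value as a string, and `Values.parse` probes the first character of the value, which throws on an empty string. A minimal plain-Java sketch of that failure mode (an illustration only, not the actual Connect code; `classify` is a hypothetical helper):

```java
// Sketch: why an empty header value such as kafka_replyPartition= fails.
// A first-character probe on "" throws StringIndexOutOfBoundsException,
// exactly the exception seen in the WARN stack trace above.
public class EmptyHeaderParse {
    static String classify(String headerValue) {
        try {
            headerValue.charAt(0); // what a first-character type probe does
            return "parsed";
        } catch (StringIndexOutOfBoundsException e) {
            return "StringIndexOutOfBoundsException";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(""));   // prints "StringIndexOutOfBoundsException"
        System.out.println(classify("42")); // prints "parsed"
    }
}
```

Connect logs this as a WARN and falls back to a raw byte array, which is why the fatal error only surfaces later in the converter.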

OK, thanks… I have to think about how to fix this. I can see this is quite critical.

JFYI, I am keeping this in the "broken" state so I can test your eventual fix :-D

Hmm… I have tried to reproduce the bug in #97. The problem does not show up (on my local machine – the GitHub pipeline is still running)…

Could you try adding your problematic header message here? https://github.com/itadventurer/kafka-backup/pull/97/files#diff-28c62e6ea255f4a9955c7be8c5d8a1cfR95
(as hex-encoded data, obviously)

Hopefully we can reproduce it there ;)

Thanks! I will try this week!

I could not reproduce it either. But I noticed there are several commits touching header processing after commit f30b9ad (which my build is based on). So I guess I have to bump the revision and try again in my environment.

After upgrading and adjusting the configuration it works fine...
I guess the main reason was this change:

 connector.class=de.azapps.kafkabackup.sink.BackupSinkConnector
-key.converter=de.azapps.kafkabackup.common.AlreadyBytesConverter
-value.converter=de.azapps.kafkabackup.common.AlreadyBytesConverter
+key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+header.converter=org.apache.kafka.connect.converters.ByteArrayConverter
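For completeness, a sketch of what the full sink connector properties might look like with that change applied. The connector `name`, `topics.regex`, and `target.dir` values here are placeholders, and the kafka-backup-specific keys (`target.dir`, `max.segment.size.bytes`) should be checked against the project README rather than taken as canonical:

```properties
name=backup-sink
connector.class=de.azapps.kafkabackup.sink.BackupSinkConnector
tasks.max=1
topics.regex=cosmos-.*
# Pass keys, values, and headers through as raw bytes instead of
# attempting deserialization (AlreadyBytesConverter choked on the
# Spring Cloud kafka_* headers shown above):
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
header.converter=org.apache.kafka.connect.converters.ByteArrayConverter
target.dir=/var/kafka-backup
max.segment.size.bytes=1073741824
```

Note that without the `header.converter` line, Connect falls back to `SimpleHeaderConverter`, which is exactly what triggered the WARN on the empty `kafka_replyPartition` header.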

Closing as resolved, then.
