Another error occurred on the QA cluster (kafka 2.4.0, kafka-backup built from commit f30b9ad9).
[2020-06-11 08:39:55,585] WARN Failed to deserialize value for header 'kafka_replyPartition' on topic 'cosmos-cs-reads', so using byte array (org.apache.kafka.connect.storage.SimpleHeaderConverter:68)
java.lang.StringIndexOutOfBoundsException: String index out of range: 0
at java.base/java.lang.StringLatin1.charAt(Unknown Source)
at java.base/java.lang.String.charAt(Unknown Source)
at org.apache.kafka.connect.data.Values.parse(Values.java:822)
at org.apache.kafka.connect.data.Values.parseString(Values.java:378)
at org.apache.kafka.connect.storage.SimpleHeaderConverter.toConnectHeader(SimpleHeaderConverter.java:64)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:516)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$3(WorkerSinkTask.java:491)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:491)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:465)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
[2020-06-11 08:39:56,295] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:559)
org.apache.kafka.connect.errors.DataException: cosmos-cs-reads error: Not a byte array! cosmos-cs-cmds
at de.azapps.kafkabackup.common.AlreadyBytesConverter.fromConnectData(AlreadyBytesConverter.java:19)
at de.azapps.kafkabackup.common.record.RecordSerde.write(RecordSerde.java:121)
at de.azapps.kafkabackup.common.segment.SegmentWriter.append(SegmentWriter.java:75)
at de.azapps.kafkabackup.common.partition.PartitionWriter.append(PartitionWriter.java:57)
at de.azapps.kafkabackup.sink.BackupSinkTask.put(BackupSinkTask.java:68)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
[2020-06-11 08:39:56,353] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.kafka.connect.errors.DataException: cosmos-cs-reads error: Not a byte array! cosmos-cs-cmds
at de.azapps.kafkabackup.common.AlreadyBytesConverter.fromConnectData(AlreadyBytesConverter.java:19)
at de.azapps.kafkabackup.common.record.RecordSerde.write(RecordSerde.java:121)
at de.azapps.kafkabackup.common.segment.SegmentWriter.append(SegmentWriter.java:75)
at de.azapps.kafkabackup.common.partition.PartitionWriter.append(PartitionWriter.java:57)
at de.azapps.kafkabackup.sink.BackupSinkTask.put(BackupSinkTask.java:68)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
... 10 more
[2020-06-11 08:39:56,354] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
[2020-06-11 08:39:56,382] INFO Stopped BackupSinkTask (de.azapps.kafkabackup.sink.BackupSinkTask:139)
JFYI, this topic is new to me; it was created only recently.
JFYI 2: restarting the task does not help.
Here is the topic description:
Topic: cosmos-cs-reads PartitionCount: 1 ReplicationFactor: 1 Configs: compression.type=producer,min.insync.replicas=1,cleanup.policy=delete,segment.bytes=1073741824,flush.messages=9223372036854775807,file.delete.delay.ms=60000,max.message.bytes=1000012,min.compaction.lag.ms=0,message.timestamp.type=CreateTime,preallocate=false,index.interval.bytes=4096,min.cleanable.dirty.ratio=0.5,unclean.leader.election.enable=true,retention.bytes=120000000000,delete.retention.ms=86400000,message.timestamp.difference.max.ms=9223372036854775807,segment.index.bytes=10485760
Topic: cosmos-cs-reads Partition: 0 Leader: 1 Replicas: 1 Isr: 1
I tried consuming the messages with kafka-console-consumer, and there is valid JSON inside; jq parses it without any problems. Though a few fields are empty strings ( "field1":"","field2":"" ).
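If it helps, an easy way to list just the empty-string fields is to pipe each consumed message through a jq filter. The JSON below is a made-up stand-in for a real message (field names are illustrative):

```shell
# Keep only the keys whose value is an empty string.
echo '{"chargeBoxId":"cb-1","field1":"","field2":""}' \
  | jq -c 'to_entries | map(select(.value == "")) | from_entries'
# → {"field1":"","field2":""}
```

In practice the echo would be replaced by the kafka-console-consumer output.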
It looks like something is wrong with the headers. Is there a way to inspect what the headers look like? Could you suggest one?
The easiest way is to use kafkacat: https://stackoverflow.com/questions/55180620/how-to-view-kafka-headers
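Something along these lines should do it; the %h format token prints the record headers next to key and payload (localhost:9092 stands in for your actual broker address):

```shell
# Consume from the beginning and print key, headers and payload of each record;
# -e makes kafkacat exit once it reaches the end of the partition.
kafkacat -C -b localhost:9092 -t cosmos-cs-reads -o beginning -e \
  -f 'key=%k headers=%h payload=%s\n'
```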
Ah.. just tried kafkacat, and here is what I see in the headers now:
kafka_replyTopic=cosmos-cs-cmds,kafka_replyPartition=,kafka_correlationId=���D����� ;X,__TypeId__=eu.chargetime.ocpp.model.core.BootNotificationRequest
It seems our developers are doing something wrong :(
UPD: they say this is test data and it is expected to look like that. :(
UPD2: they said the kafka_* headers are added by the Spring Cloud framework.
Got it, thanks... I need to think about a fix. I see this is quite critical 😱
JFYI, I am keeping this cluster in its "broken" state to test possible fixes :-D
Hmm... I tried to reproduce the bug in #97, but the problem did not show up (on my local machine; the GitHub pipeline is running right now)...
Could you add a message with the problematic headers here: https://github.com/itadventurer/kafka-backup/pull/97/files#diff-28c62e6ea255f4a9955c7be8c5d8a1cfR95 (as hex-encoded data, obviously)?
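For example, a header value can be hex-encoded on the command line with xxd. The value here is the kafka_replyTopic string from the dump above; binary values like kafka_correlationId would be encoded the same way:

```shell
# Hex-encode a header value so it can be embedded verbatim in a test case.
printf '%s' 'cosmos-cs-cmds' | xxd -p
# → 636f736d6f732d63732d636d6473
```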
Hoping I can reproduce it there ;)
Thanks! I will give it a try this week!
I was not able to reproduce it either. However, I realized there are a few commits after f30b9ad (which my build is based on) that touch header handling, so I think I should pick up those fixes and try again in my environment.
Well, after an upgrade and a configuration tweak it works fine...
The main change was the following:
connector.class=de.azapps.kafkabackup.sink.BackupSinkConnector
-key.converter=de.azapps.kafkabackup.common.AlreadyBytesConverter
-value.converter=de.azapps.kafkabackup.common.AlreadyBytesConverter
+key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+header.converter=org.apache.kafka.connect.converters.ByteArrayConverter
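The reason this helps, as far as I can tell: Connect's default header.converter is SimpleHeaderConverter, which tries to infer a type by parsing each header value as a string, and an empty value (like kafka_replyPartition= above) trips that parser with the StringIndexOutOfBoundsException from the first log. ByteArrayConverter passes the raw bytes through untouched, which is what a backup sink wants anyway. A minimal sink config sketch along these lines (the name and topic list are illustrative; kafka-backup's own options are omitted):

```properties
name=chrono_qa-backup-sink
connector.class=de.azapps.kafkabackup.sink.BackupSinkConnector
topics=cosmos-cs-reads,cosmos-cs-cmds
# Pass keys, values and headers through as raw bytes; never try to parse them.
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
header.converter=org.apache.kafka.connect.converters.ByteArrayConverter
```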
Closing as resolved.