Kafka-backup: Failed to deserialize value for header 'kafka_replyPartition' on topic

Created on 11 Jun 2020  Β·  15 comments  Β·  Source: itadventurer/kafka-backup

QA ν΄λŸ¬μŠ€ν„°μ—μ„œ 또 λ‹€λ₯Έ 였λ₯˜κ°€ λ°œμƒν–ˆμŠ΅λ‹ˆλ‹€(kafka 2.4.0, 컀밋 f30b9ad9μ—μ„œ λΉŒλ“œλœ kafka-backup).

[2020-06-11 08:39:55,585] WARN Failed to deserialize value for header 'kafka_replyPartition' on topic 'cosmos-cs-reads', so using byte array (org.apache.kafka.connect.storage.SimpleHeaderConverter:68)
java.lang.StringIndexOutOfBoundsException: String index out of range: 0
        at java.base/java.lang.StringLatin1.charAt(Unknown Source)
        at java.base/java.lang.String.charAt(Unknown Source)
        at org.apache.kafka.connect.data.Values.parse(Values.java:822)
        at org.apache.kafka.connect.data.Values.parseString(Values.java:378)
        at org.apache.kafka.connect.storage.SimpleHeaderConverter.toConnectHeader(SimpleHeaderConverter.java:64)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:516)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$3(WorkerSinkTask.java:491)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:491)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:465)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
[2020-06-11 08:39:56,295] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:559)
org.apache.kafka.connect.errors.DataException: cosmos-cs-reads error: Not a byte array! cosmos-cs-cmds
        at de.azapps.kafkabackup.common.AlreadyBytesConverter.fromConnectData(AlreadyBytesConverter.java:19)
        at de.azapps.kafkabackup.common.record.RecordSerde.write(RecordSerde.java:121)
        at de.azapps.kafkabackup.common.segment.SegmentWriter.append(SegmentWriter.java:75)
        at de.azapps.kafkabackup.common.partition.PartitionWriter.append(PartitionWriter.java:57)
        at de.azapps.kafkabackup.sink.BackupSinkTask.put(BackupSinkTask.java:68)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
[2020-06-11 08:39:56,353] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.kafka.connect.errors.DataException: cosmos-cs-reads error: Not a byte array! cosmos-cs-cmds
        at de.azapps.kafkabackup.common.AlreadyBytesConverter.fromConnectData(AlreadyBytesConverter.java:19)
        at de.azapps.kafkabackup.common.record.RecordSerde.write(RecordSerde.java:121)
        at de.azapps.kafkabackup.common.segment.SegmentWriter.append(SegmentWriter.java:75)
        at de.azapps.kafkabackup.common.partition.PartitionWriter.append(PartitionWriter.java:57)
        at de.azapps.kafkabackup.sink.BackupSinkTask.put(BackupSinkTask.java:68)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
        ... 10 more
[2020-06-11 08:39:56,354] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
[2020-06-11 08:39:56,382] INFO Stopped BackupSinkTask (de.azapps.kafkabackup.sink.BackupSinkTask:139)

All 15 comments

JFYI, this topic seems to be a new one to me. It looks like it was created fairly recently.

JFYI 2, restarting does not help.

λ‹€μŒμ€ 주제 μ„€λͺ…μž…λ‹ˆλ‹€.

Topic: cosmos-cs-reads  PartitionCount: 1       ReplicationFactor: 1    Configs: compression.type=producer,min.insync.replicas=1,cleanup.policy=delete,segment.bytes=1073741824,flush.messages=9223372036854775807,file.delete.delay.ms=60000,max.message.bytes=1000012,min.compaction.lag.ms=0,message.timestamp.type=CreateTime,preallocate=false,index.interval.bytes=4096,min.cleanable.dirty.ratio=0.5,unclean.leader.election.enable=true,retention.bytes=120000000000,delete.retention.ms=86400000,message.timestamp.difference.max.ms=9223372036854775807,segment.index.bytes=10485760
        Topic: cosmos-cs-reads  Partition: 0    Leader: 1       Replicas: 1     Isr: 1

I tried to consume the messages with kafka-console-consumer and there is valid JSON inside; jq can parse it without any problem. A few fields are empty strings ( "field1":"","field2":"" ), though.
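
For reference, the check was roughly along these lines (the broker address and message count are placeholders, not taken from the thread; the script may be named kafka-console-consumer.sh depending on the installation):

    kafka-console-consumer --bootstrap-server <broker>:9092 \
        --topic cosmos-cs-reads --from-beginning --max-messages 10 | jq .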

Looks like there is an issue with the headers. Can you check what the headers look like?

Could you suggest a way to do that?

The easiest way is to use kafkacat: https://stackoverflow.com/questions/55180620/how-to-view-kafka-headers
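
For example, with a reasonably recent kafkacat a format string like the one below prints the headers next to each message (the broker address is a placeholder; %h is the header token in kafkacat's -f formatting):

    kafkacat -b <broker>:9092 -t cosmos-cs-reads -C -e \
        -f 'Key: %k  Headers: %h  Value: %s\n'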

Yeah.. just tried kafkacat and I see this in the headers now:
kafka_replyTopic=cosmos-cs-cmds,kafka_replyPartition=,kafka_correlationId=οΏ½οΏ½οΏ½DοΏ½οΏ½οΏ½οΏ½οΏ½ ;X,__TypeId__=eu.chargetime.ocpp.model.core.BootNotificationRequest

우리 κ°œλ°œμžλ“€μ΄ λ­”κ°€ 잘λͺ»ν•˜κ³  μžˆλŠ” 것 κ°™μŠ΅λ‹ˆλ‹€ :(

UPD: they said it is test data and this was expected. :(
UPD2: they said the kafka_* headers are added by the Spring Cloud framework.

μ•Œκ² μŠ΅λ‹ˆλ‹€. κ°μ‚¬ν•©λ‹ˆλ‹€... ν•΄κ²° 방법에 λŒ€ν•΄ 생각해야 ν•©λ‹ˆλ‹€. λ‚˜λŠ” 이것이 맀우 μ€‘μš”ν•˜λ‹€λŠ” 것을 μ•Œμ•˜μŠ΅λ‹ˆλ‹€ 😱

JFYI, I'm keeping this in the "broken" state to test possible fixes :-D

흠... #97μ—μ„œ 버그λ₯Ό μž¬ν˜„ν•΄ λ³΄μ•˜μŠ΅λ‹ˆλ‹€. λ¬Έμ œκ°€ λ‚˜νƒ€λ‚˜μ§€ μ•Šμ•˜μŠ΅λ‹ˆλ‹€(λ‚΄ 둜컬 μ»΄ν“¨ν„°μ—μ„œ – GitHub νŒŒμ΄ν”„λΌμΈμ΄ μ‹€ν–‰ μ€‘μž„)…

Could you add a message with the problematic headers here? https://github.com/itadventurer/kafka-backup/pull/97/files#diff-28c62e6ea255f4a9955c7be8c5d8a1cfR95
(obviously as hex-encoded data)

μ—¬κΈ°μ—μ„œ μž¬ν˜„ν•  수 있기λ₯Ό λ°”λžλ‹ˆλ‹€. ;)

κ°μ‚¬ν•©λ‹ˆλ‹€! μ΄λ²ˆμ£Όμ— λ„μ „ν•΄λ³Όκ²Œμš”!

I couldn't reproduce it either. However, I noticed that there are a few commits after f30b9ad (which my build is based on) that touch header handling. So I think I should cherry-pick those fixes and try again in my environment.

μ—…κ·Έλ ˆμ΄λ“œ 및 ꡬ성 μ‘°μ • ν›„μ—λŠ” 잘 μž‘λ™ν•©λ‹ˆλ‹€ ...
주된 μ΄μœ λŠ” λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€.

 connector.class=de.azapps.kafkabackup.sink.BackupSinkConnector
-key.converter=de.azapps.kafkabackup.common.AlreadyBytesConverter
-value.converter=de.azapps.kafkabackup.common.AlreadyBytesConverter
+key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+header.converter=org.apache.kafka.connect.converters.ByteArrayConverter
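
After restarting the connector, one way to confirm the sink task is healthy again is the Kafka Connect REST API (the host and port are assumptions, 8083 being only the default REST port; the connector name is taken from the task id in the log above):

    curl -s http://<connect-host>:8083/connectors/chrono_qa-backup-sink/status | jq .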

ν•΄κ²°λœ λŒ€λ‘œ μ’…λ£Œν•©λ‹ˆλ‹€.

이 νŽ˜μ΄μ§€κ°€ 도움이 λ˜μ—ˆλ‚˜μš”?
0 / 5 - 0 λ“±κΈ‰