kafka-backup: Another NPE

Created on 17 Mar 2020  Β·  16 comments  Β·  Source: itadventurer/kafka-backup

μ–΄μ œ μ•„λž˜μ˜ NPE에 μ μ€‘ν–ˆμŠ΅λ‹ˆλ‹€(컀밋 3c95089c μ‚¬μš©). 아직 여기에 μžˆμ§€λ§Œ λ§ˆμŠ€ν„°(f30b9ad9)의 μ΅œμ‹  μ»€λ°‹μœΌλ‘œ 였늘 μ‹œλ„ν–ˆμŠ΅λ‹ˆλ‹€. μ•„λž˜ 좜λ ₯은 μ΅œμ‹  λ²„μ „μž…λ‹ˆλ‹€.

λ³€κ²½λœ μ‚¬ν•­μž…λ‹ˆλ‹€. eCryptfs둜 λ§ˆμ΄κ·Έλ ˆμ΄μ…˜ν–ˆμŠ΅λ‹ˆλ‹€. λ‚˜λŠ” kafka-backup을 μ€‘μ§€ν•˜κ³  target dir의 이름을 λ°”κΎΈκ³  λΉ„μš°κ³  chattr +i λ°±μ—… 싱크 ꡬ성을 μ§€μ •ν–ˆμŠ΅λ‹ˆλ‹€(kafka-backup이 Puppet에 μ˜ν•΄ λ‹€μ‹œ μ‹œμž‘λ˜λŠ” 것을 λ°©μ§€ν•˜κΈ° μœ„ν•΄). 그런 λ‹€μŒ eCryptfs λ³€κ²½ 사항을 λ°°ν¬ν•˜κ³  rsyncλ₯Ό λ‹€μ‹œ μˆ˜ν–‰ν•œ λ‹€μŒ chattr +i ν•΄μ œν•˜κ³  Puppet을 λ‹€μ‹œ μ μš©ν–ˆμŠ΅λ‹ˆλ‹€.

So the main question now: should I try to debug this, or just wipe it and start another fresh backup? This is QA, so there is time.

[2020-03-17 02:23:47,321] INFO [Consumer clientId=connector-consumer-chrono_qa-backup-sink-0, groupId=connect-chrono_qa-backup-sink] Setting offset for partition [redacted].chrono-billable-datasink-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka5.node:9093 (id: 5 rack: null), epoch=187}} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:762)
[2020-03-17 02:23:47,697] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
java.lang.NullPointerException
        at de.azapps.kafkabackup.sink.BackupSinkTask.close(BackupSinkTask.java:122)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:397)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:591)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
[2020-03-17 02:23:47,705] ERROR WorkerSinkTask{id=chrono_qa-backup-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
[2020-03-17 02:23:47,705] INFO Stopped BackupSinkTask (de.azapps.kafkabackup.sink.BackupSinkTask:139)

All 16 comments

JFYI, I wiped it and started a fresh backup. Feel free to close this issue.

흠... μ΄μƒν•˜λ„€μš”... κ΄€λ ¨ μ½”λ“œ 쀄은 λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€. https://github.com/itadventurer/kafka-backup/blob/f30b9ad963c8a7d266c8eacd50bd7c5c3ddbbc16/src/main/java/de/Backazapps/kafkabackup/sink/java #L121 -L122

partitionWriters is filled in the open() call: https://github.com/itadventurer/kafka-backup/blob/f30b9ad963c8a7d266c8eacd50bd7c5c3ddbbc16/src/main/java/de/azapps/kafkabackup/sink/BackupSinkTask.java#L107
which is called for every TopicPartition that gets opened... I do not understand why this is happening.

Do you have any data I could use for further debugging? It would be interesting to try to reproduce this. At the very least we should show a meaningful error instead of throwing an NPE...
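
For reference, a minimal sketch of the failure mode and of the kind of guard that could replace the NPE. This is not the actual kafka-backup source: PartitionWriter is a stand-in type, and only the partitionWriters map and the close() entry point are taken from the stack trace and the lines linked above.

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.errors.ConnectException;

// Hypothetical fragment, not the real BackupSinkTask: it only shows the
// guard one could put around partitionWriters.get(...) at the place where
// the NPE currently surfaces (BackupSinkTask.java:122).
class BackupSinkCloseSketch {
    // Stand-in for the real writer type used by kafka-backup.
    interface PartitionWriter {
        void close();
    }

    private final Map<TopicPartition, PartitionWriter> partitionWriters = new HashMap<>();

    public void close(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            PartitionWriter writer = partitionWriters.get(tp);
            if (writer == null) {
                // Fail with a message naming the partition instead of an NPE.
                throw new ConnectException("close() was called for " + tp
                        + " but no PartitionWriter was ever opened for it");
            }
            writer.close();
        }
        partitionWriters.keySet().removeAll(partitions);
    }
}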

Hi! Yes, I still have the old backup directory. I suspect using it could affect the current cluster backup state though, since I did not change the sink name.

λ‚΄ μƒκ°μ—λŠ” 이 λŒ€μƒ 디렉토리가 eCryptfsλ₯Ό ν™œμ„±ν™”ν•˜λŠ” λ™μ•ˆ μ–΄λ–€ μ‹μœΌλ‘œλ“  μ†μƒλ˜μ—ˆμŠ΅λ‹ˆλ‹€. 일뢀 파일이 μ‹€μˆ˜λ‘œ λ³€κ²½λ˜μ—ˆκ±°λ‚˜ 이와 μœ μ‚¬ν•œ 것일 수 μžˆμŠ΅λ‹ˆλ‹€.

흠… λ―Όκ°ν•œ 정보가 ν¬ν•¨λ˜μ–΄ μžˆμŠ΅λ‹ˆκΉŒ? https://send.firefox.com/에 μ—…λ‘œλ“œν•˜κ³  링크λ₯Ό λ³΄λ‚΄μ£Όμ‹­μ‹œμ˜€. λ‚˜λŠ” 였λ₯˜λ₯Ό μž¬ν˜„ν•˜λ €κ³  λ…Έλ ₯ν•  κ²ƒμž…λ‹ˆλ‹€.
그렇지 μ•ŠμœΌλ©΄ μƒˆλ‘œμš΄ ν΄λŸ¬μŠ€ν„°λ‘œ μž¬ν˜„μ„ μ‹œλ„ν•˜κ±°λ‚˜ 당신이 옳기λ₯Ό λ°”λΌλ©΄μ„œ 문제λ₯Ό μ’…λ£Œν•©λ‹ˆλ‹€ ;)

It happened on another cluster today as well...
There is an Azure backup cronjob that stops kafka-backup, unmounts eCryptfs, runs azcopy sync, then remounts eCryptfs and starts kafka-backup again.
Tonight the umount step failed, so the script failed as well (set -e). That looks like the point where the problem occurs, though I still need to double-check the timeline carefully. I will update this issue later.

UPD. I just checked the logs. The NPE actually happened earlier: kafka-backup had been killed by OOM several times... It looks like -Xmx1024M or Docker memory_limit=1152M is not enough for this cluster. (Any ideas on how to calculate the HEAP/RAM size for kafka-backup?)

Would you debug this data yourself? It contains sensitive company information, so I cannot upload it...

BTW, can a failed sink kill kafka-connect? For a single-sink error (when there are no other sinks/connectors) it would be good if the whole standalone connect process failed.

BTW, can a failed sink kill kafka-connect? For a single-sink error (when there are no other sinks/connectors) it would be good if the whole standalone connect process failed.
See #46
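
(Not from this project, just a sketch.) Since a failed task does not take the Connect worker down on its own, one workaround is an external watchdog that polls the Connect REST status endpoint and exits when the backup sink task reports FAILED, so the process supervisor restarts the whole standalone worker. The connector name below is taken from the logs in this issue; host, port and the naive string match on the status JSON are assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical external watchdog, not part of kafka-backup or Kafka Connect.
// It polls GET /connectors/<name>/status and exits the JVM when a task is
// FAILED, letting systemd / Docker / k8s restart the whole unit.
public class ConnectTaskWatchdog {
    public static void main(String[] args) throws Exception {
        String statusUrl = "http://localhost:8083/connectors/chrono_qa-backup-sink/status";
        HttpClient client = HttpClient.newHttpClient();
        while (true) {
            HttpResponse<String> resp = client.send(
                    HttpRequest.newBuilder(URI.create(statusUrl)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            // Crude check: the status JSON contains "state":"FAILED" for a dead task.
            if (resp.statusCode() != 200 || resp.body().contains("\"state\":\"FAILED\"")) {
                System.err.println("backup sink task is FAILED, exiting so the supervisor restarts the process");
                System.exit(1);
            }
            Thread.sleep(30_000);
        }
    }
}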

UPD. I just checked the logs. The NPE actually happened earlier: kafka-backup had been killed by OOM several times... It looks like -Xmx1024M or Docker memory_limit=1152M is not enough for this cluster. (Any ideas on how to calculate the HEAP/RAM size for kafka-backup?)

Sorry, I missed that update. Feel free to add another comment ;)

I have no idea how to calculate that right now. I opened a new ticket #47 for that discussion.

Would you debug this data yourself? It contains sensitive company information, so I cannot upload it...

Yes, please! That would be awesome!

Would you debug this data yourself? It contains sensitive company information, so I cannot upload it...

Yes, please! That would be awesome!

Unfortunately I am not very good at Java debugging... I could run something if you guide me through it.

OK, I will try to think about how to approach this over the next few days. Maybe I will stumble over the issue by accident :joy: (I want to write more tests anyway)
It would be really great if you could reproduce it with non-company data!

μ—¬κΈ°μ—μ„œ λ³Έ 것에 λ”°λ₯΄λ©΄ kafka-backup ν”„λ‘œμ„ΈμŠ€λ₯Ό kill -9 λͺ‡ 번 μ£½μ΄λŠ” 것이 μ’‹μŠ΅λ‹ˆλ‹€. λ‚˜λŠ” 당신이 μƒνƒœμ— 도달 ν•  수 μžˆλ‹€κ³  μƒκ°ν•©λ‹ˆλ‹€ :) eCryptfs와 관련이 μ—†κΈ°λ₯Ό μ •λ§λ‘œ λ°”λžλ‹ˆλ‹€ ...

I saw it in my test setup today as well. Right now I am not able to reproduce it. Will try again over the next few days...

Hitting this again a few hours after #88 and one OOM..

Saw this tonight when the kafka-backup service was stopped before the Azure blobstore backup run.

May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: [2020-05-30 19:19:24,572] INFO WorkerSinkTask{id=chrono_prod-backup-sink-0} Committing offsets synchronously using sequence number 2782: {xxx-4=OffsetAndMetadata{offset=911115, leaderEpoch=null, metadata=''}, yyy-5=OffsetAndMetadata{offset=11850053, leaderEpoch=null, metadata=''}, [...]
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: [2020-05-30 19:19:24,622] ERROR WorkerSinkTask{id=chrono_prod-backup-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: org.apache.kafka.common.errors.WakeupException
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:511)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:275)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:937)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1473)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1431)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerSinkTask.doCommitSync(WorkerSinkTask.java:333)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerSinkTask.doCommit(WorkerSinkTask.java:361)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:432)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:591)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: #011at java.base/java.lang.Thread.run(Unknown Source)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: [2020-05-30 19:19:24,634] ERROR WorkerSinkTask{id=chrono_prod-backup-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: [2020-05-30 19:19:24,733] INFO Stopped BackupSinkTask (de.azapps.kafkabackup.sink.BackupSinkTask:139)
May 30 19:19:24 backupmgrp1 docker/kafka-backup-chrono_prod[16472]: [2020-05-30 19:19:24,771] INFO [Consumer clientId=connector-consumer-chrono_prod-backup-sink-0, groupId=connect-chrono_prod-backup-sink] Revoke previously assigned partitions [...]

:thinking: I think I need to reproduce this by running Kafka Backup for a few hours and producing a lot of data... I need to think about how to debug this in the most meaningful way...

I think it would help at least a bit if we could monitor the Kafka Backup setup. Maybe we would see something useful in the metrics.
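
In case it helps the reproduction attempt, a small load-generator sketch along those lines. It is a plain Java producer; broker address, topic name and record size are placeholders, not values from this issue.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

// Keeps producing dummy records until interrupted, so kafka-backup has a
// steady stream of data to back up during a long-running reproduction test.
public class BackupLoadGenerator {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        byte[] payload = new byte[1024]; // 1 KiB dummy value per record
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            for (long i = 0; ; i++) { // run until interrupted
                producer.send(new ProducerRecord<>("backup-test-topic", payload));
                if (i % 100_000 == 0) {
                    producer.flush();
                    System.out.println("produced " + i + " records");
                }
            }
        }
    }
}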

λ™μΌν•œ 였λ₯˜λ₯Ό μž¬ν˜„ν•©λ‹ˆλ‹€.

[2020-07-10 11:05:21,755] ERROR WorkerSinkTask{id=backup-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
java.lang.NullPointerException
    at de.azapps.kafkabackup.sink.BackupSinkTask.close(BackupSinkTask.java:122)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:397)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:591)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

ν¬λ“œ μ„€μ • en k8s 및 Azure 파일 곡유 파일 μ‹œμŠ€ν…œμ„ μ‚¬μš©ν•˜μ—¬ 백업을 μ €μž₯ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€. 이 μ‹œμ μ—μ„œ λͺ‡ 가지 둜그λ₯Ό μΆ”κ°€ν•˜λ €κ³  ν•©λ‹ˆλ‹€.

이 νŽ˜μ΄μ§€κ°€ 도움이 λ˜μ—ˆλ‚˜μš”?
0 / 5 - 0 λ“±κΈ‰