Lightweight-human-pose-estimation.pytorch: About AP

Created on 17 Dec 2020  ·  9 Comments  ·  Source: Daniil-Osokin/lightweight-human-pose-estimation.pytorch

Hi, thanks for your work! I have a question: why is the AP 61.8 in the original OpenPose paper, but 48.6 in your evaluation of the original OpenPose?

Most helpful comment

Thank you.


All 9 comments

Hi! We have compared with the original model from the paper "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields" (https://arxiv.org/pdf/1611.08050.pdf). As you can see in Table 4 of Section 3.2, the AP is 58.4%. It increases to 61% if an additional refinement is done for each found person with a separate model for single-person pose estimation (CPM). Those 58.4% were obtained in multi-scale testing mode (6 scales); 48.6% AP is obtained using a single scale for the input data during testing.

Thank you for your reply! What are the 6 scales? Does it mean one initial stage and five refinement stages?


Network inference was performed 4 times (not 6, that was my mistake), each time with a different input image resolution (different scale). Then all network outputs were averaged. You can check the validation script (https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/blob/2df5db059db1a043169b65b633d7bb3b8efd13a6/val.py#L117) for the details; it supports a multi-scale option.
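For illustration, here is a minimal sketch of that multi-scale averaging in PyTorch. The function name, the scale set, and the assumption that the network returns one stacked output tensor are illustrative; the actual logic (OpenCV resizing, padding, separate heatmap/PAF handling) lives in the val.py linked above.

```python
import torch.nn.functional as F

def multiscale_average(net, img, scales=(0.5, 1.0, 1.5, 2.0)):
    # Run the network at several input resolutions and average the outputs.
    # `net` is assumed to map a (1, 3, H, W) image tensor to a single
    # (1, C, H // 8, W // 8) tensor of stacked heatmaps and PAFs.
    _, _, h, w = img.shape
    accumulated = None
    for s in scales:
        scaled = F.interpolate(img, scale_factor=s, mode='bilinear',
                               align_corners=False)
        out = net(scaled)
        # Bring every prediction back to a common resolution before averaging.
        out = F.interpolate(out, size=(h // 8, w // 8), mode='bilinear',
                            align_corners=False)
        accumulated = out if accumulated is None else accumulated + out
    return accumulated / len(scales)
```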

Thank you! Why were multiple scales not used at that time? After all, this method can achieve a higher AP.


And I wonder: is the loss function different from the original OpenPose?


Using single or multiple scales for inference is a speed/accuracy trade-off. The loss function is the same.

Thanks! I'd like to know how the loss is calculated after the heatmap and PAF stages are combined, because the original OpenPose computes it over two separate branches.

It is just a sum of all losses for heatmaps and PAFs. You may check the training script (https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/blob/master/train.py) for more details.
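To make "sum of all losses" concrete, here is a minimal sketch, assuming the network returns one (heatmaps, PAFs) pair per refinement stage; the names are illustrative, and the mask handling in the actual train.py is omitted.

```python
import torch.nn.functional as F

def total_loss(stage_outputs, heatmap_gt, paf_gt):
    # `stage_outputs` is assumed to be a list of (heatmaps, pafs) tensor
    # pairs, one pair per stage. The total loss is simply the sum of the
    # per-stage L2 losses for heatmaps and PAFs.
    losses = []
    for heatmaps, pafs in stage_outputs:
        losses.append(F.mse_loss(heatmaps, heatmap_gt))
        losses.append(F.mse_loss(pafs, paf_gt))
    return sum(losses)
```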

Thank you.

