MVE: Fountain_p11 dataset

Created on 10 Mar 2018  ·  14 comments  ·  Source: simonfuhrmann/mve

Hello,
I wanted to reconstruct fountain_p11 using the ground-truth camera calibration, so I used the midelbury.sh script to create the scene and then ran featurerecon, and it seems that the ground-truth cameras are not correct. I also tried them with SMVS and it didn't work, and I even tried re-photographing the ground-truth model with the ground-truth cameras (using my own code, which works fine on other scenes) and that did not work either. Can someone please tell me how to use the Strecha dataset correctly?

EDIT:
It seems that the ground-truth camera parameters are correct after all, since I used them with PMVS-2 and it worked fine. This is weird.

All 14 comments

So the way you described it sounds good -- you create a scene with the script and run featurerecon. The script, however, is for Middlebury and not for Strecha datasets. You may have to adapt it a little.

  • Can you take a look at the meta.ini file yourself and compare with the parameters from Strecha?
  • Before running featurerecon, can you inspect the scene with UMVE?

Well, since the Strecha fountain_p11 set contains only 11 images, I manually created a fountain_par.txt, so I think the Middlebury script runs fine and the meta.ini files are correct. _umve_ also runs fine: if I open the scene before running _featurerecon_, it shows cameras with weird rotations, and when I run _featurerecon_ I only get a bit over 1000 points forming a cone-like shape. Check the image below.
[screenshot: cone-shaped point cloud]

Also, if you noticed, Strecha ships two files per image in his dataset: a camera file which contains K, R and t, and another file called P which contains the projection matrix. If I compute P = K * [R | t], I get a matrix similar to the one in the P file, but with one column different. Check the image below.
[screenshot: computed vs. stored projection matrix]

This dataset is bugging me; how did people use it to validate anything?
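For illustration (my own sketch, not from the original thread; the file names are placeholders for wherever you stored K, R, t and P), this is one way to do that comparison, keeping in mind that projection matrices only match up to scale:

```python
# Illustration with placeholder file names: compare K * [R | t] against the
# P matrix shipped with the dataset.  Projection matrices are defined only up
# to scale, so both are normalized before printing.
import numpy as np

K = np.loadtxt("cam_K.txt")          # 3x3 intrinsics from the camera file
R = np.loadtxt("cam_R.txt")          # 3x3 rotation from the camera file
t = np.loadtxt("cam_t.txt")          # 3-vector: translation t (or the center c?)
P_stored = np.loadtxt("cam_P.txt")   # 3x4 projection matrix from the P file

P_built = K @ np.hstack([R, t.reshape(3, 1)])

def normalized(P):
    # The first three entries of the last row of K [R | t] form a rotation row,
    # so their norm recovers the unknown scale factor.
    return P / np.linalg.norm(P[2, :3])

print(normalized(P_built))
print(normalized(P_stored))
```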

It's difficult to see from the image what is wrong. If you get features in a cone-like shape, some of the camera parameters are probably wrong. Maybe the cameras are inverted? Your math in the second screenshot looks wrong to me. Shouldn't RT have all zeros in the last row, with a one in the bottom-right corner?

The math is correct according to the pinhole camera model: R is 3x3, t is 3x1 and K is 3x3, so why should I augment RT by a row? You only augment the projection P if you want to do some homogeneous transformation.
Also, I just checked on the net and found out that Strecha computes his projection P like this: P = K * [R^T | -R^T t], where "^T" means transpose, but I don't get this.
Back to the problem:
fountain_par.txt: here is the file; you can try running the script and featurerecon on it yourself if you have time. I think the camera parameters given in the dataset are wrong, or they are not compatible with MVE and SMVS.

Maybe someone on the team has time to look into this, I don't. @nmoehrle, @flanggut?

Thank you. I also found out that all of the Strecha datasets, even the new ones here: https://cvlab.epfl.ch/data/strechamvs, do not work either. That rules out the assumption that the camera parameters themselves are wrong and leads me to believe that the ground-truth camera parameters are somehow not compatible with MVE and SMVS.

Please post one of your meta.ini files here.

When I look at the screenshot of UMVE I can see that the rotations are incorrect; the views are supposed to form an arc looking towards the center. When I experimented with Strecha I had my own conversion scripts, and since they are written in Python I didn't try to integrate them into MVE. Can you show me the script that you used to convert the camera parameters, or give a link?

My best guess is that you did not convert the camera position (c, stored in the Strecha camera files) into the translation (t = -R * c). Further, I think there was some oddity with the Strecha camera files: the camera matrix is stored in row-major order and the rotation matrix in column-major order, or, if you will, the transposed rotation matrix is stored (R^T).
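To make that concrete, here is a minimal sketch (added for illustration, not code from MVE or from this thread) of the conversion, assuming the camera file hands you the transposed rotation and the camera center:

```python
# Minimal sketch, assuming the .camera file stores the transposed rotation
# R^T (camera-to-world) and the camera center c instead of the translation t.
import numpy as np

def strecha_to_world_to_cam(R_stored, c):
    """Convert (R^T, c) as read from a .camera file into (R, t) with t = -R * c,
    so that a world point X maps into the camera frame as R @ X + t."""
    R = R_stored.T      # undo the transposed storage
    t = -R @ c          # camera center -> translation
    return R, t

# Tiny check with an identity rotation: a camera sitting at c = (0, 0, 5)
# gets t = (0, 0, -5), and its own center maps to the origin of the camera frame.
R, t = strecha_to_world_to_cam(np.eye(3), np.array([0.0, 0.0, 5.0]))
print(R, t)
```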

@simonfuhrmann here is the meta file:
meta.txt
@nmoehrle well, I used this: https://github.com/simonfuhrmann/mve/wiki/Middlebury-Datasets to get the camera parameters.
And yes, I didn't apply t = -R * c; I thought that Strecha gives you the translation vector t directly. What confused me is that Strecha's readme says P = K * [R^T | -R^T t]; if only he had written c instead of t -_- . I will try to use this information and see what it gives.

This script cannot parse the .camera files of the Strecha benchmark; it just reads the Middlebury camera parameter format, a single line that looks like this:
"imgname.png k11 k12 k13 k21 k22 k23 k31 k32 k33 r11 r12 r13 r21 r22 r23 r31 r32 r33 t1 t2 t3".

The .camera files have an entirely different structure (a parsing sketch follows the table):

| Lines | Content |
|-------|---------|
| 1-3 | K matrix |
| 4 | unknown |
| 5-7 | R^T |
| 8 | c |
| 9 | width height |
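Based on that table, a rough conversion sketch (my own illustration, not part of MVE; the .camera line layout and the example file names are assumptions taken from this thread) that reads one .camera file and prints a Middlebury-style line, applying R = (R^T)^T and t = -R * c on the way, might look like this:

```python
# Rough sketch: parse one Strecha .camera file (layout as in the table above)
# and emit a Middlebury-style "imgname k11 ... k33 r11 ... r33 t1 t2 t3" line
# that can be pasted into a fountain_par.txt.
import sys
import numpy as np

def read_strecha_camera(path):
    with open(path) as fh:
        rows = [list(map(float, line.split())) for line in fh if line.strip()]
    K   = np.array(rows[0:3])   # lines 1-3: intrinsics K
    #     rows[3]                 line 4: unknown, ignored here
    R_t = np.array(rows[4:7])   # lines 5-7: transposed rotation R^T
    c   = np.array(rows[7])     # line 8: camera center c
    #     rows[8]                 line 9: width and height, not needed here
    R = R_t.T                   # undo the transposed storage
    t = -R @ c                  # convert the camera center into the translation
    return K, R, t

if __name__ == "__main__":
    # usage (file names are examples): python camera_to_par.py 0000.png.camera 0000.png
    K, R, t = read_strecha_camera(sys.argv[1])
    values = np.concatenate([K.ravel(), R.ravel(), t])
    print(sys.argv[2] + " " + " ".join("%.10g" % v for v in values))
```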

@nmoehrle yes, I'm aware of that. I manually created a fountain_par.txt from the .camera files for the 11 images (too lazy to write my own parser). The only thing I didn't account for is t = -R * c: I used the c given in .camera as t. I will fix this later and post the results.

@nmoehrle well, I think the problem is solved, thanks to you.
[screenshot: result after the fix]

Yes this is how I remember it :-)
