MVE: Line of sight for dense points

Created on 22 Apr 2016  ·  8 Comments  ·  Source: simonfuhrmann/mve

Another dense point filtering scheme is based on line of sight, i.e., it uses the line of sight from each point to its related cameras, and it also shows excellent results. But I haven't seen this information in the point cloud output by MVE. Will this be considered, and could the new scheme be merged?

PS: using more information implies more potential for better results~

ref:
https://github.com/cdcseacave/openMVS

Vu H. H., Labatut P., Pons J.-P., et al. High accuracy and visibility-consistent dense multiview stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(5): 889-901.

https://www.acute3d.com/


All 8 comments

The referenced work uses a fundamentally different reconstruction technique than MVE. I am aware of this work. The line-of-sight information is mainly used for surface optimization, but MVE doesn't perform any global optimization at any stage of the pipeline (except BA, of course). I doubt that this technique is, or can be, integrated into MVE. At least I don't know how.

To my knowledge, both techniques include four stages:

1) dense point cloud generation by fusing the depth map of each view
2) surface/mesh reconstruction (point cloud -> triangle faces)
3) surface/mesh optimization (global or local)
4) texturing

The main difference between the two techniques is surface reconstruction: FSSR for MVE, face selection (Delaunay triangulation + s-t cut) for their work. In their work, the line of sight plays an important role in surface reconstruction, not only in surface optimization. The result of FSSR tends to be smooth, while the face-selection-based method can keep sharp edges.
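As a rough illustration of the face-selection idea (not OpenMVS's actual implementation), the s-t cut can be sketched on a toy graph: tetrahedra are nodes, line-of-sight rays crossing a tetrahedron add "free space" capacity toward the source, points behind a tetrahedron strengthen its link to the sink, and the surface lies on the cut between "outside" and "inside" tetrahedra. All node names and capacities below are made up for illustration, assuming `networkx` is available:

```python
import networkx as nx

# Toy s-t cut in the spirit of visibility-based face selection:
# 's' = outside (free space), 't' = inside (matter), nodes are tetrahedra.
G = nx.DiGraph()

# s-links: line-of-sight rays crossing a tet are evidence it is empty
G.add_edge("s", "t0", capacity=5.0)  # many rays pierce t0
G.add_edge("s", "t1", capacity=0.5)
G.add_edge("s", "t2", capacity=0.1)

# t-links: evidence that a tet lies behind the observed surface
G.add_edge("t0", "t", capacity=0.1)
G.add_edge("t1", "t", capacity=0.5)
G.add_edge("t2", "t", capacity=5.0)  # t2 is almost surely solid

# n-links between adjacent tetrahedra penalize a jagged cut
for a, b in [("t0", "t1"), ("t1", "t2")]:
    G.add_edge(a, b, capacity=1.0)
    G.add_edge(b, a, capacity=1.0)

cut_value, (outside, inside) = nx.minimum_cut(G, "s", "t")
# The surface consists of the triangles shared by one 'outside'
# and one 'inside' tetrahedron across the cut.
```

With these capacities, `t0` ends up outside and `t2` inside, so the extracted surface sits between them; a real system would derive the capacities from ray-tet intersections in the Delaunay tetrahedralization instead of hard-coding them.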

In my opinion, the line of sight should optionally be exported after the first stage; then a new surface reconstruction stage could be developed, followed by the same texturing.
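A minimal sketch of what such an export could look like: unit view directions computed from each point to its camera center, written into an ASCII PLY with extra per-vertex properties. The property names (`sx`, `sy`, `sz`) and both function names are hypothetical, not part of MVE, and this assumes a single camera per point for simplicity:

```python
import numpy as np

def view_directions(points, cam_centers):
    # Unit line-of-sight vector from each point toward its camera center.
    d = cam_centers - points
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def write_ply_with_sight(path, points, dirs):
    # ASCII PLY whose vertices carry hypothetical extra 'sight'
    # properties (sx, sy, sz) alongside the usual x, y, z.
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        for name in ("x", "y", "z", "sx", "sy", "sz"):
            f.write(f"property float {name}\n")
        f.write("end_header\n")
        for p, s in zip(points, dirs):
            f.write(" ".join(f"{v:.6f}" for v in (*p, *s)) + "\n")
```

A downstream face-selection stage could then read these vectors back instead of re-deriving visibility from the depth maps.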

https://github.com/cdcseacave/openMVS/wiki/Modules

One additional difficulty is that there is no permissively licensed Delaunay tetrahedralization library:
http://doc.cgal.org/latest/Triangulation_3/index.html#Chapter_3D_Triangulations => GPL
http://wias-berlin.de/software/tetgen/ => AGPL
Note that MVE uses a permissive license.

CGAL is what OpenMVS uses. OpenMVS is trying to implement the face selection scheme, but their code is very, very ugly~

I would never say that open source code is ugly; that is not very kind towards the authors.
Putting something out as open source and making it usable by anyone is a nice thing.
PS: You should note that there is no other "line of sight" open source implementation out there.
OpenMVS implements the graph cut of the Delaunay tetrahedralization in a generic way (allowing the use of various graph cut algorithms), with and without weak surface visibility.

As far as I know, the differences between the approaches are severe.

1) The surface mesh is built from the semi-sparse point cloud, while MVE (FSSR) builds it on the ultra-dense points
2) The actual MVS is done on the mesh itself with optimization, while in MVE it is done using depth maps

The first step requires tetrahedralization in a global optimization, as Pierre mentioned. Tetrahedralization itself is very nasty, not even talking about including optimization for determining connectivity. To me it appears the approaches are so different that I don't even want to think about marrying them.

And well, even open source code can be ugly. In fact it's the only code that can be ugly, because you cannot see the closed-source one. ;-)

Yeah, my fault, open source should be respected. It's just that I spent some time studying it and found it a bit hard to understand and buggy, not as elegant as MVE. Thanks anyway~

I've been playing with Theia and OpenMVS quite a bit. @daleydeng I would agree there are some bugs in OpenMVS which completely block your reconstruction process and require debugging.

I have found that OpenMVS produces pretty good models when skipping the densify step and going straight to reconstructing the sparse input and then refining it. I would REALLY like to get the CUDA implementation of Refine working, but I had linking issues that I have not yet been able to spend time resolving. This process is quite fast, since the sparse cloud contains significantly fewer points, and it generally results in a final mesh with a tolerable polygon count as well.

Running Densify+Reconstruct+Refine takes MUCH longer and produces a very large mesh. However, the quality is better, since it fills in areas the sparse cloud did not cover.

Texturing is very good as well, and I appreciate that OpenMVS offers a complete package and is open source.

I am now interested in MVE and looking forward to learning more.

