Tuesday, March 2, 2010

Measuring the distance between two meshes (2)

This is the second part of the "metro" tutorial; the first part is here.

Remember that MeshLab uses a sampling approach to compute the Hausdorff distance: it takes a set of points over a mesh X and, for each point x on X, it searches for the closest point y on the other mesh Y. That means the result is strongly affected by how many points you take over X. Now assume that we want to color the mesh X (e.g. the low resolution one) with the distance from Y.
In this case the previous trick of using per-vertex color will yield poor results, given the low resolution of the mesh.
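
For reference, the quantity that this sampling procedure approximates is the one-sided distance from X to Y (the full, symmetric Hausdorff distance is the maximum of the two one-sided terms); in LaTeX notation:

    d(X, Y) \;=\; \max_{x \in X} \, \min_{y \in Y} \, \lVert x - y \rVert
    \qquad
    d_H(X, Y) \;=\; \max \{\, d(X, Y),\; d(Y, X) \,\}

MeshLab estimates the outer max (together with the mean and RMS of the same distances) over the finite set of samples, which is exactly why the sampling density matters.
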
Let's start again with our two Happy Buddha meshes, the full resolution one and the one simplified to 50k faces.
Therefore, first of all we need a denser sampling over the low-res mesh. That means that when we compute the Hausdorff distance we set the simplified mesh as the sampled mesh and the original Happy Buddha as the target, we choose face sampling with a reasonably high number of sample points (10^6 is ok) and, very importantly, we ask to save the computed samples by checking the appropriate option.
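
If you prefer to script this step, a rough PyMeshLab equivalent of the dialog above could look like the sketch below (this post predates PyMeshLab, so the exact filter and parameter names are assumptions to be checked against the documentation of your PyMeshLab version; the file names are placeholders):

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh('happy_buddha_50k.ply')   # layer 0: simplified (sampled) mesh
    ms.load_new_mesh('happy_buddha_full.ply')  # layer 1: original (target) mesh

    # Hausdorff Distance filter: sample the faces of the simplified mesh,
    # search for the closest point on the full-resolution mesh, and keep the
    # computed samples as two new point-cloud layers (savesample=True).
    res = ms.get_hausdorff_distance(sampledmesh=0,
                                    targetmesh=1,
                                    samplevert=False,
                                    sampleface=True,
                                    samplenum=1000000,
                                    savesample=True)

    print(res)  # min / max / mean / RMS distance over the samples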

After a few seconds you will see in the layer window two new layers that contain the point clouds representing, respectively, the samples taken over the simplified mesh and the corresponding closest points on the original mesh.


To see and inspect these point clouds you have to manually switch to point visualization and turn off the other layers. Below are two snapshots of the point cloud (in this case just 2,000,000 point samples) at different zoom levels, to give a hint of the cloud density.

Then, just like in the previous post, use the Color->Colorize by quality filter to color the point cloud of the sample points (the one over the simplified mesh) with the standard red-green-blue colormap.
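
Under the hood this filter simply maps the per-vertex quality (here, the distance stored by the Hausdorff filter) linearly onto a color ramp. A minimal sketch of the idea, assuming the distances are available as a NumPy array (the ramp below is a plain red-green-blue interpolation, not necessarily MeshLab's exact colormap):

    import numpy as np

    def colorize_by_quality(dist):
        """Map a 1D array of distances to RGB colors on a red-green-blue ramp."""
        t = (dist - dist.min()) / max(dist.max() - dist.min(), 1e-12)  # normalize to [0, 1]
        # piecewise-linear ramp: red (t = 0) -> green (t = 0.5) -> blue (t = 1)
        r = np.clip(1.0 - 2.0 * t, 0.0, 1.0)
        g = 1.0 - np.abs(2.0 * t - 1.0)
        b = np.clip(2.0 * t - 1.0, 0.0, 1.0)
        return np.stack([r, g, b], axis=1)

    print(colorize_by_quality(np.array([0.0, 0.1, 0.2, 0.3, 0.4])))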

Now, to better visualize these colors, we use a texture map. As a first step we need a simple parametrization of the simplified mesh. MeshLab offers a couple of parametrization tools: the first one is a rather simple trivial independent right-triangle packing approach, while the other one is a state-of-the-art almost-isometric approach. Let's use the first one (more on the latter in a future post...) simply by starting Texture->Trivial Per-Triangle Parametrization. This kind of parametrization is quite trivial, with a lot of problems (distortion, fragmentation, etc.), but on the other hand it is quite fast, simple and robust, and in a few cases it can even be useful.
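
To get a feel for what this filter does, here is a rough sketch of the idea (not MeshLab's actual code): every triangle, independently of its neighbours, gets its own small right triangle inside a regular grid of texture cells. The real filter packs the texture space more carefully, but the principle is the same:

    import numpy as np

    def trivial_per_triangle_uvs(num_faces, border=0.05):
        """One right triangle per face, placed in its own cell of a square grid.

        Returns an array of shape (num_faces, 3, 2): one UV pair per wedge.
        """
        n = int(np.ceil(np.sqrt(num_faces)))   # number of cells per side
        cell = 1.0 / n
        uvs = np.zeros((num_faces, 3, 2))
        for f in range(num_faces):
            cx, cy = (f % n) * cell, (f // n) * cell        # lower-left corner of the cell
            lo_x, hi_x = cx + border * cell, cx + (1.0 - border) * cell
            lo_y, hi_y = cy + border * cell, cy + (1.0 - border) * cell
            uvs[f, 0] = (lo_x, lo_y)   # right triangle filling half of the cell
            uvs[f, 1] = (hi_x, lo_y)
            uvs[f, 2] = (lo_x, hi_y)
        return uvs

    print(trivial_per_triangle_uvs(4))

The drawbacks mentioned above are easy to see in this sketch: every triangle gets the same amount of texture space regardless of its size (hence the distortion), and every edge becomes a texture seam (hence the fragmentation).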

Now you just have to fill the texture with the colors of the sampled point cloud; you can do this with the filter Texture->Transfer color to texture, choosing an adequate texture size (be bold and use a large one...). Below is the result, with a comparison of the error color coding done with a texture versus simple color-per-vertex.
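
Conceptually, this color transfer is a rasterization of each parametrized triangle into the texture, interpolating colors with barycentric coordinates. In the pipeline of this post the colors live on the sampled point cloud, so the real filter also has to look up, for each texel, the closest colored sample; the sketch below skips that lookup and only shows the rasterization step, assuming a color is already available for each face corner:

    import numpy as np

    def bake_corner_colors(uvs, colors, tex_size=1024):
        """Rasterize per-wedge UV triangles into an RGB texture.

        uvs:    (num_faces, 3, 2) texture coordinates in [0, 1]
        colors: (num_faces, 3, 3) RGB color of each face corner
        """
        tex = np.zeros((tex_size, tex_size, 3))
        for uv, col in zip(uvs, colors):
            pix = uv * (tex_size - 1)                       # UV triangle in pixel space
            x0, y0 = np.floor(pix.min(axis=0)).astype(int)  # bounding box of the triangle
            x1, y1 = np.ceil(pix.max(axis=0)).astype(int)
            a, b, c = pix
            det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
            if abs(det) < 1e-12:
                continue                                    # degenerate UV triangle
            for y in range(y0, y1 + 1):
                for x in range(x0, x1 + 1):
                    # barycentric coordinates of the texel
                    w1 = ((x - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (y - a[1])) / det
                    w2 = ((b[0] - a[0]) * (y - a[1]) - (x - a[0]) * (b[1] - a[1])) / det
                    w0 = 1.0 - w1 - w2
                    if w0 >= 0 and w1 >= 0 and w2 >= 0:     # texel inside the triangle
                        tex[y, x] = w0 * col[0] + w1 * col[1] + w2 * col[2]
        return tex

A real implementation would also bleed the colors slightly past the triangle borders to hide seams under texture filtering; this sketch simply leaves the unused texels black.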