
Space-time texture map

There was a paper in 3DIM’09 entitled something like space-time consistent texture maps.  The paper talks about how to construct a single consistent texture map from a multi-view video sequence of a moving subject.  The method assumes as input a consistent triangular mesh (over all time steps) and the input images.  The problem is posed as a labelling problem: for each triangle, compute the view (and time step) from which to sample its texture.

This framework obviously holds for the capture of static objects (where the number of labels is just equal to the number of images).  This optimization framework is an alternative to other approaches, such as the naive solution of just averaging textures using all image observations.  Such simple averaging does not work if the geometry is inaccurate (see the image of the blurry house to the right; the blur is mostly on the sides).  I was curious whether such a labelling approach would work better on approximate geometries, or whether it could be extended to a view-dependent formulation.
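For reference, the naive averaging baseline is just a per-texel mean over all views that observe that texel.  A minimal sketch (the function and variable names are illustrative, not from any particular codebase):

```python
# Hypothetical sketch of naive texture averaging: each texel is the mean of
# the colours it receives from every view that sees it.
import numpy as np

def average_texture(samples, counts):
    """samples: (H, W, 3) running colour sum; counts: (H, W) observation counts."""
    out = np.zeros_like(samples)
    seen = counts > 0
    out[seen] = samples[seen] / counts[seen][:, None]
    return out

# Toy example: one texel seen by two views whose colours disagree
# (e.g. due to misregistered geometry) -> the average blurs them together.
sums = np.zeros((2, 2, 3))
hits = np.zeros((2, 2))
sums[0, 0] += [1.0, 0.0, 0.0]; hits[0, 0] += 1  # observation from view 1
sums[0, 0] += [0.0, 1.0, 0.0]; hits[0, 0] += 1  # observation from view 2
tex = average_texture(sums, hits)
print(tex[0, 0])  # -> [0.5 0.5 0. ], the blurred mixture
```

When the two observations disagree (as they do wherever the geometry is wrong), the mean is a colour neither view actually saw, which is exactly the blur visible on the sides of the house.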

In the original formulation, the data term for a triangle in a view takes into account how much area is covered by the projected triangle (e.g., it prefers to assign a triangle to a view where its projected area is large).  The smoothness term then takes into account the color differences along an edge.  In other words, if two triangles are labelled differently, then a labelling that has similar colors along the shared edge should be preferred.  This problem can then be solved (approximately) with graph-cuts using alpha-expansions.
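A toy sketch of this energy, with made-up costs (the paper's exact terms differ): the data term rewards large projected area, and the smoothness term charges neighbouring triangles with different labels by their colour mismatch along the shared edge.  Where the paper uses alpha-expansion graph cuts, the toy problem below is small enough to enumerate every labelling:

```python
# Two triangles sharing one edge, two candidate views (labels 0 and 1).
from itertools import product

# proj_area[t][v]: projected area of triangle t in view v (made-up numbers)
proj_area = [[9.0, 4.0], [3.0, 8.0]]
# edge_cost[v1][v2]: colour mismatch along the shared edge when the two
# triangles sample from views v1 and v2 (zero when the labels agree)
edge_cost = [[0.0, 2.0], [2.0, 0.0]]
smoothness = 1.0  # weighting between data and smoothness terms

def energy(labels):
    data = sum(-proj_area[t][v] for t, v in enumerate(labels))  # reward area
    smooth = smoothness * edge_cost[labels[0]][labels[1]]
    return data + smooth

best = min(product(range(2), repeat=2), key=energy)
print(best)  # -> (0, 1): each triangle keeps its own best view
```

With these numbers the area reward dominates, so each triangle takes its best front-facing view; cranking `smoothness` up (say to 10) would instead force both triangles to share a view, mirroring the over-smoothing behaviour described below.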

The last couple nights, for some reason, I kept having this thought that this same framework could be used to estimate some sort of view-dependent texture map (where instead of just finding the single best texture, you find several (say 4) texture maps that will work best in a view-dependent case).  All that would need to be changed is the data term, and then incorporate some notion of view-dependent consistency (e.g., instead of just using the color difference on edges in the texture maps for neighbouring triangles, a similar cost could be measured on the edges after rendering from several views).
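As a sketch of the kind of data term I had in mind (entirely my own guess, not from any paper), one could score a candidate set of k source views by how well their directions cover a set of test viewpoints, e.g. by the worst angular gap from any test view to its nearest chosen view:

```python
# Hypothetical view-dependent selection: pick the k source views whose
# directions best cover a set of test viewpoints. All names are illustrative.
from itertools import combinations
import numpy as np

def best_view_subset(view_dirs, test_dirs, k=4):
    view_dirs = np.asarray(view_dirs, float)
    test_dirs = np.asarray(test_dirs, float)
    def coverage_cost(subset):
        # dot product ~ cosine of angle between unit directions;
        # for each test view, keep its best (largest) dot with a chosen view,
        # then penalize the worst-covered test view.
        dots = test_dirs @ view_dirs[list(subset)].T
        return -dots.max(axis=1).min()
    return min(combinations(range(len(view_dirs)), k), key=coverage_cost)

# Toy: 6 unit directions spread over a half-circle; with test views at the
# two extremes, the best pair of source views is those same extremes.
angles = np.linspace(0, np.pi, 6)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
tests = dirs[[0, 5]]
chosen = best_view_subset(dirs, tests, k=2)
print(chosen)  # -> (0, 5)
```

This only captures the per-triangle data cost; the rendered-edge consistency term sketched above would still need to be worked into the pairwise cost.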

Basic implementation

I started implementing the basic framework.  Below are some example results.  Although, the method should really work best when used with accurate geometry, I wanted to test out the method when the geometry was only approximately known.  In this sequence, the house is reconstructed using shape-from-silhouette, meaning that the concavities beside the chimney are not in the reconstructed geometry.  Again, the image at the beginning of the blog (and at the right) show the results of simply averaging input images in the texture.

The left-most image shows the results with no smoothing (i.e., the best front-facing image is chosen for each triangle), followed by solutions using increasing smoothness weightings.  There are some artefacts, especially on the left-hand side of the house (by the chimney, where the geometry is incorrect).  Increasing the smoothness helps a bit, but too much smoothness samples the side regions of the texture from the front-facing images (which makes them blurry).  See the movie to get a better idea of the problems.


Reconstructed textures for the house (left: no smoothness; middle and right: increasing smoothness weights).

The texture maps for no smoothing (best front-facing image), smoothness=1, and smoothness=2 are given below.  The gray-scale images show which input image the texture was assigned from (notice that the leftmost is patchy, whereas the right image shows that large regions sampled their texture from the same input image).


Dog results

I also tried this for a model where the geometry was much more accurate (the dog).  It is hard to actually see any artefacts with the naive approach of just sampling the best view for each triangle (left-most image below).


Below are the corresponding texture maps and the gray-scale images denoting where the texture was sampled from.  Again, increasing the smoothness definitely reduces the number of small pieces in the texture, but it is hard to see any visual difference in the results.  Check the video to see for yourself.


It is much easier to visualize the textures on the object.  The blender models and reconstructed textures are uploaded if you want to check them out: models & textures.

View-dependent extensions

Shortly after implementing this, and before starting on the view-dependent extensions, I realized that I was going to run into some problems with the discrete labelling.  If you have n images and you want to find a set of 4 view-dependent texture maps, you will need n choose 4 labels.  Even with as few as 20 input images, this gives 4845 labels.  After running into this problem, I didn’t formalize the energy for the view-dependent case any further.
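The blow-up is easy to verify, e.g. with Python's `math.comb`:

```python
# Label count for unordered sets of 4 source views out of n images: C(n, 4).
from math import comb

for n in (10, 20, 40):
    print(n, comb(n, 4))
# 10 images already need 210 labels; 20 give 4845; 40 give 91390.
```

Since alpha-expansion sweeps over every label, the label space (and hence the runtime per iteration) grows with the fourth power of the number of input images, which quickly becomes impractical.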

