View-based texture transfer


I recently worked on a project where it was necessary to transfer a texture from an existing 3D source model to a similar target model.  The target model's shape is not exactly the same as the source model's, and its topology and uv coordinates are different.  I am sure there are specific methods for doing this (e.g., similar to how normal maps from higher resolution geometry are baked into a texture for a lower resolution model), but in this example the geometry can be quite a bit different.  Rather than build something new, I chose to reuse some existing tools I have from my PhD work on estimating the texture of a 3D computer vision (CV) reconstructed object given calibrated images of the object (see section 4.4, or this component).  In CV, since the object geometry is estimated from images, it is only an approximation of the true geometry, so the texture estimation already has to tolerate geometric error.  For the problem of transferring a texture, we can use the same approach.

However, we are not given images of the source object; instead, we have its texture, so we can generate these images synthetically.  This way, we can even give important views more weight (e.g., by rendering more views of, say, the front of the object).  For the sake of illustration, I will demonstrate this on the teddy example (downloaded from www.blender-models.com).  The source model has accurate 3D geometry, but in this case it has no uv coordinates (the texture coordinates use the world coordinates as a volume-like texture, and the geometry encodes some of the detail).  To create the target, I removed some of the detail on the eyes and nose, then decimated and smoothed the object so that its silhouette is quite a bit different from the original input geometry.  The target object also has a set of non-optimal uv coordinates.  These differences may make it difficult to simply find the corresponding point on the source object in order to transfer the texture (similar to what I am guessing would be used for baking normal maps).

[Figures: Teddy geometry (source), Teddy wireframe (source), Decimated (target), Decimated wireframe (target)]

In order to transfer the texture from the source object to the target, a number of synthetic images can be generated around the source object.

[Figures: the camera placement around the Teddy and the lit input images]
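For reference, camera placement and rendering along these lines can be scripted with Blender's Python API. This is a minimal sketch, not the actual script I used; the object name "Teddy", the sphere radius, the view grid, and the output path are all placeholders:

```python
# Place cameras on a sphere around the object and render one image per
# camera from inside Blender (bpy). Run from Blender's scripting tab.
import math
import bpy

scene = bpy.context.scene
target = bpy.data.objects["Teddy"]   # assumed name of the source object
radius = 5.0                         # distance from the object's origin
n_rings, n_per_ring = 3, 8           # latitude rings x cameras per ring

for i in range(n_rings):
    # latitudes strictly between the poles
    theta = math.pi * (i + 1) / (n_rings + 1)
    for j in range(n_per_ring):
        phi = 2.0 * math.pi * j / n_per_ring
        loc = (radius * math.sin(theta) * math.cos(phi),
               radius * math.sin(theta) * math.sin(phi),
               radius * math.cos(theta))
        bpy.ops.object.camera_add(location=loc)
        cam = bpy.context.object
        # aim the camera at the object with a track-to constraint
        con = cam.constraints.new(type='TRACK_TO')
        con.target = target
        con.track_axis = 'TRACK_NEGATIVE_Z'
        con.up_axis = 'UP_Y'
        # render this view to a blend-file-relative path
        scene.camera = cam
        scene.render.filepath = "//views/view_%02d_%02d.png" % (i, j)
        bpy.ops.render.render(write_still=True)
```

More cameras can simply be added near the views that matter most (e.g., the face) to give them more weight in the reconstruction.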

These images and the camera poses can then be used as input to a texture reconstruction.  In my PhD, I explored several alternatives for this problem.  Among these are simply taking a weighted average (avg), computing an optimal view for each texel with some regularization (opt), and a multi-band weighting (multi).  The last two can also be combined, so that the multi-band weight is used for the low frequencies and the high-frequency detail is taken from the optimal view.  For the Teddy, I applied these methods to two configurations: a set of input images rendered with illumination, and a set rendered without illumination.  For basic texture transfer the latter configuration would be used.  After applying these to the Teddy, you can see that the weighted average is blurred due to the differences between the target and the source.
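To make the avg method concrete, here is a minimal sketch of a weighted-average reconstruction. It assumes each texel's 3D surface point and normal on the target are known, and that per-view projection and visibility helpers exist; `project_to_image` and `is_visible` are hypothetical placeholders, not functions from my implementation:

```python
# Weighted-average (avg) texture reconstruction over a set of views.
import numpy as np

def average_texture(texel_points, texel_normals, views):
    """texel_points, texel_normals: (T, 3) arrays of surface samples.
    views: objects with .camera_center and .image, plus the assumed
    helpers .project_to_image(p) -> (u, v) and .is_visible(p)."""
    colors = np.zeros((len(texel_points), 3))
    weights = np.zeros(len(texel_points))
    for view in views:
        for t, (p, n) in enumerate(zip(texel_points, texel_normals)):
            if not view.is_visible(p):       # occlusion / back-facing test
                continue
            d = view.camera_center - p
            d /= np.linalg.norm(d)
            w = max(np.dot(n, d), 0.0)       # favor head-on views
            u, v = view.project_to_image(p)
            colors[t] += w * view.image[int(v), int(u)]
            weights[t] += w
    # normalize; texels seen by no view keep zero weight (left black here)
    seen = weights > 0
    colors[seen] /= weights[seen, None]
    return colors
```

Because the texel's surface point sits on the *target* geometry while the images were rendered from the *source*, each view samples a slightly different spot, which is exactly why the plain average blurs.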

Lit images

[Figures: lit results, showing the Teddy input, Average, Opt, Multi, and MultiOpt renderings along with the corresponding reconstructed texture maps]

Unlit images

[Figures: unlit results, showing the Teddy input, Average, Opt, Multi, and MultiOpt renderings]

The opt and multi methods both give excellent results despite the changes in geometry, with some small artifacts (e.g., near the eyes for the multi method).  The combination of the two methods gives a good overall texture, fixing up some of those small artifacts.  The opt method has some trouble with places that are not visible in any view (e.g., under the chin).  In my prototype implementation, I had some scripts to statically place the cameras and render the images in Blender, but the cameras could also be placed so as to get full coverage.  The models and images are all available in the TeddyBlog.zip file.
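For completeness, one simple way to realize the MultiOpt combination described above is to low-pass the multi-band texture and add back the high-pass of the optimal-view texture. This is only a sketch of the idea; the Gaussian low-pass and the sigma value are assumptions, not necessarily the filtering used in my implementation:

```python
# Combine the multi-band texture (low frequencies) with the
# optimal-view texture (high frequencies).
import numpy as np
from scipy.ndimage import gaussian_filter

def combine_multi_opt(multi_tex, opt_tex, sigma=4.0):
    """multi_tex, opt_tex: (H, W, 3) float arrays over the same uv layout."""
    # smooth per-channel only (sigma 0 on the channel axis)
    blur = lambda img: gaussian_filter(img, sigma=(sigma, sigma, 0))
    low = blur(multi_tex)            # smooth, seam-free base
    high = opt_tex - blur(opt_tex)   # sharp detail from the best view
    return np.clip(low + high, 0.0, 1.0)
```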


  1. #1 by Triangle.tbin on March 20th, 2013

    Already at it again, eh? Nice work.

    I’ll read the post more carefully later so that I may ask extremely impractical questions for the fun of it.

  2. #2 by Neil on March 20th, 2013

    Driving in a car for 4.5 days will make you want to do something productive.

I’m not sure that the post itself is self-contained, but you know what I was working on a few weeks ago (texture for AutoOrgan); this is what I was thinking of doing but didn’t have the time.

  3. #3 by Steven Eliuk on April 4th, 2013

Very kool… what are the limits of the deformation to the grid before the methods break down? I would be interested in seeing different degrees of vertex removal and smoothing detailed.

    As always, amazing work.

  4. #4 by Neil on April 5th, 2013

I didn’t really test out more deformation, but the limits will depend on how many sharp high-frequency edges appear in the texture; in such cases, these edges may not line up on the final texture. Vertex removal shouldn’t cause too much of a problem until it starts causing too much deformation.

