4 August 2017

Batch Processing Blend Shapes – Stage 1

Hey! Welcome to this three-part guide on how to batch process multiple scans with Agisoft Photoscan and R3DS Wrap3. In the first stage of the guide we’ll be processing a neutral pose, which will serve as our reference object when aligning and scaling all the expressions uniformly. In the second stage we’ll be batch processing all the expressions using Photoscan, ready for stage three, where we’ll be batch-wrapping/texturing a base-mesh with animation loops to all the expression scans, leaving us with an array of clean, textured models which can be used for film, games or VFX.

Step 1 – Pre-Processing Images

Before taking the photographs of the model, make sure that the image each camera creates is named the same way every time and is unique to that camera. So if camera D-11 shoots out an image called D_011.NEF on our neutral pose, it should shoot out D_011.NEF on every shoot; this is crucial if we want to align all of our models perfectly.
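It’s worth sanity-checking this before processing anything. Here’s a minimal Python sketch, assuming a hypothetical layout with one folder of images per expression:

import os

# Hypothetical layout: one folder of RAW images per shoot/expression.
shoots = ["neutral", "smile", "frown"]

reference = set(os.listdir(shoots[0]))
for shoot in shoots[1:]:
    current = set(os.listdir(shoot))
    if current != reference:
        print(shoot, "missing:", sorted(reference - current))
        print(shoot, "extra:", sorted(current - reference))
    else:
        print(shoot, "matches the neutral shoot")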

It’s also important to place around 50 small black dots across the actor’s face; these points will serve as a reference for us when placing the points for the RigidAlignment node and the Opticalflow node inside of Wrap3. More importantly, they’re used for blend shapes to track how the muscles of the face contract or relax when pulling expressions.

In order to set up scaling correctly later in the project, we’ll need a static object in the scene which serves as a reference: something whose dimensions we know and which can be seen in a minimum of two images. In this tutorial we had a horizontal metal beam coming out from the rig (image below); this is picked up by multiple cameras, is in a fixed position and helps to keep the model’s head in the same place between expressions.

After placing the black dots and setting up a reference for scale, it’s time to take the photographs of your actor. There are plenty of tutorials and guides online on how to take the photographs, so I’ll leave this part out; if you’re still interested in how we take ours, drop me an e-mail and I’ll be happy to help (you can find my e-mail at the bottom of the page). As a general overview, you want clear, sharp images of your actor that take advantage of as much of the pixel resolution as possible. You also want flat images with, ideally, no highlights, shadows or lens distortion. We run our images through a custom TNG filter in DxO, which helps push the highlights and shadows out of the images further; we also enable lens-distortion correction within DxO.

Optional – Mask Images – If you’re not using a mask then skip to step 2!

Photoscan has the option of importing masks, which force the software to look at the correct subject in the photo. I’ve used masks on some projects and gone without on others, and the difference for me was minimal, especially compared to the time (and money) lost to masking hundreds of images. My suggestion would be to mask only if you’re getting massive errors in your end results because the software is picking up the background, or if your target object is against a flat-colour background (so it’s very easy to mask).

Creating the masks for a model is a relatively simple process; you can either import your own custom masks or generate them from a rough model in Photoscan. Let’s do the latter.

First let’s start up Photoscan and import all our photos by selecting ‘Add Photos‘ under the ‘Workflow‘ tab.

From here we’ll build a ‘Tie-Point Cloud‘ by selecting ‘Align Photos‘ under the ‘Workflow‘ tab.

In the ‘Align Photos’ window, let’s select ‘Medium’ Accuracy, 400,000 key points, 10,000 tie points and leave ‘Adaptive camera model fitting’ enabled. Note that if the model produces errors, it’s a good idea to set the ‘Accuracy’ to High; this usually fixes most problems.
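If you end up doing this often, the same step can be scripted. Here’s a minimal sketch using the PhotoScan 1.3 Pro Python API, run from Photoscan’s built-in console; the image folder is hypothetical, and newer Metashape releases rename the module and some arguments:

import os
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Hypothetical folder containing the photos for this shoot.
photo_dir = "D:/scans/neutral"
chunk.addPhotos([os.path.join(photo_dir, p) for p in os.listdir(photo_dir)])

# Medium accuracy, 400,000 key points, 10,000 tie points.
chunk.matchPhotos(accuracy=PhotoScan.MediumAccuracy,
                  generic_preselection=True,
                  keypoint_limit=400000,
                  tiepoint_limit=10000)
chunk.alignCameras(adaptive_fitting=True)  # 'Adaptive camera model fitting'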

Once the ‘Tie-Point Cloud’ is finished, you should see your model built out of tiny points and surrounded by lots of stray, random points; time to get rid of them by setting a ‘Reconstruction region’. One should have been created automatically, but it will need refining using the ‘Resize Region’ tool on the top toolbar.

Use the blue dots on each corner to change the size of the reconstruction region; we want to make sure that we only include the parts we want to process. In our instance we just need the information from the shoulders up. It can also be a good idea to work in orthographic mode by pressing the (5) key.

Once the reconstruction region is complete, let’s build a ‘Dense-Point Cloud’. Select ‘Build Dense Cloud’ from the ‘Workflow’ tab on the top toolbar. In the ‘Build Dense Cloud’ window let’s leave everything on default but the ‘Quality’, which we’ll set to ‘Medium’.


Once the dense cloud is built, you should have a relatively decent quality version of your model.. with a few straggly bits here and there floating in 3D space.. let’s remove them. Use the free-form selection tool in the top toolbar to select all the random clusters that we don’t need and simply hit the delete key.

Now let’s build a mesh from our dense point cloud; we’ll use this mesh to create the masks for all of our images in one go! Select ‘Build Mesh’ from the ‘Workflow’ tab on the top toolbar. In the ‘Build Mesh’ window, leave all the settings as they are but change the face count to something around 5,000,000; if your PC is low-to-mid budget then change this value to something like 1-2,000,000. Also, let’s disable ‘Calculate vertex colors’ as we won’t be needing it.

When ready.. hit OK.
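If you’d rather script these two builds, here’s roughly what they look like through the 1.3 Python API (a sketch; not every version exposes a vertex-colour switch in buildModel, so untick ‘Calculate vertex colors’ in the dialog):

import PhotoScan

chunk = PhotoScan.app.document.chunk

# A medium-quality dense cloud is plenty for a masking mesh.
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# Rough mesh purely for mask generation.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=PhotoScan.HighFaceCount)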

Nearly finished! You should now have a mesh of decent quality; if not, I suggest re-building the model using ‘High’ settings when constructing the ‘Tie-Point Cloud’ and the ‘Dense-Point Cloud’.

Let’s now build the masks for our images using the mesh we’ve created. Start by selecting all of our images in the ‘Photos’ panel, open the right-click menu and select ‘Import Masks’.

In the ‘Import Masks’ window, select ‘From Model’ as our ‘Method’, ‘Replacement’ as our ‘Operation’, and leave everything else as default.

Now if you double click any of your images inside the ‘Photos’ panel you should see your mask in action!
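The same step from Photoscan’s Python console is a single call (a sketch against the 1.3 API; the enum names may differ in newer builds):

import PhotoScan

chunk = PhotoScan.app.document.chunk

# Project the rough mesh into every camera to generate the masks.
chunk.importMasks(source=PhotoScan.MaskSourceModel,
                  operation=PhotoScan.MaskOperationReplacement)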

Exporting Masks and Importing into New Scene

If you’d like to export the masks to their own separate files, you can do so by selecting all the images in the ‘Photos’ panel, opening the right-click menu and selecting ‘Export Masks’. Keep note of the ‘Filename template’; we’ll need it when importing the masks into a new project. When ready, hit OK and choose a directory to export the masks to.
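If you’d rather script the export, one hedged approach is to save each camera’s mask image directly; this sketch assumes the masks have already been generated and uses a hypothetical output folder:

import os
import PhotoScan

chunk = PhotoScan.app.document.chunk
out_dir = "D:/scans/masks"  # hypothetical export folder

for camera in chunk.cameras:
    if camera.mask:
        # Mirrors the '{filename}_mask.png' template from the dialog.
        camera.mask.image().save(os.path.join(out_dir, camera.label + "_mask.png"))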


Now to re-import the masks into a new project we simply need to start up a new project, import our photos like we’ve done before, select them all, open up the right-click menu and select ‘Import Masks‘.

On the ‘Import Masks‘ window we want to change the ‘Method‘ to ‘From File‘ and change our ‘Filename template‘ to the template we took note of before – {filename}_mask.png

Now simply hit OK and watch Photoscan import all your masks!
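Scripted, the re-import is also one call (again a 1.3 API sketch; the mask folder below is hypothetical and {filename} expands to each photo’s name):

import PhotoScan

chunk = PhotoScan.app.document.chunk

chunk.importMasks(path="D:/scans/masks/{filename}_mask.png",
                  source=PhotoScan.MaskSourceFile,
                  operation=PhotoScan.MaskOperationReplacement)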

Step 2 – Build Neutral Expression

Our first step is to build and align a neutral mesh which will serve as our master file, or chunk. Once this mesh is correctly aligned and scaled, we can use the ‘Align Chunks’ command to align all the other chunks in the file (more on this later). For now, import your images for the neutral scan and hit the ‘Align Photos’ button under the ‘Workflow’ tab.

For Accuracy we use the High setting; Highest can generally take 4-5 times longer and the quality difference has never been noticeable. We keep ‘generic preselection’ on and, under Advanced, set our ‘Key point limit’ to 400,000 and ‘Tie point limit’ to 10,000. Also, if using masks, be sure to tick ‘Constrain features by mask’.
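For reference, here’s the same alignment as a 1.3 API sketch; filter_mask is the scripting counterpart of ‘Constrain features by mask’:

import PhotoScan

chunk = PhotoScan.app.document.chunk

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  generic_preselection=True,
                  keypoint_limit=400000,
                  tiepoint_limit=10000,
                  filter_mask=True)  # only if masks were imported
chunk.alignCameras(adaptive_fitting=True)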

Next up is to refine the Tie Point Cloud created by the ‘Align Photos‘ command. To do this we’ll use the ‘Gradual Selection‘ command under the ‘Edit‘ tab.

First select ‘Reconstruction uncertainty’ and choose a value around 12; watch the selection being made on the tie-point cloud as you change the value. You want a value that still maintains decent density on the cloud while removing stray points or points which aren’t needed (9-15 is usually good). When you’re happy with the selection, hit OK, then press delete.

After removing points from the tie-point cloud it’s a good idea to re-align, or calibrate, the camera positions. To do this hit the ‘Optimize cameras’ button in the ‘Reference’ pane. If you don’t have the Reference pane, you’re either using the Standard edition of Photoscan or the ‘Workspace’ panel is covering it up; drag the ‘Workspace’ panel off to the side and then re-insert it back into the user interface to reveal the hidden panels.

In the ‘Optimize Camera Alignment‘ window tick all the options available but ‘Fit p4‘ and hit OK.

After the cameras are re-calibrated, go to gradual selection again and this time choose ‘Reprojection error’; change the value to something between 0.5 and 1. Again, be mindful that you don’t select too many tie points and leave a gap in your tie-point cloud.
Again, it’s a good idea to run the camera calibration tool afterwards to ensure you get the best quality possible.
Lastly, we’re going to run through the gradual selection process one more time, this time selecting ‘Projection accuracy’; change the value to something between 10 and 30. When you’re happy with the selection, hit OK, press delete, and open up ‘Optimize cameras’ again. The sketch below wraps all three passes into a single loop.
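Since stage two will include a full script for this, here’s a minimal sketch of the three passes with a re-calibration after each (PhotoScan 1.3 Python API; the thresholds are the ones discussed above and should be tuned per project):

import PhotoScan

chunk = PhotoScan.app.document.chunk
Filter = PhotoScan.PointCloud.Filter

# (criterion, threshold) pairs for the three gradual-selection passes.
passes = [(Filter.ReconstructionUncertainty, 12),
          (Filter.ReprojectionError, 0.8),
          (Filter.ProjectionAccuracy, 20)]

for criterion, threshold in passes:
    selection = Filter()
    selection.init(chunk, criterion=criterion)
    selection.selectPoints(threshold)
    chunk.point_cloud.removeSelectedPoints()
    # Re-calibrate with everything ticked but 'Fit p4'.
    chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                          fit_b1=True, fit_b2=True,
                          fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=True,
                          fit_p1=True, fit_p2=True, fit_p3=True, fit_p4=False)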

Finally we’ll delete all the stray points that the gradual selection missed by using the ‘Free-form selection tool‘.

Then use the ‘Resize region’ tool on the top toolbar to change the rectangular region so it surrounds the whole model.

After you’ve finished cleaning up the tie-point cloud and your reconstruction region encompasses your model, select ‘Build Dense Cloud’ under the ‘Workflow’ tab.

We use High quality for the same reasons as before; in our opinion the quality difference isn’t worth how long ‘Ultra High’ takes to compute. Also, in the advanced drop-down you want to make sure your ‘Depth Filtering’ is set to ‘Aggressive’.
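The scripted equivalent (1.3 API sketch):

import PhotoScan

chunk = PhotoScan.app.document.chunk

# High quality with aggressive depth filtering, as above.
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                      filter=PhotoScan.AggressiveFiltering)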

Once the dense cloud is complete you should have a clean result with no visible gaps unless you zoom in close. If you have any small components floating around the model, use the ‘Free-form selection’ tool to select and delete the stray parts.

Now we’ll build a mesh from our dense cloud; select ‘Build Mesh’ from the ‘Workflow’ tab.

In the ‘Build Mesh’ window we have a few options to change. The surface type should be set to ‘Arbitrary’, source data to ‘Dense cloud’ and the ‘Face Count’ to 5,000,000.

If you’re working on a low/mid budget system it may be better to change the ‘Face Count‘ to something lower; say 1-3,000,000.

Lastly, in the advanced drop-down menu we’ll want to leave ‘Interpolation’ enabled and disable ‘Calculate vertex colors’.

If you set the ‘Face count’ to 0 in the ‘Build Mesh’ window, you’ll most likely end up with a mesh of over 20,000,000 polygons, which is unusable and unneeded; we’ll need to decimate it to something more manageable. If you set the ‘Face count’ to something manageable in the step before, skip this and the next step on decimation.

Select ‘Decimate mesh‘ from the ‘Tools‘ –> ‘Mesh‘ tab and in the Decimate mesh window we’ll set the face count to 5,000,000; enough of a poly-count to maintain detail but not too much that it breaks the machine!
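Scripted, the mesh build and decimation look something like this (1.3 API sketch; decimateModel guarantees an exact face count, which sidesteps version differences in how buildModel handles custom counts):

import PhotoScan

chunk = PhotoScan.app.document.chunk

# 'Arbitrary' surface from the dense cloud, interpolation enabled.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=PhotoScan.HighFaceCount)

# Bring the result down to an exact, manageable poly-count.
chunk.decimateModel(face_count=5000000)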

You should now have a high-quality model which is ready for texturing.

First off select ‘Build Texture‘ from the ‘Workflow‘ tab to bring up the build texture window.

In this window we’ll keep everything on default but the ‘Texture size/count’; set this to 4096, 8192 or 16384 depending on the quality needed. We like to work with 16384×16384 images for the highest possible detail, but 8192×8192 is sufficient. At 4096×4096 you could lose detail when zooming in close to the model; okay in most cases for games, bad for VFX/film.
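Scripted (1.3 API sketch; buildUV lays out the UVs that the texture bakes into):

import PhotoScan

chunk = PhotoScan.app.document.chunk

chunk.buildUV(mapping=PhotoScan.GenericMapping)
# 16384 for film/VFX detail; 8192 usually suffices.
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=16384)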

So, the mesh is cleaned, built and textured.. what now?

Step 3 – Alignment & Scale!

First, let’s select ‘Reset View’ (0) from the ‘View’ tab. You may notice that the model is now horribly aligned; time to fix that.

Select the ‘Rotate’ tool on the toolbar at the top of the screen and rotate your model until it’s in the correct orientation. Be mindful not to use the ‘Navigation’ tool; that rotates the view around the model, not the model itself. The XYZ axis (bottom right of the viewport) should match the orientation in the image below.

So, alignment complete, now to set the scale of the model. As mentioned at the start of the guide, you should have something you can reference in the shoot which is also visible from more than two cameras; we used a ruler connected to a vertical beam which was locked in place.

First, load up a photo in Photoscan by double-clicking an image which shows the reference object clearly. We’ll then need to place two markers on points of the reference which are a known distance apart. Insert the first point by right-clicking one point of the reference and selecting ‘Place marker’ –> ‘New marker’. After that, place point 2 using the same method; keep in mind we’ll need to know the distance between these two points.

Now we’ll need to repeat the process for multiple photos, this time selecting ‘Place marker’ –> ‘Point 1’ and ‘Point 2’. The more photos with accurate markers, the more accurate the scaling will be; we usually work with 3-4 images.

Some images will already have little grey points placed on them; this is the software estimating where the points sit in the other photos. If a point is correct, right-click it and select ‘Place Marker’.

After all the points have been placed we want to create a scale bar. Select both points in the ‘Reference’ panel and choose ‘Create scale bar’ from the right-click menu; this will create a scale bar for the project in the ‘Reference’ panel.

Now we just need to set the distance between the two points in the distance column of the scale bar. Our measurement is 2 cm, so we would normally input 0.02 (Photoscan works in meters). However, the software we import our final models into works at a different scale, and the models come in incredibly tiny using that value; so that the measurement imports correctly, we set 2.0 in the distance column.

Lastly we just need to hit ‘Update‘ in the reference panel to update our changes. If all is good then you should see 0.0000 in the Error column of the scale bar and a model which is correctly scaled and aligned!
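With the two markers placed in the GUI, the scale bar step can also be scripted; this 1.3 API sketch assumes ‘point 1’ and ‘point 2’ are the only two markers in the chunk:

import PhotoScan

chunk = PhotoScan.app.document.chunk

# Assumes exactly two markers: point 1 and point 2.
scalebar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])
scalebar.reference.distance = 2.0  # the value discussed above
chunk.updateTransform()            # the scripted 'Update'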


In the next post I’ll be going over how to batch process the rest of the expression scans and will also include a python script for the gradual selection/camera calibration process.

Thanks for reading and if you have any problems then drop me an e-mail and I’d be happy to help!


Rashed Al-Metrami

Rashed@Metapixel.io