9/25/08

Video-Based vs Laser-Based Tracking of Dynamic Environments

In both motion capture and surface capture (scanning), the ability to discriminate unique points in space is paramount to the accuracy of the capture system. When I first conceived rotoCap, I tried to solve this problem with what I had on hand: lacking ready access to the specialized cameras of a Vicon system or a laser scanner, I turned to consumer video cameras. I figured I could lessen the computational load by having the program track color rather than calculate trajectories. An infrared machine-vision camera is suited to higher resolutions and frame rates, so how could the lowly camcorder compete? The answer is color.

Most machine-vision cameras used in mocap and 3D scanning are grayscale, and their output is converted to a binary image. A binary image is simply a high-contrast image that is only black and white, with no gradient. This makes the image easier to process, since the software only deals with white dots on a black background, but the extra calculations needed to discriminate between two white dots that come near each other can be complex and costly to performance. By adding color, tracking each dot becomes easier because each one carries its own RGB value.
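To make the contrast concrete, here is a hypothetical sketch (not rotoCap's actual tracker; the marker names, palette, and data shapes are invented for illustration). With binary data the tracker must match each white dot against a predicted position, which gets ambiguous when dots converge; with color data each dot effectively labels itself:

    def match_binary(detections, predictions):
        """Binary tracking: match each marker to the nearest white dot.

        detections: list of (x, y) dot centroids from a thresholded frame.
        predictions: {marker_id: (x, y)} positions predicted from trajectories.
        Two markers that drift close together can easily swap identities here.
        """
        matches = {}
        for marker_id, (px, py) in predictions.items():
            matches[marker_id] = min(
                detections,
                key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2)
        return matches

    def match_color(detections, palette):
        """Color tracking: each dot identifies itself by its RGB value.

        detections: list of (x, y, (r, g, b)) colored dot centroids.
        palette: {marker_id: (r, g, b)} reference color per marker.
        No trajectory prediction is needed to tell nearby dots apart.
        """
        matches = {}
        for (x, y, rgb) in detections:
            marker_id = min(
                palette,
                key=lambda m: sum((a - b) ** 2 for a, b in zip(palette[m], rgb)))
            matches[marker_id] = (x, y)
        return matches

Even in this toy form, the binary matcher needs per-frame predictions and careful handling of near-collisions, while the color matcher is a straight lookup against the palette.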

Why bring this up? The DARPA Grand Challenge is a great source of inspiration because of its real-world applications of scanning and mocap. The Stanford University team's Stanley uses a hybrid system of laser scanning and a video overlay, allowing it to use color to distinguish the road surface from everything else. This is great confirmation that we're both on the right track when it comes to using color over binary alone.

9/14/08

Sequence Viewer and Shader Progress (UPDATE)

rotoCap is really coming along as more pieces of code unlock multiple areas of development. The past few weeks have seen streamlined file management and GUI revisions for better interactivity, and it's now ready to start processing images.

Since the core tracking engine relies on an image plane to calculate movement, rotoCap needs to:
  • generate a shader with an animated texture
  • disconnect the animated texture from timeline dependency
  • connect animated texture to rotoCap control
  • apply the shader to an image plane
  • scale the image plane to match the source image's aspect ratio
  • create a camera to lock onto the image plane perpendicularly
  • use the new camera's viewport as part of the rotoCap GUI
  • create playback widget for different framerates
I considered embedding the images into the GUI, but decided the user should be able to pan and zoom into each image using the familiar Maya orthographic camera navigation. A "Fit Image" button would also be added to reset any camera translation.
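To make the list above concrete, here is a minimal sketch of the scene setup in Maya's Python API. This is an assumption-heavy illustration rather than rotoCap's actual code: the rotoCap_* node names, the function signature, and the camera placement are all mine.

    import maya.cmds as cmds

    def build_rotocap_plane(first_image, width, height):
        # Animated file texture pointed at the image sequence.
        file_node = cmds.shadingNode('file', asTexture=True, name='rotoCap_file')
        cmds.setAttr(file_node + '.fileTextureName', first_image, type='string')
        cmds.setAttr(file_node + '.useFrameExtension', 1)

        # Detach the frame extension from the timeline so rotoCap can
        # drive it directly from its own playback control.
        src = cmds.listConnections(file_node + '.frameExtension',
                                   source=True, plugs=True)
        if src:
            cmds.disconnectAttr(src[0], file_node + '.frameExtension')
        cmds.setAttr(file_node + '.frameExtension', 1)

        # Surface shader plus shading group so lighting can't tint the frames.
        shader = cmds.shadingNode('surfaceShader', asShader=True,
                                  name='rotoCap_shader')
        cmds.connectAttr(file_node + '.outColor', shader + '.outColor')
        sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                       name='rotoCap_SG')
        cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')

        # Plane scaled to the source image's aspect ratio.
        aspect = width / float(height)
        plane = cmds.polyPlane(width=aspect, height=1.0, sx=1, sy=1,
                               name='rotoCap_plane')[0]
        cmds.sets(plane, edit=True, forceElement=sg)

        # Orthographic camera locked perpendicular to the plane, framed
        # roughly the way a "Fit Image" button would reset it.
        cam, cam_shape = cmds.camera(orthographic=True, name='rotoCap_cam')
        cmds.setAttr(cam + '.translateY', 5)
        cmds.setAttr(cam + '.rotateX', -90)
        cmds.setAttr(cam_shape + '.orthographicWidth', aspect)
        return file_node, plane, cam

The new camera's view could then be hosted inside the tool's window with a modelPanel, which also provides the familiar pan/zoom navigation mentioned above.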

Currently, the shader generator works great the first time the user loads an image sequence into rotoCap, but changing the image directory results in errors as the script tries to create new shaders connected to the previous file nodes. The tricky part right now is how to query whether a shader exists, since the shadingNode command has no -exists flag to test. To get around this, I'm loading all the shaders into an array and deleting the rotoCap-created shader network based on a wildcard name search.

UPDATE
The shader handling is fixed now. The bug was caused by old nodes conflicting with the newly generated ones. Each time the user changes the image-sequence directory, a cleanup pass deletes all nodes generated by previous rotoCap executions. The nodes are filtered by a gmatch wildcard name match, then deleted.
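A minimal sketch of that cleanup step, again in maya.cmds and again with an invented rotoCap_ naming prefix standing in for whatever pattern the real script matches:

    import maya.cmds as cmds

    def delete_stale_rotocap_nodes(prefix='rotoCap_'):
        """Remove nodes left behind by a previous rotoCap run."""
        # ls with a wildcard does the same name filtering as the gmatch
        # loop described above; for a single known node, objExists(name)
        # is a direct existence test.
        stale = cmds.ls(prefix + '*')
        if stale:
            cmds.delete(stale)
        return stale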