12/3/08

Drone Generation Subprocess "makeDrone"

I'm calling the visual representations of each color track "drones." Like worker bees seeking pollen, they will track the RGB elements across the UV space. Currently the makeDrone subprocess creates (a rough sketch of the subprocess follows the lists below):
  • visible geometry
  • RGB attributes from a colorEditor call
  • unique object name
  • colorSeek inclusion toggle (include in batch color tracking or not)
  • RGB swatch via canvas command
  • target-rig attribute to control (what does this drone control)
  • "edit RGB" and delete options
Possible future features:
  • per-drone area constraints
  • initial search pattern
  • per-frame census (how many frames does drone find target RGB)
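
To make the lists above concrete, here's a minimal sketch of what a proc like this might look like. The proc name, attribute names, and sphere geometry are my assumptions for illustration, not the actual rotoCap code; the RGB swatch would live in the drone's GUI row as a canvas -rgbValue control.

global proc string makeDroneSketch(string $targetRig)
{
    // Let the user pick the RGB value this drone will seek.
    colorEditor;
    if (!`colorEditor -query -result`)
        return "";
    float $rgb[] = `colorEditor -query -rgb`;

    // Visible geometry with a unique, auto-numbered name.
    string $geo[] = `sphere -radius 0.05 -name "rotoCapDrone#"`;
    string $drone = $geo[0];

    // Store the tracked color and the batch-inclusion toggle as attributes.
    addAttr -longName "seekR" -attributeType double $drone;
    addAttr -longName "seekG" -attributeType double $drone;
    addAttr -longName "seekB" -attributeType double $drone;
    setAttr ($drone + ".seekR") $rgb[0];
    setAttr ($drone + ".seekG") $rgb[1];
    setAttr ($drone + ".seekB") $rgb[2];
    addAttr -longName "colorSeek" -attributeType bool $drone;
    setAttr ($drone + ".colorSeek") 1;

    // Record which rig element this drone controls.
    addAttr -longName "targetRig" -dataType "string" $drone;
    setAttr -type "string" ($drone + ".targetRig") $targetRig;

    return $drone;
}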

11/10/08

Over 100!

My method of file numbering is that when I get a portion of the script running, it gets saved as a separate file. This past weekend, with the bugs squashed, the file number broke 100 (103 as of this writing).

The GUI is being refined into a sleeker interface, but still with an eye toward Adobe-like familiarity. Currently I'm working on a timeline-independent playback system for an animated texture. While scrubbing and stepping frames was relatively easy, animating the texture at a constant framerate is a little trickier. I've been using the timerX function to wait a certain amount of time and then step the frame. It works, but I'm trying to make it run consistently on different machines: the number of times the function checks the elapsed time varies with the speed of the machine. I don't think this is a big issue, but I'm trying to make rotoCap as consistent and stable as possible, since I'd rather have all the processing power go to the color-tracking engine than the GUI.
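
For what it's worth, one way to keep the rate machine-independent is to derive the current frame from the total elapsed time rather than waiting a fixed delay per frame, so a slow machine drops frames instead of drifting. A rough sketch, with placeholder names (the rotoCapFile node is hypothetical):

global proc playSequenceSketch(int $startFrame, int $endFrame, float $fps)
{
    float $frameTime = 1.0 / $fps;
    int $frame = $startFrame;
    float $start = `timerX`;
    while ($frame < $endFrame)
    {
        float $elapsed = `timerX -startTime $start`;
        // Derive the frame from total elapsed time; polling frequency
        // then affects smoothness, not playback speed.
        int $target = $startFrame + (int)($elapsed / $frameTime);
        if ($target > $frame)
        {
            $frame = min($target, $endFrame);
            // e.g. setAttr "rotoCapFile.frameExtension" $frame;
            refresh;
        }
    }
}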

10/16/08

Simple Shape Tracking

Andy Wilson has developed an elegant control method based on the shape of the hand. By comparing the background shapes to newly created shapes (formed by the hand), he is able to track the movement allowing for mouse-less interactions. (Source: http://procrastineering.blogspot.com/2008/10/andy-wilson.html)

9/25/08

Video-Based vs Laser-Based Tracking of Dynamic Environments

In both fields of motion-capture and surface-capture (scanning), the ability to discriminate unique points in space is paramount to the accuracy of the capture system. When I was first conceiving rotoCap, I approached this problem by looking at what I had to work with; not having ready access to the specialized cameras of a Vicon system or a laser scanner, I turned to consumer video cameras. I reasoned I could lessen the computational load by having the program track color rather than calculate trajectories. An infrared machine-vision camera is suited to higher resolutions and frame rates, so how could the lowly camcorder compete? The answer is color.

Most machine-vision cameras used in mocap and 3D scanning are grayscale, and their output is turned into a binary image: a high-contrast image that is only black and white, with no gradient. This makes the image easier to process, since the software only deals with white dots on a black background, but the calculations needed to discriminate between two white dots that come near each other can be complex and costly. Adding color makes each dot easier to track by giving it its own RGB value.
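
To illustrate the point, here's a sketch of the color-discrimination idea in MEL: with binary images, two nearby white dots are ambiguous, but with color, each sampled dot can be matched to its own reference RGB by a simple distance test. The proc names and array layout are illustrative, not rotoCap internals.

proc float rgbDistSq(float $a[], float $b[])
{
    float $dr = $a[0] - $b[0];
    float $dg = $a[1] - $b[1];
    float $db = $a[2] - $b[2];
    return ($dr * $dr + $dg * $dg + $db * $db);
}

// A sampled pixel belongs to the marker whose reference color is nearest.
global proc int nearestMarker(float $pixel[], float $refR[], float $refG[], float $refB[])
{
    int $best = -1;
    float $bestDist = 1e9;
    int $i;
    for ($i = 0; $i < size($refR); $i++)
    {
        float $ref[] = {$refR[$i], $refG[$i], $refB[$i]};
        float $d = rgbDistSq($pixel, $ref);
        if ($d < $bestDist)
        {
            $bestDist = $d;
            $best = $i;
        }
    }
    return $best;
}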

Why bring this up? The DARPA Grand Challenge is a great source of inspiration because of the real-world applications of scanning/mocap. The Stanford University team's Stanley uses a hybrid system of laser scanning and a video overlay, allowing it to use color to tell the difference between the road and everything else. This is a great confirmation that we're both on the right track when it comes to using color over binary data alone.

9/14/08

Sequence Viewer and Shader Progress (UPDATE)

rotoCap is really coming along as more pieces of code unlock multiple areas of development. The past few weeks have seen a streamlining of file management and GUI revisions for better interactivity, and it's now ready to start processing images.

Since the core tracking engine relies on an image plane to calculate movement, rotoCap needs to (a sketch of these steps follows below):
  • generate a shader with an animated texture
  • disconnect the animated texture from timeline dependency
  • connect animated texture to rotoCap control
  • apply the shader to an image plane
  • scale the image plane to the same aspect ratio as the source image file
  • create a camera to lock onto the image plane perpendicularly
  • use the new camera's viewport as part of the rotoCap GUI
  • create playback widget for different framerates
I considered embedding the images into the GUI, but thought the user should be able to pan and zoom into each image using the familiar Maya orthographic camera navigation. A "Fit Image" button would also be added to reset any camera translation.
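
Here's a rough sketch of the setup steps above, under the assumption of a lambert shader and rotoCap-prefixed node names; the real network and the timeline-disconnect step may differ.

global proc setupViewerSketch(string $firstImage)
{
    // Shader with an animated file texture.
    string $shader = `shadingNode -asShader lambert -name "rotoCapShader"`;
    string $file = `shadingNode -asTexture file -name "rotoCapFile"`;
    setAttr -type "string" ($file + ".fileTextureName") $firstImage;
    setAttr ($file + ".useFrameExtension") 1; // treat as an image sequence
    connectAttr ($file + ".outColor") ($shader + ".color");

    // Timeline independence: rotoCap would set frameExtension directly,
    // deleting any auto-created expression that drives it from time1.

    // Plane scaled to the source image's aspect ratio.
    float $res[] = `getAttr ($file + ".outSize")`; // {width, height}
    float $ratio = $res[1] / $res[0];
    string $plane[] = `nurbsPlane -width 1 -lengthRatio $ratio -axis 0 0 1 -name "rotoCapPlane"`;
    select -replace $plane[0];
    hyperShade -assign $shader;

    // Orthographic camera locked perpendicular to the plane.
    string $cam[] = `camera -orthographic true -name "rotoCapCam"`;
    setAttr ($cam[0] + ".translateZ") 2;
}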

Currently, the shader generator works great the first time the user loads an image sequence into rotoCap, but changing the image directory results in errors as the script tries to create new shaders connected to previous file nodes. The tricky part at this time is how to query whether a shader exists, since the shadingNode command does not have an -exists flag to test. To get around this I'm loading all the shaders into an array and deleting the rotoCap-created shader network based on a wildcard name search.

UPDATE
The shader handling is fixed now. The bug was due to old nodes conflicting with newly generated ones. Each time the user changes the image sequence directory, a process is called to delete all nodes generated by previous rotoCap executions. The nodes are filtered by a gmatch wildcard name match, then deleted.
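
A minimal sketch of that cleanup pass, assuming a "rotoCap*" naming prefix (the actual prefix may differ):

global proc cleanupRotoCapNodes()
{
    string $nodes[] = `ls`;
    string $node;
    for ($node in $nodes)
    {
        // Deleting one node can remove connected ones, so re-check first.
        if (`gmatch $node "rotoCap*"` && `objExists $node`)
            delete $node;
    }
}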

8/20/08

3D Scanning for Animation


After attending SIGGRAPH this year and talking with Dr. Paul Debevec about the subject, I am convinced 3D scanning will progress further into the realm of motion-capture, with the fields merging within five years. With Image Metrics' release of Emily, a 3D face tracked onto an actor's head, we see the current state of the synergy. So far, the use of 3D scan data in animation has been to capture key poses, which still leaves simulations to drive the actual motion. By capturing the full performance in 3D, the best of both worlds can be had: using the animation uncut, or pulling key poses from choice portions.

Are we there yet? Not exactly. It has been pointed out that Emily was still animated by humans using the scan data to target a face rig. But it is a good running start for the industry's jump over the uncanny valley.

8/18/08

File Handler Works!

This past weekend has been very productive. The script now automatically picks out the file extension and uses it to filter out files the user doesn't want (e.g. Thumbs.db, OS X system files, etc.). This makes things faster for the user: they simply choose the first file of their image sequence and the script takes care of the rest.
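
A sketch of that filtering, with illustrative variable names; getFileList's -filespec flag does the actual matching:

string $firstFile = `fileDialog`;
string $imageDir = dirname($firstFile) + "/"; // trailing slash required
string $ext = `match "[^.]*$" $firstFile`;    // text after the last "."
string $frames[] = `getFileList -folder $imageDir -filespec ("*." + $ext)`;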

The next step will be to create the image viewer. I see some trouble coming, since Maya's image command has no way to resize its picture. One workaround will be to load the files as an animated texture and take screenshots of each frame with a perpendicular camera through the saveImage command. The thumbnailed images will be used for the GUI window, while the full-resolution images will be used by the script for the actual processing. The image dimensions also need to be queried so the height-to-width ratio can be applied correctly to the NURBS plane. After doing some research, it seems querying a file node's outSize attribute will work.
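
The dimension query could look like this, where "rotoCapFile" is a placeholder file-node name:

float $res[] = `getAttr "rotoCapFile.outSize"`; // {width, height}
float $ratio = $res[1] / $res[0];               // ratio for the NURBS plane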

8/9/08

How to fix getFileList

When you use the MEL command getFileList, make sure there is a trailing "/" on the folder string you pass to it. Otherwise the command returns the folder string you passed to it initially.

string $imageDir = (dirname(`fileDialog`) + "/"); // Trailing slash must be added
string $fileList[] = `getFileList -folder $imageDir`;

This was a frustrating bug and hopefully this will show up in search results when someone else has this issue.

8/5/08

File Handler Progress!

After some frustrating times with fileBrowser.mel, I went back to using the fileDialog command to pass a path to the image sequence file handler, so I'll only have to call up array elements. Theoretically this should save some processing cycles and make it a little more cross-platform friendly.

7/14/08

Radiohead & 3D Scanning

Thom Yorke underwent a scanning session to capture his performance of "House of Cards." The video was created using two different scanning methods: laser and structured light. It's encouraging to see fine artists using technology instead of abhorring it. In an exciting move, Radiohead released the raw data from the session as a CSV sequence, which I was able to process into a polygonal mesh to reveal what Mr. Yorke looked like during the capture session. There is some surface warping due to the deliberate distortion of the process (shooting through a piece of glass or Lucite), but the results were interesting.
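
For anyone curious, here's roughly how such a CSV sequence can be pulled into Maya as points. The assumption that the first three columns are x, y, z is mine, and turning the points into a polygonal mesh is a separate step.

global proc importScanCSV(string $csvPath)
{
    int $fd = `fopen $csvPath "r"`;
    string $cmd = "particle";
    string $line = `fgetline $fd`;
    while (size($line) > 0)
    {
        string $tok[];
        tokenize $line "," $tok;
        if (size($tok) >= 3)
            $cmd += (" -p " + $tok[0] + " " + $tok[1] + " " + $tok[2]);
        $line = `fgetline $fd`;
    }
    fclose $fd;
    eval $cmd; // one particle object holding every point in the frame
}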

7/13/08

The Story So Far...

The core image-processing script is functional; now it's time to make it functional for the user. I've decided that the pane layout will be the best way to hold the different program elements because of its resizable areas. In the future I may make the image area its own window, but only if the cross-window chatter doesn't impact the performance of the script.
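
A minimal sketch of the kind of resizable two-pane window I have in mind; the widget contents are placeholders:

if (`window -exists rotoCapWin`)
    deleteUI rotoCapWin;
window -title "rotoCap" rotoCapWin;
paneLayout -configuration "vertical2" -paneSize 1 70 100;
    text -label "image viewer area"; // a model panel would go here
    columnLayout;
        button -label "Load Sequence";
        button -label "Track";
window -edit -widthHeight 640 480 rotoCapWin;
showWindow rotoCapWin;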

The trickiest part so far has been the file handling. MEL has about four ways to handle file dialogs, and each has its strengths. For now I only need to worry about storing directory paths.

FIRST!

Master of Fine Arts Thesis Project: Creating an Optical Motion-Capture System from "off-the-shelf" Hardware and Software.

What is Motion-Capture? Motion-capture (commonly known as "mocap") is a method of recording the movements of a real-world performance, usually a human one. The movements are then applied to a computer model, which mimics the actions of the original performance. The performer usually wears a special suit with markers or sensors that turn the movements into a numeric representation easily understood by a computer.

Who uses Mocap? Both the entertainment industry and research entities use mocap. Many of today's video games and movies use mocap to create performances where realistic movement is desired. Researchers use mocap to study the movement of the body in detail. Since the performance is captured as data, it is infinitely repeatable and allows for greater scrutiny with fewer variables.

Why write your own? The traditional drawbacks of a commercial system are its prohibitive cost and complexity, which limit motion capture primarily to larger studios and research entities. These barriers are on the verge of falling away due to the dropping cost of near-professional electronics and new methods of capturing motion data with said electronics. Independent animators and small studios will be able to utilize motion capture as they have other technologies that were previously unavailable to them, such as 3D animation software itself and high-end workstations.

How does it work? The program is actually a script that runs within a 3D package called Maya. The user will record the performance using off-the-shelf video cameras and bring it into the script (which I've dubbed rotoCap). The video is analyzed frame by frame to synchronize the multiple data streams and to locate the markers placed on the performer at the time of the capture session. The markers are isolated in 3D space and used to drive the performance of a computer model.