12/3/08
Drone Generation Subprocess "makeDrone"
- visible geometry
- RGB attributes from a colorEditor call
- unique object name
- colorSeek inclusion toggle (include in batch color tracking or not)
- RGB swatch via canvas command
- target-rig attribute to control (what does this drone control)
- "edit RGB" and delete options
- per-drone area constraints
- initial search pattern
- per-frame census (in how many frames does the drone find the target RGB); a sketch of the subprocess follows this list
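A minimal MEL sketch of what a makeDrone-style procedure might look like, covering a few of the items above (the colorEditor pick, uniquely named geometry, and the RGB/colorSeek attributes). Every name and the attribute layout here are illustrative assumptions, not the actual rotoCap code:

// Hypothetical makeDrone sketch; names and attributes are assumed.
global proc string makeDrone()
{
    // Let the user pick the drone's target color.
    colorEditor;
    if (!`colorEditor -query -result`)
        return "";  // user cancelled the picker
    float $rgb[] = `colorEditor -query -rgb`;

    // Visible geometry; Maya uniquifies clashing names (rotoDrone2, ...).
    string $parts[] = `polySphere -radius 0.25 -name "rotoDrone1"`;
    string $drone = $parts[0];

    // Store the target RGB and the colorSeek inclusion toggle on the drone.
    addAttr -longName "targetR" -attributeType "float" $drone;
    addAttr -longName "targetG" -attributeType "float" $drone;
    addAttr -longName "targetB" -attributeType "float" $drone;
    addAttr -longName "colorSeek" -attributeType "bool" $drone;
    setAttr ($drone + ".targetR") $rgb[0];
    setAttr ($drone + ".targetG") $rgb[1];
    setAttr ($drone + ".targetB") $rgb[2];
    setAttr ($drone + ".colorSeek") 1;
    return $drone;
}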
11/10/08
Over 100!
The GUI is being refined into a sleeker interface, but still with an eye toward Adobe-like familiarity. Currently I'm working on a timeline-independent playback system for an animated texture. While scrubbing and stepping frames was relatively easy, animating the texture at a constant framerate is a little trickier. I've been using the timerX function to wait a set amount of time and then step the frame. It works, but I'm trying to make it run consistently on different machines: the number of times the function checks the elapsed time varies with the speed of the machine. I don't think this is a big issue, but I'm trying to make rotoCap as consistent and stable as possible, since I'd rather have all the processing power go to the color-tracking engine than to the GUI.
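As a reference point, here is a bare-bones sketch of the timerX approach described above; the procedure name, the file-node name, and the refresh call are my assumptions, not the actual rotoCap playback code:

// Assumed sketch: step an animated texture at a fixed framerate via timerX.
global proc rotoPlay(int $startFrame, int $endFrame, float $fps)
{
    float $interval = 1.0 / $fps;
    int $frame;
    for ($frame = $startFrame; $frame <= $endFrame; $frame++)
    {
        float $tick = `timerX`;

        // Step the texture; "rotoFile1" is an assumed file-node name.
        setAttr "rotoFile1.frameExtension" $frame;
        refresh;

        // Poll until the frame interval elapses. The number of polls varies
        // with machine speed, but the effective framerate stays constant.
        while (`timerX -startTime $tick` < $interval)
        {
            // busy-wait
        }
    }
}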
10/16/08
Simple Shape Tracking
9/25/08
Video-Based vs Laser-Based Tracking of Dynamic Environments
Most machine vision cameras used in mocap and 3D scanning are grayscale, and their output is converted into a binary image. A binary image is simply a high-contrast image that is only black and white with no gradient. This makes the image easier to process, since the software only deals with white dots on a black background, but the calculations needed to discriminate between two white dots that come near each other can be complex and costly to performance. Adding color makes each dot easier to track, since every dot can be given its own RGB value.
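To make the contrast concrete, here is a toy MEL predicate for the color approach: rather than thresholding luminance into white-or-black, a pixel matches a marker when its distance to that marker's RGB value is within tolerance. This is purely an illustration of the idea, not rotoCap's actual engine:

// Illustrative only: test a pixel against one marker's RGB target.
global proc int matchesTarget(float $pixel[], float $target[], float $tolerance)
{
    float $dr = $pixel[0] - $target[0];
    float $dg = $pixel[1] - $target[1];
    float $db = $pixel[2] - $target[2];
    // Euclidean distance in RGB space; a binary system would instead
    // compare a single brightness value against one cutoff.
    return (sqrt($dr * $dr + $dg * $dg + $db * $db) <= $tolerance);
}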
Why bring this up? The DARPA Grand Challenge is a great source of inspiration because of the real-world applications of scanning/mocap. The Stanford University team's Stanley uses a hybrid system of laser scanning and a video overlay, allowing it to use color to tell the difference between the road and everything else. This is a great confirmation that we're both on the right track when it comes to using color over binary alone.
9/14/08
Sequence Viewer and Shader Progress (UPDATE)
Since the core tracking engine relies on an image plane to calculate movement, rotoCap needs to do the following (a rough MEL sketch follows the list):
- generate a shader with an animated texture
- disconnect the animated texture from timeline dependency
- connect animated texture to rotoCap control
- apply the shader to an image plane
- scale the image plane to the same aspect ratio as the source image file
- create a camera to lock onto the image plane perpendicularly
- use the new camera's viewport as part of the rotoCap GUI
- create playback widget for different framerates
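Here is a rough MEL sketch of several of these steps (a file texture decoupled from the timeline, an aspect-correct NURBS plane, a perpendicular camera), with every node and procedure name an assumption for illustration:

// Assumed sketch of the viewer setup; not the actual rotoCap network.
global proc rotoBuildViewer(string $firstImage)
{
    // Animated file texture; driving frameExtension ourselves (rather than
    // connecting it to time) keeps playback independent of the timeline.
    string $file = `shadingNode -asTexture file -name "rotoFile1"`;
    setAttr -type "string" ($file + ".fileTextureName") $firstImage;
    setAttr ($file + ".useFrameExtension") 1;

    // Shader and shading group for the image plane.
    string $shader = `shadingNode -asShader lambert -name "rotoShader1"`;
    string $sg = `sets -renderable true -noSurfaceShader true -empty -name "rotoSG1"`;
    connectAttr -force ($file + ".outColor") ($shader + ".color");
    connectAttr -force ($shader + ".outColor") ($sg + ".surfaceShader");

    // Query the image resolution (outSize) and build a NURBS plane with the
    // matching aspect ratio, facing down the Z axis.
    float $size[] = `getAttr ($file + ".outSize")`;
    float $aspect = $size[0] / $size[1];
    string $plane[] = `nurbsPlane -axis 0 0 1 -width $aspect -lengthRatio (1.0 / $aspect) -name "rotoPlane1"`;
    sets -edit -forceElement $sg $plane[0];

    // Perpendicular camera locked onto the plane.
    string $cam[] = `camera -name "rotoCam1"`;
    move -absolute 0 0 5 $cam[0];
}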
Currently, the shader generator works great the first time the user loads an image sequence into rotoCap, but changing the image directory results in errors as the script tries to create new shaders connected to previous file nodes. The tricky part at this time is how to query whether a shader exists, since the shadingNode command does not have an -exists flag to test. To get around this I'm loading all the shaders into an array and deleting the rotoCap-created shader network based on a wildcard name search.
UPDATE
The shader handling is fixed now. The bug was due to old nodes conflicting with the newly generated ones. Each time the user changes the image sequence directory, a process is called that deletes all nodes generated by rotoCap in previous executions. The nodes are filtered by a gmatch wildcard name match, then deleted.
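A minimal sketch of that cleanup pass, assuming rotoCap-generated nodes share a common name prefix (the "roto*" pattern is my assumption):

// Assumed sketch: delete every node whose name matches the rotoCap prefix.
global proc rotoCleanup()
{
    string $all[] = `ls`;
    for ($node in $all)
    {
        // objExists guards against nodes already deleted as dependents.
        if (`gmatch $node "roto*"` && `objExists $node`)
            delete $node;
    }
}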
8/20/08
3D Scanning for Animation
After attending SIGGRAPH this year and talking with Dr. Paul Debevec about the subject, I am convinced 3D scanning will progress further into the realm of motion capture, with the fields merging within 5 years. With Image Metrics' release of Emily, a 3D face tracked onto an actor's head, we see the current state of the synergy. So far, the use of 3D scan data in animation has been to capture key poses, which still leaves simulations to drive the actual motion. By capturing the full performance in 3D, the best of both worlds can be had: using the animation uncut, and/or pulling key poses from choice portions.
Are we there yet? Not exactly. It has been pointed out that Emily was still animated by humans using the scan data to target a face rig. But it is a good running start for the industry's jump over the uncanny valley.
8/18/08
File Handler Works!
The next step will be to create the image viewer. I see some trouble coming since Maya's image command doesn't have a resize option. One workaround will be to load the files as an animated texture and take screenshots of each frame with a perpendicular camera through the saveImage command. The thumbnailed images will be used for the GUI window, while the full-resolution images will be used by the script for the actual processing. The image dimensions also need to be queried so the height-to-width ratio can be applied correctly to the NURBS plane. After doing some research, it seems querying the outSize of the file node will work.
8/9/08
How to fix getFileList
// fileDialog returns a file path; dirname() strips the file name and the
// trailing slash, which getFileList -folder requires, so add the slash back.
string $imageDir = (dirname(`fileDialog`) + "/");
string $fileList[] = `getFileList -folder $imageDir`;
This was a frustrating bug and hopefully this will show up in search results when someone else has this issue.
8/5/08
File Handler Progress!
7/14/08
Radiohead & 3D Scanning
Thom Yorke underwent a scanning session to capture his performance of House of Cards. The video was created using two different scanning methods, laser and structured light. It's encouraging to see fine artists using technology instead of abhorring it. In an exciting move, Radiohead released the raw data from the session as a CSV sequence, which I was able to process into a polygonal mesh to reveal what Mr. Yorke looked like during the capture session. There is some surface warping due to the deliberate distortion of the capture, shot through a piece of glass or Lucite, but the results were interesting.
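For anyone curious how such data gets into Maya, here is a rough MEL sketch that reads one frame of the point data (assuming x,y,z columns per line) into a particle cloud for inspection; meshing it is a further step, and the procedure name and column layout are assumptions:

// Assumed sketch: load one CSV frame (x,y,z per line) as a particle cloud.
global proc rotoLoadCSV(string $path)
{
    int $fp = `fopen $path "r"`;
    particle -name "csvCloud";  // empty particle object to emit into
    string $line = `fgetline $fp`;
    while (size($line) > 0)
    {
        string $tok[];
        tokenize $line "," $tok;
        if (size($tok) >= 3)
            emit -object "csvCloud" -position ((float)$tok[0]) ((float)$tok[1]) ((float)$tok[2]);
        $line = `fgetline $fp`;
    }
    fclose $fp;
}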
7/13/08
The Story So Far...
The trickiest part so far has been the file handling. MEL has about four ways to handle file dialogs, and each seems to have its strengths. For now I only need to worry about storing directory paths.
FIRST!
What is Motion-Capture? Motion-capture (commonly known as "mocap") is a method of recording the movements of a real-world performance, usually a human. The movements are then applied to a computer model which mimics the actions of the original performance. The performer usually wears a special suit which has markers or sensors which are used to turn the movements into a numeric representation, easily understood by a computer.
Who uses Mocap? Both the entertainment industry and research entities use mocap. Many of today's video games and movies use mocap to create performances where realistic movement is desired. Researchers use mocap to study the movement of the body in detail. Since the performance is captured as data, it is infinitely repeatable and allows for greater scrutiny with fewer variables.
Why write your own? The traditional drawbacks of a commercial system are its prohibitive cost and complexity, which have limited motion capture primarily to larger studios and research entities. These barriers are on the verge of falling away due to the dropping cost of near-professional electronics and new methods of capturing motion data with said electronics. Independent animators and small studios will be able to utilize motion capture as they have other technologies that were previously unavailable to them, such as 3D animation software itself and high-end workstations.
How does it work? The program is actually a script that runs within a 3D package called Maya. The user will record the performance using off-the-shelf video cameras and bring it into the script (which I've dubbed rotoCap). The video is analyzed frame by frame to synchronize the multiple data streams and to locate the markers placed on the performer at the time of the capture session. The markers are isolated in 3D space and used to drive the performance of a computer model.