Tuesday, May 12, 2009

vfx - final project

ARTifact - finished project

here is my final result of using compression artifacting intentionally. i didn't end up using found footage for this piece.* rather, i morphed my present-day self out of a still image from when i was young. as i was working out my ideas on nostalgia, i kept thinking about multiple times existing within one moment, as well as about prolonging lost moments of the past. i love the idea of animating still photography for its application to this concept.

for the sound design, however, i did end up using found materials. although i'm getting a composer to do my music for me, i've found it fun and helpful to try to work these things out on my own.

*i've determined that using this hd-quality footage yields a different compression effect than found footage: since the pixels are smaller, the morph result is smoother and more blended-looking.

vfx - texturing and lighting


texturing is not as fun as modeling!! the plates and the tennis ball need work, but i don't think it matters for the shot i'm doing. as eric says, don't spend time on details that the camera will never see. i am pretty pleased with how the cup came out though.

the only previous lighting experience i'd had was on still objects in maya. it was a little more challenging lighting something well when it is moving through a space.

vfx - modeling



the design for my ship is the starship enterprise, made out of a plastic cup, a tennis ball, paper plates, pencils, paper clips, and toilet paper tubes. my original reason for picking this design was to motivate myself to emulate reality. i am a firm believer that in order to abstract, it's important to first learn realism - to know what you're abstracting from. besides, i figured it would be much more valuable to learn how to purposefully replicate reality, since i'd be shooting for something very specific.

i wasn't thinking about it at the time, but this design reiterates my theme of appropriation. it just goes to show how much it's a part of my creative fiber!

ARTifact notes 3: got it! ...kind of.

after hunting around on the internet for a few days, i finally came across two programs i could use to intentionally ARTifact. both are free downloads, and essentially, they're too simple to catch the errors i'm purposefully setting up. the first program is ffmpeg, where i can import a video and set up an export so that it sets an i-frame only once (at the very beginning). then i take it into avidemux - a simple editing program that also displays whether a given frame is an i-frame or a p-frame. from there it's just a matter of finding those i-frames and deleting them.
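the ffmpeg half of that workflow can be sketched as a command line. a minimal sketch, not my exact export settings: the `-g` flag (GOP size, i.e. the maximum keyframe interval) is a real ffmpeg option, but the codec choice and filenames here are placeholders i made up for illustration.

```python
# sketch: build an ffmpeg command that forces a single i-frame by
# making the GOP size enormous, so the encoder only inserts a
# keyframe at the very beginning. codec and filenames are
# illustrative, not the exact settings from this post.

def single_iframe_cmd(src, dst, gop=999999):
    """Return an ffmpeg argv list with one i-frame at the start.

    -g sets the GOP size (max frames between i-frames); a huge value
    means no further keyframes. (some encoders also insert keyframes
    at scene cuts; the flags for disabling that vary by codec.)
    """
    return [
        "ffmpeg",
        "-i", src,           # input clip
        "-vcodec", "mpeg4",  # a codec with a simple i/p-frame structure
        "-g", str(gop),      # keyframe interval: effectively "once"
        dst,
    ]

print(single_iframe_cmd("in.mov", "out.avi"))
```

from there the output avi goes into avidemux to hunt down and delete the remaining i-frames by hand.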

after weeks of frustration, my first test - though crude - felt like a huge success:

ARTifact notes 2

conveniently, just as i was getting more and more frustrated trying to intentionally replicate the look of compression artifacting, a few music videos came out within a couple weeks of each other. one was a kanye west video, but i prefer this one:


using these videos as search cues, i was able to compile research on how the effect is achieved.

the effect is colloquially referred to as "datamoshing," but that name is kinda dumb, so i opt to call it what it really is: compression artifacting. then i cleverly capitalize the ART part to emphasize the intentionality. heh heh.

first, i found an interview with the animation house that made the kanye video. the information wasn't detailed enough to be terribly useful, but it at least got me started:

1. export with no compression (quicktime).
2. take it into a program that ups the data rate significantly, then compress it into an avi codec.
- this enunciates each cut as an I-frame.
- I-frames hold the color and structure info for a series of other frames called delta frames. I-frames happen at the beginning of a cut; delta frames move the key frame through time.
3. take the compressed avi into another program that illustrates all of the keyframes, then delete the keys. tada!

unfortunately, since this didn't explain exactly what programs were used, or how to actually achieve the effect, more research was required. i was also pretty foggy on the concept of i-frames and p-frames, so i made the effort to learn more about those:

useful links:
http://popmodernism.org/appropirate/delta.html
http://download-finished.com/

questions i have: can i do this using youtube vids, since they're already so compressed? what do i do to find the keys (i-frames)?

my notes on video compression:

interframe coding - in video compression, the coding of differences between frames.
intraframe coding - compressing redundant areas within a single video frame.
a video sequence is made up of keyframes that hold the entire image. between these are delta frames, which are encoded with incremental differences. depending on the compression method, a new key frame is generated after a set number of frames or when a certain percentage of the pixels in the material has changed.
delta frame types:
p-frame: predictive/predicted frame
b-frame: bi-directional frame or bi-directional predictive frame.

in a motion sequence, individual frames are grouped and played back so the viewer registers the video as spatial motion.
b-frames rely on the frames before and after; they contain only the data that changed from the preceding frame or differ from the following frame.
p-frames follow an I-frame - they contain only the data that changed from the I-frame (color or content).
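the keyframe/delta-frame idea above can be simulated with a toy codec. a minimal sketch, not real mpeg: frames are just lists of numbers i made up, a keyframe stores the whole image, p-frames store only per-pixel differences, and decoding the p-frames against the wrong base image is exactly the "datamosh" smear.

```python
# toy i-frame/p-frame codec: the keyframe stores the full frame,
# p-frames store only per-pixel differences from the previous frame.
# a simplified illustration of the principle, not real mpeg.

def encode(frames):
    stream = [("I", list(frames[0]))]  # first frame is the keyframe
    for prev, cur in zip(frames, frames[1:]):
        # a p-frame is just the difference from the previous frame
        stream.append(("P", [c - p for p, c in zip(prev, cur)]))
    return stream

def decode(stream, base=None):
    out = []
    cur = base
    for kind, data in stream:
        if kind == "I":
            cur = list(data)  # keyframe resets the whole image
        else:
            cur = [p + d for p, d in zip(cur, data)]  # apply the diff
        out.append(list(cur))
    return out

frames = [[10, 10, 10], [10, 20, 10], [10, 30, 10]]
stream = encode(frames)
assert decode(stream) == frames  # lossless round trip

# "datamoshing": drop the i-frame and decode the p-frames against a
# different image -- the motion gets smeared onto the wrong pixels.
moshed = decode(stream[1:], base=[99, 99, 99])
print(moshed)
```

deleting the i-frame doesn't break playback; it just makes the decoder carry the old image forward while the new clip's motion plays over it.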

all frames needed for predictions are contained within one GOP (group of pictures).
- can be as small as one i-frame, not usually larger than 15 frames.
every video frame is broken into 8x8-pixel blocks of y, r-y, and b-y.
- these are grouped into 16x16 macroblocks.

b-frames require less data than p-frames, but more encoding/decoding work.

mpeg2 GOP storage order: IPBBPBBPBB IP....
always 1 I, then repeating 1 P, 2 B's.
data is stored this way even though it's out of viewing order - in display order it's really IBBPBBPBBP.... GOPs can be open or closed, which determines whether a GOP is independent and can be cut.
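that storage-vs-display reordering can be sketched in a few lines. a toy sketch assuming simple closed GOPs (no open-GOP edge cases): each anchor frame (I or P) is stored before the b-frames that reference it, but displayed after them.

```python
# reorder mpeg-style decode (storage) order into display order.
# toy model: an anchor frame (I or P) is stored ahead of the b-frames
# that reference it, so it gets swapped past the run of B's that follow.

def to_display_order(decode_order):
    out = []
    pending = None  # anchor waiting for its b-frames to display first
    for f in decode_order:
        if f == "B":
            out.append(f)  # b-frames display before their anchor
        else:
            if pending is not None:
                out.append(pending)
            pending = f
    if pending is not None:
        out.append(pending)  # flush the last anchor
    return out

print("".join(to_display_order(list("IPBBPBBPBB"))))  # IBBPBBPBBP
```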

with my youtube stuff, if i export a jpeg sequence, do these essentially all become I-frames?

bitrate - the number of bits processed per unit of time. bit = binary digit (0 or 1). 1 = on, 0 = off.
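the bitrate definition above is just arithmetic: total bits divided by duration. a quick sketch with made-up example numbers:

```python
# average bitrate = total bits / duration.
# the clip size and length here are made-up example numbers.

def avg_bitrate_kbps(size_bytes, duration_sec):
    return size_bytes * 8 / duration_sec / 1000  # bits/s -> kbit/s

# a 15 MB clip that runs 60 seconds:
print(avg_bitrate_kbps(15000000, 60))  # 2000.0 kbit/s
```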

vfx notes 4

vfx notes 2/23: 3d tracking

possible to derive point data for scene reconstruction from photography

2 types of tracking:

1. CAMERA TRACKING
-re-establishes position of camera in 3d space.
-allows locking of moving 3d objects into static live-action scene.
-allows set extension of static live-action object.

2. OBJECT TRACKING
-re-establishes motion of object in action in 3d space
-allows locking of 3d objects onto moving live-action objects.

use boujou for motion tracking. camera tracking is easy! object tracking, not as easy.

vfx notes 3


vfx class notes 2/9: rotoscoping and compositing

compositing
-prepping incoming elements, i.e. color correction, stabilization, dust removal
-modifying elements, i.e. transforms, dewarping, lens distortion
-keying mattes from green screen plates
-assembling elements - overs, adds, multiplies
-adding effects, i.e. haze, blur, film grain, flares
-prepping for film output - setting LUT, grain match
-can be highly creative or assembly-line tedium
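the merge operations in that list boil down to simple per-pixel math. a minimal sketch of the math, done independently of nuke itself: pixels are premultiplied rgba tuples of 0.0-1.0 floats, and the classic "over" operator is A + B * (1 - alpha of A).

```python
# per-pixel merge operators on premultiplied rgba tuples (0.0-1.0
# floats). this is the compositing math, sketched outside of nuke.

def over(a, b):
    """A over B for premultiplied pixels: A + B * (1 - A.alpha)."""
    inv = 1.0 - a[3]
    return tuple(ac + bc * inv for ac, bc in zip(a, b))

def add(a, b):
    return tuple(ac + bc for ac, bc in zip(a, b))

def multiply(a, b):
    return tuple(ac * bc for ac, bc in zip(a, b))

fg = (0.5, 0.0, 0.0, 0.5)  # premultiplied half-transparent red
bg = (0.0, 0.0, 1.0, 1.0)  # opaque blue
print(over(fg, bg))        # (0.5, 0.0, 0.5, 1.0)
```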

becoming more 3d-like in scope. nuke contains a 3d renderer and camera!
application for compositing: NUKE!
-node-based, not centrally time-based like after effects
-more efficient, faster, capable of large production work
-very creative workflow, but you need to know technical aspects of the images used.
-developed by digital domain 15 years ago
-very fast. UI not as accessible but it's getting better.

basic comp skills we'll cover:
-Nuke UI
-image formats
-merge operations
-color corrections
-keying green screen
-roto

rotoscoping
-cutting mattes manually
-necessary when green screen was not or could not be used in the shoot.
-often necessary from location plates
-sky replacement is a typical use.

in rotoscoping, you never want a sharp edge - about a 2 pixel blur.

NUKE NOTES
open session = "script"
right click for file menu
image -- read (hotkey R)
find footage
floating thumbnail node - drag from viewer 1 to thumbnail
for viewing - render - flipbook selected
pencil icon: bezier (make sure it's clicked)
(tapping space bar - fullscreens whichever window is selected)
hold down ctrl+alt to put down pen points. creates a white shape. in bezier tools window (right side of screen) adjust output. uncheck RGB but leave alpha channel.
change channels at top of viewer window.
right click and convert vertices to cusp (also can smooth, break) - turns into a rectangle.
alt+ctrl to add another point
hit "A" in viewer to see alpha channel
ctrl + drag pts to feather just one part.
automatically has autokey on.
select 2 pts. + handle to move, rotate, scale both together
set keyframes and it interpolates between
set proxy mode to speed up playback
once done, detach image from viewer, so you just have alpha channel of the final bezier.

rendering - little film frame - click on it!
"w" to write
click on folder on right
decide what to name it
make adjustments
"render" dialogue box: "1, 133" means to render between frames 1-133.
naming: foo.%04d.jpg
save scripts as nk file.
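the foo.%04d.jpg naming above is printf-style frame padding - the %04d expands to a zero-padded 4-digit frame number. a quick sketch of what it produces:

```python
# frame padding: %04d pads the frame number to 4 digits with zeros,
# which keeps image sequences sorted correctly on disk.

pattern = "foo.%04d.jpg"
names = [pattern % f for f in (1, 2, 133)]
print(names)  # ['foo.0001.jpg', 'foo.0002.jpg', 'foo.0133.jpg']
```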

vfx notes 2

vfx classnotes 2/2

vfx shot vs. standard shot
-can't be accomplished thru standard photography
-needs repair of problems
-needs addition of other elements (pyrotechnics, etc.)
-typically very expensive: $4000/sec.

storyboard to identify vfx - necessary shots

early in-camera techniques:
-split screen
-forced perspective
-stop action
-adjusting speed
-double exposure/ traveling mattes
-miniatures

vista vision (aka 8 perf): rotate 35mm for a bigger frame. sacrifices sound.

rear projection: gate weave causes jitter in the rear-projected bg.
-double grain from shooting projected film

front projection: scotch-lite = what the screens are made of. same thing as stop signs.

optical printing: pre-exposed film and mattes to composite images.

matte painting: done on glass in front of the shot - lighting changes through the day, so paint ambiguous light and shadows. still, the image loses life, so.... vfx to bring it back!

parallax: relative motion of objects in relation to the camera. things close to you move fast!
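the parallax note above can be put in numbers: for a sideways camera move, the on-screen shift of a point is proportional to 1 / distance. a toy sketch with a pinhole-camera model; the focal length and distances are made-up numbers.

```python
# pinhole-camera parallax: when the camera moves sideways by
# `baseline`, a point at distance z shifts on screen by
# focal * baseline / z -- so nearer objects shift more.

def screen_shift(focal, baseline, z):
    return focal * baseline / z

near = screen_shift(focal=50.0, baseline=1.0, z=2.0)    # 25.0
far = screen_shift(focal=50.0, baseline=1.0, z=100.0)   # 0.5
print(near, far)  # the close object moves 50x faster here
```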

ARTifact notes 1





this was my first attempt at recreating the look of compression artifacting. i started in photoshop, developing a stylized version of this pixelated look with still frames from youtube videos. the plan was to then move into after effects and create the effect over time. here are two of the still images i created.

when i moved into after effects, the results were far less successful. i was keyframing the "mosaic" filter by pixel size and layering varying animations over rotoscoped youtube clips. i was hoping for something akin to these still images, with their subtle layers of ghostly pixels, where the image digitally melts out of the frame. but moving over time, it was chaotic and completely ungraceful. it looked awful. so awful, in fact, that i apparently erased all evidence of it from my computer. but i do have some notes on my workflow on record:

image height: 720
copy layer
levels: R = 160
copy again
filter: mosaic 17
cut out around both
less pixely layer on top: "darken" blend mode.
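the two photoshop operations in those notes - the mosaic filter and the "darken" blend mode - can be sketched in plain python. a toy sketch, not photoshop's actual implementation: a grayscale image is a list of rows of 0-255 ints, mosaic averages each NxN cell into one flat value, and "darken" keeps the darker pixel of the two layers.

```python
# toy versions of the two photoshop operations from the notes:
# "mosaic" (blocky pixelation) and the "darken" blend mode, on
# grayscale images represented as lists of rows of 0-255 ints.

def mosaic(img, cell):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = [img[j][i]
                     for j in range(y, min(y + cell, h))
                     for i in range(x, min(x + cell, w))]
            avg = sum(block) // len(block)  # one flat color per cell
            for j in range(y, min(y + cell, h)):
                for i in range(x, min(x + cell, w)):
                    out[j][i] = avg
    return out

def darken(top, bottom):
    # "darken" blend mode: per pixel, keep whichever layer is darker
    return [[min(t, b) for t, b in zip(tr, br)]
            for tr, br in zip(top, bottom)]

img = [[0, 100], [200, 100]]
print(mosaic(img, 2))               # [[100, 100], [100, 100]]
print(darken(img, mosaic(img, 2)))  # [[0, 100], [100, 100]]
```

layering the less-pixely copy on top with "darken" is what lets the sharper image ghost through the blocks.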

notes on my notes

i have decided to post my notes from vfx, both because going through them again will help me better absorb the information, and because it is honest documentation of all that i've learned this semester.

while i was learning maya and nuke in eric's class, i was also trying to learn how to exploit youtube-quality video for its characteristic blocky, compressed, artifacted look. so i will be interspersing my class notes with my independent research on how to achieve this compression effect intentionally and artistically. these will be posted in the order they were recorded/learned.

vfx notes 1

vfx class notes 1/26: vfx history and maya basics.

types of effects:
special - explosions, etc (on camera)
visual - post production (usually computer, but can also be practical)
practical - physical

keyboard shortcuts:
translate = w
rotate = e
scale = r

IPR render allows you to see any updates between renders

marking menu
channels box
mel commands - script commands
ctrl A = attribute editor

edit - delete (by type or all) -- use for modeling

display menu - hide menu allows you to hide particular elements

window - general editor, script editor, hypershade

render settings - export as frames (tifs)
"frame padding" = number of digits assigned to file name
4 = 0003, 2 = 03, etc.

when lighting, always uncheck "enable default lights"





Monday, May 11, 2009

appropriate! (the verb, not the adjective)

coming into this semester, i was immensely interested in using found footage combined with real objects. appropriation in general is very appealing to me. from recycling would-be trash (cereal boxes, dead light bulbs, etc.) to long-lost commercials found on youtube, i find it challenging, exciting, and rewarding to make new art from old crap.

i'm a collector, or a pack-rat if you want to call me names. i hold on to things forever, thinking there might be a use for them some time down the road. (and the occasional truth of this philosophy is why i just continue to amass all this junk.) furthermore, we as humans are continuously told to "reuse! recycle!" lest we bury ourselves in our own consumer by-products. i find this sentiment particularly poignant as we enter this age of disposable media, wherein we're virtually buried, or oversaturated, by online EVERYTHING. i think my brain melts a little in such instances where i go on ebay and buy the old tape recorder that someone held onto for 30 years, thinking that i'll use it to make some new work of art, only to learn that i can't because it doesn't work properly, but man it looks cool, so i should keep it just in case i want to put it on display when i one day have a shelf for it, but it did come in that sturdy box that i'll kick around my dining room floor until i use it to make a set for a different animation which i will capture digitally and put online for someone to find and download and clutter up their hard drive until they eventually put it in the "recycle bin," which ironically permanently erases it.

seriously, i find these connections very fascinating. of course, the more i try to develop imagery with found materials, the more i become aware of the limitations:

1. inflexibility - getting pre-existing objects and video to do something new. i am referring here to visual "newness," as opposed to the perceptual newness that i feel is almost automatically achieved through juxtaposition of disparate, found elements. i seek to manipulate the video and objects so that they embody the fluidity and dynamism of animation, while maintaining their original appeal.

2. integrating set video into a scene - getting found video to look like something other than a window of youtube within a bigger frame, and i don't mean just putting an after effects filter over it! old video has as much evocative power as old toys, electronics, and other objects. however, i think it's a whole other challenge to create original, moving imagery from pre-existing, moving images.

3. actually acquiring the exact objects that i imagine - for example, i would love to create imagery with one of those late-70s/early-80s tvs with wood-paneled sides and big silver knobs. however, i am having a surprisingly hard time finding one for purchase. apparently no one wants to pay shipping for an item that size.

so... i thought i would look to Eric's class for potential ways to get around these limitations. visual effects is the blending of separate elements into a whole new, seamless scene. i love that the industry's history is rooted in magic and visual trickery and making things happen that no one had ever seen before.

Saturday, May 9, 2009

establish!

i have created this blog as a record of what i've learned in eric hanson's ctan 462 vfx class. my aim for taking the class was to expose myself to the world of 3d visual effects (about which i knew very little), in hopes that i might learn some new tricks that would allow me to delve deeper into the concepts i am exploring in my thesis.

going into this past semester, i felt that i was limiting my ideation by only thinking in terms of the things i already knew (i.e. traditional techniques and after effects). however, it felt as though any time i explained what i would like to do - visually or conceptually - i was told by my classmates, "oh yeah, that would be totally easy in maya!" basically, i wanted the opportunity to expand my conceptual palette into... the third dimension!!!!! (wishing i had some way to illustrate an audio reverb technique in html.)

it was especially important for me to do this now, in the development stages of thesis, since this knowledge could potentially shape the process.