2023/09/11


- keyframe animation
→ most of the work done by an animator, “artist”
→ animations explicitly created by setting keyframe positions over time
→ software interpolates between the keyframes
→ requires a lot of skill
→ “technical artists”: unicorns, people who are artistically and technically skilled
→ evaluating the quality of animated artwork is easy for us because we can tell when human motion looks natural, but the motion itself is very complicated
→ he's showing some software for keyframe animation but I didn't catch the name
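A minimal sketch of the interpolation step the software does between keyframes (plain linear interpolation; the keyframe times and values below are hypothetical, and real tools usually offer spline easing too):

```python
def lerp(a, b, t):
    """Linearly interpolate between a and b; t in [0, 1]."""
    return a + (b - a) * t

def sample(keyframes, time):
    """Evaluate a keyframed value at an arbitrary time.
    keyframes: list of (time, value) pairs sorted by time."""
    # clamp outside the keyed range
    if time <= keyframes[0][0]:
        return keyframes[0][1]
    if time >= keyframes[-1][0]:
        return keyframes[-1][1]
    # find the surrounding pair of keys and interpolate between them
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            return lerp(v0, v1, (time - t0) / (t1 - t0))

# e.g. an object's x-position keyed at frames 0, 10, and 20
keys = [(0, 0.0), (10, 5.0), (20, 5.0)]
print(sample(keys, 5))   # halfway between the first two keys → 2.5
```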
- procedural animation
→ most of the work done by the computer, or programmer
→ animator has to set up the initial simulation conditions, then press play
→ objects' positions and shapes are controlled by:
⇒ particle, flocking, crowd systems
⇒ cloth, fire, water, smoke simulations
→ physics stuff: we spend most of our time making the calculations efficient, but we know how to do it
→ particle stuff: we have known algorithms that can look believable
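A toy sketch of the “set up initial conditions, then press play” idea for a particle system (the emitter parameters and simple Euler integration here are assumptions, not from the lecture):

```python
import random

GRAVITY = (0.0, -9.8)  # metres per second squared

def spawn_particle():
    # hypothetical emitter at the origin with a randomized launch velocity
    return {
        "pos": [0.0, 0.0],
        "vel": [random.uniform(-1, 1), random.uniform(2, 5)],
        "life": 2.0,  # seconds until the particle dies
    }

def step(particles, dt):
    """Advance every particle by dt seconds (simple Euler integration),
    then cull the ones whose lifetime has expired."""
    for p in particles:
        p["vel"][0] += GRAVITY[0] * dt
        p["vel"][1] += GRAVITY[1] * dt
        p["pos"][0] += p["vel"][0] * dt
        p["pos"][1] += p["vel"][1] * dt
        p["life"] -= dt
    return [p for p in particles if p["life"] > 0]

particles = [spawn_particle() for _ in range(100)]
for frame in range(60):           # "press play": just step the simulation
    particles = step(particles, 1 / 60)
```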
- motion capture animation
→ most of the work is done by a human actor
→ human joint angles are recorded from a live model via sensors
→ often needs touching up: filtering, editing, retargeting, stitching
⇒ retargeting: editing the motion when you want to change the size or shape of something, e.g. animating a 10 foot troll with a normal sized model
⇒ stitching: getting pieces of motion to respond to player input, like a basketball player in a game
→ the systems for this are not perfect but they're getting pretty good


History of Computer Animation
- early pioneering work
- wireframe graphics:
→ renders the outer lines of an object
→ don't worry about shading or lighting
- project sketchpad
→ Ivan Sutherland's PhD thesis at MIT
→ interactive display loop
→ user moves light pen, computer responds

- 70s: introduction of raster graphics
- beginning of graphics standards
→ up to this point every university had their own approach from the hardware up
→ GKS: European effort
→ Core: north american effort

- Raster graphics: image produced as an array of pixels
- shading algorithms:
→ Gouraud shading
→ anti-aliasing
→ Phong shading
→ curved surfaces

- 80s: Silicon Graphics 3D geometry engine
- industry based standards
→ PHIGS
→ RenderMan
⇒ Pixar; some form of it is still used today
- texture, environment and bump mapping
- environment mapping and bump mapping lead to more “realistic” renderings
→ takes 3 days to render
- environmental mapping:
→ take a sphere, put a camera in it, and do a 360° camera shot; you get an elongated panoramic picture
→ if you see some object, you put it on your panoramic image and then wrap that around the sphere, bending it to make it look like the light is affecting it. but it's static: the reflection won't change as you move closer to the object
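The lookup into that static panoramic image is driven by the mirror-reflection direction at each surface point. A minimal sketch of just that reflection step (the actual mapping from the reflected direction into sphere-map texture coordinates is omitted):

```python
def reflect(d, n):
    """Mirror-reflect direction d about unit surface normal n:
    r = d - 2(d·n)n. Used to pick where to sample the environment map."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# a ray travelling straight down, hitting an upward-facing surface,
# bounces straight back up
print(reflect((0, -1, 0), (0, 1, 0)))  # → (0, 1, 0)
```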
- bump mapping:
→ mess around with the “normal” angles of light hitting the object, modulate it so it looks like there is texture on the surface
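A toy version of that normal-perturbation idea, where the slopes of a hypothetical height map tilt the normal before lighting is computed (real bump mapping works in tangent space per pixel; this just shows the principle):

```python
def perturbed_normal(normal, du, dv):
    """Tilt a surface normal by the height-map slopes du, dv,
    then re-normalize, so lighting "sees" bumps on a flat surface."""
    nx = normal[0] - du
    ny = normal[1] - dv
    nz = normal[2]
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# a flat-up normal with no slope is unchanged; a slope tilts it sideways
print(perturbed_normal((0.0, 0.0, 1.0), 0.0, 0.0))  # → (0.0, 0.0, 1.0)
```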
- global illumination
→ ray tracing: get light reflections
→ radiosity: get soft shadows and colours reflecting

- 90s:
- openGL API
- completely computer generated movies were successful
- industry and academia have graphics research labs
- new techniques:
→ non-photorealistic images
⇒ either take an existing image and process it so it looks painted, or start with 3d models and then render them as if they're painted
→ quicktime VR
→ cloth, fire animation
⇒ physics, tweaking of parameters
⇒ get the deltas (change per frame) right or else it looks weird
→ human animation


- 2000s:
- graphics cards for PCs dominate the market
- gaming systems determine direction of the market
- computer graphics routine in movie industry
→ e.g. Downton Abbey, filmed in London.. computer graphics used to remove clutter and stuff
- blurring of hardware and software
→ start to see programmable GPUs
→ more of the rendering pipeline is placed on GPU
→ vertex shaders: graphics processing functions which manipulate object positions on the GPU
→ pixel shaders: small programs on the GPU which process pixels
- start to see non-photorealistic animation
- sub surface scattering:
→ brought a modern look to videogames and movies
→ prior to this we dealt with light reflection only on the surface of things
→ subsurface scattering deals with light that travels inside an object; handles translucent materials
→ e.g. gollum





Rendering Pipeline


- digital animation is kinda like a flipbook
- it's an ordered sequence of digital images

Photographic image
- 2d functions, the value at any point is a colour
Digital image
- conversion to discrete pixels? idk, he changed the slide

- At each pixel an RGB triplet is stored.
- by mixing RGB we can generate most perceivable colours

- frame buffer: 2d array of RAM that stores the colour values for each pixel in the displayed image
- colour vs greyscale: whether we have 3 separate colour channels (RGB) or one gray level channel
- resolution: number of pixels in frame buffer (w x h)
- colour depth: number of bits used for each pixel. determines how many colours can be shown
→ 1 bit : 2 colours
→ 8 bit: 256 colours
→ 24 bit: 8 bits per colour channel
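The arithmetic behind those bullet points: a pixel with n bits can take 2^n distinct values, and the frame buffer size follows directly from resolution × depth. A quick sketch:

```python
def num_colours(bit_depth):
    """Number of distinct colours one pixel can store: 2^bits."""
    return 2 ** bit_depth

def framebuffer_bytes(width, height, bit_depth):
    """Raw frame-buffer size in bytes for a given resolution and depth."""
    return width * height * bit_depth // 8

print(num_colours(1))     # 2
print(num_colours(8))     # 256
print(num_colours(24))    # 16777216, i.e. 256 levels per RGB channel
print(framebuffer_bytes(1920, 1080, 24))  # 6220800 bytes, about 6 MB
```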

- image rendering model:
→ how do we artificially generate digital images?
→ form 2d images using a process analogous to how images are formed by real cameras
⇒ virtual camera, virtual light...
→ basic scene elements:
⇒ light source
⇒ object
⇒ viewer
⇒ surface materials

- types of light sources
→ point light: just a point in space (x, y, z position). emanates light in all directions equally, like a bare light bulb
→ spot light: x, y, z location plus a direction vector. a cone of light, like a flashlight.
→ directional: light at an infinite distance, parallel rays. approximating the sun. no positional data.
→ ambient: this is a hack. we're not dealing with global illumination, not bouncing light rays around. so what happens when you look under a desk that's not directly illuminated? without ambient light it would be an abyss. we approximate that by saying there's some minimum amount of light in a scene.
⇒ will talk about ambient occlusion later
→ area light: not something we're going to deal with... a point on a surface gets light from part of the area light but not from other parts. this creates gradients of light underneath the object and a soft shadow rather than a hard one.
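A minimal sketch of how a point light and the ambient “hack” combine at one surface point, using the standard Lambertian diffuse term (the function names and the 0.1 ambient floor are my own choices, not from the lecture):

```python
def diffuse(light_pos, surface_pos, normal, intensity=1.0):
    """Lambertian diffuse term for a point light: max(0, N·L) * intensity.
    normal is assumed to be unit length."""
    l = [lp - sp for lp, sp in zip(light_pos, surface_pos)]
    length = sum(c * c for c in l) ** 0.5
    l = [c / length for c in l]                      # unit vector to the light
    n_dot_l = sum(n * c for n, c in zip(normal, l))  # cosine of incidence angle
    return max(0.0, n_dot_l) * intensity

def shade(light_pos, surface_pos, normal, ambient=0.1):
    """The ambient term is the hack: a minimum light level added everywhere,
    so surfaces facing away from the light aren't a pure black abyss."""
    return min(1.0, ambient + diffuse(light_pos, surface_pos, normal))
```

With the light directly above an upward-facing surface the diffuse term is 1.0; with the light below it, only the ambient floor remains.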

- objects
→ GPUs love triangles
→ point
→ line
→ 2d curve : defined by control points and a method of interpolation. the points might influence the line or anchor it
→ polygons: any number of points and edges
→ 3d curved surfaces : might have a set of control points and an interpolation through them that will create the 3d shape
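One concrete instance of “control points plus a method of interpolation” is a Bézier curve evaluated by de Casteljau's algorithm: the endpoints anchor the curve, the interior points only influence it. A small sketch (the control points below are hypothetical):

```python
def bezier_point(controls, t):
    """Evaluate a Bezier curve at t in [0, 1] by de Casteljau's algorithm:
    repeatedly lerp between neighbouring control points until one remains."""
    pts = [tuple(p) for p in controls]
    while len(pts) > 1:
        pts = [
            tuple(a + (b - a) * t for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# quadratic curve anchored at (0,0) and (2,0), pulled toward (1,2);
# note the curve passes through the anchors but not the middle point
print(bezier_point([(0, 0), (1, 2), (2, 0)], 0.5))  # → (1.0, 1.0)
```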

- surface materials define reflective and transmissive properties of surfaces
→ color
→ texture
→ specular highlight : shininess
→ reflectivity : mirror-like surfaces
→ roughness : bumpy surfaces
→ transparency : glass-like-ness lol
→ translucence : sub surface scattering

- ray tracing (not in openGL)
→ way to form images by tracing rays of light from a light source to determine which rays will enter the camera
→ each ray of light might have multiple interactions with objects, being partially or fully absorbed, before entering the camera
→ ray tracing produces really high quality images
- in the rasterization pipeline we don't project light rays, we squash (project) objects onto the image plane
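The core geometric test inside a ray tracer is ray-object intersection. A minimal sketch for a sphere, solving |o + t·d - c|² = r² for t via the quadratic formula (scene values in the example are hypothetical):

```python
def hit_sphere(origin, direction, center, radius):
    """Ray-sphere intersection: returns the nearest positive ray
    parameter t where the ray hits the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c          # discriminant: < 0 means no hit
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# camera at z = -5 looking down +z at a unit sphere on the origin
print(hit_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1))  # → 4.0
```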

- camera specification
→ six degrees of freedom:
⇒ position of center of projection
• x, y, z
⇒ orientation
• yaw, pitch, roll
→ virtual cameras also have a limited FOV; objects outside of it are “clipped”



CONTINUED 2023/09/13 - 11:55

Image rendering pipeline:



vertices → transformer → clipper → projector → rasterizer → pixels

- physics engine is middleware, so that for every game you don't have to re-calculate all the physics.
- usually goes to the GPU

TXAA
- anti aliasing: averaging pixels on both space and time
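The “over time” half of that averaging can be sketched as an exponential moving average per pixel: each new frame is blended with the accumulated history, so shimmering edges settle down across frames (the blend weight and the spatial-filter half are omitted assumptions here):

```python
def txaa_resolve(history, current, blend=0.1):
    """Blend the current frame's pixel colour with the accumulated
    history buffer (temporal part of TXAA; spatial filtering omitted)."""
    return tuple(h * (1 - blend) + c * blend
                 for h, c in zip(history, current))

# a pixel that flickers between red and black converges toward the average
pixel = (1.0, 0.0, 0.0)
for _ in range(10):
    pixel = txaa_resolve(pixel, (0.0, 0.0, 0.0))
```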

Ambient occlusion:
- look at the normals of the object, if you have overlapping normals then you're probably in a crevice and you can assume there's shadow there

This is an example where the technical method has an impact on the aesthetic effect


In this class we'll use openGL to do the rendering for us and not implement any rendering ourselves
We the programmers will write the application and use the openGL API as access to the GPU


API Contents:
- functions that specify and alter scene elements like objects, the viewer, ...
→ ex: object specification
⇒ openGL has some primitives including points (0D), line segments (1D), and polygons (2D)
⇒ can also define some curves and surfaces (quadrics: spheres, cylinders, cones)
