AR and VR Engineering Unit 3 – Computer Graphics Basics for AR/VR

Computer graphics is the backbone of AR and VR, transforming 3D models into immersive visual experiences. From rasterization to ray tracing, shaders to texture mapping, these techniques create realistic digital worlds that respond to user input in real time. AR/VR graphics push the boundaries of traditional rendering, employing specialized techniques such as stereoscopic rendering, foveated rendering, and spatial mapping. These methods optimize performance and enhance the illusion of depth and presence, which is crucial for creating convincing virtual environments.

Key Concepts and Terminology

  • Computer graphics involves the creation, manipulation, and rendering of visual content using computers
  • Rasterization converts geometric primitives (typically triangles) into a raster image (pixel grid) for display on a screen (see the sketch after this list)
  • Ray tracing simulates the physical behavior of light to produce highly realistic images but is computationally intensive
  • Shaders are programs that run on a GPU to determine the final appearance of pixels in a rendered image
    • Vertex shaders manipulate the attributes of vertices in a 3D model
    • Fragment shaders calculate the color and other properties of individual pixels
  • Texture mapping applies 2D images or patterns to the surface of a 3D model to add detail and realism
  • Anti-aliasing techniques (MSAA, FXAA) reduce the appearance of jagged edges in rendered images
  • Level of detail (LOD) adjusts the complexity of 3D models based on their distance from the camera to improve performance
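
To make the rasterization bullet concrete, here is a minimal software rasterizer in Python using edge functions. This is an illustrative sketch only: real GPUs implement rasterization in hardware with fill rules, sub-pixel precision, and perspective correction.

```python
# Minimal sketch of triangle rasterization with edge functions.

def edge(ax, ay, bx, by, px, py):
    """Signed area of (b-a) x (p-a); the sign tells which side of a->b p is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Yield integer pixel coordinates covered by a 2D triangle.

    Assumes consistent (counter-clockwise) winding in this convention.
    """
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    # Clamp the bounding box to the screen, then test each pixel centre.
    x_min, x_max = max(int(min(xs)), 0), min(int(max(xs)) + 1, width)
    y_min, y_max = max(int(min(ys)), 0), min(int(max(ys)) + 1, height)
    for y in range(y_min, y_max):
        for x in range(x_min, x_max):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                yield x, y

pixels = list(rasterize_triangle((10, 10), (60, 15), (30, 50), 80, 64))
```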

Fundamentals of Computer Graphics

  • Computer graphics pipeline consists of stages that transform 3D models into a 2D image
    1. Vertex processing applies transformations to the vertices of a 3D model
    2. Primitive assembly creates geometric primitives (triangles) from the transformed vertices
    3. Rasterization converts the primitives into fragments (pixels)
    4. Fragment processing applies shading and texturing to determine the final color of each pixel
  • Coordinate systems in computer graphics include local (object), world, view (camera), and screen space
  • Matrices are used to represent transformations (translation, rotation, scaling) in 3D space
  • Homogeneous coordinates add a fourth component (w) to 3D points so that translations, rotations, scalings, and projections can all be expressed as 4×4 matrix multiplications (see the sketch after this list)
  • Clipping removes portions of geometry that fall outside the view frustum to improve performance
  • Z-buffering determines which fragments are visible based on their depth from the camera
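
The matrix and homogeneous-coordinate bullets above can be made concrete with a short NumPy sketch, assuming the column-vector convention (p' = M · p) used by OpenGL-style pipelines:

```python
# Sketch of homogeneous 4x4 transforms (column-vector convention).
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]  # translation lives in the fourth column
    return m

def rotation_y(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# Compose a model transform: scale first, then rotate, then translate.
model = translation(0, 0, -5) @ rotation_y(np.pi / 4) @ scaling(2, 2, 2)

p_local = np.array([1.0, 0.0, 0.0, 1.0])  # w = 1 marks a point
p_world = model @ p_local
```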

3D Modeling and Rendering

  • Polygonal modeling represents 3D objects using a mesh of polygons (usually triangles)
    • Vertices define the corners of each polygon
    • Edges connect the vertices to form the outline of the polygon
    • Faces are the filled areas within the edges
  • NURBS (Non-Uniform Rational B-Splines) modeling uses mathematical curves and surfaces for smooth, organic shapes
  • Subdivision surface modeling starts with a low-poly mesh and recursively subdivides it to create a smooth surface
  • Physically based rendering (PBR) simulates the interaction of light with materials based on their physical properties (see the sketch after this list)
    • Metalness determines how much a material behaves like a metal (reflective, no diffuse reflection)
    • Roughness controls the microscopic surface irregularities that affect how light scatters
  • Global illumination techniques (radiosity, path tracing) account for indirect lighting and reflections between objects
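
As a sketch of how the metalness parameter above is commonly used, the snippet below splits a base color into diffuse and specular terms. It assumes the widespread PBR convention that dielectrics reflect roughly 4% of light at normal incidence (F0 ≈ 0.04); the specifics vary between engines.

```python
# Sketch of metalness/roughness material terms (common PBR convention).

def pbr_base_terms(base_color, metalness):
    """Split an RGB base colour into diffuse albedo and specular F0."""
    # Metals have no diffuse component; their specular is tinted by base colour.
    diffuse = tuple(c * (1.0 - metalness) for c in base_color)
    f0 = tuple(0.04 * (1.0 - metalness) + c * metalness for c in base_color)
    return diffuse, f0

# A half-metallic, gold-ish surface:
diffuse, f0 = pbr_base_terms((1.0, 0.78, 0.34), metalness=0.5)
```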

Lighting and Texturing

  • Lighting models simulate the behavior of light in a virtual scene (the three terms below are combined in the sketch after this list)
    • Ambient lighting provides constant, omnidirectional illumination
    • Diffuse lighting varies based on the angle between the surface normal and the light direction
    • Specular lighting creates highlights based on the viewer's position relative to the light and surface
  • Shadow mapping renders the scene from the light's perspective to determine which areas are occluded
  • Normal mapping uses a texture to add surface detail without increasing the polygon count
  • Bump mapping perturbs the surface normals to create the illusion of depth and texture
  • Displacement mapping, unlike bump or normal mapping, modifies the actual geometry of the surface based on a height map
  • UV mapping assigns 2D texture coordinates to the vertices of a 3D model
  • Mip mapping pre-calculates scaled versions of textures to avoid aliasing artifacts when viewed at a distance
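
The ambient, diffuse, and specular terms above combine into the classic Blinn-Phong model. Here is a minimal Python sketch, assuming all direction vectors are normalized and intensities are floats in [0, 1]:

```python
# Minimal sketch of ambient + diffuse + specular (Blinn-Phong) lighting.
import numpy as np

def blinn_phong(normal, light_dir, view_dir,
                ambient=0.1, light_color=1.0, shininess=32.0):
    # Diffuse: falls off with the angle between normal and light direction.
    diffuse = max(np.dot(normal, light_dir), 0.0)
    # Specular: highlight where the half-vector aligns with the normal.
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)
    specular = max(np.dot(normal, half_vec), 0.0) ** shininess
    return ambient + light_color * (diffuse + specular)

n = np.array([0.0, 1.0, 0.0])                  # surface faces up
l = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)   # light from above and behind
v = np.array([0.0, 1.0, 0.0])                  # viewer directly overhead
intensity = blinn_phong(n, l, v)
```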

Animation Principles

  • Keyframing defines the starting and ending states of an animation, with the computer interpolating the intermediate frames (see the sketch after this list)
  • Forward kinematics computes the positions of child joints from the positions and rotations of their parents, working down the skeletal hierarchy
  • Inverse kinematics solves for the rotations of parent joints needed to place a child (the end effector) at a desired position, such as planting a foot on the ground
  • Rigging creates a hierarchical skeleton (rig) that controls the deformation of a 3D model during animation
  • Skinning associates each vertex in a 3D model with one or more bones in the rig, allowing it to deform with the animation
  • Motion capture records the movements of real actors and applies them to digital characters
  • Procedural animation generates motion based on rules and algorithms rather than pre-defined keyframes
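
Here is a minimal sketch of keyframe interpolation, using plain linear interpolation between the two keys surrounding the sample time; production engines typically substitute easing curves or splines:

```python
# Sketch of keyframe sampling with linear interpolation.
import bisect

def sample_keyframes(keys, t):
    """keys: sorted list of (time, value) pairs; returns the value at time t."""
    times = [k[0] for k in keys]
    if t <= times[0]:
        return keys[0][1]          # clamp before the first key
    if t >= times[-1]:
        return keys[-1][1]         # clamp after the last key
    i = bisect.bisect_right(times, t)        # index of the first key after t
    (t0, v0), (t1, v1) = keys[i - 1], keys[i]
    alpha = (t - t0) / (t1 - t0)             # normalized position between keys
    return v0 + (v1 - v0) * alpha

# Animate an x position from 0 to 10 and back over two seconds:
keys = [(0.0, 0.0), (1.0, 10.0), (2.0, 0.0)]
x = sample_keyframes(keys, 0.25)             # -> 2.5
```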

AR/VR-Specific Graphics Techniques

  • Stereoscopic rendering creates two slightly offset views of a scene to simulate depth perception in VR (see the sketch after this list)
  • Foveated rendering reduces the resolution of the image in the peripheral vision to improve performance
  • Timewarp (reprojection) re-projects the most recently rendered frame using up-to-date head-tracking data just before display, reducing perceived latency and judder
  • Spatial mapping creates a virtual representation of the real-world environment for AR applications
  • Occlusion handling ensures that virtual objects are correctly obscured by real-world objects in AR
  • Photogrammetry reconstructs 3D models from a series of photographs taken from different angles
  • SLAM (Simultaneous Localization and Mapping) algorithms allow AR devices to track their position and map the environment in real-time
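
A minimal sketch of the stereoscopic offset: each eye's view matrix is the head-tracked view shifted by half the interpupillary distance (IPD). The 0.063 m default IPD and the sign convention here are assumptions for illustration:

```python
# Sketch of stereoscopic rendering: one view matrix per eye.
import numpy as np

def eye_view_matrix(head_view, eye, ipd=0.063):
    """Offset a head-centred 4x4 view matrix for the 'left' or 'right' eye."""
    offset = np.eye(4)
    half = ipd / 2.0
    # A view matrix moves the world, so shift it opposite to the camera:
    # +half shifts the world right, placing the camera half an IPD to the left.
    offset[0, 3] = half if eye == "left" else -half
    return offset @ head_view

head_view = np.eye(4)                        # head-tracked camera at the origin
left_view = eye_view_matrix(head_view, "left")
right_view = eye_view_matrix(head_view, "right")
# Each view feeds its own render pass; the two images differ only by this
# horizontal parallax, which the brain fuses into a sense of depth.
```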

Performance Optimization

  • Level of detail (LOD) techniques reduce the complexity of 3D models based on their distance from the camera (see the sketch after this list)
  • Occlusion culling avoids rendering objects that are hidden behind other objects
  • Frustum culling skips rendering objects that are outside the camera's field of view
  • Batching combines multiple draw calls into a single call to reduce CPU overhead
  • Instancing renders multiple copies of the same object with a single draw call, varying their properties using instanced data
  • Texture compression reduces the memory footprint and bandwidth requirements of textures
  • Mip mapping generates pre-scaled versions of textures to improve cache coherence and reduce aliasing artifacts
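
Distance-based LOD selection can be as simple as a threshold table; the mesh names and distances below are illustrative placeholders:

```python
# Sketch of distance-based LOD selection: pick the cheapest mesh
# whose distance threshold the object has crossed.

LODS = [
    (10.0, "rock_high.mesh"),     # closer than 10 m: full detail
    (30.0, "rock_medium.mesh"),   # 10-30 m: reduced detail
    (80.0, "rock_low.mesh"),      # 30-80 m: coarse mesh
]
FALLBACK = "rock_billboard.mesh"  # beyond 80 m: flat impostor

def select_lod(distance):
    for max_dist, mesh in LODS:
        if distance < max_dist:
            return mesh
    return FALLBACK

mesh = select_lod(45.0)           # -> "rock_low.mesh"
```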

Hands-on Projects and Applications

  • Develop a basic 3D scene with lighting, texturing, and camera controls using a graphics engine (Unity, Unreal)
  • Implement a custom shader that applies a visual effect (e.g., toon shading, holographic material) to a 3D model (the core toon-shading math is sketched after this list)
  • Create an AR application that places virtual objects in the real world using marker-based or markerless tracking
  • Design a VR experience that incorporates spatial audio, haptic feedback, and intuitive interaction mechanics
  • Optimize the performance of a complex 3D scene by applying LOD, occlusion culling, and other techniques
  • Experiment with photogrammetry by capturing and reconstructing a real-world object as a 3D model
  • Build a procedural animation system that generates realistic motion based on physical simulation or behavioral rules
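
For the toon-shading project, the core trick is quantizing the diffuse term into a few flat bands instead of a smooth gradient. This sketch shows the math in Python, though in a real engine it would live in a fragment shader:

```python
# Sketch of toon shading: quantize the Lambert diffuse term into flat bands.
import numpy as np

def toon_diffuse(normal, light_dir, bands=4):
    """Quantized Lambert term: returns one of `bands` discrete levels."""
    intensity = max(np.dot(normal, light_dir), 0.0)   # standard diffuse term
    return np.floor(intensity * bands) / bands        # snap to the nearest band

n = np.array([0.0, 1.0, 0.0])    # surface normal, unit length
l = np.array([0.0, 0.8, 0.6])    # light direction, already unit length
level = toon_diffuse(n, l)       # -> 0.75 (third of four flat bands)
```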


© 2024 Fiveable Inc. All rights reserved.
