AR SDKs and development tools are essential for creating immersive augmented reality experiences. These toolkits provide developers with the necessary resources to blend virtual content with the real world, enabling interactive and engaging AR applications across various platforms.

Selecting the right AR SDK involves considering factors like target platforms, required features, performance, and cost. Popular options include ARKit for iOS, ARCore for Android, and cross-platform solutions like Vuforia and Wikitude, each offering unique capabilities for AR development.

Overview of AR SDKs

  • AR SDKs (Software Development Kits) provide developers with tools and libraries to create augmented reality experiences across various platforms and devices
  • Understanding the capabilities, limitations, and use cases of different AR SDKs is crucial for selecting the appropriate toolkit for a specific AR project in the context of Immersive and Virtual Reality Art
  • AR SDKs enable the integration of virtual content with the real world, allowing artists and developers to create interactive and immersive experiences that blend digital elements with physical environments

Comparison of popular AR SDKs

  • Popular AR SDKs include ARKit (iOS), ARCore (Android), Vuforia (cross-platform), and Wikitude (cross-platform), each with its own strengths and weaknesses
  • ARKit and ARCore are platform-specific SDKs that leverage the native capabilities of iOS and Android devices respectively, offering tight integration with the operating system and hardware
  • Vuforia and Wikitude are cross-platform SDKs that support multiple platforms and provide a wider range of features, such as image recognition and tracking, making them suitable for projects targeting multiple devices and operating systems
  • Other notable AR SDKs include AR Foundation (Unity's cross-platform abstraction over ARKit and ARCore) and WebXR, which cater to specific development environments or platforms

Criteria for selecting an AR SDK

  • Target platform and devices: Consider the intended audience and the devices they are likely to use, such as iOS, Android, or web browsers, and choose an SDK that supports those platforms
  • Required features and capabilities: Evaluate the specific requirements of the AR project, such as marker-based tracking, markerless tracking, face tracking, or light estimation, and select an SDK that provides those features
  • Performance and stability: Assess the performance and stability of the SDK, considering factors like tracking accuracy, latency, and resource consumption, to ensure a smooth and reliable user experience
  • Documentation and community support: Look for SDKs with comprehensive documentation, tutorials, and active developer communities to facilitate learning, troubleshooting, and staying up-to-date with the latest features and best practices
  • Licensing and cost: Consider the licensing terms and associated costs of the SDK, including any fees for commercial use, royalties, or subscription plans, and ensure they align with the project's budget and distribution goals
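
The criteria above can be turned into a simple weighted decision matrix. The sketch below is illustrative only: the weights and the 1-5 scores are made-up placeholders for demonstration, not an authoritative ranking of these SDKs, and should be replaced with your own project's assessment.

```python
# Illustrative weighted decision matrix for comparing AR SDKs.
# Weights and scores are hypothetical placeholders, not real benchmarks.

CRITERIA_WEIGHTS = {
    "platform_coverage": 0.30,
    "features": 0.25,
    "performance": 0.20,
    "docs_and_community": 0.15,
    "cost": 0.10,
}

# Hypothetical scores (1 = poor, 5 = excellent) for demonstration only.
SDK_SCORES = {
    "ARKit":    {"platform_coverage": 2, "features": 5, "performance": 5, "docs_and_community": 5, "cost": 5},
    "ARCore":   {"platform_coverage": 3, "features": 4, "performance": 4, "docs_and_community": 4, "cost": 5},
    "Vuforia":  {"platform_coverage": 5, "features": 4, "performance": 3, "docs_and_community": 4, "cost": 2},
    "Wikitude": {"platform_coverage": 5, "features": 4, "performance": 3, "docs_and_community": 3, "cost": 2},
}

def rank_sdks(scores, weights):
    """Return (sdk, weighted_score) pairs sorted best-first."""
    totals = {
        sdk: sum(weights[c] * s for c, s in crit.items())
        for sdk, crit in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for sdk, score in rank_sdks(SDK_SCORES, CRITERIA_WEIGHTS):
    print(f"{sdk:<8} {score:.2f}")
```

Adjusting the weights (for example, raising "cost" for a budget-constrained project) will reorder the result, which is the point of making the trade-offs explicit.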

AR development tools

  • AR development involves a range of specialized tools and software for creating, manipulating, and integrating various assets, such as 3D models, animations, audio, and user interfaces
  • These tools work in conjunction with AR SDKs to streamline the development process and enable the creation of rich and immersive AR experiences for Immersive and Virtual Reality Art projects
  • Familiarity with these tools and their workflows is essential for AR developers and artists to efficiently create and iterate on AR content

3D modeling for AR

  • 3D modeling tools are used to create and edit the virtual objects and environments that are superimposed on the real world in AR experiences
  • Popular 3D modeling software for AR includes Autodesk Maya, Blender, and SketchUp, which offer a wide range of modeling techniques (polygon modeling, sculpting) and features (UV mapping, texturing)
  • Optimization techniques, such as low-poly modeling and texture atlasing, are crucial for creating 3D models that perform well on mobile devices and maintain a high frame rate in AR applications
  • AR-specific 3D modeling considerations include ensuring proper scale, orientation, and pivot points for virtual objects to align correctly with the real world
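
The scale and pivot considerations above reduce to a small amount of vertex math. A minimal sketch, assuming vertices are plain (x, y, z) tuples rather than a real mesh format:

```python
# Minimal sketch: recenter a model's pivot to its bounding-box center and
# rescale its largest dimension to a target real-world size (meters), two
# common fixes before a mesh behaves predictably in AR. A real pipeline
# would operate on the mesh format your modeling tools export.

def normalize_for_ar(vertices, target_size=1.0):
    """Center vertices on the origin and scale the largest extent to target_size."""
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    center = [(mins[i] + maxs[i]) / 2 for i in range(3)]
    largest = max(maxs[i] - mins[i] for i in range(3))
    scale = target_size / largest if largest else 1.0
    return [tuple((v[i] - center[i]) * scale for i in range(3)) for v in vertices]

# A 2 x 4 x 1 box becomes a box whose largest side is exactly 1 m,
# centered on the origin so it rotates around its own middle.
box = [(0, 0, 0), (2, 0, 0), (0, 4, 0), (2, 4, 1)]
normalized = normalize_for_ar(box, target_size=1.0)
```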

Animation tools for AR

  • Animation tools enable the creation of dynamic and interactive elements in AR experiences, such as characters, objects, and visual effects
  • 3D animation software like Autodesk Maya, Blender, and Cinema 4D provide robust tools for keyframe animation, rigging, and motion capture, allowing developers to bring virtual elements to life
  • 2D animation tools, such as Adobe After Effects and Toon Boom Harmony, can be used to create animated textures, user interface elements, and overlays for AR experiences
  • Unity and Unreal Engine, popular game engines for AR development, also offer built-in animation tools and support for importing animations from external software
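
Under the hood, keyframe playback is interpolation between timed values. A minimal sketch of linear keyframe sampling, using hypothetical (time, value) pairs; real engines add easing curves and rig hierarchies on top:

```python
# Keyframe sampling with linear interpolation: the core of what animation
# tools and game-engine importers do when they play back a clip.
# Keyframes are (time_seconds, value) pairs, assumed sorted by time.

def sample(keyframes, t):
    """Return the animated value at time t, clamping outside the keyframe range."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)  # 0..1 position between the two keys
            return v0 + (v1 - v0) * alpha

# Hypothetical clip animating a y-position: rest, jump up, settle back down.
jump = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.2)]
```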

Audio tools for AR

  • Audio plays a crucial role in creating immersive and engaging AR experiences, providing spatial cues, interactive feedback, and ambient soundscapes
  • Digital audio workstations (DAWs) like Ableton Live, FL Studio, and Pro Tools are used to compose, record, and edit audio assets for AR projects
  • Middleware solutions, such as Wwise and FMOD, facilitate the integration of adaptive audio and real-time audio processing in AR applications, enabling dynamic and responsive audio based on user interactions and environmental factors
  • Spatial audio tools, like Google Resonance Audio and Oculus Spatializer, allow developers to create realistic and localized audio experiences that enhance the sense of presence in AR
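
To make the spatial-audio idea concrete, here is a toy spatializer combining inverse-distance attenuation with equal-power stereo panning. Real spatializers like Resonance Audio add HRTFs, occlusion, and room modeling; the rolloff model and reference distance here are simplifying assumptions:

```python
import math

# Toy spatialization sketch: inverse-distance attenuation plus equal-power
# panning from the source's azimuth (0 = straight ahead, +pi/2 = hard right).

def spatialize(distance, azimuth_rad, ref_distance=1.0):
    """Return (left_gain, right_gain) for a source at the given distance/azimuth."""
    gain = ref_distance / max(distance, ref_distance)  # inverse-distance rolloff
    pan = math.sin(azimuth_rad)                        # -1 = hard left, +1 = hard right
    angle = (pan + 1) * math.pi / 4                    # equal-power crossfade angle
    return gain * math.cos(angle), gain * math.sin(angle)
```

Equal-power panning keeps the perceived loudness roughly constant as a source moves across the stereo field, which is why it is preferred over a simple linear crossfade.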

UI design tools for AR

  • User interface (UI) design tools are essential for creating intuitive and user-friendly interfaces that guide users through AR experiences and provide visual feedback
  • Adobe XD, Sketch, and Figma are popular UI design tools that enable the creation of wireframes, mockups, and interactive prototypes for AR applications
  • AR-specific UI considerations include designing for variable screen sizes, accommodating different viewing distances, and ensuring readability in various lighting conditions
  • Prototyping tools like InVision and Marvel allow designers to create interactive AR UI prototypes that simulate user flows and interactions, facilitating iterative design and user testing
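
The viewing-distance concern above can be estimated with basic trigonometry: the world-space height a label needs in order to subtend a given visual angle. The 0.007 rad threshold below is an assumed comfort value to tune with user testing, not a standard:

```python
import math

# Back-of-the-envelope check for label legibility at a viewing distance:
# the physical height text needs so it subtends a chosen visual angle.
# min_angle_rad is an assumed comfortable minimum, not an authoritative figure.

def min_text_height(viewing_distance_m, min_angle_rad=0.007):
    """Smallest world-space text height (meters) legible at the given distance."""
    return 2 * viewing_distance_m * math.tan(min_angle_rad / 2)

# A label anchored 2 m away needs to be roughly 1.4 cm tall under this assumption.
```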

AR SDKs for iOS

  • iOS is a popular platform for AR development, with a large user base and powerful hardware capabilities that enable immersive and high-performance AR experiences
  • Apple's ARKit is the primary SDK for developing AR applications on iOS, providing a comprehensive set of tools and frameworks for creating AR content

ARKit overview and capabilities

  • ARKit is Apple's proprietary SDK for AR development on iOS devices, leveraging the device's camera, motion sensors, and processing power to enable realistic and responsive AR experiences
  • Key features of ARKit include markerless tracking, plane detection, face tracking, image recognition, and support for 3D object recognition and tracking
  • ARKit uses visual-inertial odometry (VIO) to accurately track the device's position and orientation in real-time, enabling stable and precise AR content placement
  • With ARKit's Scene Understanding capabilities, developers can detect and interact with real-world surfaces, objects, and lighting conditions, creating more realistic and context-aware AR experiences

Setting up ARKit development environment

  • To start developing with ARKit, you need a Mac computer running macOS and Xcode, Apple's integrated development environment (IDE) for iOS and macOS
  • Ensure your Mac and Xcode are updated to the latest versions to access the most recent ARKit features and improvements
  • Create a new Xcode project and select the "Augmented Reality App" template, which provides a pre-configured setup for ARKit development
  • Connect an ARKit-compatible iOS device (iPhone or iPad) to your Mac for testing and debugging your AR applications

ARKit programming basics

  • ARKit development primarily uses Swift, Apple's modern and powerful programming language for iOS and macOS
  • Familiarize yourself with the key classes and protocols in ARKit, such as ARSession, ARFrame, ARSCNView, and ARWorldTrackingConfiguration, which form the foundation of ARKit programming
  • Use ARSession to manage the AR session, configure tracking settings, and handle session state changes and interruptions
  • Implement ARSCNViewDelegate or ARSKViewDelegate methods to receive updates on the AR session, such as when a new frame is available or when the tracking state changes
  • Create and manipulate 3D content using SceneKit or SpriteKit, Apple's frameworks for 3D and 2D rendering, respectively, and add them to the AR view
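
Much of this per-frame work reduces to 4x4 transform math. As an SDK-agnostic sketch (assuming, as ARKit does, a column-major transform whose local -Z axis points forward), here is how an app might compute a point half a meter in front of the camera, e.g. to anchor content there:

```python
# SDK-agnostic placement math: derive a world position in front of the
# camera from its 4x4 transform. Convention assumed here (matching ARKit):
# column-major matrix, translation in column 3, camera looking down local -Z.

def point_in_front(camera_transform, distance=0.5):
    """camera_transform: 4x4 as a list of 4 columns, each [x, y, z, w]."""
    position = camera_transform[3][:3]                # translation column
    forward = [-c for c in camera_transform[2][:3]]   # -Z column is "forward"
    return [p + f * distance for p, f in zip(position, forward)]

# Identity orientation, camera held at 1.5 m eye height:
# half a meter "forward" lands at z = -0.5 in world space.
camera = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 1.5, 0, 1]]
```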

Advanced ARKit features

  • ARKit offers advanced features that enable more sophisticated and interactive AR experiences, such as face tracking, image recognition, and object detection
  • Face tracking allows developers to detect and track the user's facial features, enabling applications like real-time facial filters, animated characters, and emotion recognition
  • Image recognition enables the detection and tracking of specific images or markers in the real world, triggering AR content or interactions when recognized
  • 3D object detection and tracking allow developers to recognize and track known 3D objects in the real world, enabling applications like product visualization and interactive guides
  • ARKit also supports collaborative AR experiences, allowing multiple users to share the same AR space and interact with virtual content simultaneously using shared world maps and anchors

AR SDKs for Android

  • Android is another major platform for AR development, with a diverse range of devices and a large global user base
  • Google's ARCore is the primary SDK for creating AR experiences on Android, providing a set of APIs and tools for integrating AR capabilities into Android applications

ARCore overview and capabilities

  • ARCore is Google's SDK for building AR applications on Android devices, using the device's camera and motion sensors to enable immersive and interactive AR experiences
  • Key features of ARCore include motion tracking, environmental understanding, and light estimation, allowing developers to create AR content that seamlessly blends with the real world
  • ARCore uses concurrent odometry and mapping (COM) to track the device's position and orientation relative to the world, providing stable and accurate AR content placement
  • With ARCore's environmental understanding capabilities, developers can detect horizontal and vertical surfaces, estimate lighting conditions, and create realistic shadows and reflections for virtual objects
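
Plane detection plus a "tap to place" gesture ultimately comes down to ray-plane intersection, which the SDKs perform inside their hit-test APIs. A minimal version of that math, with made-up camera and floor values:

```python
# Ray-plane intersection: the core of a "tap to place" hit test.
# A detected plane is represented as a point on the plane plus its normal.

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_plane_hit(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the hit point, or None if the ray is parallel to or behind the plane."""
    denom = _dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the plane
    t = _dot([p - o for p, o in zip(plane_point, ray_origin)], plane_normal) / denom
    if t < 0:
        return None  # plane is behind the ray origin
    return [o + d * t for o, d in zip(ray_origin, ray_dir)]

# Camera at 1.5 m eye height looking diagonally down at a horizontal
# floor plane (y = 0): the ray lands 1.5 m in front of the user.
hit = ray_plane_hit([0, 1.5, 0], [0, -1, -1], [0, 0, 0], [0, 1, 0])
```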

Setting up ARCore development environment

  • To develop ARCore applications, you need a computer running Windows, macOS, or Linux, along with Android Studio, the official IDE for Android development
  • Ensure your Android Studio installation is up to date and includes the latest Android SDK and build tools
  • Set up an ARCore-supported Android device for testing and debugging, ensuring that it meets the minimum hardware and software requirements for ARCore
  • Create a new Android Studio project and configure it for ARCore development by adding the necessary dependencies and permissions to your project's build files and manifest

ARCore programming basics

  • ARCore development primarily uses Java or Kotlin, the official programming languages for Android development
  • Familiarize yourself with the core classes and interfaces in ARCore, such as Session, Frame, Anchor, and Trackable, which form the foundation of ARCore programming
  • Use Session to manage the AR session, configure session settings, and handle session state changes and errors
  • Call Session.update() each frame to obtain the latest Frame, and check it for new camera images, tracking-state changes, and updated trackables
  • Create and manipulate 3D content using OpenGL ES or Sceneform, a 3D framework for Android that simplifies the creation and rendering of 3D scenes in AR applications

Advanced ARCore features

  • ARCore offers advanced features that enable more complex and engaging AR experiences, such as augmented images, cloud anchors, and scene viewer
  • Augmented Images allows developers to detect and track specific 2D images in the real world, triggering AR content or interactions when recognized
  • Cloud Anchors enable the creation of persistent and shareable AR experiences, allowing users to save and retrieve AR content across different devices and sessions
  • Scene Viewer is an ARCore feature that allows users to view and interact with 3D models in AR directly from web browsers or mobile apps, without the need for a dedicated AR application
  • ARCore also supports the integration of AR with other Google technologies, such as Google Maps and Google Lens, enabling location-based AR experiences and visual search capabilities

Cross-platform AR SDKs

  • Cross-platform AR SDKs allow developers to create AR experiences that can be deployed across multiple platforms and devices, such as iOS, Android, and web browsers
  • These SDKs provide a unified development environment and a set of APIs that abstract the differences between platforms, simplifying the development process and reducing the need for platform-specific code

Vuforia overview and capabilities

  • Vuforia is a popular cross-platform AR SDK that enables the creation of marker-based and markerless AR experiences for mobile devices and digital eyewear
  • Key features of Vuforia include image recognition, 3D object tracking, and extended tracking, which allows AR content to persist even when the original marker or object is no longer visible
  • Vuforia supports a wide range of development platforms, including Unity, iOS, Android, and Universal Windows Platform (UWP), making it a versatile choice for cross-platform AR projects
  • Vuforia's cloud recognition service allows developers to create and manage large databases of target images and 3D objects, enabling scalable and dynamic AR experiences

Wikitude overview and capabilities

  • Wikitude is another leading cross-platform AR SDK that offers a comprehensive set of tools and features for creating AR experiences across multiple devices and operating systems
  • Key features of Wikitude include instant tracking, which enables markerless AR experiences with no prior setup, and SLAM (Simultaneous Localization and Mapping) for precise tracking and mapping of the environment
  • Wikitude supports a variety of development platforms and frameworks, including Android, iOS, Unity, Xamarin, and Cordova, providing flexibility for developers working with different technologies
  • Wikitude's Studio is a web-based authoring tool that allows non-developers to create and publish AR experiences without coding, making AR content creation more accessible to a broader audience

Comparison of cross-platform AR SDKs

  • When choosing a cross-platform AR SDK, consider factors such as supported platforms, feature set, ease of use, performance, and pricing
  • Vuforia and Wikitude are both robust and feature-rich SDKs that offer a wide range of capabilities and support for multiple platforms, making them suitable for a variety of AR projects
  • Other cross-platform AR SDKs, such as ARToolKit and EasyAR, provide specific features or focus on particular use cases, such as marker-based tracking or mobile performance optimization
  • Evaluate the specific requirements and constraints of your AR project, such as target devices, required features, and development team expertise, to select the most appropriate cross-platform AR SDK
  • Keep in mind that while cross-platform SDKs simplify development across multiple platforms, they may not always provide the same level of performance or access to platform-specific features compared to native SDKs like ARKit and ARCore

Web-based AR development

  • Web-based AR allows developers to create AR experiences that can be accessed directly from web browsers, without the need for users to download and install a separate mobile application
  • This approach lowers the barrier to entry for users and enables the creation of more accessible and shareable AR content, making it well-suited for marketing campaigns, product visualizations, and educational experiences

WebXR overview and capabilities

  • WebXR is an open standard that enables the creation of AR and VR experiences on the web, providing a JavaScript API for accessing the device's camera, motion sensors, and display capabilities
  • WebXR builds upon the earlier WebVR standard, adding support for AR and mixed reality experiences in addition to VR
  • Key features of WebXR include the ability to detect and track real-world surfaces, anchor virtual content to real-world positions, and handle user input through various controllers and gestures
  • WebXR experiences can be accessed through compatible web browsers on mobile devices and head-mounted displays, making them highly accessible and platform-agnostic

Setting up WebXR development environment

  • To develop WebXR applications, you need a text editor or integrated development environment (IDE) for writing HTML, CSS, and JavaScript code
  • Ensure that your development machine has a recent version of a WebXR-compatible browser, such as Google Chrome, Mozilla Firefox, or Microsoft Edge
  • Set up a local web server to host your WebXR application during development and testing, using tools like Apache, Nginx, or live-server (a Node.js package for simple HTTP server creation)
  • Test your WebXR application on target devices, such as smartphones or tablets with AR capabilities, to ensure compatibility and performance

WebXR programming basics

  • WebXR development primarily uses JavaScript, the standard programming language for web development, along with HTML and CSS for structuring and styling the application
  • Familiarize yourself with the core concepts and interfaces of the WebXR API, such as XRSession, XRFrame, XRSpace, and XRReferenceSpace, which form the foundation of WebXR programming
  • Use navigator.xr.requestSession() to request an XR session and specify the desired session mode (e.g., "immersive-ar" for AR experiences)
  • Use the XRSession's requestAnimationFrame() method to schedule a callback that updates and renders the AR scene each frame, based on the current pose and input state
  • Create and manipulate 3D content using WebGL or popular web-based 3D libraries like Three.js or Babylon.js, and integrate them with the WebXR scene

Advanced WebXR features

  • WebXR offers advanced features that enable more sophisticated and interactive AR experiences, such as hit testing, anchors, and lighting estimation
  • Hit testing allows developers to detect real-world surfaces and points of interest in the camera view, enabling the placement and interaction of virtual content in the real environment
  • Anchors provide a way to persist and share the position and orientation of virtual objects across multiple sessions or devices, enabling collaborative and persistent AR experiences
  • Lighting estimation enables the application to adapt the appearance of virtual objects based on the real-world lighting conditions, creating more realistic and seamless blending of virtual and real content
  • WebXR can also be combined with other web technologies, such as WebRTC for real-time communication and collaboration, or Web Bluetooth for interfacing with external devices and sensors, expanding the possibilities for interactive and connected AR experiences
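
As a concrete sketch of what a renderer does with a lighting estimate, the function below modulates a virtual object's base color by an assumed ambient intensity and per-channel color correction; the specific values are stand-ins for whatever the AR session actually reports:

```python
# Applying a lighting estimate: scale a virtual object's base color by the
# estimated ambient intensity and color correction so it matches the scene
# instead of looking "pasted on". All values below are illustrative stand-ins.

def lit_color(base_rgb, ambient_intensity, color_correction_rgb):
    """Per-channel modulation; channels in 0..1, intensity may exceed 1 in bright scenes."""
    return tuple(
        min(1.0, c * ambient_intensity * k)
        for c, k in zip(base_rgb, color_correction_rgb)
    )

# A white object in a dim, warm-lit room comes out darker and slightly orange.
shaded = lit_color((1.0, 1.0, 1.0), 0.6, (1.0, 0.9, 0.8))
```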

Integrating AR with other technologies

  • AR can be combined with various other technologies to create more powerful, intelligent, and connected experiences that extend beyond the visual overlay of digital content
  • Integrating AR with technologies such as machine learning, Internet of Things (IoT), blockchain, and 5G networks enables the creation of AR applications that are more context-aware, responsive, secure, and scalable

AR and machine learning

  • Machine learning can be used to enhance AR experiences by enabling real-time object recognition, scene understanding, and user behavior analysis
  • Convolutional Neural Networks (CNNs) can be trained to recognize and classify objects, people, or scenes in the camera view, providing contextual information that AR applications can use to trigger relevant content and interactions

Key Terms to Review (27)

3D Object Recognition: 3D object recognition is the process of identifying and classifying three-dimensional objects from a set of data, often captured through sensors or cameras. This technology is crucial for enhancing augmented reality experiences, allowing virtual objects to interact seamlessly with the real world by understanding their spatial relationships and positioning.
Agile Development: Agile development is a flexible and iterative approach to software development that emphasizes collaboration, customer feedback, and rapid delivery of functional software. This methodology allows teams to respond quickly to changes in requirements or market conditions, making it especially useful in fields like augmented reality, where technology and user expectations evolve rapidly.
ARCore: ARCore is Google's platform for building augmented reality experiences on Android devices. It provides developers with the tools to create immersive applications that blend digital content with the real world by utilizing motion tracking, environmental understanding, and light estimation. By leveraging the capabilities of ARCore, developers can create engaging experiences that enhance user interaction and provide contextual information based on the user's surroundings.
ARKit: ARKit is Apple's augmented reality development framework that allows developers to create immersive AR experiences for iOS devices. By utilizing advanced technologies such as motion tracking, environmental understanding, and light estimation, ARKit enables applications to seamlessly blend digital content with the real world. This powerful tool supports both marker-based and markerless AR, making it versatile for various applications in gaming, education, and beyond.
C#: C# is a modern, object-oriented programming language developed by Microsoft that is widely used for building a variety of applications, particularly in game development and augmented reality. With its clean syntax and powerful features, C# allows developers to create interactive experiences and complex systems efficiently. It is tightly integrated with the .NET framework, making it a go-to choice for game engines and AR SDKs.
Environmental Understanding: Environmental understanding refers to the ability to perceive, interpret, and interact with the surrounding physical world in a meaningful way. This understanding is crucial when developing augmented reality (AR) applications, as it involves recognizing and responding to real-world elements like surfaces, lighting, and spatial relationships to create immersive experiences.
Facial recognition: Facial recognition is a technology that can identify or verify a person’s identity by analyzing and comparing facial features from images or video. It has applications in various fields, including security, marketing, and social media, enhancing the interaction between users and digital environments.
Frame Rate: Frame rate refers to the frequency at which consecutive images, or frames, are displayed in a video or rendered in a virtual environment, typically measured in frames per second (FPS). A higher frame rate results in smoother motion and better visual quality, which is crucial for immersive experiences like virtual and augmented reality, as well as gaming. Maintaining an optimal frame rate helps reduce motion blur and enhances the overall user experience by making interactions feel more natural and responsive.
Gesture recognition: Gesture recognition is a technology that enables devices to interpret human gestures as commands or inputs, often using sensors or cameras to detect and analyze physical movements. This technology plays a crucial role in creating intuitive user interfaces in immersive experiences, allowing for seamless interactions in both augmented and virtual realities. Gesture recognition can enhance user engagement by providing a natural way to navigate and control digital environments without the need for traditional input devices.
Haptic feedback: Haptic feedback refers to the technology that simulates the sense of touch by applying forces, vibrations, or motions to the user, creating a tactile response in interaction. This technology enhances immersion and engagement in virtual environments by providing users with physical sensations that correspond to their actions or events within a digital space. Its integration into various systems and devices improves user experiences across multiple applications, from gaming to medical simulations.
Head-mounted display: A head-mounted display (HMD) is a device worn on the head that provides an immersive visual experience by displaying images directly in front of the user's eyes. HMDs are commonly used in virtual reality (VR) and augmented reality (AR) applications, allowing users to interact with digital content in a three-dimensional space. These devices often incorporate sensors to track head movements, enhancing the sense of presence and immersion.
JavaScript: JavaScript is a versatile, high-level programming language primarily used for enhancing web pages with interactivity and dynamic content. In the context of AR SDKs and development tools, JavaScript serves as a key component for creating immersive experiences, enabling developers to build applications that integrate 3D models, animations, and user interactions within augmented reality environments.
Latency: Latency refers to the delay between a user's action and the system's response, crucial for ensuring a seamless experience in immersive environments. In VR and AR, high latency can disrupt the sense of presence and immersion, making it vital to minimize delays in headset tracking, input devices, and rendering. Understanding and addressing latency is essential for creating engaging experiences and maintaining user comfort.
Light estimation: Light estimation refers to the process of assessing the intensity and direction of light in an environment to enhance the realism of augmented reality experiences. Accurate light estimation allows AR applications to seamlessly blend virtual objects with the real world by matching the lighting conditions, thereby improving shadow rendering and color consistency. This capability is essential for creating immersive and convincing interactions between digital and physical elements.
Marker-based tracking: Marker-based tracking is a technology used in augmented reality (AR) that relies on visual markers, such as QR codes or specific patterns, to accurately identify and track the position and orientation of digital content in relation to the real world. This method allows AR applications to overlay virtual objects onto physical environments by recognizing and analyzing the markers through a camera, providing precise alignment and interaction.
Markerless Tracking: Markerless tracking is a technology used in augmented reality (AR) that allows virtual objects to be placed in the real world without the need for physical markers or reference points. This method utilizes advanced algorithms and sensor data, such as GPS, accelerometers, and computer vision, to recognize and map the environment, enabling a more seamless integration of digital content into users' surroundings. It enhances user experience by providing more flexibility and reducing the limitations associated with traditional marker-based systems.
Mesh: In the context of augmented reality (AR) development, a mesh refers to a collection of vertices, edges, and faces that define the shape of a 3D object in space. Meshes are essential for creating realistic representations of objects in AR, as they provide the necessary geometry that can be textured, shaded, and animated. This geometry allows developers to overlay digital content onto the physical world effectively, making it a critical component when using AR SDKs and development tools.
Motion tracking: Motion tracking is the process of capturing the movement of objects or people in real time and translating that data into a digital format that can be used in various applications. This technology is crucial for creating immersive experiences in virtual environments, enhancing interactive installations, and accurately mapping projections onto surfaces. By using sensors and tracking systems, motion tracking enables the integration of physical movements into digital interactions, providing users with a more engaging and realistic experience.
OpenXR: OpenXR is an open standard developed by the Khronos Group that enables cross-platform interoperability for virtual reality (VR) and augmented reality (AR) applications. It provides a unified API for different hardware and software ecosystems, allowing developers to create immersive experiences that work seamlessly across various VR headsets, input devices, game engines, and AR SDKs. This standardization helps reduce fragmentation in the immersive technology landscape and enhances compatibility for developers and users alike.
Prototyping: Prototyping is the process of creating an early model or sample of a product to test concepts and functionalities before final production. In the context of AR SDKs and development tools, prototyping allows developers to experiment with user interactions, interface designs, and functionalities in augmented reality applications, helping refine ideas and identify potential issues early on.
Texture mapping: Texture mapping is a technique used in 3D graphics to apply a 2D image (texture) onto the surface of a 3D model, enhancing the visual detail and realism of the object. This process involves wrapping the texture around the model to define how the image corresponds to the geometry, enabling artists to create complex visuals without increasing the polygon count. Texture mapping plays a crucial role in user interface design, avatar customization, and augmented reality development.
Unity: Unity refers to the cohesion and harmony among different elements within immersive environments, ensuring that all components work together seamlessly to create an engaging experience. This concept is crucial for achieving a balanced interaction between visuals, audio, and user input, enhancing overall immersion and user satisfaction.
Unreal Engine: Unreal Engine is a powerful game engine developed by Epic Games, widely used for creating high-quality interactive experiences, including video games, virtual reality, and augmented reality applications. It offers advanced rendering capabilities, real-time lighting, and a robust toolset for content creation, making it a top choice for developers and artists working in immersive environments.
Visual Inertial Odometry: Visual inertial odometry is a technique used in robotics and augmented reality that combines visual information from cameras with inertial data from sensors like accelerometers and gyroscopes to estimate the position and orientation of a moving object in real time. This method enhances the accuracy of motion tracking by leveraging both visual cues and inertial measurements, making it particularly effective in dynamic environments where traditional tracking might struggle.
Vuforia: Vuforia is a widely used augmented reality (AR) software development kit (SDK) designed to help developers create AR applications. It provides a comprehensive platform that supports both marker-based and markerless AR experiences, allowing for the seamless integration of digital content into the real world through various devices. Vuforia's robust tracking capabilities and user-friendly tools make it a popular choice for developers aiming to leverage AR technology.
WebXR: WebXR is an API that enables the development of immersive experiences, including virtual reality (VR) and augmented reality (AR), directly in web browsers. This technology allows developers to create applications that can run seamlessly across various devices, making it easier for users to access immersive content without needing additional software or plugins. By providing a unified interface for both AR and VR, WebXR opens up new possibilities for interactive experiences on the web.
Wikitude: Wikitude is an augmented reality (AR) platform that provides developers with tools to create immersive AR experiences through its SDK (Software Development Kit). It enables the integration of digital content with the real world by using computer vision and image recognition technologies, making it a popular choice for creating location-based AR applications.
© 2024 Fiveable Inc. All rights reserved.