Exploring Ambiguity and Accessibility in Augmented Reality Interactions
Augmented Reality (AR) has become an exciting playground for developers, offering endless possibilities for creating immersive and interactive experiences. One common interaction in AR is the act of reaching out and touching virtual objects in the augmented space. While this may seem like a fun and intuitive way to engage with AR, it's essential to consider design principles and accessibility when developing such experiences.
Today, we’re exploring the challenges and some solutions in making AR interactions inclusive and engaging by delving into experiments that blend ambiguity and accessibility.
The "Hyper Match Cube'' lens on Snapchat, made by Flat Pixel, is an AR 3D puzzle. The concept is intriguing, seamlessly transitioning between hand tracking and screen-based gestures. Initially relying on hand tracking, it cleverly shifts to screen interactions when the user withdraws their hand from view. This seemingly subtle shift carries significant implications, introducing both ambiguity and accessibility to the interaction process.
For users who can't utilize hand tracking, it offers an alternative, inclusive means of engagement. Moreover, for those who can use hand tracking, it provides a discreet option for scenarios where extending one's hand might be socially conspicuous, echoing the versatility of subtitles, an accessibility feature that also enables silent viewing in public spaces.
Accessibility features benefit everyone. As AR creators it is crucial that we consider the full range of user needs, and accessibility-driven interaction design principles can enhance usability for every user, not just those who rely on them.
To explore the potential of combining ambiguity and accessibility in AR interactions, we created a small prototype.
The first step was to introduce an object for users to interact with. To keep things simple, we used a basic sphere with a material from our assets library. Our goal was to implement five different interaction types, each adding a layer of depth and engagement to the AR experience.
Screen Touch Interaction
The simplest interaction we implemented was screen touch. This interaction is built into Lens Studio and allows users to interact with objects by tapping on the screen. Although basic, it is far more widely usable than hand tracking.
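As a rough illustration, here is a minimal Lens Studio-style script sketch of the tap interaction; the onSphereTouched helper is a hypothetical stand-in for whatever response the experience actually triggers.

// Screen touch: a built-in TapEvent fires whenever the user taps the screen.
//@input SceneObject sphere

var tapEvent = script.createEvent("TapEvent");
tapEvent.bind(function (eventData) {
    // getTapPosition() returns the tap in normalized screen coordinates.
    var tapPos = eventData.getTapPosition();
    print("Screen tapped at " + tapPos.x + ", " + tapPos.y);
    onSphereTouched();
});

// Hypothetical response shared by all of the interaction types below.
function onSphereTouched() {
    print(script.sphere.name + " was touched");
}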
Collision Interaction
Collision interaction was another straightforward implementation. We used the script.collider.onOverlapEnter event to detect when the user's hand or a virtual object collided with our sphere. This interaction is well documented and widely used within Lens Studio.
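A sketch of the overlap callback, assuming a collider with overlap events is assigned to the script and another collider follows the tracked hand:

// Collision: fire when another collider (e.g. one following the hand) overlaps the sphere.
//@input Physics.ColliderComponent collider

script.collider.onOverlapEnter.add(function (e) {
    var other = e.overlap.collider.getSceneObject();
    print("Overlap started with " + other.name);
    // Trigger the shared response here.
});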
AR Touch No Depth
One of the unique interactions we explored was "AR Touch No Depth." This enables users to interact with AR objects from a distance by hovering over them. To achieve this, we created variables for the world positions of the scene object and the user's fingertip, along with a variable for the screen-space position of the scene object using the camera's worldSpaceToScreenSpace() method. By comparing these values against the bounds of the screen-space object, users can point at and interact with AR objects without physically touching them, making the experience more accessible for those with limited mobility, those who prefer not to move, and those in locations where they cannot move.
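A minimal sketch of the idea, assuming a camera input, a SceneObject attached to the index fingertip via Hand Tracking, and the object's on-screen bounds approximated as a circular hit radius:

// AR Touch No Depth: project both the sphere and the fingertip into screen space
// and treat them as "touching" when they overlap on screen, ignoring depth.
//@input SceneObject sphere
//@input SceneObject fingertip      // assumed attached to the index fingertip via Hand Tracking
//@input Component.Camera camera
//@input float screenRadius = 0.08  // hit radius in normalized screen units

script.createEvent("UpdateEvent").bind(function () {
    var spherePos = script.sphere.getTransform().getWorldPosition();
    var fingerPos = script.fingertip.getTransform().getWorldPosition();

    // Convert both world positions into normalized screen coordinates.
    var sphereScreen = script.camera.worldSpaceToScreenSpace(spherePos);
    var fingerScreen = script.camera.worldSpaceToScreenSpace(fingerPos);

    // Compare against the on-screen bounds of the object (approximated as a circle here).
    if (sphereScreen.distance(fingerScreen) < script.screenRadius) {
        print("AR Touch No Depth: fingertip is over the sphere");
    }
});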
Point Interaction
By comparing the direction of the user's index finger with the direction from the fingertip to the scene object and evaluating their dot product, we triggered events when the two vectors aligned within a specified threshold. This interaction encourages users to engage with AR objects by pointing at them, offering an alternative to physical touch and "AR Touch No Depth".
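Here is a sketch of that check, assuming two SceneObjects are attached to the base and tip of the index finger so a pointing direction can be derived:

// Point: compare the finger's pointing direction with the direction to the sphere.
//@input SceneObject sphere
//@input SceneObject indexKnuckle     // assumed attached to the base of the index finger
//@input SceneObject indexTip         // assumed attached to the index fingertip
//@input float alignThreshold = 0.95  // cosine of the allowed angle between the vectors

script.createEvent("UpdateEvent").bind(function () {
    var knuckle = script.indexKnuckle.getTransform().getWorldPosition();
    var tip = script.indexTip.getTransform().getWorldPosition();
    var target = script.sphere.getTransform().getWorldPosition();

    // Direction the finger is pointing, and direction from the fingertip to the sphere.
    var pointingDir = tip.sub(knuckle).normalize();
    var toSphereDir = target.sub(tip).normalize();

    // The dot product of two unit vectors is the cosine of the angle between them.
    if (pointingDir.dot(toSphereDir) > script.alignThreshold) {
        print("Point interaction: user is pointing at the sphere");
    }
});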
Proximity Interaction
The proximity interaction dynamically switches between "AR Touch No Depth" and "Physics Collision" based on the distance between the user and the scene object. When the user is far away, they can use AR Touch No Depth to interact, and when they get closer, physics collision takes over. This dynamic interaction provides users with multiple ways to engage with AR content based on their proximity.
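A sketch of the switching logic, assuming a tunable distance threshold and hypothetical helpers that enable or disable the two interaction scripts above:

// Proximity: pick the interaction mode based on how far the user is from the sphere.
//@input SceneObject sphere
//@input SceneObject fingertip     // assumed attached to the index fingertip
//@input float nearDistance = 30.0 // threshold in world units (centimeters in Lens Studio)

script.createEvent("UpdateEvent").bind(function () {
    var spherePos = script.sphere.getTransform().getWorldPosition();
    var fingerPos = script.fingertip.getTransform().getWorldPosition();
    var distance = spherePos.distance(fingerPos);

    if (distance < script.nearDistance) {
        // Close enough: let the physics collider handle touches (see the collision sketch).
        enablePhysicsTouch();
    } else {
        // Too far to reach: fall back to the screen-space AR Touch No Depth check.
        enableTouchNoDepth();
    }
});

// Hypothetical toggles for the two interaction scripts.
function enablePhysicsTouch() { /* ... */ }
function enableTouchNoDepth() { /* ... */ }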
Although these interactions seem basic, they offer users both ambiguity and accessibility by allowing multiple ways of interacting with an experience.
Drawing from research conducted by our Tech Lead, Liam, on playful technology for wheelchair users, we apply the following principles across our AR applications.
Embrace Ambiguity: Rather than constantly striving for pinpoint precision in movement execution and sensor data, we should welcome the variability that can arise. This is especially relevant when designing for players who have diverse movement capabilities and may not conform to the game's expected norms. We should also keep in mind that sensor precision may be limited for individuals with restricted mobility and/or certain neurological conditions. In these cases, it's essential to let go of the pursuit of hyper-realism and acknowledge the flexibility and adaptability our designs need.
For instance, don't be afraid to make colliders slightly larger than the objects they represent to accommodate diverse movements comfortably (this is particularly pertinent on mobile devices, where tracking is imperfect and striving for realism is difficult and costly in performance), or give experiences multiple interaction routes, e.g. screen touch and physical touch. A sketch of this padded hit-area idea follows below.
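A minimal sketch of the padding idea, expressed as a forgiving distance check rather than a tuned physics collider; the visibleRadius and padding inputs are illustrative values:

// Embracing ambiguity: pad the interactive region beyond the object's visible size
// so imprecise or restricted movements still register as touches.
//@input SceneObject sphere
//@input SceneObject fingertip      // assumed attached to the index fingertip
//@input float visibleRadius = 5.0  // the sphere's rendered radius, in world units
//@input float padding = 1.5        // 1.0 = exact size; larger is more forgiving

script.createEvent("UpdateEvent").bind(function () {
    var spherePos = script.sphere.getTransform().getWorldPosition();
    var fingerPos = script.fingertip.getTransform().getWorldPosition();

    // Accept a touch anywhere inside the padded radius, not just on the visible surface.
    if (spherePos.distance(fingerPos) < script.visibleRadius * script.padding) {
        print("Forgiving touch registered");
    }
});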
Map Imaginatively and Accessibly: When it comes to mapping movements, we advocate for imaginative mapping, especially in experiences where realism isn't the top priority. Imaginative mapping allows us to amplify user input and enable actions that may not be feasible in the real world, for instance "using the Force". However, we must be cautious about applying this principle in simulations, as it can potentially replicate real-world access barriers for physically disabled players, for instance dance lenses with no alternative forms of input. Therefore, while we encourage imaginative mapping, we stress the importance of mapping accessibly, ensuring that the experience is inclusive and prioritizes users' ease of access over strict movement realism.
For example, the lens below is triggered only by full-body gestures; to add accessibility and ambiguity, one could also let screen touches trigger the same effects (a sketch of this dual-trigger pattern follows).
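One effect, two routes in: the gesture hook below is a placeholder, since the actual trigger depends on whichever body-tracking setup the lens uses; the effectRoot input is an assumed name for the visual effect being toggled.

// A single effect reachable from a full-body gesture or a discreet screen tap,
// so users can choose how performative they want to be.
//@input SceneObject effectRoot   // the visual effect to toggle

function triggerEffect() {
    script.effectRoot.enabled = true;
}

// Route 1: screen tap, available to everyone.
script.createEvent("TapEvent").bind(function () {
    triggerEffect();
});

// Route 2: placeholder for the body-gesture trigger; wire the tracker's own
// callback (pose detected, marker found, etc.) to the same function.
function onBodyGestureDetected() {
    triggerEffect();
}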
Facilitate Social Fun with Sensitivity: Another vital aspect we consider is the concept of facilitating social fun through movement in AR experiences. While making movement a social experience is essential, we must exercise sensitivity in its implementation. For some, being the center of attention in public spaces while engaging in movement-based experiences can lead to social awkwardness. This is especially relevant when considering the experiences of those who might feel uncomfortable being perceived as a spectacle by strangers. Instead of forcing the experience to be outwardly performative, we should prioritize users' comfort zones.
It's crucial to offer options for social fun that do not inadvertently turn disability into a spectacle. This can be achieved by offering users multiple ways to interact and letting them choose.
The example above is extreme, but for some users, engaging with lenses that require these kinds of interactions can be socially daunting; simply allowing them to interact discreetly using screen touches offers both ambiguity and accessibility.
As AR continues to evolve, so do the possibilities for creating engaging and accessible experiences. Our small prototype experiment has shown that by blending ambiguity and accessibility, we can design AR interactions that are not only fun but also inclusive. As developers, we actively explore innovative ways to make AR experiences accessible to all users while keeping the magic of exploration and discovery alive in augmented reality. If you're interested in this, or anything related, get in touch with us.
Read Liam’s research paper on accessibility in immersive experiences here.