It's mid-summer of 2022, and Oculus has just released Hand Tracking 2.0. Ultraleap is now embedded in VR headsets, and companies from small startups to Apple are building their own input hardware. There is a race to determine which input modality will become primary in the AR/VR space.
To compare these modalities, I conducted an experiment with 17 students. Each was asked to click on a blue cube (3D) or square (2D) using either hand tracking or controllers. The target appeared at a different position and distance on each trial, and completion time and accuracy were recorded. The goal of the experiment was to determine which input modality is the most accurate, efficient, effective, versatile, and preferred among Generation Z.
Based on the results of these tests, I have identified the following suggestions for improving users' awareness of their ability to directly interact with objects when using hand tracking. Please note that these suggestions have not yet been tested.
The majority of testers had difficulty using the hand ray interactor because they assumed the ray emanated from the fingertip. There were also ergonomic issues: the arm pivots in an arc rather than moving linearly, which made the cursor shaky and less precise, and the interaction less efficient.
Of the three interaction methods tested, the controllers had the fewest issues, although testers did have some difficulty distinguishing the left-hand cursor from the right-hand one. There was also slight shakiness when interacting with objects at a distance, and some testers struggled to identify which objects were interactable.
This video shows the entire testing process. In their second round, users repeat the flow with whichever input system they did not use in their first round, either hand tracking or a controller.
01. Identifying test variables and factors
02. Thoroughly planning the study
03. Onboarding testers into VR
04. Analyzing for insights over solutions
Initially, I planned to test the click, drag, and release interactions, the primitive digital actions. However, due to limited time, I focused on the click interaction and varied several factors within it.
The initial factors were distance, size, occlusion, 2D/3D, and shape. However, as with the interactions, I had to cut these down to mainly distance, occlusion, and 2D/3D, which can be crossed into a small set of trial conditions, as sketched below.
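To make the factor structure concrete, here is a minimal sketch of how the final conditions could be enumerated as a full factorial design. The factor levels shown are illustrative assumptions, not the exact values used in the study.

```python
from itertools import product

distances = ["near", "far"]          # assumed distance levels
occlusions = ["none", "occluded"]    # whether the target is partially hidden
dimensionalities = ["2D", "3D"]      # square (2D) vs. cube (3D) target

# Crossing the three factors yields every trial condition exactly once.
conditions = list(product(distances, occlusions, dimensionalities))
for i, (distance, occlusion, dim) in enumerate(conditions, start=1):
    print(f"Condition {i}: distance={distance}, occlusion={occlusion}, target={dim}")
```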
As a first-time researcher in Human-Computer Interaction, I was unsure of what data to collect from my participants before testing. I knew that collecting too little could compromise the results, so I identified the main variables that could affect a person's testing performance. While more data is generally better, I also had to weigh ethical concerns when deciding what to collect.
To understand how other HCI testing studies were conducted, I researched relevant papers and used them to create two quantitative surveys, one for each input modality, plus a few qualitative questions to conclude the study. This gave me both the users' actions and their thoughts during the experience.
Initially, I planned to randomize the cube positions for each user, but I realized this would undermine the validity of the test: consistency is crucial to ensure that the 2D and 3D occlusions are equally challenging for everyone. I therefore kept the cube positions constant and only slightly adjusted the position of the interactable blue cube so that it had to be pressed in all five directions, as sketched below.
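A minimal sketch of that idea: every participant receives the same predefined sequence of targets rather than a per-user random draw. The coordinates and the five press directions below are illustrative assumptions, not the study's actual layout.

```python
# Every participant sees the identical sequence, so difficulty is constant
# across the study. Coordinates are hypothetical (x, y, z) values in meters.
TARGET_POSITIONS = [
    (0.0, 1.2, 1.5),   # center, near
    (0.6, 1.2, 2.5),   # right, far
    (-0.6, 1.5, 2.0),  # upper-left, mid-distance
    # ... one fixed entry per trial, identical for all participants
]
PRESS_DIRECTIONS = ["up", "down", "left", "right", "forward"]

def trial_sequence():
    """Yield the same (position, direction) pairs for every participant."""
    for position in TARGET_POSITIONS:
        for direction in PRESS_DIRECTIONS:
            yield position, direction
```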
Prototyping all forms of direct interactors for both controllers and hand tracking allows users to explore the full capabilities of each modality and choose the option that works best for them.
After testing with one user, I realized it was important to provide an onboarding scene for users unfamiliar with hand tracking or VR in general, to level the playing field with users who had prior experience. The onboarding scene included both 2D and 3D interactable blue cubes in random directions and let users try them with both controllers and hand tracking.
I recorded and timed all of the tests and used averages and percentages to compare the time differences. The data was collected both inside and outside of VR.
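A minimal sketch of that analysis, assuming each trial was logged as a (participant, modality, completion time, hit) record; the field layout and values here are illustrative, not the study's actual data.

```python
from statistics import mean

# Hypothetical trial log: (participant, modality, seconds, target hit?)
trials = [
    ("P01", "hands", 1.8, True),
    ("P01", "controllers", 1.2, True),
    ("P02", "hands", 2.4, False),
    ("P02", "controllers", 1.5, True),
]

for modality in ("hands", "controllers"):
    subset = [t for t in trials if t[1] == modality]
    avg_time = mean(t[2] for t in subset)                  # average completion time
    accuracy = 100 * sum(t[3] for t in subset) / len(subset)  # hit percentage
    print(f"{modality}: avg {avg_time:.2f}s, accuracy {accuracy:.0f}%")
```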
It had been a while since I had used any statistics, so much of this felt new. I calculated the standard deviation of the post-survey results to understand the diversity of opinions among the participants. I also assigned each participant a color and created a whiteboard chart of all survey results to identify any commonalities. Unfortunately, there were no notable trends.
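For the spread calculation, a minimal sketch, assuming the post-survey used Likert-scale items; the response values below are made up for illustration.

```python
from statistics import stdev

# Hypothetical 7-point Likert responses from the 17 testers to one item,
# e.g. "I preferred hand tracking over controllers."
responses = [2, 5, 6, 3, 7, 4, 5, 6, 2, 5, 6, 4, 3, 5, 7, 4, 5]

# A larger standard deviation means opinions were more divided.
print(f"Standard deviation: {stdev(responses):.2f}")
```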
Finally, I coded each qualitative interview and matched the codes with the participants' behaviors to identify common patterns. However, there were very few, if any, patterns that could be justified, so I focused on the general feedback I received about input modalities in the present and future of XR.
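For readers unfamiliar with qualitative coding, here is a minimal sketch of the tallying step, with entirely hypothetical codes and assignments.

```python
from collections import Counter

# Hypothetical codes assigned to each participant's interview.
interview_codes = {
    "P01": ["ray_confusion", "prefers_controllers"],
    "P02": ["arm_fatigue", "prefers_hands"],
    "P03": ["ray_confusion", "arm_fatigue"],
}

# Count how many participants mentioned each theme.
counts = Counter(code for codes in interview_codes.values() for code in codes)
for code, n in counts.most_common():
    print(f"{code}: {n} participant(s)")
```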