Abstract
Purpose: In “human teleoperation” [1], augmented reality
(AR) and haptics are used to tightly couple an expert leader to a human
follower. To determine the feasibility of human teleoperation, we
quantify the ability of humans to track a position and/or force
trajectory via AR cues. The human response time, precision, frequency
response, and step response were characterized, and several rendering
methods were compared.
Methods: Volunteers (n=11) performed a series of tasks as the
follower in our human teleoperation system. The tasks involved
tracking pre-recorded series of motions and forces, each time with a
different rendering method. The order of tasks and rendering methods was
ran- domized to avoid learning effects and bias. The volunteers then
performed a series of frequency response tests and filled out a
questionnaire.
Results: Rendering the full ultrasound probe as a position
target with an error bar displaying force led to the best position and
force tracking. Following force and pose simultaneously was more
difficult but did not lead to significant performance degradation versus
following one at a time. On average, subjects tracked positions,
orientations, and forces with rms tracking errors of 6.2 ± 1.9 mm, 5.9 ±
1.9°, 1.0 ± 0.3 N, steady-state errors of 2.8 ± 2.1 mm, 0.26 ± 0.2 N,
and lags of 345.5 ± 87.6 ms, respectively. Performance decreased with
increasing input frequency until, at a threshold depending on the input
amplitude, subjects could no longer follow.
Conclusion: This paper characterizes human tracking ability in
augmented reality human teleoperation, demonstrating the system's
feasibility and good performance. These results are also important for
designing future human-computer interfaces using augmented and virtual reality.