
SCIENTIA SINICA Informationis, Volume 46, Issue 12: 1711-1736 (2016). https://doi.org/10.1360/N112016-00252

A survey on human-computer interaction in virtual reality

  • Received: Oct 26, 2016
  • Accepted: Dec 3, 2016
  • Published: Dec 22, 2016

Abstract

Human-computer interaction is one of the core technologies of virtual reality and plays an important role in promoting the widespread adoption of virtual reality and in improving the user experience. Driven by advances in sensors and hardware, human-computer interaction technology in virtual reality has made remarkable progress. This paper first introduces the interaction paradigms of human-computer interaction in virtual reality, and then surveys the main research topics and development trends of interaction technologies for virtual and augmented reality, including 3D interaction, hand-gesture interaction, handheld-device interaction, speech interaction, haptic interaction, and multimodal interaction. Finally, the paper points out open problems that call for further study.


Funded by

National Natural Science Foundation of China (61572479)

National Natural Science Foundation of China (61135003)

National Key Research and Development Program of China (2016YFB1001403)

NSFC and National Research Foundation of Singapore Joint Research Project (61661146002)


