Research on the Construction of Virtual-Physical Learning Environments Driven by Multimodal Brain-Computer Interfaces
Published 2025-11-06
Abstract
Against the backdrop of the global digital transformation of education and the deepening integration of artificial intelligence, achieving precise, personalized intelligent education has become a pivotal research challenge. This study takes a multimodal brain-computer interface as its experimental apparatus, integrates extended reality with educational multi-agent technologies, and constructs a new paradigm of virtual-physical learning environments spanning neural perception, intelligent decision-making, and scenario adaptation. Specifically, multimodal electroencephalographic (EEG) data are collected and used to drive dynamic scene generation in Unity; the Coze platform is then used to access the DeepSeek-R1 large language model, yielding a dual-mode educational agent with both coaching and strategic capabilities. Immersive virtual teaching experiments conducted at pilot institutions show that the proposed environment effectively enhances students' path-planning abilities and learning efficiency.
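The abstract outlines a three-stage pipeline: EEG-based neural perception, adaptive scene generation, and an LLM-backed educational agent. As a minimal illustrative sketch of the perception stage only, and not the authors' actual implementation, the Python snippet below shows how per-band EEG power might be estimated with Welch's method and mapped to a coarse engagement label that a downstream agent prompt could consume. The sampling rate, band boundaries, channel count, and theta/beta threshold are all assumptions for illustration.

```python
# Minimal sketch (not the paper's implementation): estimate EEG band power
# with Welch's method and map it to a coarse engagement label that a
# DeepSeek-R1-backed tutoring agent could consume downstream. The sampling
# rate, band boundaries, and theta/beta threshold are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
    """Mean power per frequency band for a (channels, samples) EEG window."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(psd[..., mask].mean())
    return powers

def engagement_label(powers: dict) -> str:
    """Theta/beta ratio as a crude attention proxy (threshold is assumed)."""
    ratio = powers["theta"] / max(powers["beta"], 1e-12)
    return "low_engagement" if ratio > 2.0 else "engaged"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.standard_normal((8, FS * 4))  # 8 channels, 4 s synthetic EEG
    powers = band_powers(window)
    # A dual-mode agent could switch between coaching and strategic prompts
    # based on this label; the actual decision logic is not specified here.
    print(powers, engagement_label(powers))
```

In a real deployment, the label would presumably be produced from streamed device data rather than synthetic noise, and passed to the Unity scene controller and the Coze-hosted agent; those interfaces are beyond what the abstract specifies.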