An article by Jingsheng Tang, Yadong Liu, Jun Jiang, Yang Yu, Dewen Hu and Zongtan Zhou (National University of Defense Technology, Changsha, Hunan, People’s Republic of China), published in the International Journal of Human–Computer Interaction, Volume 35, Issue 10 (2019).
Abstract
This study presents a brain–computer interface (BCI) system aimed at providing disabled patients with a practical mobility solution.
The proposed system employs an omnidirectional chassis and a bionic robot arm to construct a multi-functional mobile platform. In addition, the system is equipped with a Kinect and 12 ultrasonic sensors to capture environmental information. Using artificial intelligence techniques, the mobile system understands its environment and completes certain tasks autonomously.
A hybrid BCI combining a movement imagery paradigm and an asynchronous P300 paradigm is designed to translate human intent into computer commands. The user interacts with the system in a flexible way: on the one hand, the user issues commands to drive the system directly; on the other hand, the system searches for predefined operable targets and reports the results to the user.
Once the user confirms a target, the system automatically completes the associated operation. To evaluate the system’s performance, a testing environment consisting of a small room, an aisle, and an elevator was built to simulate mobility tasks in a daily-life setting.
Participants were instructed to operate the mobile system in the room and the aisle, and to take the elevator to go outdoors.
In this study, four subjects participated in the test, and all of them completed the task.
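The abstract gives no implementation details, but the interaction scheme it describes (direct drive commands, target reporting, user confirmation, then autonomous execution) can be pictured as a small shared-control state machine. The Python sketch below is purely illustrative: the bci, perception and robot interfaces and all of their method names are hypothetical placeholders, not the authors' code.

```python
from enum import Enum, auto

class Mode(Enum):
    DIRECT_CONTROL = auto()   # user steers the chassis with movement-imagery commands
    TARGET_CONFIRM = auto()   # an operable target was detected; wait for the user's confirmation
    AUTONOMOUS = auto()       # execute the operation associated with the confirmed target

def shared_control_loop(bci, perception, robot):
    """Mode-switching logic of the shared-control scheme sketched in the abstract.

    bci, perception and robot are hypothetical interfaces standing in for the
    EEG decoder, the Kinect/ultrasonic perception module and the mobile platform.
    """
    mode = Mode.DIRECT_CONTROL
    target = None
    while True:
        if mode is Mode.DIRECT_CONTROL:
            cmd = bci.read_movement_imagery()           # e.g. 'forward', 'left', 'right' or None
            if cmd is not None:
                robot.drive(cmd)                        # direct drive command issued by the user
            target = perception.find_operable_target()  # scan for predefined targets (door, elevator, ...)
            if target is not None:
                bci.report_target(target)               # present the candidate target to the user
                mode = Mode.TARGET_CONFIRM
        elif mode is Mode.TARGET_CONFIRM:
            choice = bci.read_p300_selection()          # asynchronous P300 selection: 'confirm', 'reject' or None
            if choice == 'confirm':
                mode = Mode.AUTONOMOUS
            elif choice == 'reject':
                mode = Mode.DIRECT_CONTROL              # return to direct control
        elif mode is Mode.AUTONOMOUS:
            robot.execute(target)                       # e.g. approach and operate the elevator
            mode = Mode.DIRECT_CONTROL                  # hand control back to the user when done
```

In the actual system the P300 stage runs asynchronously and the autonomous stage would invoke navigation and arm-control routines (for example, pressing an elevator button); the sketch only captures the mode-switching logic.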