Cognitive assistance for persons with disabilities
Cristina Urdiales
Nowadays, due to progressive ageing in developed societies and improvements in health care, there is an increasing number of persons with physical and/or cognitive disabilities who, in the worst case, become dependent. Unfortunately, human and economic resources to support this population sector are quite limited; hence, these people lose quality of life and, ultimately, need to be institutionalized. In order to avoid the related personal, social and economic problems, there has been a major effort in FP6 and FP7 to use Information and Communication Technology (ICT) to assist people with special needs. Specifically, it has been reported that loss of mobility is tightly coupled with loss of quality of life and with dependency. Indeed, most assistive devices at home are basically focused on mobility assistance.

Conventional wheelchairs are the most common tool to assist people with reduced mobility. However, an important group of these people is not able to use them. Furthermore, specific conditions, e.g. post-stroke apraxia, make it impossible for some users to cope even with power wheelchairs, which require only minimal physical effort. In these cases, it has been suggested to robotize power wheelchairs so that they can help persons remain autonomous. In our case, we have robotized a Meyra Runner wheelchair by adding encoders, an external joystick, a Hokuyo laser and an industrial PC.

A robotic wheelchair is basically an autonomous robot that provides a suitable interface to be guided by a user. In fact, it could reach any target provided by the user on its own, like any autonomous mobile robot. However, it is necessary to keep users as active as possible to avoid loss of residual skills, as reported by rehabilitation professionals, and also to increase his/her self-confidence and sense of control over his/her actions. Thus, most robotic wheelchairs rely on the so-called shared control paradigm, where user and robot make decisions together. Most shared control systems are based on swapping control from user to robot and vice versa, either when a triggering condition is detected (e.g. door crossing, narrow corridor) or when the user chooses to do so. The main drawback of these systems is that the user never copes with situations he/she finds complex, so he/she could end up losing any residual skills related to those situations. Besides, control switching provokes discontinuities that may cause problems for navigation algorithms. This work focuses on a new shared control approach that solves these problems.
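To make the control-switching scheme criticized above more concrete, the following is a minimal sketch of how such a trigger-based switch typically operates; it is not the system described in this work, and all names (detect_trigger, switched_control, min_clearance) are illustrative assumptions rather than part of the actual wheelchair software.

```python
def detect_trigger(laser_scan, min_clearance=0.6):
    """Hypothetical trigger: hand control to the robot when the range
    sensor (e.g. a laser scan, in metres) reports less free space than
    min_clearance around the chair, as in narrow corridors or doorways."""
    return min(laser_scan) < min_clearance


def switched_control(user_cmd, robot_cmd, laser_scan, user_override=False):
    """Classic switching shared control: at any instant, either the user
    or the robot is in full command.  Commands are (v, w) tuples of
    linear and angular velocity.  Note the abrupt jump in the emitted
    command when the trigger fires -- the discontinuity noted above."""
    if user_override or not detect_trigger(laser_scan):
        return user_cmd   # user drives as long as no trigger is active
    return robot_cmd      # robot takes over completely


# Example: in open space the user command passes through unchanged,
# but near an obstacle the output switches entirely to the robot.
print(switched_control((0.5, 0.0), (0.1, 0.3), laser_scan=[2.0, 1.8, 2.5]))
print(switched_control((0.5, 0.0), (0.1, 0.3), laser_scan=[0.4, 1.8, 2.5]))
```

Because the output comes entirely from one source or the other, the user is excluded precisely in the situations he/she finds difficult, and the command stream exhibits the discontinuities that motivate the shared control approach proposed in this work.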