

Poster in Workshop: Interactive Learning with Implicit Human Feedback

Improving Bionic Limb Control through Reinforcement Learning in an Interactive Game Environment

Kilian Freitag · Rita Laezza · Jan Zbinden · Max Ortiz-Catalan


Abstract:

Enhancing the accuracy and robustness of bionic limb controllers that decode motor intent is a pressing challenge in the field of prosthetics. State-of-the-art research has mostly focused on Supervised Learning techniques to tackle this problem. However, obtaining high-quality labeled data that accurately represents muscle activity during daily usage remains difficult. In this work, we investigate the potential of Reinforcement Learning to further improve the decoding of human motion intent by incorporating usage-based data. We propose a new method that starts with a control policy pretrained on a static recording of electromyographic (EMG) ground truth data. We then fine-tune the pretrained classifier with dynamic EMG data obtained during interaction with a game environment developed for this work. We evaluate our approach in real-time experiments, showing substantial improvements in human-in-the-loop performance. The method proves more effective in predicting simultaneous finger movements, doubling the decoding accuracy both during gameplay and in a separate motion test.
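To illustrate the two-stage idea described above, the following is a minimal sketch, not the authors' implementation: a small intent classifier is first pretrained with supervised learning on labeled EMG windows, then fine-tuned with a simple policy-gradient (REINFORCE-style) update using rewards obtained during game interaction. The network architecture, EMG dimensions, reward definition, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch: supervised pretraining on static EMG recordings,
# followed by policy-gradient fine-tuning from game-interaction rewards.
# All sizes, rewards, and hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHANNELS, WINDOW, N_CLASSES = 8, 200, 6  # assumed EMG setup

class IntentClassifier(nn.Module):
    """Maps a flattened EMG window to logits over motion classes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CHANNELS * WINDOW, 128), nn.ReLU(),
            nn.Linear(128, N_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

def pretrain(model, emg, labels, epochs=10, lr=1e-3):
    """Supervised pretraining on statically recorded, labeled EMG windows."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(emg), labels)
        loss.backward()
        opt.step()

def finetune_rl(model, get_emg, get_reward, steps=1000, lr=1e-4):
    """REINFORCE-style fine-tuning: sample a predicted motion, apply it in
    the game, and reinforce predictions that earn positive reward."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        emg = get_emg()                      # dynamic EMG during gameplay
        dist = torch.distributions.Categorical(logits=model(emg))
        action = dist.sample()
        reward = get_reward(action.item())   # e.g. success signal from the game
        loss = -(dist.log_prob(action) * reward).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Toy usage with synthetic data standing in for a recording session and game.
model = IntentClassifier()
emg = torch.randn(256, N_CHANNELS * WINDOW)
labels = torch.randint(0, N_CLASSES, (256,))
pretrain(model, emg, labels)
finetune_rl(model,
            get_emg=lambda: torch.randn(1, N_CHANNELS * WINDOW),
            get_reward=lambda a: 1.0 if a == 0 else -0.1)
```

The design intent of such a split is that the supervised stage provides a reasonable initial policy from clean labeled data, while the RL stage adapts it to the dynamic, unlabeled EMG signals that arise during actual use.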
