Abstract / Description of output
The loss of a hand profoundly affects an individual's quality of life. Prosthetic hands can provide a route to functional rehabilitation by allowing amputees to undertake their daily activities. However, the performance of current artificial hands falls well short of the dexterity that natural hands offer. The aim of this study is to test whether an intelligent vision system could be used to enhance the grip functionality of prosthetic hands. To this end, a convolutional neural network (CNN) deep learning architecture was implemented to classify the objects in the COIL-100 database into four basic grasp groups: tripod, pinch, palmar, and palmar with wrist rotation. Our preliminary, yet promising, results suggest that the additional machine vision system can give prosthetic hands the ability to detect objects and propose an appropriate grasp to the user.
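As a rough illustration of the approach the abstract describes, the sketch below shows a tiny CNN-style forward pass that maps an object image to one of the four grasp classes. The layer sizes, kernel shape, and input resolution are illustrative assumptions, not the architecture from the paper, and the weights here are random (untrained), so the prediction is only a demonstration of the pipeline shape.

```python
import numpy as np

# The four grasp groups named in the abstract.
GRASPS = ["tripod", "pinch", "palmar", "palmar_wrist_rotation"]

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_grasp(image):
    """Forward pass: conv -> ReLU -> max pool -> fully connected -> softmax."""
    kernel = rng.standard_normal((3, 3)) * 0.1     # untrained conv weights
    feat = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation
    pooled = max_pool(feat).ravel()
    w = rng.standard_normal((len(GRASPS), pooled.size)) * 0.01  # untrained FC weights
    probs = softmax(w @ pooled)
    return GRASPS[int(np.argmax(probs))], probs

# Stand-in for a 32x32 grayscale COIL-100-style object image.
grasp, probs = predict_grasp(rng.random((32, 32)))
```

In a trained system the convolutional and fully connected weights would of course be learned from labelled COIL-100 images rather than drawn at random; the point here is only the object-image-in, grasp-class-out structure.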
Original language | English |
---|---|
Title of host publication | 2nd IET International Conference on Intelligent Signal Processing 2015 (ISP) |
Publisher | IET |
Pages | 1-5 |
Number of pages | 5 |
DOIs | |
Publication status | Published - 2 Dec 2015 |
Event | 2nd IET International Conference on Intelligent Signal Processing, London, United Kingdom. Duration: 1 Dec 2015 → 2 Dec 2015 |
Conference
Conference | 2nd IET International Conference on Intelligent Signal Processing |
---|---|
Abbreviated title | ISP 2015 |
Country/Territory | United Kingdom |
City | London |
Period | 1/12/15 → 2/12/15 |