Thursday 31 January 2013

Another paper resulting from the collaboration


Here is another paper to be presented soon. It is a joint paper from this project.

C. Assad, M. Wolf, T. Theodoridis, K. Glette and A. Stoica, "BioSleeve: a Natural EMG-Based Interface for HRI", 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013), Mar. 3-6, 2013.

Theo Theodoridis's work at JPL

Theo Theodoridis is an RA on the project working with Huosheng Hu. He has spent four months at JPL since last autumn, where he has been involved in a wide variety of projects in diverse areas of robotics, such as robot control, visual guidance, pattern recognition, and human-robot interfaces. The projects are listed below.

Project #1: Multimodal (Voice, EMG, IMU) interfaces for the coordination of diverse robots using a custom made multi-sensor apparatus.

Sensor(s):
Multimodal apparatus including voice, electromyographic (EMG), inertial (IMU), and joystick sensors.

Robot(s):
ActivMedia Pioneer 3, Brookstone AC13 rover.

Description:
Understanding human behaviours is crucially important for developing Human-Robot Interfaces (HRIs) in scenarios related to human assistance, such as object handling and transportation, tooling, and safety control in remote areas. In this project we demonstrate the control of diverse robots using a multimodal (multi-sensor fusion) architecture for high-level human-robot control. The purpose of using such interfaces is to improve the operator's flexibility, reliability, and robustness in commanding, collaborating with, coordinating, and controlling mobile robots. For the experiments we used a custom multi-sensor apparatus integrating voice, electromyographic, and inertial sensors.
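To give a flavour of the fusion step, here is a minimal sketch of confidence-weighted arbitration between command channels. The modalities, commands, and threshold are hypothetical illustrations, not the project's actual architecture.

```python
# Minimal sketch of confidence-weighted fusion of multimodal commands.
# Modalities, commands, and the threshold are illustrative only.
from collections import defaultdict

def fuse_commands(readings, threshold=0.5):
    """readings: (modality, command, confidence) tuples, one per channel.
    Returns the command with the highest summed confidence, or None."""
    scores = defaultdict(float)
    for _modality, command, confidence in readings:
        scores[command] += confidence
    best = max(scores, key=scores.get, default=None)
    if best is None or scores[best] < threshold:
        return None  # fall back to a safe stop when evidence is weak
    return best

print(fuse_commands([("voice", "forward", 0.9),
                     ("emg", "forward", 0.7),
                     ("imu", "left", 0.4)]))  # -> forward
```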


Project #2: Using monocular stereo vision for mobile robot navigation, planning, and target tracking with Markov models.

Sensor(s):
Pinhole 320x240 colour camera.

Robot(s):
Brookstone AC13 rover.

Description:
Robots with limited yet efficient sensors, such as colour cameras, can be good test beds for navigation and tracking. The Brookstone AC13 rover, which carries only a single colour camera, makes visual guidance and tracking a challenging task. In this project we solve five problems: (a) data fusion using a fuzzy TSK model, (b) visual obstacle avoidance using a monocular five-region correlation-based stereo vision algorithm, (c) short-term planning using a four-state Markov model, (d) visual object tracking using the blob colouring algorithm, and (e) way-point navigation using a velocity model. The purpose of this project is to obtain a fully autonomous mobile robot using limited sensor resources. Future applications will focus on clearance scenarios, in which small rovers clear an environment of rocks, rubbish, etc.
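As an illustration of the four-state Markov model in (c), here is a minimal sketch; the behaviour states and transition probabilities are invented for the example, not taken from the project.

```python
# Minimal sketch of a 4-state Markov model for short-term planning.
# States and transition probabilities are invented for illustration.
import random

STATES = ["wander", "avoid", "track", "navigate"]

# TRANSITIONS[s] is P(next state | current state s); each row sums to 1.
TRANSITIONS = {
    "wander":   [0.6, 0.2, 0.1, 0.1],
    "avoid":    [0.3, 0.5, 0.1, 0.1],
    "track":    [0.1, 0.2, 0.6, 0.1],
    "navigate": [0.1, 0.2, 0.1, 0.6],
}

def next_state(current):
    """Sample the rover's next behaviour given the current one."""
    return random.choices(STATES, weights=TRANSITIONS[current])[0]

state = "wander"
for _ in range(5):
    state = next_state(state)
    print(state)
```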



Project #3: Stereo visual odometry for UAV inertial navigation, including terrain classification and image stitching.

Sensor(s):
Pinhole 320x240 down-looking colour camera.

Robot(s):
Parrot AR.Drone 2.0 (quad-rotor) UAV.

Description:
Inertial navigation of UAVs is a demanding problem, as it involves multidisciplinary architectures dealing with stability for hovering and with navigation in a 3D coordinate system. In this project we cover and solve a wide range of problems centred mainly on the UAV's aerial perception using computer vision. The tasks undertaken are (a) path building: aerial image stitching, (b) localisation: stereo visual odometry, and (c) recognition: 2D terrain classification. The purpose here is to provide aerial vehicles with comprehensive perception capabilities, such as being able to recognise a terrain, localise themselves, and build a path-mosaic.
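As a rough sketch of the localisation task in (b), frame-to-frame visual odometry with OpenCV can be set up as below. The camera matrix K is assumed to come from calibration, and this is a generic textbook pipeline rather than the project's actual implementation.

```python
# Minimal frame-to-frame visual odometry sketch using OpenCV.
# Assumes a calibrated camera matrix K; a generic pipeline, not the
# project's actual implementation.
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and unit-scale translation t between frames."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # translation scale is unobservable from one camera
```

With a down-looking camera, the unknown translation scale can in principle be fixed from the UAV's altitude estimate.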


Project #4: Sparse-PCA analysis and implementation on high-dimensional datasets for dimensionality reduction, classification, and rover robot control.

Sensor(s):
JPL's 16-channel EMG BioSleeve.

Robot(s):
Brookstone AC13 rover.

Description:
Teleoperated reactive control is the implementation of a consistent mapping from directed sensor input commands (the operator's wearable device) to multiple control outputs (the robot). JPL's BioSleeve was one of the main sensors used for the classification of finger- and hand-based gestures, directed at controlling tracked rovers. This involved an ensemble of machine-learning classifiers, as well as a dimensionality-reduction method (Sparse-PCA) applied in a pre-classification phase. The purpose of using Sparse-PCA is to reduce the hardware resources (EMG channels) to the minimum possible while sustaining high classification accuracy for quality control.
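For readers unfamiliar with Sparse-PCA, a minimal sketch of the pre-classification step using scikit-learn might look like this; the data are synthetic stand-ins for the 16-channel EMG recordings and the parameter choices are illustrative only.

```python
# Minimal Sparse-PCA sketch for EMG channel reduction (synthetic data).
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16))  # 500 windows x 16 EMG channels

spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
Z = spca.fit_transform(X)           # reduced features for the classifier

# Components with many zero loadings reveal which channels matter least;
# a channel with zero loading on every component could be dropped.
unused = np.where(np.all(spca.components_ == 0, axis=0))[0]
print("channels with zero loading on every component:", unused)
```

Unlike ordinary PCA, the sparse loadings tie each component to a small subset of channels, which is what makes the hardware reduction possible.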


Project #5: A gesture-based BNF grammar for the synthesis of sign expressions for classification and robot control.

Sensor(s):
JPL's 16-channel EMG BioSleeve.

Robot(s):
2D multi-robot simulator (custom made).

Description:
Beyond single-gesture classification for robot control, an alternative approach is the structuring of a gesture language that consists of hand signs composing gesture syntaxes. Such syntaxes of gestures (expressions) are first classified and then composed into sentences using a gesture grammar interpreter (a BNF scheme). These gesture sequences are used for commanding robot teams, from which the human operator can select an individual robot, a team, or the whole group of robots. This method suggests an entirely new and innovative way of using syntaxes of gestures for robot control, and instigates a whole new field of "quiet communication". Relevant applications include assisting people with disabilities as well as military use.
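As a toy illustration of the idea, consider an invented two-rule grammar, <sentence> ::= <select> <action>, in which the first gesture names the addressee and the second the behaviour. The gesture vocabulary below is hypothetical, not the project's BNF scheme.

```python
# Toy gesture-grammar interpreter: <sentence> ::= <select> <action>
# The grammar and gesture vocabulary are invented for illustration.
SELECT = {"point_one": "robot-1", "sweep": "team-A", "circle": "all"}
ACTION = {"fist": "stop", "flat": "go", "two_fingers": "follow"}

def parse(gestures):
    """Interpret a classified gesture sequence as (addressee, command)."""
    if len(gestures) != 2:
        raise ValueError("expected <select> <action>")
    sel, act = gestures
    if sel not in SELECT or act not in ACTION:
        raise ValueError("gesture sequence does not match the grammar")
    return SELECT[sel], ACTION[act]

print(parse(["sweep", "flat"]))  # -> ("team-A", "go")
```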


Project #6: Advanced EMG classification and signal processing for proportional (reactive) control of prosthetic (multi-DOF) robotic manipulators.

Sensor(s):
128-channel wearable EMG apparatus for amputees.

Robot(s):
7-DOF robotic manipulator for prosthesis (custom made).

Description:
While reactive control over a small set of input variables is considered a primitive method for driving or controlling robots, it is nevertheless a significantly practical approach in prosthesis control. In this project we have attempted not only to classify an enormous dataset, but also to test and reveal the importance of using powerful classifier systems for delicate tasks such as prosthesis. A DARPA dataset from RIC Chicago, recorded from amputees' chests, was used for controlling prosthetic robot manipulators. The dataset consists of 128 variables (mono-polar EMG sensors), 19 classes, and some tens of thousands of instances per class. The purpose of carrying out this research is to aid the disabled, such as amputees and paraplegics, allowing them to gain some degree of independence in their daily activities.
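As a rough sketch of the classification stage (a single linear classifier rather than the project's actual ensemble), training on such high-dimensional EMG data with scikit-learn could look like this; the data here are synthetic stand-ins for the 128-channel recordings.

```python
# Minimal sketch of high-dimensional EMG classification (synthetic data
# standing in for the 128-channel, 19-class recordings).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 128))   # feature windows x EMG channels
y = rng.integers(0, 19, size=5000)     # 19 gesture/motion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```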

Wednesday 16 January 2013

Paper accepted for presentation at the SPIE Defense, Security and Sensing Conference

We have had another paper accepted for presentation at an international conference, the SPIE Defense, Security and Sensing Conference. The paper is entitled "Multi-brain fusion and applications to intelligence analysis". It is co-authored by Adrian Stoica, Riccardo Poli and colleagues at both JPL and Essex.
Here is the abstract:
In rapid serial visual presentation (RSVP), images shown extremely rapidly can still be parsed by the visual system, and the detection of specific targets triggers a specific EEG response. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has proven speed-ups in sifting through satellite images when the EEG signals of an intelligence analyst act as triggers. This paper extends the use of neurotechnology from individual analysts to collaborative teams; it presents concepts of collaborative brain-computer interfaces and experiments indicating that the aggregation of information in EEGs collected from multiple users results in performance improvements compared to that of individual users.
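As a toy illustration of why aggregating across users can help (not the paper's actual method), averaging noisy per-user detector scores suppresses the noise and improves target detection:

```python
# Toy illustration: averaging per-user detector scores beats one user.
# Synthetic scores only; the paper aggregates information from real EEGs.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_users = 1000, 5
targets = rng.integers(0, 2, size=n_trials)   # 1 = target image shown
# Each user's EEG-derived score: weak signal plus heavy per-user noise.
scores = targets[:, None] + rng.normal(0, 2.0, size=(n_trials, n_users))

single = ((scores[:, 0] > 0.5) == targets).mean()
fused = ((scores.mean(axis=1) > 0.5) == targets).mean()
print(f"single-user accuracy: {single:.2f}, "
      f"fused over {n_users} users: {fused:.2f}")
```

This only illustrates the averaging intuition; the experiments in the paper aggregate information from the users' EEG signals themselves.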