Thursday, 31 January 2013

Theo Theodoridis' work at JPL

Theo Theodoridis is an RA on the project working with Huosheng Hu. He has spent four months at JPL since the autumn of last year. He has been involved in a wide variety of projects in diverse areas of robotics, such as robot control, visual guidance, pattern recognition, and human-robot interfaces. The projects are listed below.

Project #1: Multimodal (Voice, EMG, IMU) interfaces for the coordination of diverse robots using a custom-made multi-sensor apparatus.

Sensor(s):
Multimodal apparatus including voice, electromyographic (EMG), inertial (IMU), and joystick sensors.

Robot(s):
Activmedia Pioneer 3, Brookstone AC13 rover.

Description:
Understanding human behaviours is crucially important for developing Human-Robot Interfaces (HRIs) in scenarios related to human assistance, such as object handling and transportation, tooling, and safety control in remote areas. In this project we demonstrate the control of diverse robots using a multimodal (multi-sensor fusion) architecture for high-level human-robot control. The purpose of such interfaces is to improve the operator's flexibility, reliability, and robustness when commanding, coordinating, and controlling mobile robots. For the experiments we used a custom multi-sensor apparatus integrating voice, electromyographic, and inertial sensors.
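
A rough illustration of the decision-level fusion involved is sketched below: per-modality confidence scores over a small command set are combined into a single robot command by weighted voting. The command set, fusion weights, and confidence values are illustrative assumptions, not the project's actual configuration.

```python
# Decision-level fusion sketch: each modality (voice, EMG, IMU) is assumed to
# output a confidence score per command; a weighted sum picks the winner.
import numpy as np

COMMANDS = ["forward", "backward", "left", "right", "stop"]

def fuse(voice_conf, emg_conf, imu_conf, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of per-modality confidence vectors; returns the winning command."""
    stacked = np.vstack([voice_conf, emg_conf, imu_conf])
    fused = np.asarray(weights) @ stacked
    return COMMANDS[int(np.argmax(fused))], fused

# Example: voice strongly suggests "forward"; EMG and IMU weakly agree.
voice = np.array([0.70, 0.05, 0.10, 0.10, 0.05])
emg   = np.array([0.40, 0.10, 0.20, 0.20, 0.10])
imu   = np.array([0.50, 0.10, 0.15, 0.15, 0.10])
command, scores = fuse(voice, emg, imu)
print(command, scores)
```

Weighting the modalities in this way lets a noisy channel (for example EMG during vigorous movement) be down-weighted without being discarded.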


Project #2: Using monocular stereo vision for mobile robot navigation, planning, and target tracking with Markov models.

Sensor(s):
Pinhole 320x240 colour camera.

Robot(s):
Brookstone AC13 rover.

Description:
Robots with limited yet efficient sensors, such as colour cameras, can be good test beds for navigation and tracking. Using the Brookstone AC13 rover, which carries a single colour camera, posed a challenging task for visual guidance and tracking. In this project we solve five problems: (a) data fusion using a fuzzy TSK model, (b) visual obstacle avoidance using a monocular five-region correlation-based stereo vision algorithm, (c) short-term planning using a four-state Markov model, (d) visual object tracking using the blob colouring algorithm, and (e) way-point navigation using a velocity model. The purpose of this project is to achieve a fully autonomous mobile robot with limited sensor resources. Future applications will focus on clearance scenarios in which small rovers clear an environment of rocks, rubbish, and other debris.
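
The four-state Markov model used for short-term planning can be pictured with a small sketch like the one below. The state names and transition probabilities are assumptions chosen for illustration, not the values used on the rover.

```python
# Short-term planning sketch: sample a behaviour sequence from a 4-state
# Markov chain (CRUISE, AVOID, TRACK, WAYPOINT are illustrative state names).
import numpy as np

STATES = ["CRUISE", "AVOID", "TRACK", "WAYPOINT"]

# Row i holds the probabilities of moving from state i to every state.
T = np.array([
    [0.70, 0.15, 0.10, 0.05],   # CRUISE
    [0.50, 0.40, 0.05, 0.05],   # AVOID
    [0.20, 0.10, 0.65, 0.05],   # TRACK
    [0.30, 0.10, 0.10, 0.50],   # WAYPOINT
])

rng = np.random.default_rng(0)
state = 0                                  # start in CRUISE
plan = []
for _ in range(10):                        # a short-term plan of 10 steps
    state = rng.choice(len(STATES), p=T[state])
    plan.append(STATES[state])
print(plan)
```

In practice the transition probabilities would be conditioned on the vision outputs (obstacles, tracked targets, way-point status) rather than fixed.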



Project #3: Stereo visual odometry for UAV inertial navigation, including terrain classification and image stitching.

Sensor(s):
Pinhole 320x240 down-looking colour camera.

Robot(s):
Parrot AR.Drone 2.0 (quad-rotor) UAV.

Description:
Inertial navigation of UAVs is a demanding problem, as it involves multidisciplinary architectures dealing with stability for hovering and navigation in a 3D coordinate system. In this project, we cover and solve a wide range of problems, targeting mainly the UAV's aerial perception using computer vision. The tasks undertaken are (a) path building: aerial image stitching, (b) localisation: stereo visual odometry, and (c) recognition: 2D terrain classification. The purpose is to provide aerial vehicles with comprehensive perception capabilities, such as being able to recognise a terrain, localise, and build a path-mosaic.
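
A minimal sketch of the path-building (image-stitching) step is given below, assuming OpenCV (cv2) is available and using two overlapping down-looking frames; the file names are placeholders, and a full pipeline would chain many frames and blend the seams.

```python
# Pairwise aerial image stitching: ORB features + RANSAC homography + warp.
import cv2
import numpy as np

img1 = cv2.imread("frame_000.png")   # placeholder file names
img2 = cv2.imread("frame_001.png")

# Detect and describe features in both frames.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# Homography mapping frame 2 into frame 1, then warp onto a wider canvas.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
mosaic = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
mosaic[:img1.shape[0], :img1.shape[1]] = img1
cv2.imwrite("mosaic.png", mosaic)
```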


Project #4: Sparse-PCA analysis and implementation on high-dimensional datasets for dimensionality reduction, classification, and rover robot control.

Sensor(s):
JPL's 16-channel EMG Biosleeve.

Robot(s):
Brookstone AC13 rover.

Description:
Teleoperated reactive control is the implementation of a consistent mapping from directed sensor input commands (the operator's wearable device) to multiple control outputs (the robot). JPL's Biosleeve was one of the main sensors used to classify finger- and hand-based gestures for controlling tractor-based rovers. This involved an ensemble of machine learning classifiers, as well as a dimensionality reduction method (Sparse-PCA) applied in a pre-classification phase. The purpose of using Sparse-PCA is to reduce the hardware resources (EMG channels) to the minimum possible while sustaining high classification accuracy for quality control.
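
A minimal sketch of the pre-classification step is shown below, using scikit-learn's SparsePCA on a synthetic stand-in for the 16-channel EMG feature matrix; the component count, regularisation strength, and downstream classifier are illustrative assumptions rather than the project's actual settings.

```python
# Sparse PCA sketch: sparse loadings reveal which EMG channels matter,
# which is what allows the channel count to be reduced before classification.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y = rng.integers(0, 5, size=600)            # 5 gesture classes (synthetic)
X = rng.normal(size=(600, 16))              # 600 windows x 16 EMG channels (synthetic)
X[:, :3] += y[:, None]                      # make the first 3 channels informative

spca = SparsePCA(n_components=4, alpha=0.5, random_state=0)
X_red = spca.fit_transform(X)
active_channels = np.unique(np.nonzero(spca.components_)[1])
print("Channels retained by the sparse components:", active_channels)

X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("Held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```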


Project #5: A gesture-based BNF grammar in the synthesis of sign expressions for classification and robot control.

Sensor(s):
JPL's 16-channel EMG Biosleeve.

Robot(s):
2D multi-robot simulator (custom made).

Description:
Beyond single-gesture classification for robot control, an alternative approach is to structure a gesture language that consists of hand signs composing gesture syntaxes. Such syntaxes of gestures (expressions) are first classified and then composed into sentences using a gesture grammar interpreter (a BNF scheme). These gesture sequences are used for commanding robot teams, from which the human operator can select an individual robot, a team, or the whole group of robots. This method suggests an entirely new way of using syntaxes of gestures for robot control and instigates a whole new field of "quiet communication". Relevant applications include assisting people with disabilities as well as military use.
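
A toy version of the idea is sketched below: a hand-written parser for a two-level gesture grammar, assuming each incoming gesture has already been classified into a token. The grammar, token names, and command set are illustrative assumptions, not the project's actual language.

```python
# Gesture-sentence parsing sketch for an illustrative BNF grammar:
#   <sentence>  ::= <selector> <action>
#   <selector>  ::= "point" <robot-id> | "sweep"      (one robot vs the whole team)
#   <action>    ::= "go" <direction> | "stop"
#   <robot-id>  ::= "one" | "two" | "three"
#   <direction> ::= "left" | "right" | "forward"

def parse_sentence(tokens):
    """Return a (target, command) pair, or raise ValueError if the sequence is invalid."""
    it = iter(tokens)
    tok = next(it)
    if tok == "point":
        target = next(it)
        if target not in {"one", "two", "three"}:
            raise ValueError(f"unknown robot id: {target}")
    elif tok == "sweep":
        target = "all"
    else:
        raise ValueError(f"expected a selector, got: {tok}")

    tok = next(it)
    if tok == "go":
        direction = next(it)
        if direction not in {"left", "right", "forward"}:
            raise ValueError(f"unknown direction: {direction}")
        command = ("go", direction)
    elif tok == "stop":
        command = ("stop",)
    else:
        raise ValueError(f"expected an action, got: {tok}")
    return target, command

print(parse_sentence(["point", "two", "go", "left"]))   # ('two', ('go', 'left'))
print(parse_sentence(["sweep", "stop"]))                # ('all', ('stop',))
```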


Project #6: Advanced EMG classification and signal processing for proportional (reactive) control of prosthetic (multi-DOF) robotic manipulators.

Sensor(s):
128-channel wearable EMG apparatus for amputees.

Robot(s):
7-DOF robotic manipulator for prosthesis (custom made).

Description:
While reactive control over a small set of input variables is considered a primitive method for driving or controlling robots, it is nevertheless a highly practical approach in prosthesis control. In this project, we attempt not only to classify an enormous dataset, but also to test and demonstrate the importance of using powerful classifier systems for delicate tasks such as prosthesis control. A DARPA dataset from RIC Chicago, recorded from amputees' chests, was used for controlling prosthetic robot manipulators. The dataset consists of 128 variables (mono-polar EMG sensors), 19 classes, and around ten thousand instances per class. The purpose of carrying out this research is to aid disabled people such as amputees and paraplegics, allowing them to gain some degree of independence in their daily activities.
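
A minimal sketch of an ensemble classifier over a high-dimensional EMG feature matrix is given below, using synthetic data as a stand-in for the 128-channel, 19-class amputee dataset; the classifier choices and data shapes are illustrative assumptions.

```python
# Ensemble classification sketch: soft-voting over three base classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y = rng.integers(0, 19, size=2000)          # 19 movement classes (synthetic)
X = rng.normal(size=(2000, 128))            # windows x 128 EMG channels (synthetic)
X[:, :8] += 0.3 * y[:, None]                # give a few channels class-dependent structure

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",                           # average predicted class probabilities
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ensemble.fit(X_tr, y_tr)
print("Held-out accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```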

Wednesday, 16 January 2013

Paper accepted for presentation at the SPIE Defense, Security and Sensing Conference

We had another paper accepted for presentation at an international conference (the SPIE Defense, Security and Sensing Conference). The paper is entitled "Multi-brain fusion and applications to intelligence analysis". It is co-authored by Adrian Stoica, Riccardo Poli and other people at both JPL and Essex.
Here is the abstract:
In rapid serial visual presentation (RSVP), images shown extremely rapidly can still be parsed by the visual system, and the detection of specific targets triggers a specific EEG response. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has proven speed-ups in sifting through satellite images when EEG signals of an intelligence analyst act as triggers. This paper extends the use of neurotechnology from individual analysts to collaborative teams; it presents concepts of collaborative brain-computer interfaces and experiments indicating that the aggregation of information in EEGs collected from multiple users results in performance improvements compared to that of individual users.

Wednesday, 19 December 2012

Paper accepted for Intelligent User Interfaces conference


Our paper entitled "Towards Cooperative Brain-Computer Interfaces for Space Navigation" has been accepted for presentation at ACM's Intelligent User Interfaces (IUI) 2013 conference. The review process was extremely selective, with only about 20% of submissions being accepted for presentation. The paper is co-authored by Riccardo Poli, Caterina Cinel, Ana Matran-Fernandez, Francisco Sepulveda and Adrian Stoica.

Here is the abstract of the paper:


We explored the possibility of controlling a spacecraft simulator using an analogue Brain-Computer Interface (BCI) for 2-D pointer control. This is a difficult task, for which no previous attempt has been reported in the literature. Our system relies on an active display which produces event-related potentials (ERPs) in the user’s brain. These are analysed in real-time to produce control vectors for the user interface. In tests, users of the simulator were told to pass as close as possible to the Sun. Performance was very promising, with users on average managing to satisfy the simulation success criterion in 67.5% of the runs. Furthermore, to study the potential of a collaborative approach to spacecraft navigation, we developed BCIs where the system is controlled via the integration of the ERPs of two users. Performance analysis indicates that collaborative BCIs produce trajectories that are statistically significantly superior to those obtained by single users.

Monday, 17 December 2012

Paper accepted for oral presentation at CogSIMA conference


Our paper entitled "Improving Decision-making based on Visual Perception via a Collaborative Brain-Computer Interface" has been accepted for oral presentation at the 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA). The paper is co-authored by Riccardo Poli, Caterina Cinel, Francisco Sepulveda and Adrian Stoica.

Here is the abstract of the paper:


In the presence of complex stimuli, in the absence of sufficient time to complete the visual parsing of a scene, or when attention is divided, an observer can only take in a subset of the features of a scene, potentially leading to poor decisions. In this paper we look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better decision making. Our approach involves the combination of brain-computer interface (BCI)
technology with human behavioural responses. To test our ideas in controlled conditions, we asked observers to perform a simple visual matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same
or different. Visual stimuli were presented for insufficient time for the observers to be certain of the decision. The degree of difficulty of the task also depended on the number of matching features between the two patterns. The higher the number, the more difficult the task. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines these behavioural and neural measures. For group decisions, we tested the use of a majority rule and three further decision rules which weigh the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with individual performance. Also, within groups of each size, decision rules based on such features outperform the majority rule.
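
As a rough illustration of the kind of weighted group decision rule described in the abstract, the sketch below combines binary decisions from several observers using per-observer reliability weights; the weights here are made-up numbers standing in for the response-time and neural features.

```python
# Weighted group decision sketch: +1 = "same", -1 = "different".
import numpy as np

def group_decision(decisions, weights):
    """decisions: +1/-1 per observer; weights: non-negative reliability scores."""
    return 1 if np.dot(decisions, weights) >= 0 else -1

# Two observers say "same", one says "different", but the dissenter carries
# the highest confidence weight, so the group decision follows the dissenter.
decisions = np.array([+1, +1, -1])
weights   = np.array([0.2, 0.3, 0.9])
print(group_decision(decisions, weights))   # -1
```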

Friday, 14 December 2012

Recent papers published within the project


We have published a number of papers under the support from this grant in the last few months. Here is a list: 

[1].   J. Cannan and H. Hu, A Multi-Sensor Armband based on Muscle and Motion Measurements, Proc. of IEEE Int. Conf. on Robotics and Biomimetics, Guangzhou, China, 11-14 December 2012, pages 1098-1103.
[2].   S. Wang, L. Chen, H. Hu and K. McDonald-Maier, Doorway Passing of an Intelligent Wheelchair by Dynamically Generating Bézier Curve Trajectory, Proc. of IEEE Int. Conf. on Robotics and Biomimetics, Guangzhou, China, 11-14 December 2012, pages 1206-1211.
[3].   E.J. Rechy-Ramirez, H. Hu and K. McDonald-Maier, Head movements based control of an intelligent wheelchair in an indoor environment, Proc. of IEEE Int. Conf. on Robotics and Biomimetics, Guangzhou, China, 11-14 December 2012, pages 1464-1469.
[4].   L. Chen, H. Hu and K. McDonald-Maier, EKF based Mobile Robot Localisation, Proc. of the 3rd International Conf. on Emerging Security Technologies (EST-2012), Lisbon, Portugal, 5-7 Sept. 2012, pages 149-154.
[5].   S. Wang, H. Hu and K. McDonald-Maier, Optimization and Sequence Search based Localization in Wireless Sensor Networks, Proc. of the 3rd International Conf. on Emerging Security Technologies (EST-2012), Lisbon, Portugal, 5-7 September 2012, pages 155-160.
[6].   Y. Kovalchuk, H. Hu, D. Gu, K. McDonald-Maier, D. Newman, S. Kelly, G. Howells, Investigation of Properties of ICmetrics Features, Proc. of the 3rd International Conf. on Emerging Security Technologies (EST-2012), Lisbon, Portugal, 5-7 September 2012, pages 115-120.
[7].   Y. Kovalchuk, H. Hu, D. Gu, K. McDonald-Maier, G. Howells, ICmetrics for Low Resource Embedded Systems, Proc. of the 3rd International Conf. on Emerging Security Technologies (EST-2012), Lisbon, Portugal, 5-7 September 2012, pages 121-126.
[8].   B. Lu, D. Gu, H. Hu and K. McDonald-Maier, Sparse Gaussian Process for Spatial Function Estimation with Mobile Sensor Networks, Proc. of the 3rd International Conf. on Emerging Security Technologies (EST-2012), Lisbon, Portugal, 5-7 September 2012, pages 145-150.
[9].   L. Chen, S. Wang and H. Hu, Bézier Curve based Path Planning for an Intelligent Wheelchair to pass a Doorway, Proceedings of the UKACC Int. Conference on Control, Cardiff, 3-5 September 2012.

Wednesday, 12 September 2012

ROBOSAS Workshop – Robotics

We held a joint workshop in the Essex Robot Arena between JPL and Essex on the special topic of Robotics.

Here is the program of the day:


09:30 Huosheng Hu – Introduction of Essex robotics research
09:40 Adrian Stoica – Human-centred robotics: learning & multi-robot control
10:10 Yumi Iwashita – Visual recognition for robots
10:30 Adrian Clark – Learning to see
10:50 Theo Theodoridis – Multi-modality control of multiple robots
11:30 James Cannan – Robot management and wearable technology
11:50 Ericka Rechy-Ramirez – Facial expressions based control of wheelchair
12:10 Lunch
14:00 John Woods – Region based analysis of images for robots
14:30 Ling Chen – IMU/GPS based pedestrian localization
14:50 Hossein Farid Ghassem Nia – Bayesian decision theory for optical counting
15:10 Sen Wang – Localization in wireless sensor networks
15:30 Potential collaboration discussion in Room 1NW.3.7
17:30 End

Saturday, 1 September 2012

Klaus's visit to JPL this Summer

Professor Klaus McDonald-Maier, who leads this collaborative EPSRC Global Engagement Project with NASA, visited JPL this Summer. Klaus is interested in looking at increased reliability for hardware and software architectures in robotic systems. These systems include extra-terrestrial robotic rovers such as the Curiosity Mars Science Laboratory, which recently successfully landed on Mars where it will undertake a series of experiments. Curiosity relies on complex programmable control systems and an enormous amount of dedicated software which must be extremely reliable as it is deployed in such a remote location.

Klaus commented ‘It was extremely exciting being at NASA JPL when Curiosity successfully landed and it will be very interesting as it embarks on fundamental experiments which should give us great insights into the structure and characteristics of Mars.’