A Web site in support of the project "Global engagement with NASA JPL and ESA in Robotics, Brain Computer Interfaces, and Secure Adaptive Systems for Space Applications", funded by EPSRC
Friday, 30 August 2013
Article in The Guardian
On the 8th of August, The Guardian very kindly discussed some of our work on collaborative BCI in an article entitled "Are two heads better than one? The psychology of Pacific Rim". Below I attach the picture they used. Many thanks!
Thursday, 11 April 2013
Focus magazine reports on our collaborative brain computer interfaces
Focus, an Italian national science magazine (probably the most widely circulated, with approximately half a million copies printed monthly), has devoted a short article to our work on collaborative BCI in its April 2013 issue.
Thursday, 7 March 2013
Discovery News article on our collaborative BCI
Discovery News, a well-known news and media web site, has devoted a longish article entitled "Steer a Spaceship with Your Brain" to our work on spacecraft control (and also decision making) using a collaborative brain-computer interface (see page 1 and page 2 of the article).
They gave a very good account of the work, and even interviewed an independent expert, Prof Deniz Erdogmus of Northeastern University in Boston, to ask for an evaluation of our work. Luckily he was very supportive :-)
Many thanks!
PS: Discovery News have the habit of sprinkling articles with what look like clickable section headings. They are not: they are just links to related news items. Don't get fooled. (I was....)
Friday, 15 February 2013
Article in the Financial Times Magazine
The Financial Times has devoted an article to the collaborative brain-computer interface research we have carried out with NASA JPL in their weekend Magazine this week. The article is entitled "How to fly a spaceship – with your mind". It can also be accessed through the FT web site (here, second article on the page), or see the picture below.
Article on our BCI work in The Rabbit
The Rabbit is the University of Essex's student newspaper. It comes out every other Friday. Today, in their issue 140, they were kind enough to report on our Brain Computer Interfaces work with NASA JPL.
Wednesday, 6 February 2013
ScienceOmega coverage of our collaborative BCI
The technology news web site ScienceOmega has covered our work on collaborative forms of Brain-Computer Interfaces in an article today:
Tuesday, 5 February 2013
Monday, 4 February 2013
Ana and Dimitrios to JPL
Ana Matran-Fernandez and Dimitrios Andreou, two Essex PhD students working in the area of BCI, flew to Pasadena on Friday, arrived safely, were badged and have already started work at JPL. They will spend over a month there. Experiments are already ongoing.
Have a good and productive time, Ana and Dimitrios!
Saturday, 2 February 2013
New Scientist's article on our Collaborative BCI
New Scientist has just published an article on our work with NASA JPL in the area of collaborative brain-computer interfaces, focusing in particular on spacecraft control.
If you are interested, you can find the full article here.
Thursday, 31 January 2013
Another paper resulting from the collaboration
Here is another paper to be presented soon. It is a joint paper from this project.
C. Assad, M. Wolf, T. Theodoridis, K. Glette and A. Stoica, BioSleeve: a Natural EMG-Based Interface for HRI, 8th ACM/IEEE International Conference on Human-Robot Interaction, Mar. 3-6, 2013
Theo Theodoridis's work at JPL
Theo Theodoridis is an RA on the project working with Huosheng Hu. He has spent four months at JPL since last autumn. He has been involved in a wide variety of projects in diverse areas of robotics, such as robot control, visual guidance, pattern recognition, and human-robot interfaces. The projects are listed below.
Project #1: Multimodal (Voice, EMG, IMU) interfaces for the coordination of diverse robots using a custom-made multi-sensor apparatus.
Sensor(s): Multimodal apparatus including voice, electromyographic (EMG), IMU and joystick sensors.
Robot(s): ActivMedia Pioneer 3, Brookstone AC13 rover.
Description: Understanding human behaviours is crucially important for developing Human-Robot Interfaces (HRIs) in scenarios related to human assistance, such as object handling and transportation, tooling, and safety control in remote areas. In this project we demonstrate the control of diverse robots using a multimodal (multi-sensor fusion) architecture for high-level human-robot control. The purpose of using such interfaces is to improve the operator's flexibility, reliability, and robustness when commanding, collaborating with, coordinating and controlling mobile robots. For the experiments we used a custom multi-sensor apparatus integrating voice, electromyographic, and inertial sensors.
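To give a flavour of what a multimodal command fusion layer can look like, here is a minimal Python sketch. It is not the JPL apparatus or its software: the gesture vocabulary, the priority rule (voice first, EMG as fallback, IMU tilt for steering) and all names are illustrative assumptions.

```python
# Minimal sketch (not the project's actual interface): fusing voice, EMG-gesture
# and IMU inputs into a single high-level drive command with a simple priority rule.
# All names and mappings below are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalInput:
    voice_cmd: Optional[str]      # e.g. "forward", "stop" (from a speech recogniser)
    emg_gesture: Optional[str]    # e.g. "fist", "point" (from an EMG classifier)
    imu_tilt_deg: float           # wrist tilt used to modulate steering

EMG_TO_CMD = {"fist": "stop", "point": "forward", "open_hand": "reverse"}

def fuse(inputs: MultimodalInput) -> dict:
    """Combine modalities: voice has priority, EMG is the fallback,
    IMU tilt always modulates the turn rate."""
    cmd = inputs.voice_cmd or EMG_TO_CMD.get(inputs.emg_gesture or "", "stop")
    turn_rate = max(-1.0, min(1.0, inputs.imu_tilt_deg / 45.0))  # normalise to [-1, 1]
    return {"command": cmd, "turn": turn_rate}

if __name__ == "__main__":
    sample = MultimodalInput(voice_cmd=None, emg_gesture="point", imu_tilt_deg=20.0)
    print(fuse(sample))   # {'command': 'forward', 'turn': 0.444...}
```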
Project #2: Using monocular stereo vision for mobile robot navigation, planning, and target tracking with Markov models.
Sensor(s): Pinhole 320x240 colour camera.
Robot(s): Brookstone AC13 rover.
Description: Robots with limited but efficient sensors, such as colour cameras, can be good test beds for navigation and tracking. The Brookstone AC13 rover, which carries a single colour camera, was a challenging platform for visual guidance and tracking applications. In this project we address five problems: (a) data fusion using a fuzzy TSK model, (b) visual obstacle avoidance using a monocular five-region correlation-based stereo vision algorithm, (c) short-term planning using a four-state Markov model, (d) visual object tracking using the blob colouring algorithm, and (e) way-point navigation using a velocity model. The purpose of this project is to obtain a fully autonomous mobile robot using limited sensor resources. Future applications will focus on clearance scenarios in which small rovers clear an environment of rocks, rubbish, etc.
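As a small illustration of item (c), the sketch below shows what a four-state Markov model for short-term behaviour planning can look like. The state names and transition probabilities are invented for the example and are not the values used in the project.

```python
# Illustrative four-state Markov model for short-term behaviour planning.
# States and transition probabilities are made up for the example.

import numpy as np

STATES = ["cruise", "avoid_left", "avoid_right", "track_target"]

# Row i gives the probability of moving from state i to each of the four states.
T = np.array([
    [0.90, 0.04, 0.04, 0.02],
    [0.60, 0.35, 0.03, 0.02],
    [0.60, 0.03, 0.35, 0.02],
    [0.20, 0.05, 0.05, 0.70],
])

def step(state_idx: int, rng: np.random.Generator) -> int:
    """Sample the next behaviour state from the transition distribution."""
    return rng.choice(len(STATES), p=T[state_idx])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = 0  # start in "cruise"
    for _ in range(5):
        s = step(s, rng)
        print(STATES[s])
```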
Project #3: Stereo visual odometry for UAV inertial navigation, including terrain classification and image stitching.
Sensor(s): Pinhole 320x240 down-looking colour camera.
Robot(s): Brookstone Ardrone 2.0 (quad-rotor) UAV.
Description: Inertial navigation of UAVs is a demanding problem, as it involves multidisciplinary architectures dealing with stability for hovering and with navigation in a 3D coordinate system. In this project we cover and solve a wide range of problems targeting mainly the UAV's aerial perception using computer vision. The tasks undertaken are (a) path building: aerial image stitching, (b) localisation: stereo visual odometry, and (c) recognition: 2D terrain classification. The purpose here is to provide aerial vehicles with comprehensive perception capabilities, such as being able to recognise a terrain, localise, and build a path-mosaic.
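One basic building block for path building and odometry from a down-looking camera is estimating the frame-to-frame image shift. The self-contained sketch below uses FFT phase correlation for this; it is an illustration of the general idea only, not the stereo visual-odometry pipeline used in the project.

```python
# Estimating the frame-to-frame shift of a down-looking camera with FFT phase
# correlation.  Illustrative only; not the project's visual-odometry pipeline.

import numpy as np

def phase_correlation_shift(prev: np.ndarray, curr: np.ndarray) -> tuple[int, int]:
    """Return the integer (dy, dx) displacement of curr relative to prev."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts larger than half the image size to negative displacements.
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.random((240, 320))
    moved = np.roll(frame, shift=(5, -8), axis=(0, 1))  # simulated camera motion
    print(phase_correlation_shift(frame, moved))        # expect (5, -8)
```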
Project #4: Sparse-PCA analysis and implementation on high-dimensional datasets for dimensionality reduction, classification, and rover robot control.
Sensor(s): JPL's 16 EMG channel Biosleeve.
Robot(s): Brookstone AC13 rover.
Description: Teleoperated reactive control is the implementation of a consistent mapping from directed sensor input commands (the operator's wearable device) to multiple control outputs (the robot). JPL's Biosleeve was one of the main sensors used for the classification of finger and hand-based gestures, directed at controlling tractor-based rovers. This involved an ensemble of machine-learning classifiers, as well as a dimensionality reduction method (Sparse-PCA) applied in a pre-classification phase. The purpose of using Sparse-PCA is to reduce the hardware resources (EMG channels) to the minimum possible while sustaining high classification accuracy for quality control.
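The sketch below shows the general idea of using sparse PCA to find a small subset of EMG channels that still carries most of the signal, using scikit-learn's SparsePCA. Synthetic random data stands in for real Biosleeve recordings, and the number of components and sparsity parameter are illustrative assumptions.

```python
# Minimal sketch of channel reduction with Sparse PCA.  Synthetic data stands
# in for real Biosleeve recordings; parameters are illustrative assumptions.

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(42)
n_samples, n_channels = 500, 16          # e.g. 16 EMG channels as on the Biosleeve
X = rng.standard_normal((n_samples, n_channels))

spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
spca.fit(X)

# Channels with a non-zero loading in at least one sparse component are the
# candidates to keep; the remaining channels could in principle be dropped.
loadings = np.abs(spca.components_)      # shape: (n_components, n_channels)
kept = np.where(loadings.sum(axis=0) > 0)[0]
print("channels retained by the sparse components:", kept)
```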
Project #5: A gesture-based BNF grammar for the synthesis of sign expressions for classification and robot control.
Sensor(s): JPL's 16 EMG channel Biosleeve.
Robot(s): 2D multi-robot simulator (custom made).
Description: Beyond single-gesture classification for robot control, an alternative approach is to structure a gesture language consisting of hand signs composed into gesture syntaxes. Such syntaxes of gestures (expressions) are first classified and then composed into sentences using a gesture grammar interpreter (a BNF scheme). These gesture sequences are used for commanding robot teams, from which the human operator can select an individual robot, a team, or the whole group of robots. This method suggests an entirely new way of using syntaxes of gestures for robot control, and opens up a whole new field of "quiet communication". Relevant applications include assisting people with disabilities as well as military use.
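To make the idea concrete, here is a toy example of composing classified gestures into command "sentences" via a small grammar. The gesture vocabulary and grammar are invented for illustration; they are not the project's BNF scheme.

```python
# Toy gesture grammar:
#   <command>  ::= <selector> <action>
#   <selector> ::= "point_one" <digit> | "point_all"
#   <action>   ::= "fist" | "open_hand" | "wave_left" | "wave_right"
# Vocabulary and grammar are invented for this illustration.

ACTIONS = {"fist": "stop", "open_hand": "go",
           "wave_left": "turn_left", "wave_right": "turn_right"}

def parse_command(tokens: list[str]) -> dict:
    """Parse a sequence of classified gesture tokens into a robot-team command."""
    if not tokens:
        raise ValueError("empty gesture sequence")
    if tokens[0] == "point_all":
        target, rest = "all_robots", tokens[1:]
    elif tokens[0] == "point_one" and len(tokens) >= 2 and tokens[1].isdigit():
        target, rest = f"robot_{tokens[1]}", tokens[2:]
    else:
        raise ValueError(f"bad selector: {tokens[:2]}")
    if len(rest) != 1 or rest[0] not in ACTIONS:
        raise ValueError(f"bad action: {rest}")
    return {"target": target, "action": ACTIONS[rest[0]]}

if __name__ == "__main__":
    print(parse_command(["point_one", "3", "wave_left"]))  # {'target': 'robot_3', 'action': 'turn_left'}
    print(parse_command(["point_all", "fist"]))            # {'target': 'all_robots', 'action': 'stop'}
```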
Project #6: Advanced EMG classification and signal processing for proportional (reactive) control of prosthetic (multi-DOF) robotic manipulators.
Sensor(s): 128-channel wearable EMG apparatus for amputees.
Robot(s): 7-DOF robotic manipulator for prosthesis (custom made).
Description: While reactive control over a small set of input variables is considered a primitive method for driving or controlling robots, it is nonetheless a very practical approach in prosthesis control. In this project we attempted not only to classify an enormous dataset, but also to test and reveal the importance of using powerful classifier systems for delicate tasks such as prosthesis control. A DARPA RIC Chicago dataset recorded from amputees' chests was used for controlling prosthetic robot manipulators. The dataset consists of 128 variables (mono-polar EMG sensors), 19 classes, and some tens of thousands of instances per class. The purpose of this research is to help disabled people such as amputees and paraplegics gain some degree of independence in their daily activities.
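The classification step can be sketched as follows: an ensemble classifier mapping high-dimensional EMG feature vectors to one of many gesture classes. Synthetic data stands in for the real 128-channel recordings (which are not reproduced here), and the random forest is just one example of an ensemble method, not necessarily the one used in the project.

```python
# Sketch of ensemble classification of high-dimensional EMG feature vectors.
# Synthetic data stands in for the real 128-channel amputee recordings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_channels, n_classes = 200, 128, 19

# Give each class a different mean so the synthetic problem is learnable.
X = np.vstack([rng.standard_normal((n_per_class, n_channels)) + 0.5 * c
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy on the synthetic data:", clf.score(X_te, y_te))
```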
Wednesday, 16 January 2013
Paper accepted for presentation at the SPIE Defense, Security and Sensing Conference
We had another paper accepted for presentation at an international conference (the SPIE Defense, Security and Sensing Conference). The paper is entitled "Multi-brain fusion and applications to intelligence analysis". It is co-authored by Adrian Stoica, Riccardo Poli and other people at both JPL and Essex.
Here is the abstract:
In rapid serial visual presentation (RSVP), images shown extremely rapidly can still be parsed by the visual system, and the detection of specific targets triggers specific EEG responses. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has proven speed-ups in sifting through satellite images when the EEG signals of an intelligence analyst act as triggers. This paper extends the use of neurotechnology from individual analysts to collaborative teams; it presents concepts of collaborative brain-computer interfaces and experiments indicating that the aggregation of information in EEGs collected from multiple users results in performance improvements compared to those of individual users.
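A minimal sketch of the aggregation idea behind this abstract is given below: each user's BCI produces a per-image target score, and the group decision is taken on the averaged scores. The numbers, threshold and averaging rule are invented for illustration and are not the method or results reported in the paper.

```python
# Toy illustration of multi-user score fusion for a collaborative BCI.
# Scores and threshold are invented for the example.

import numpy as np

# score[u, i] = confidence of user u's classifier that image i contains a target
scores = np.array([
    [0.9, 0.2, 0.6, 0.4],   # user 1
    [0.7, 0.3, 0.4, 0.5],   # user 2
    [0.8, 0.1, 0.7, 0.3],   # user 3
])

group_scores = scores.mean(axis=0)     # simple average across users
decisions = group_scores > 0.5         # flag images whose pooled score exceeds a threshold
print(group_scores)                    # [0.8   0.2   0.567 0.4  ]
print("images flagged as targets:", np.where(decisions)[0])
```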