Project #1: Multimodal (Voice, EMG, IMU) interfaces for the coordination of diverse robots using a custom-made multi-sensor apparatus.
Sensor(s):
Multimodal apparatus including voice, electromyographic (EMG), IMU, and joystick sensors.
Robot(s):
ActivMedia Pioneer 3, Brookstone AC13 rover.
Description:
Understanding human behaviours is crucially important for developing Human-Robot Interfaces (HRIs) in human-assistance scenarios such as object handling and transportation, tooling, and safety control in remote areas. In this project we demonstrate the control of diverse robots using a multimodal (multi-sensor fusion) architecture for high-level human-robot control. The purpose of using such interfaces is to improve the operator's flexibility, reliability, and robustness when commanding, coordinating, and controlling mobile robots. For the experimentation we used a custom multi-sensor apparatus integrating voice, electromyographic, and inertial sensors.
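For illustration only, the sketch below shows one way commands from several modalities could be fused by confidence-weighted voting. The modality names, command vocabulary, threshold, and the function fuse_commands are assumptions made for this sketch, not the project's actual fusion architecture.

# Hypothetical sketch of multimodal command fusion: each modality (voice, EMG,
# IMU) produces a candidate high-level command with a confidence score, and the
# fused command is chosen by confidence-weighted voting. Names and the 0.4
# threshold are illustrative assumptions.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ModalityReading:
    modality: str      # "voice", "emg", or "imu"
    command: str       # e.g. "forward", "stop", "turn_left"
    confidence: float  # recogniser/classifier confidence in [0, 1]

def fuse_commands(readings, min_confidence=0.4):
    """Return the command with the highest summed confidence across modalities."""
    scores = defaultdict(float)
    for r in readings:
        if r.confidence >= min_confidence:
            scores[r.command] += r.confidence
    if not scores:
        return "stop"  # safe default when no modality is confident
    return max(scores, key=scores.get)

if __name__ == "__main__":
    frame = [
        ModalityReading("voice", "forward", 0.8),
        ModalityReading("emg", "forward", 0.6),
        ModalityReading("imu", "turn_left", 0.5),
    ]
    print(fuse_commands(frame))  # -> "forward"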
Project #2: Using monocular stereo vision for mobile robot navigation, planning, and target tracking with Markov models.
Sensor(s):
Pinhole 320x240 colour camera.
Robot(s):
Brookstone AC13 rover.
Description:
Robots with limited yet efficient sensors, such as colour cameras, can be good test beds for navigation and tracking. The Brookstone AC13 rover, carrying only a single colour camera, was a challenging platform for visual guidance and tracking applications. In this project we solve five problems: (a) data fusion using a fuzzy TSK model, (b) visual obstacle avoidance using a monocular, 5-region correlation-based stereo vision algorithm, (c) short-term planning using a 4-state Markov model, (d) visual object tracking using the blob colouring algorithm, and (e) way-point navigation using a velocity model. The purpose of this project is to build a fully autonomous mobile robot with limited sensor resources. Future applications will focus on clearance scenarios in which small rovers clear an environment of rocks, rubbish, etc.
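The 4-state Markov planner can be illustrated with a small sketch. The state names and transition probabilities below are assumptions chosen for readability, not the values tuned on the rover.

# A minimal sketch of short-term planning with a 4-state Markov model, assuming
# illustrative behaviour states and transition probabilities.
import numpy as np

STATES = ["cruise", "avoid_left", "avoid_right", "track_target"]

# Row i gives P(next state | current state i); each row sums to 1.
TRANSITIONS = np.array([
    [0.70, 0.10, 0.10, 0.10],  # cruise
    [0.60, 0.30, 0.00, 0.10],  # avoid_left
    [0.60, 0.00, 0.30, 0.10],  # avoid_right
    [0.20, 0.05, 0.05, 0.70],  # track_target
])

def next_state(current_idx, rng=np.random.default_rng()):
    """Sample the next behaviour state from the Markov transition matrix."""
    return rng.choice(len(STATES), p=TRANSITIONS[current_idx])

if __name__ == "__main__":
    state = 0  # start in "cruise"
    plan = []
    for _ in range(10):
        state = next_state(state)
        plan.append(STATES[state])
    print(plan)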
Project #3: Stereo visual odometry for UAV inertial navigation, including terrain classification and image stitching.
Sensor(s):
Pinhole 320x240 down-looking colour camera.
Robot(s):
Brookstone Ardrone 2.0 (quad-rotor) UAV.
Description:
Inertial navigation of UAVs is a demanding problem, as it involves multidisciplinary architectures dealing with stability for hovering and navigation in a 3D coordinate system. In this project we cover and solve a wide range of problems targeting mainly the UAV's aerial perception using computer vision. The tasks undertaken are (a) path building: aerial image stitching, (b) localisation: stereo visual odometry, and (c) recognition: 2D terrain classification. The purpose here is to provide aerial vehicles with comprehensive perception capabilities, such as being able to recognise a terrain, localise, and build a path-mosaic.
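A minimal sketch of the frame-to-frame visual odometry step using OpenCV is given below. It assumes a calibrated down-looking camera with intrinsic matrix K and two consecutive greyscale frames; the ORB detector, brute-force matcher, and the function relative_pose are illustrative assumptions rather than the project's exact pipeline, and the recovered translation is only up to scale.

# Frame-to-frame visual odometry sketch: estimate the relative pose between two
# consecutive down-looking frames from matched features and the essential matrix.
import cv2
import numpy as np

def relative_pose(frame_prev, frame_next, K):
    """Estimate rotation R and unit-scale translation t between two frames."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_next, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is up to scale; altitude or IMU data would resolve it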
Project #4: Sparse-PCA analysis and implementation on high-dimensional datasets for dimensionality reduction, classification, and rover robot control.
Sensor(s):
JPL's 16 EMG
channel Biosleeve.
Robot(s):
Brookstone AC13 rover.
Description:
Teleoperated reactive control is the implementation of a consistent mapping from sensor input commands (the operator's wearable device) to multiple control outputs (the robot). JPL's BioSleeve was one of the main sensors used for the classification of finger and hand gestures directed at controlling tractor-based rovers. This involved an ensemble of machine learning classifiers, as well as a dimensionality reduction method (Sparse-PCA) applied in a pre-classification phase. The purpose of using Sparse-PCA is to reduce hardware resources (EMG channels) to the minimum possible while sustaining high classification accuracy for quality control.
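A minimal sketch of a Sparse-PCA-then-classify pipeline with scikit-learn is shown below on synthetic stand-in data. The random 16-channel features, the component count, and the SVM classifier are assumptions made for illustration; they are not the project's actual ensemble or data.

# Sparse-PCA for dimensionality reduction followed by a classifier; on the
# synthetic random labels below the accuracy is only chance level, the point is
# the pipeline mechanics.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))    # 300 windows x 16 EMG channels (synthetic)
y = rng.integers(0, 4, size=300)  # 4 gesture classes (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    SparsePCA(n_components=6, alpha=1.0, random_state=0),  # sparse loadings show
                                                            # which channels matter
    SVC(kernel="rbf", C=1.0),
)
print(cross_val_score(model, X, y, cv=5).mean())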
Project #5: A gesture-based BNF grammar in the synthesis of sign expressions for classification and robot control.
Sensor(s):
JPL's 16-channel EMG BioSleeve.
Robot(s):
2D multi-robot simulator (custom-made).
Description:
Beyond single-gesture classification for robot control, an alternative approach is to structure a gesture language in which hand signs compose gesture syntaxes. Such syntaxes of gestures (expressions) are first classified and then composed into sentences using a gesture grammar interpreter (a BNF scheme). These gesture sequences are used for commanding robot teams, from which the human operator can select an individual robot, a team, or the whole group of robots. This method suggests an entirely new way of using syntaxes of gestures for robot control, and opens up a new field of "quiet communication". Relevant applications include assisting people with disabilities as well as military use.
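A minimal sketch of how a BNF-style gesture grammar interpreter might parse classified gesture tokens into robot-team commands is given below. The token names, productions, and the function parse_command are invented for illustration and are not the project's grammar.

# Toy gesture grammar, parsed by recursive descent over classified gesture tokens:
#   command   ::= selector action
#   selector  ::= "point_one" | "sweep_team" | "sweep_all"
#   action    ::= "stop" | "go" direction
#   direction ::= "left" | "right" | "forward"

SELECTORS = {"point_one", "sweep_team", "sweep_all"}
DIRECTIONS = {"left", "right", "forward"}

def parse_command(tokens):
    """Parse a list of gesture tokens into a structured command dict."""
    it = iter(tokens)
    sel = next(it, None)
    if sel not in SELECTORS:
        raise ValueError(f"expected selector gesture, got {sel!r}")
    act = next(it, None)
    if act == "stop":
        return {"select": sel, "action": "stop"}
    if act == "go":
        direction = next(it, None)
        if direction not in DIRECTIONS:
            raise ValueError(f"expected direction gesture, got {direction!r}")
        return {"select": sel, "action": "go", "direction": direction}
    raise ValueError(f"expected action gesture, got {act!r}")

if __name__ == "__main__":
    print(parse_command(["sweep_team", "go", "left"]))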
Project #6: Advanced EMG classification and signal processing for proportional (reactive) control of prosthetic (multi-DOF) robotic manipulators.
Sensor(s):
128-channel wearable EMG apparatus for amputees.
Robot(s):
7-DOF robotic manipulator for prosthesis (custom-made).
Description:
While reactive control over a small set of input variables is considered a primitive method for driving or controlling robots, it is nonetheless a very practical approach to prosthesis control. In this project we attempted not only to classify an enormous dataset, but also to test and demonstrate the importance of using powerful yet reliable classifier systems for delicate tasks such as prosthesis control. A DARPA RIC Chicago dataset recorded from amputees' chests was used for controlling prosthetic robot manipulators. The dataset consists of 128 variables (mono-polar EMG sensors), 19 classes, and some tens of thousands of instances per class. The purpose of carrying out this research is to aid disabled people, such as amputees and paraplegics, allowing them to regain some degree of independence in their daily activities.
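A minimal sketch of windowed EMG feature extraction followed by a single classifier on synthetic stand-in data is shown below. The 200-sample window, RMS feature, and LDA classifier are illustrative assumptions; the project itself used an ensemble of classifiers on real amputee recordings.

# Windowed RMS features from a high-channel EMG stream, then a linear classifier;
# the synthetic random labels make the reported score chance level by design.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rms_features(emg, window=200):
    """Split (samples x channels) EMG into windows and return per-window RMS."""
    n_windows = emg.shape[0] // window
    trimmed = emg[: n_windows * window].reshape(n_windows, window, emg.shape[1])
    return np.sqrt((trimmed ** 2).mean(axis=1))   # (n_windows x channels)

rng = np.random.default_rng(1)
emg = rng.normal(size=(60_000, 128))              # synthetic 128-channel recording
X = rms_features(emg)                             # 300 windows x 128 RMS features
y = rng.integers(0, 19, size=X.shape[0])          # synthetic labels for 19 classes

clf = LinearDiscriminantAnalysis().fit(X[:240], y[:240])
print(clf.score(X[240:], y[240:]))                # chance level on synthetic data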