Friday 27 September
10h |
Welcome – Arnaud Revel and Jacqueline Nadel
Poster set-up |
10h30-11h45 |
Introduction – Jacqueline Nadel (CNRS
UMR7593, La Salpêtrière)
Autism, development and the search for efficient therapies |
11h45-12h |
Coffee break |
12h-12h45 |
Anja Rutten, Helen Neale, Sue Cobb, Steven Kerr, Sarah Parsons,
Ann Leonard, Peter Mitchell & Tony Glover (School of Psychology
& School of Computer Science, University of Nottingham)
The AS Interactive Project: Further development and the evaluation of VEs
for users with Asperger Syndrome |
|
Abstract:
The AS Interactive Project aims to develop virtual environments (VEs) to allow adolescents
and adults with ASDs to practise social skills. The project was
introduced to the First International Workshop on Robotics and
Virtual Interactive Systems in Therapy for Autism and other Psychopathological
Disorders (2001), where developments and results of the first year
of the project were presented. During the first year, emphasis
was on development, utilising a user-centred design process.
This resulted in production of two Single User Virtual Environments
(SVEs), the Café and the Bus. These environments focused
on helping users learn how to find an appropriate place to
sit down. Based on the environments created during the first
year, an evaluation study was carried out (Leonard et al., 2002).
During the second year of the project, SVEs were refined following a
period of evaluation and implementation of changes in consultation with
staff and students in a school for pupils with ASD. Several features
have been added or changed following consultation with users and facilitators.
The number of levels of difficulty in each of the environments was increased
to provide a more challenging environment and flexibility was included,
so that each time the same task was presented, the environment appeared
visually distinct to prevent students from rote-learning what to do based
on visual memory alone. In addition to the main task of finding a seat,
in the more difficult levels the users had to queue appropriately. Evaluations
of the SVE carried out in year 2 are detailed in Neale et al. (2002).
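As a rough illustration of this randomisation strategy (a hypothetical sketch in C, not the AS Interactive code; every parameter name is invented), the task logic stays fixed while surface features are re-drawn on each presentation:

    /* Hypothetical sketch: re-randomising surface features of a fixed task
     * so users cannot rely on visual memory alone. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    typedef struct {
        int wall_colour;   /* index into a palette */
        int seat_layout;   /* which of several seat arrangements */
        int crowd_size;    /* number of background characters */
    } SceneParams;

    SceneParams randomise_scene(void) {
        SceneParams p;
        p.wall_colour = rand() % 6;
        p.seat_layout = rand() % 4;
        p.crowd_size  = 2 + rand() % 8;
        return p;
    }

    int main(void) {
        srand((unsigned)time(NULL));
        for (int trial = 0; trial < 3; trial++) {
            SceneParams p = randomise_scene();
            /* the task ("find an appropriate seat") is identical each time;
             * only the appearance of the cafe changes */
            printf("trial %d: colour=%d layout=%d crowd=%d\n",
                   trial, p.wall_colour, p.seat_layout, p.crowd_size);
        }
        return 0;
    }

Because only surface parameters vary, success requires understanding the task rather than recalling a particular scene.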
Additionally, two Collaborative Virtual Environments (CVEs) have been
developed - the Social Café and the Job Interview/Formal Meeting.
These environments provide less structure, and potential for richer social
interactions with others through the CVE, including the use of gestural
behaviours. We have carried out some informal trials of CVE use within
a school environment and are currently evaluating these to examine the
potential of this technology for social skills practice.
Year 3 of the project will see continued evaluations of SVE and CVE scenarios.
The SVEs will be developed into project deliverables and additional support
material for teachers and support workers will be prepared.
This presentation will give an overview of project developments from
school-based studies with adolescent AS users.
|
12h45-14h |
Lunch |
14h-14h45 |
Kerstin Dautenhahn, Aude Billard, Megan Davis, Tamie Salter, Iain Werry
(ASRG, School of Computer Science, University of Hertfordshire)
Children with autism interacting with robots in the Aurora project |
|
Abstract:
The talk will give an update on current progress in the Aurora
project, which studies how to use robots in autism therapy.
The chosen setup is inherently playful and unconstrained:
the children are not required to solve any tasks other than
playing, and the only purpose of the robot is to engage children
with autism in therapeutically relevant behaviours such as
turn-taking and imitation. A key issue is that the children
proactively initiate interactions rather than merely responding
to particular stimuli. Additionally, the chosen setup is social,
i.e. it involves not only the robot and the autistic child
present, but also the teacher and one or two experimenters.
As we have shown previously (Werry et al 2001; Dautenhahn et
al., 2002) this social setup is used by some children in a
very constructive manner demonstrating their communicative
competence: they use the robot as a focus of attention in order
to interact and/or communicate with other people in the room.
The first part of the talk will address issues of design spaces and niche
spaces of robots in autism therapy. Given the wide range of abilities
of children with autism it seems unlikely that one type of robot will
be the solution: rather, the design space of robotic designs (variations
in behaviour as well as appearance) needs to be mapped to the "niche
space" of particular requirements that individual children, or groups
of children with similar sets of symptoms, show. We therefore argue that
any progress in the field needs to systematically assess how children
with autism interact with robots and what the particular benefits of
robots are in comparison to other non-robotic toys.
The second part of the talk will present studies where we investigated
how 15 autistic children interacted with a humanoid robotic doll called
Robota. The purpose of the robot was to imitate children's arm movements
(Dautenhahn & Billard 2002). Unlike the trials with the mobile
robot, here the teacher was greatly involved in setting up and guiding
the "game" that the children played with the robot. We discuss
advantages and disadvantages of such a robotic design as well as first
results on the analysis of the videos documenting the interactions. If
time permits, the last part of the talk will summarise results from
a comparative study on eye-gaze and contact-behaviour for a group of
17 autistic children where we compared how they behave towards a small
mobile robot as opposed to a non-robotic toy (Werry 2002). We will briefly
summarise and discuss the data.
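As a rough illustration of the imitation principle described for Robota (a hypothetical sketch, not the actual implementation; observe_arm_angle() stands in for a real vision module), the robot's arm can simply track the child's observed arm angle:

    /* Hypothetical sketch of movement imitation: the robot servo tracks
     * the child's observed arm angle with a smoothing gain. */
    #include <stdio.h>

    static double observe_arm_angle(int t) {
        return t * 5.0;   /* stub: pretend the child slowly raises an arm */
    }

    int main(void) {
        double servo = 0.0;        /* current robot arm angle, degrees */
        const double gain = 0.5;   /* how quickly the robot follows */
        for (int t = 0; t < 10; t++) {
            double target = observe_arm_angle(t);
            servo += gain * (target - servo);  /* move toward observed pose */
            printf("t=%d child=%.1f robot=%.1f\n", t, target, servo);
        }
        return 0;
    }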
|
14h45-15h |
Coffee break |
15h- 15h45 |
François Michaud (Mobile Robotics & Autonomous
Intelligent Systems, Sherbrooke)
Mobile robotic toys in the therapy of autism |
|
Abstract:
Since 1999 we, as engineers in electrical and computer engineering,
have been designing a great variety of mobile robotic toys with
the goal of using them as pedagogical tools for children suffering
from autism or other developmental disorders. These mobile robots
can move autonomously in the environment and interact in various
manners (vocal messages, music, visual cues, movements, etc.)
with the child. Compared to a human, a robot may be less intimidating
and more predictable. It can follow a deterministic play routine,
and also adapt over time and change the ways it responds to the
world, generating more sophisticated interactions and unpredictable
situations. This flexibility allows robotic toys to evolve from
simple machines to systems that demonstrate more complex behavior
patterns. In our case, the interaction framework created by our
robots is to get the attention of the child, ask the child to
do something, and reward the child if the request is successfully
satisfied. Since each child is a distinct individual with preferences
and capabilities, it might not be possible to design one complete
robotic toy that can help capture and retain the interest of
every child. So our strategy is to design many different types
of robots, and observe the possible factors that might influence
the child's interests in interacting with a robotic toy, like
shape, colors, sounds, music, voice, movements, dancing, trajectory,
special devices, etc., and to learn from our observations how to
design new robots that could in the near future be used by parents
and educators.
For the workshop, we will present the particularities of our robotic
toys and what they can do as pedagogical tools, what we have learned
from our experiences over the last four years, and outline what we plan
to do in the next two years to study more closely what impacts these
devices can have on children with autism. As engineers, we need to combine
our expertise with that of scientists in the field, in order to get interesting
insights that will help guide the design of innovative new robots. And
our hope is that mobile robotic toys can become efficient therapeutic
tools that will help children with autism develop early on the necessary
skills they need to compensate for and cope with their disability.
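The attract/request/reward framework can be pictured as a small state machine; the following is a hypothetical sketch with stubbed sensing, not the Sherbrooke robots' code:

    /* Hypothetical sketch of the attract/request/reward interaction loop.
     * The real robots use voice, music, visual cues and movement. */
    #include <stdio.h>

    typedef enum { ATTRACT, REQUEST, REWARD } Phase;

    int child_is_attending(void) { return 1; }  /* stub sensor */
    int request_satisfied(void)  { return 1; }  /* stub sensor */

    int main(void) {
        Phase phase = ATTRACT;
        for (int step = 0; step < 6; step++) {
            switch (phase) {
            case ATTRACT:
                printf("play music / flash lights to get attention\n");
                if (child_is_attending()) phase = REQUEST;
                break;
            case REQUEST:
                printf("ask the child to do something\n");
                if (request_satisfied()) phase = REWARD;
                break;
            case REWARD:
                printf("reward: dance and happy sounds\n");
                phase = ATTRACT;   /* start a new round */
                break;
            }
        }
        return 0;
    }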
|
15h45-16h |
Coffee break |
16h- 16h45 |
Brian Scassellati (Yale University)
How to use anthropomorphic robots to study social development |
|
Abstract:
In the last ten years, there has been an emphasis in the robotics
community on developing robots that look like people, act like
people, and interact with people in the same ways that people
interact with each other. This talk will examine four different
ways of using these anthropomorphic robots to study the development
of social skills in children. Observing how humans react when
placed in a social context with machines that share some human
characteristics allows us to study our own mental processes and
our views of ourselves. Humanoid robots are evocative objects
in that they provoke people to question and to reassess their
ideas about what it is to be intelligent, to have emotions, and
to be a person. Pilot research on how interactions with two anthropomorphic
robots impacted concepts of identity and self in a group of sixty
children aged 6 to 14 years will be presented. (This is joint
work with Cynthia Breazeal and Sherry Turkle).
A robot that is capable of perceptually detecting social cues also provides
a quantifiable metric for those social cues. These metrics have potential
uses in characterizing the development of social abilities and in the
diagnosis of developmental disorders such as autism. While we do not
claim that the metrics identified by building social robots will take
the place of the clinician's judgment, these quantitative metrics may
be extremely useful to the medical community in establishing a diagnosis,
in tracking the success of intervention programs, and in reporting results.
Joint work with Ami Klin, Warren Jones, and Fred Volkmar from the Yale
Child Study Center on the application of these metrics will be presented.
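As a toy example of what such a quantitative metric might look like (a hypothetical sketch, not the Yale system), the fraction of video frames in which gaze is coded as directed at a face can be computed from per-frame codes:

    /* Hypothetical sketch of a simple social-cue metric: the proportion
     * of frames with gaze on a face. The per-frame codes would come from
     * a perception system or a human coder; here they are hard-wired. */
    #include <stdio.h>

    int main(void) {
        int gaze[] = {0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0};  /* 1 = on face */
        int n = sizeof gaze / sizeof gaze[0];
        int on = 0;
        for (int i = 0; i < n; i++) on += gaze[i];
        printf("gaze on face: %.1f%% of %d frames\n", 100.0 * on / n, n);
        return 0;
    }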
Social robots that are constructed according to models of skill acquisition
in children can also be used as an evaluation tool for those models.
Just as simulations of neural networks have been useful in evaluating
the applicability of models of neural function, these robots can serve
as a test-bed for evaluating the predictive power and validity of models
of human development. Further, a robotic model can also be subjected to
controversial testing that is potentially hazardous, costly, or unethical
to conduct on humans. Research on two models of the development of theory
of mind and joint attention skills that were implemented on Cog will
be presented as well as a new robot that is being constructed to address
issues of sensorimotor development and social development.
Finally, we speculate on the use of social robots as a therapeutic device
for autism. If you could control the level of social sophistication in
a robotic device, would that robot provide a crutch for learning social
skills gradually? What can be learned from these intervention approaches
about the structure of social skill development?
|
16h45-17h |
Coffee break |
17h-18h |
Isabelle Viaud-Dalmon (CNRS UMR7593,
La Salpêtrière)
Virtual reality as a tool for rehabilitation in psychiatric disorders (+
demonstration) |
|
Abstract:
Virtual reality (VR) refers to a set of computer technologies
that allow users to interact with a three-dimensional, computer-generated
environment in real time. VR is starting to be used in psychophysics
experiments as well as in psychological therapy around the world.
VR provides a way to immerse a user in an environment in which
all the parameters can be measured, and in which the interaction
between different sensory modalities can be controlled.
Therefore VR represents an interesting tool to study the integration
of space-related multisensory information in humans and its disorders.
For example, we have used VR to study the adaptation to incoherent visual-vestibular
stimulation, in a task in which subjects had to control their whole-body
rotations with a joystick. In another experiment, we studied the effect
of sensory conflict both on sensorimotor control and on the stored representation
of a path.
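A minimal sketch of how incoherent visual-vestibular stimulation might be produced (hypothetical; the gains and joystick trace are invented): the same joystick command drives the physical rotation and the displayed rotation with different gains, so the two senses disagree:

    /* Hypothetical sketch: mismatched gains create a visual-vestibular
     * conflict during joystick-controlled whole-body rotation. */
    #include <stdio.h>

    int main(void) {
        double body = 0.0, scene = 0.0;   /* accumulated angles, degrees */
        const double g_vest = 1.0;        /* gain applied to the body */
        const double g_vis  = 1.5;        /* mismatched visual gain */
        double joystick[] = {10, 10, -5, 0, 10};
        for (int t = 0; t < 5; t++) {
            body  += g_vest * joystick[t];
            scene += g_vis  * joystick[t];
            printf("t=%d body=%.0f scene=%.0f conflict=%.0f\n",
                   t, body, scene, scene - body);
        }
        return 0;
    }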
However, before this tool can be established as a standard and can be
used on an everyday basis for therapy, several of its aspects need to
be thoroughly studied. VR relies on the presence of users in the virtual
environment: the user has to believe that he is actually in the virtual
world and no longer in the physical world. This first aspect is far
from trivial, since it can invoke derealisation experiences. Another
aspect is linked to the fact that interaction with any kind of VR
system requires the subject to adapt. The effectiveness of
VR for experimental and therapeutic purposes will be discussed in this
framework.
|
Saturday 28 September
9h15-10h |
Lola Canamero (Department
of Computer Science, University of Hertfordshire) & Philippe
Gaussier (Group
Signal and Image Processing, University of Cergy/CNRS)
Emotion understanding: robots as tools and models |
|
Abstract:
Affective Computing is a new research area that aims at endowing
robots and computers with emotional capabilities (e.g., to express,
recognize, or "have" emotions) in order to make them
more life-like and better adapted to interact with humans. Whereas
the perspective of having artifacts that can display emotional
expressions, respond to and adapt to our emotional states on
a superficial level seems increasingly appealing, there is much
scepticism regarding whether artifacts can "have" emotions
in a deeper sense, since this is often considered as a unique
feature of the human (and some other animal) species.
In our opinion, this and other fundamental questions that stem from it
(for example, in which cases it does or does not make sense to give
our artifacts emotions, and which aspects of them) must be thoroughly
investigated if affective computing is to become a serious discipline
that can contribute to our understanding of emotional phenomena. Indeed,
robots offer an excellent platform for this investigation, since they
can be used not only as tools (with easily modifiable parameters) to
support research in other disciplines, but also as (synthetic, implemented,
and working) models of emotional systems in non-human, non-biological
species, hopefully shedding light on some integral elements and aspects
of emotions. The current state of the art in affective computing research
is still far from this objective, not only because of the youth of the
discipline, but in particular because of the complexity of emotional phenomena
and our still limited understanding of them. Work in this area has tended
to focus on one or the other of the two most apparent aspects of
the "dual nature" of emotions:
a) "Internal" robotic/agent architectures integrating emotional
elements for behavior modulation and control – what we would term
emotions as "second order" control or monitoring mechanisms.
b) "External manifestations" of emotions (e.g., facial displays)
that can be used as signals for social interaction and communication – for
example, work on expressive robots.
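A minimal hypothetical sketch of the two aspects side by side (values and names are illustrative, not any of the cited architectures): a single arousal variable modulates a behaviour parameter internally (a) and is mapped to a facial display externally (b):

    /* Hypothetical sketch of the "dual nature" of emotion in a robot:
     * one internal state both modulates behaviour and drives expression. */
    #include <stdio.h>

    const char *facial_display(double arousal) {      /* aspect (b) */
        if (arousal > 0.7) return "alarmed face";
        if (arousal < 0.3) return "relaxed face";
        return "neutral face";
    }

    int main(void) {
        double arousal = 0.2;
        for (int t = 0; t < 4; t++) {
            arousal += 0.2;                 /* e.g. rising with events */
            double speed = 0.5 + arousal;   /* aspect (a): behaviour modulation */
            printf("t=%d arousal=%.2f speed=%.2f display=%s\n",
                   t, arousal, speed, facial_display(arousal));
        }
        return 0;
    }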
However, although the use of emotions can certainly provide novel solutions,
the problems underlying these lines of research – behavior control
and social interactions – are classic problems in robotics that
have largely been tackled without resorting to emotional mechanisms.
We will therefore review some representative problems and (the achievements
and the limitations of) classical solutions in these areas, in order
to better appreciate what the roles and contributions of emotions have
been / can be with respect to particular problems in robotics. We will
finally sketch some ideas on how this "dual aspect" of emotions
can be meaningfully integrated in robots if we want to use them as tools
and models to investigate and understand emotions.
To conclude our talk, we will illustrate how we are undertaking the implementation
of these ideas in, and some of the problems raised by, an expressive
robotic head designed to investigate emotion understanding – currently
recognition and imitation of facial displays – in typical and autistic
children.
|
10h-10h15 |
Coffee break |
10h15-11h |
Gerardo Guttierez (Robotics Institute,
University of Valencia, Spain)
& Rita Jordan (School of Education, University of Birmingham)
Virtual reality for understanding imagination in people with autism |
|
Abstract:
Difficulties and delay in understanding symbolism, especially
in relation to symbolic play, have long been documented as characteristic
of people with autistic spectrum disorders (ASDs). It is not
clear whether such difficulties and delays represent a core deficit
in imagination, as some have proposed, or whether they result
from other aspects of autism (Jordan, in preparation). Nor is
it clear whether the problems lie with all aspects of play or
with the aspect of pretend play referred to as 'symbolic play'
only. Leslie (1994) suggests three categories of symbolic play:
object substitution, attribution of false properties and
reappearance/disappearance. There have been many attempts to teach symbolic
play to children with ASDs, and a recent attempt (Sherratt, 2002)
attributes the comparative success of the programme (with children
with both autism and severe learning difficulties) to the use
of structure, repetition and affective engagement. Virtual Reality
(VR) has been claimed to provide a particularly facilitatory
environment for people with ASDs in that it also offers structure,
opportunities for repetition, affective engagement and, additionally,
control of the learning environment. Virtual reality shares the
advantages of computer-based learning, and has the additional
advantage of making it more likely that the results will generalise
to real-world settings.
This study is an attempt to use a virtual reality environment to develop
understanding of symbolic representation and imagination within a 'familiar,
yet playful' environment. It also attempts to evaluate the contribution
of virtual reality to any observed gains through comparison with traditional
teaching approaches. In this paper we present the design and software
developed under project INMER, which is currently being used and evaluated
with a sample of people with autism. The project is used to ensure understanding
over a range of 'teaching steps' leading towards symbolic understanding,
with the VR tool being used to elucidate the symbolic and imaginary aspects,
when appropriate. These steps cover: functional use of objects, functional
play, imaginary play (involving object substitution at two different
levels of difficulty), actual (or in this case VR) transformation of
objects, 'magic' transformations and imaginary transformations. This
careful stepped approach to teaching, ensuring understanding at each
stage, is aimed at avoiding confusion, which could result from the premature
use of a VR tool. The evaluation will also include an evaluation of generalisation.
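The stepped, mastery-gated progression might be sketched as follows (hypothetical code; mastered() stands in for the teacher's assessment of understanding):

    /* Hypothetical sketch of the stepped progression: the learner only
     * advances when the current step is mastered, so the symbolic steps
     * are never reached prematurely. Step names follow the abstract. */
    #include <stdio.h>

    const char *steps[] = {
        "functional use of objects", "functional play",
        "imaginary play (easier substitution)", "imaginary play (harder substitution)",
        "VR transformation of objects", "magic transformations",
        "imaginary transformations"
    };

    int mastered(int step) { return step < 3; }   /* stub assessment */

    int main(void) {
        int n = sizeof steps / sizeof steps[0];
        for (int s = 0; s < n; s++) {
            printf("teaching: %s\n", steps[s]);
            if (!mastered(s)) {
                printf("not yet mastered - stay at this step\n");
                break;
            }
        }
        return 0;
    }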
At this preliminary stage there are no results to report. However, the
paper concludes with some of the limitations that have already become
apparent, such as the lack of opportunities for the people with ASDs
to themselves be creative and the lack of social stimulation to spark
creativity; the authors suggest some future developments in VR that might
address these.
|
11h-11h15 |
Coffee break |
11h15-12h |
Arnaud Revel (Group Signal and Image
Processing, ENSEA/CNRS, Cergy),
Jacqueline Nadel, Marie Maurer & Pierre
Canet (CNRS, UMR7593, & ITIN
group, Cergy)
VE: a tool for testing imitative capacities of low-functioning children
with autism |
|
Abstract:
Equipped with our knowledge of developmental indices (Nadel & Butterworth,
1999), we have started an exploration of imitative capacities
in low-functioning children with autism. Such an exploration
is needed, since results in this area are controversial: some
authors claim that children with autism have specific impairments
in the domain of imitation; others say that the imitative
deficits are not specific to autism but are found more generally in
children with other developmental impairments, such as dysphasia
and, more specifically, language impairments; and still others, like
us, deny noticeable deficits in low-level imitation in
children with autism. Our stance is based on the idea that a
hierarchy of mechanisms is at play when we imitate, according
to the kind of imitation we use, from low-level use of mirror
neurons in perception-action coupling to high-level mechanisms
involved in the representation of actions or programs of actions
(Rizzolatti et al., 2002). The relatively late diagnosis of
autism suggests that early motor development is not specifically
impaired. We thus postulate the integrity of perception-action
coupling in autism.
The major impairment of children with autism lies in the social component;
it is therefore of major interest to try to distinguish what in imitative
performance is due to motor and cognitive capacities and what is due
to the capacity to relate to partners. We propose an experiment with
3 interactive conditions: an on-line condition, where a real partner
asks the child to do like him/her; an off-line condition, where a televised
partner asks the child to do like him/her; and a virtual environment condition,
where the virtual partner asks the child to do like him/her. The imitative
performance of the children is the discriminant variable.
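A hypothetical sketch of this three-condition design (score_imitation() stands in for coding from video; none of this is project code):

    /* Hypothetical sketch: the same "do like me" request is issued by a
     * live, televised or virtual partner, and an imitation score is
     * recorded per condition. */
    #include <stdio.h>

    typedef enum { ONLINE, OFFLINE, VIRTUAL } Condition;
    const char *names[] = { "live partner", "televised partner", "virtual partner" };

    int score_imitation(Condition c) { (void)c; return 0; }  /* stub: coded from video */

    int main(void) {
        int scores[3];
        for (int c = ONLINE; c <= VIRTUAL; c++) {
            scores[c] = score_imitation((Condition)c);
            printf("%s: \"do like me\" -> imitation score %d\n", names[c], scores[c]);
        }
        return 0;
    }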
Such a project requires us to address several prerequisites: is
it possible for low-functioning children with autism, as well as for young
infants, to discriminate between virtual reality and real life? To our
own knowledge, the attempts to make persons with autism interact in
a virtual environment were always performed with adults or adolescents
and with high functioning persons. With young and low-functioning children
with autism, however, the mastery of such complex situations is far from
obvious, and we first need to analyse the different
components of the context that can lead them to differentiate a virtual environment
from real life. This is the reason why the virtual environment we will
test is very simple. It is composed of only one avatar displaying two
facial expressions with no eye-to-eye contact and proposing several simple
actions, with only one sentence "do like me". A demonstration
of the virtual environment design will be provided by ITIN group.
|
12h-14h |
Lunch |
14h-18h |
Session of Posters and Demonstrations:
Begonia Pino: Use of computers to enhance
social engagement and social understanding in children with
autism –poster–
|
|
Abstract:
Children with Autistic Spectrum Disorders (ASD) have difficulties
understanding social situations and displaying appropriate behaviours.
Many educational programmes have attempted to decrease these
difficulties by teaching social skills to these children. Although
these programmes have successfully taught social rules in many
cases, children fail to transfer them to daily life. A social
interaction project focused on social understanding succeeded
because children had the opportunity to interact in an 'almost'
natural setting (Dunlop et al., 2002). Moreover, children with
ASD tend to enjoy working or playing with computers. Murray (1997)
shows that computers are non-threatening, non-judgemental, predictable,
reliable, etc., thus providing a safe environment in which social
interaction can take place. This research investigated the use of
computers as an environment in which to teach and practise social understanding.
The focus on social understanding was inspired by a social interaction
project where children were involved in different real life activities,
such as games, snacks, and outings. The rationale was that the computer
provided a 'real life' environment and a shared interest, as
well as a motivating and safe tool around which to construct a
relationship.
In the first stage the goal was to observe whether the computer fostered
greater social engagement: the child engaging in more and longer interactions,
initiating more, showing increased eye contact, etc. Child and experimenter carried
out an activity (playing a game) in a computer version and later in a
non-computer version (half of the subjects started with the non-computer version).
The second stage aimed to improve children's social understanding
by putting social rules into practice with the mediation of the computer,
for a longer period (a series of weekly sessions) where the activity
was tailored to the child's interests and assisted by the experimenter.
The expected results of the first stage were, first, that there would
be more interactions in the computer version, but maybe less eye contact,
which may suggest that children with ASD become more socially engaged
when using computers with another person. Differences between the children
with ASD and the control group could pinpoint the adequacy of the use
of computers to enhance social interaction in the population with ASD
in particular.
|
|
Yufang Cheng, David Moore & Paul
McGrath: Virtual learning environments for children
with autism –poster– |
|
Abstract:
Autism is a neurodevelopmental disorder characterised by a triad
of impairments: in communication, social understanding, and rigidity
of thought (Wing 1996). It is often held that children with autism
are poor at mind-reading and have a limited understanding of the emotional
expressions of both others and themselves.
An interesting possibility is that the use of Collaborative Virtual Environment
(CVE) technology may be able to help children with autism counter these
difficulties. A CVE can be defined as a computer-based, distributed, virtual
space or set of spaces, in which people can meet and interact with others.
Given this definition, the prima facie argument for CVE for people with
autism is clear: a CVE can potentially provide a means by which people
with autism can communicate with others (autistic or non-autistic) and
thus circumvent their social and communication impairment and sense of
isolation. The technology can also be used for purposes of practice and
rehearsal. A key aspect of CVE is that users are represented in the environment
by their personal "avatar" (Cassell et al, 2000). If autistic
children are to benefit from CVE, therefore, it is important that they
are able to interact successfully with their own and other people's avatars.
Further, it may be that working with avatars that represent the emotions
of their users helps to combat any theory of mind deficit. Our research
interest, then, concerns how people with autism interact with avatar
representations. Given this, we have built a system, utilising avatar
representations of emotions based on work by Fabri (2001), that requires
users to work through three stages. In stage 1 an avatar is presented
in isolation, for the emotions happy, sad, angry and frightened. Stage
2 represents the same emotions in the context of a social story, since
this may help users infer the likely emotion caused by certain events.
In stage 3 the user is given an avatar representation of one of the emotions
and asked to select what event caused this emotion; the argument here
is that inferring the possible cause of a displayed emotion is likely
to be essential when using a CVE for communication. Users' responses
to the system are recorded by the software for subsequent analysis.
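A hypothetical sketch of the kind of response record such software might keep (field names are invented, not taken from the actual system):

    /* Hypothetical sketch: each user answer is logged with its stage and
     * the emotion shown, for later analysis. */
    #include <stdio.h>

    typedef enum { HAPPY, SAD, ANGRY, FRIGHTENED } Emotion;

    typedef struct {
        int stage;      /* 1 = avatar alone, 2 = social story, 3 = infer cause */
        Emotion shown;  /* emotion displayed on the avatar */
        int answer;     /* the user's selection */
        int correct;    /* 1 if the selection was right */
    } Response;

    void log_response(FILE *f, Response r) {
        fprintf(f, "%d,%d,%d,%d\n", r.stage, r.shown, r.answer, r.correct);
    }

    int main(void) {
        FILE *f = fopen("responses.csv", "w");
        if (!f) return 1;
        Response r = { 3, SAD, 1, 1 };   /* example: correct cause chosen */
        log_response(f, r);
        fclose(f);
        return 0;
    }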
|
|
Nicole Oudin, Jacqueline
Nadel & Joëlle Proust: Computerised
facilitated communication for nonverbal children with autism –demonstration–
|
|
Hideki Kozima: The Infanoid
–demonstration– |
|
Abstract:
We are designing a robot that can help children, either normal
or autistic, learn to communicate with others. Communication is
one form of social interaction, in which one predicts and
controls someone else's behavior by using social cues like bodily/facial
gestures and speech. This project note describes our on-going
exploration of the design principles of a robot with which normal
or autistic children can play a contingency-detection game. In
the game, the robot reacts to the social cues that the children
make and displays social cues that will induce some
response in the children, possibly forming social interaction.
As a possible embodiment, we introduce our infant-robot, Infanoid,
which is currently capable of primordial attentional interaction
with humans.
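A minimal hypothetical sketch of a contingency-detection game loop (detect_cue() stands in for the robot's perception; this is not the Infanoid code):

    /* Hypothetical sketch: the robot answers the child's social cue
     * promptly, so the child can discover that their action causes
     * the robot's response. */
    #include <stdio.h>

    int detect_cue(int t) { return t % 3 == 0; }   /* stub perception */

    int main(void) {
        for (int t = 0; t < 9; t++) {
            if (detect_cue(t))
                printf("t=%d: cue seen -> nod and vocalise\n", t);
            else
                printf("t=%d: idle attention-attracting behaviour\n", t);
        }
        return 0;
    }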
|
|
Caroline Potier, Daniel Viezzi, Jacqueline
Nadel & Philippe
Gaussier: Neonatal imitation modelled by a robot –demonstration– |
|
Abstract:
In the context of an interdisciplinary cooperation, we have
conceived a robotic "mouth" to study imitation of
tongue protrusion and mouth opening as performed by human newborns
in a human context. Results obtained in this framework will suggest
further robotic developments that could help us understand
the mechanisms involved in imitation. The robotic mouth can
be programmed simply by the end user, who can specify both
the amplitude and the speed of the movements. The minimal
program is written in C under Linux and works in text mode, so
no expensive specialised equipment is required. The study of
tongue protrusion is simplified by the ability of the system
to create patterns of actions, or sequences of patterns in correspondence
with psychological protocols.
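The abstract states that the control program is written in C; the following is only a guess at what such an end-user pattern specification might look like, not the authors' actual code:

    /* Hypothetical sketch: each element fixes amplitude and speed, and a
     * sequence of elements forms a reproducible action pattern. */
    #include <stdio.h>

    typedef struct {
        double amplitude;  /* how far the tongue/mouth moves */
        double speed;      /* how fast the movement is executed */
    } Movement;

    void execute(const Movement *seq, int n) {
        for (int i = 0; i < n; i++)
            printf("move: amplitude=%.1f speed=%.1f\n",
                   seq[i].amplitude, seq[i].speed);
    }

    int main(void) {
        /* a tongue-protrusion pattern matching a psychological protocol */
        Movement protrusion[] = { {1.0, 0.5}, {0.0, 0.5}, {1.0, 0.5} };
        execute(protrusion, 3);
        return 0;
    }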
One of the main interests of the robotic solution, in comparison with a
human demonstrator, is its precision (to within 1 millisecond) and the reproducibility
of the sequences. This makes it possible to compare results obtained across several
experiments very precisely. The robotic solution also keeps an identical
reference frame in both time and space, since we can test the
same protocol whenever and wherever. Besides, compared with a real mouth,
the robotic one can be schematised at will in order to test which features
are really important in a human mouth. Conversely, compared with a simulated
mouth displayed on a 2D monitor, the robotic mouth is 3D, and more than
that, it is really "embedded" and "situated".
The basic question for developmental psychologists is to explore whether
neonatal imitation in humans is a selective process that requires biological
modelling or whether it is an elective process likely to occur in front
of animated though non-biological stimuli.
|
|
Ronald Kemeling: MIMIC approach –demonstration– |
|
Abstract:
Children with a developmental problem such as autism rarely get
the opportunity to explore their environment and, even if they
are able to do so, positive feedback is often lacking. Their
body scheme awareness is also often poorly or insufficiently
developed. With the MIMIC program an environment can be developed
which gives children full control of what happens. Moreover,
they receive multichannel feedback. Studies in Sweden and the
US have shown that this form of feedback can be very effective.
MIMIC is a unique multimedia computer program with which a fully
interactive multisensory development and learning environment
can be created in a simple way. The principle is very simple.
The computer observes the space by means of a camera. When
any movement is observed on a pre-defined spot, the computer
responds with an action. The action depends on what the counsellor
has programmed in the computer. All kinds of movements are possible
by means of which concepts such as high, low, left-right, large-small,
in-out can be visualised. A number of persons can simultaneously
use this environment and in this way they can make music or participate
in an interaction. Colours and subsequently emotions can be linked
to a movement towards a specific spot. Language and communication
exercises can be composed. Spatial orientation exercises, body
scheme development, matching exercises, behavioural therapeutic
approaches, eliciting of movements are only some of the possibilities
of MIMIC. I have worked with children at two different schools
for special education (SLD). I am now working on content with
videos of a 15-year-old girl with autism. I wish to know whether
this form is appropriate for her. I wish to show you several
aspects of the program's operation and I have some video material.
I would like to set up a collaborative study of the effects and
development of content.
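A hypothetical sketch of MIMIC's trigger principle (frame capture is stubbed and the thresholds are invented): successive camera frames are compared over a pre-defined spot, and the programmed action fires when enough pixels change:

    /* Hypothetical sketch of movement detection on a pre-defined spot
     * by frame differencing. */
    #include <stdio.h>
    #include <stdlib.h>

    #define W 64
    #define H 48

    void get_frame(unsigned char f[H][W]) {   /* stub camera */
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                f[y][x] = (unsigned char)(rand() % 256);
    }

    int main(void) {
        unsigned char prev[H][W], cur[H][W];
        get_frame(prev);
        get_frame(cur);
        int changed = 0;
        /* the pre-defined spot: a small rectangle of the image */
        for (int y = 10; y < 20; y++)
            for (int x = 10; x < 20; x++)
                if (abs(cur[y][x] - prev[y][x]) > 30) changed++;
        if (changed > 20)
            printf("movement on spot -> play the programmed response\n");
        return 0;
    }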
|
18h |
Final comments and what next? |
|