According to Johnson and Lakoff, morality emerges from and addresses itself to our embodied experience. Our moral goals focus on the promotion of bodily needs such as health, strength and nurturance. It would seem natural, then, to extend our moral concepts to other embodied, sentient beings who require the same things to thrive. As long as we engage in practices like the animal testing of cosmetic products, however, we prioritize relatively trivial desires over the health and lives of other sentient beings. Such practices disrupt the relationship between morality and embodied well-being.
I will propose that our capacity for an appropriate moral response is impaired by the interplay of our metaphors regarding thought, communication and empathy. A critical component of the Moral Empathy metaphor is the following entailment: in order to experience empathy toward another being, we must believe we have the ability to project our consciousness into them and experience the world from their perspective. Some of our key metaphors regarding thought and communication – particularly Thought As Language, the Conduit Metaphor and an emerging metaphor composed of a system of nesting containers of language, reason and morality – conflict with this view of empathy when applied to nonhuman animals. When we believe that thoughts are composed of language and that language is the conduit for expressing thoughts, we perceive animals as unintelligible, as creatures whose minds we cannot access.
A metaphorical understanding of mind and communication that involves strictly internal, enclosed thought bars us from seeing other animals as moral actors. It is not until we unleash meaning-making from the mind and allow it to reverberate throughout the body that we are able to see other beings as living morally meaningful lives. It is then that we begin to enact an adequate moral response.
1. George Lakoff and Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought (New York: Basic Books, 1999), 331.
2. Lakoff and Johnson, Philosophy in the Flesh, 309.
In many situations, people act in a social context. Social context has been demonstrated to influence cognitive processes such as perception and reasoning. By contrast, little is known about its influence on action planning and action control. Does knowledge of others' tasks and potential actions have an impact on individual performance? That is, do individuals form shared representations of each other's actions and task rules? Previous research (Sebanz, Knoblich, & Prinz, 2003) demonstrated that individuals performing complementary actions in a spatial compatibility paradigm formed representations of each other's actions. In the present study, we distributed two further cognitive tasks – namely the SNARC
paradigm (Dehaene, Bossini, & Giraux, 1993) and the Flanker paradigm (Eriksen & Eriksen, 1974) – between two persons, such that each
individual did not need to take the co-actor's actions and task rules into account for her own performance. For both paradigms, RT patterns of
individuals acting on their own and individuals acting with a co-actor differed significantly from each other. In the SNARC task, participants
in the group condition showed the typical RT pattern of reacting faster to small numbers on the left and faster to large numbers on the right.
In the Flanker task, responses in the group were slowed when the flankers surrounding the target referred to the other person's S-R mapping. The results indicate that individuals in the group condition co-represented each other's actions and task rules, even though coordination was not required. We suggest that the ability to form shared representations of tasks is a cornerstone of human social cognition: it allows individuals to extend the temporal
limits of their action planning in order to act in anticipation of others’ actions rather than just to respond.
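The compatibility logic behind such a SNARC analysis can be sketched in a few lines. The trial fields, reaction times, and the magnitude split at 5 below are illustrative assumptions for exposition, not the authors' actual design or data:

```python
from collections import defaultdict

def snarc_compatibility(number, response_side):
    """Classify a trial: small numbers are SNARC-compatible with left
    responses, large numbers with right responses (illustrative split at 5)."""
    magnitude = "small" if number < 5 else "large"
    compatible = (magnitude == "small" and response_side == "left") or \
                 (magnitude == "large" and response_side == "right")
    return "compatible" if compatible else "incompatible"

# Illustrative trials: digit shown, responding side, reaction time in ms.
trials = [
    {"number": 2, "side": "left",  "rt": 412},
    {"number": 8, "side": "left",  "rt": 455},
    {"number": 2, "side": "right", "rt": 448},
    {"number": 9, "side": "right", "rt": 405},
]

rts = defaultdict(list)
for t in trials:
    rts[snarc_compatibility(t["number"], t["side"])].append(t["rt"])

# In this toy data, compatible trials come out faster on average.
for condition, values in sorted(rts.items()):
    print(condition, sum(values) / len(values))
```

The same scoring function applies unchanged in a joint setting; only the assignment of response sides to the two co-actors changes.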
Using genetic algorithms, our multi-disciplinary research team is artificially evolving robots with the capacity to draw. We are interested in what adaptive robotics can teach us about creative cognition, especially with an eye towards the creation of art. A number of interesting philosophical and scientific issues have already begun to emerge from this work. This paper discusses just one of these issues, namely constraints on creative artistic processes.
Our initial experiments are carried out in a simulated arena; once fit individuals evolve, we will transfer their neural network control systems to a physical robot. Our research strategy involves a well-demarcated set of constraints, deriving both from (a) fitness functions (based on the individuals' line-drawing and wall-avoidance behaviour), and (b) the particular physical embodiment of the robot (the morphology and response characteristics of the sensors and motors). (a) potentially provides insight into the analogous fitness constraints on artistic creation – an artist is constrained (sometimes explicitly, sometimes implicitly) by social, theoretical, and historical factors in creating art. (b) may provide insight into the physical situatedness of the artist and its role in the creative process.
Our methodology thus provides a novel angle for rejecting the following romantic notion of creativity: creative processes are unpredictable and thus necessarily unconstrained. Artificially evolved robots are constrained. In spite of such constraints, however, they use anything available (noise, environmental regularities, programming errors) to solve a problem or increase their fitness, and in ways unpredicted by the programmer. Creative artistic processes often occur under constraints but nonetheless involve unexpected or unusual conceptual combinations that result in creative solutions. For instance, in composing The Well-Tempered Clavier, Bach was constrained by the 12 tone scale, tempered tuning, and clavier instrumentation. So the romantic can keep unpredictability, but not without constraints.
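The evolutionary logic described above can be sketched minimally. The fitness terms below (a "drawing" reward and a "wall" penalty over a stand-in genome of weights) are hypothetical placeholders for the actual line-drawing and wall-avoidance fitness functions, which operate on robot behaviour rather than on the genome directly:

```python
import random

random.seed(1)        # reproducible toy run
GENOME_LEN = 16       # stand-in for neural-network weights
POP_SIZE = 20
GENERATIONS = 30

def fitness(genome):
    # Hypothetical stand-ins for the two constraint sources named above:
    # (a) reward "line drawing" (here: the sum of the weights) and
    # (b) penalise "wall hits" (here: weights of large magnitude).
    drawing_reward = sum(genome)
    wall_penalty = sum(abs(g) for g in genome if abs(g) > 0.8)
    return drawing_reward - wall_penalty

def mutate(genome, rate=0.1):
    # Gaussian perturbation of a fraction of the weights.
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
initial_best = max(fitness(g) for g in population)

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # elitist truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(population, key=fitness)
```

Because the parents survive each generation, best fitness never decreases; the interesting behaviour, as the abstract notes, comes from what mutation and the environment exploit along the way.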
Disembodied approaches to cognition are characterised in part by their
focus on neurocomputational architecture at the expense of
investigating cognition's dynamics. This has no small impact on
vision research: the temporal dynamics of sensory inputs are
intimately involved in perception – at the very least for time
judgments, but likely also for perception more generally. Most models
of perception, however, assume passive, stimulus-driven computation
(Engel et al, 2001, Nat. Rev. Neuro.). Models that do involve temporal
dynamics are typically self-contained motor control theory accounts
(at a level of analysis within, rather than among, cognitive
subsystems, e.g. Tweed, 1998, Science), take a more philosophical tack
(e.g. Grush, 2005, J. Neural Eng.), or have been psychological models
principally concerned with explanations of time perception, rather
than the temporal dynamics of perception (e.g. Pöppel, 1997, Trends Cogn. Sci.). That research addressing the temporal dynamics of sensory
processing is still in its early stages is evidenced by the recent
finding that the perceived order of stimulus flashes is reversed in
vision immediately prior to a saccade (Morrone et al, 2005, Nat.
Neuro.). In the paper describing the effect, the authors theorise that
this might be explained by appealing to the slowing of a neural
'clock', but provide no workable model. Here we present a number of
experiments designed to tease apart the details of the temporal
inversion illusion: we first confirm that the effect does not occur
for pre-saccadic sound stimuli and then use these stimuli as temporal
'landmarks' to determine which visual stimulus is being temporally
'moved'. If, for example, the synchrony demands of successful trans-saccadic integration are at play, the visual system may be buffering visual information immediately prior to a saccade, saturating its timing mechanisms and resulting in an unbuffered, and thus accelerated, percept of the second flash.
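As a rough illustration of this buffering hypothesis (not a fitted model; all parameters below are assumed values for exposition), one can write a toy single-slot buffer in which the first pre-saccadic flash is delayed while a second flash passes through unbuffered, reversing perceived order:

```python
def perceive(onsets_ms, saccade_ms, window_ms=100, delay_ms=80):
    """Return perceived times for a list of flash onsets. A flash inside
    the pre-saccadic window is buffered (its percept delayed); the single
    buffer slot then saturates, so a later flash passes through unbuffered.
    Window and delay values are illustrative assumptions."""
    buffer_free = True
    perceived = []
    for onset in sorted(onsets_ms):
        in_window = saccade_ms - window_ms <= onset < saccade_ms
        if in_window and buffer_free:
            perceived.append(onset + delay_ms)  # buffered: percept delayed
            buffer_free = False                 # buffer now saturated
        else:
            perceived.append(onset)             # unbuffered: veridical timing
    return perceived

# Two flashes shortly before a saccade at 200 ms: physical order 140, 190.
p1, p2 = perceive([140, 190], saccade_ms=200)
print("reversed" if p1 > p2 else "veridical")   # prints "reversed"
```

An auditory "landmark" in this scheme would simply be an onset that never enters the visual buffer, which is what makes it usable as a temporal anchor.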
When traditional user interfaces are moved into three dimensions
(both on the input and output side), new cognitive design challenges
emerge. With Norman's (1988) design principles (making functions
visible, using good conceptual models, determining appropriate
mappings, and providing consistent feedback) as a baseline, we look
at some of the unique challenges of 3D interfaces.
We observed subjects using a large, high-resolution, stereoscopic
projection system designed (with the aid of tracked glasses and wand)
to allow chemists to interact with molecular models. We found that
subjects had difficulty remembering spatial and button mappings that
were neither logical nor conventional. They relied instead on the
world (in this case, the interface) to remind them of the
possibilities that a mode offered, through ongoing (re)experimentation
with that mode. However, since the mappings of tracked wand-movements
to model-movements are not always logical or conventional, simply
moving the wand does not always give sufficient feedback to tell the
user what mode they are in. These difficulties were more pronounced
in cases where there were conflicts with the a priori expectations of
the user about how mice perform (you don't twist mice, for example),
and with buttons that served multiple functions, or different
functions in different modes or kinds of manipulations.
We outline a predictive methodology for testing the efficacy of
cognitive design principles, illustrating it with results from this study.
The study of cognition, once framed as a product of the symbolic, "top down" processes of the brain, has undergone a revolution in recent years with the popularisation of "bottom up" models that treat intelligence as a "naturalised", relational phenomenon. On this view, cognition depends not only on the symbolic processing of internal data but is inextricably contingent upon the various sensorimotor processes and substrates that produce a "spontaneous emergence" between the brain, the embodied sensorium, and environmental stimuli external to the epidermal surface of the body. In this capacity it can be said that the environment is as much a part of cognition (and the self) as any isolated internal model.
Using this insight as part of their uniquely interdisciplinary platform, the artist/architect pair Arakawa and Gins have, over a 43-year collaboration, produced architectural "procedures" that pay exceedingly close attention to the way cognition is contingent upon environmental influences, or, to employ their nomenclature, the way the "organism person" cleaves to its "biosphere". Taking as their cue the idea that the body is indistinguishable from its surrounds, they have designed, innovated and built architectural procedures that exploit this relationship in order to interrogate, challenge and yield from the human condition new and radical possibilities. Arakawa and Gins take this "possibility" to the audacious extreme of proclaiming that, through an "Architectural Body", we may indeed "not need to die".
To contextualise, my research is directed toward the pressing social need for the invention and assembly of innovative procedures that delay (and perhaps even displace) the onset of dementia. This endeavour carries particular importance for western democracies, whose demographic bulge of rapidly ageing baby boomers faces what is popularly understood as an imminent dementia epidemic. Contrary to the hype surrounding Arakawa and Gins's "Reversible Destiny" architecture, my concern lies not so much with the immortality enterprise as with the architectural enablement of the elderly in ways that foster independence and autonomy.
At the conference I will critically examine Arakawa and Gins's interrogation of cognition as a site of unknown potentiality, articulating how the technical aspects of their concept of "landing sites" (the cleaving of person and surround) work, via experiential case studies of specific architectural "Sites of Reversible Destiny": Yoro Park in Gifu, Japan; the Reversible Destiny Lofts in Tokyo, Japan; and the Bioscleave House in East Hampton, New York, USA.
The classic assumption in psychology has been that emotion is separate from cognition and that it hinders rationality. The need to reduce the influence of subjectivity and emotion in decision making and behaviour has been emphasized (Sayegh, Anthony and Perrewe, 2004; Cacioppo, Gardner and Berntson, 1999; Pitcher, 1999), and much of the development of decision theory to date has focused on the cognitive aspects of decision making.
However, advances in neuropsychology and neurophysiology point to definite interactions between affective and cognitive functions (Damasio, 1994; LeDoux, 1995; Oatley & Jenkins, 1992, 1996; Goleman, 1995). Many theorists now recognize that affect and cognition interact dynamically (e.g. Blascovich & Mendes, 2001; Smith & Kirby, 2001; Fiedler, Forgas & Greenwald, 2001). The relationship between affect and cognition is deemed to be complex, context-sensitive and bidirectional (Forgas, 2001: 400): cognition can influence affective experiences (Forgas, 2001: 393; Smith & Kirby, 2001) and affect in turn influences cognition (Izard & Ackerman, 2000).
Ethical decision making theory has traditionally used the same rationalist view as general decision-making theory even though situations where ethical dilemmas occur are often fraught with emotions. Emotions seem to have been considered mostly non-essential to the ethical decision making process and best ignored or controlled, since they tend to disrupt logical, rational moral judgment (Gaudine and Thorne, 2001: 175).
The purpose of this paper is threefold. We review the extent to which emotions have been considered in ethical decision making models to date and outline ways in which they contribute to the different components of the decision-making process. A cognitive-emotional interactionist model is then proposed, based on the assumption that cognition and emotion are intertwined throughout the process. Finally, future research directions and implications for ethics training and ethics programs are discussed.
Recent developments in cognitive science research on olfaction emphasize above all the idiosyncrasy of olfactory qualia and the difficulty of communicating them in language (Rouby et al. 2002). For this reason, this sensory modality is held to be the least suited to cultural treatment, defined as the sharing of mental and public representations (Sperber 1996). Against this view, drawing on the first results of our ongoing research with a French population, we will argue that taking into account the complexity of domestic knowledge and know-how in this domain makes olfaction an excellent tool for criticizing models of cultural transmission (e.g. Dawkins 2003 (1976), Aunger 2003, Cavalli-Sforza 2005) that rest on the principle of a transfer of propositional contents (Weingart et al. 1997).
In parallel, we will recall that cognitive anthropology has concerned itself almost exclusively with questions of a visual order (e.g. Berlin & Kay 1969; Rosch & Lloyd 1978), defending a mentalist conception of cognition that reduces culture to a rarely problematized sharing of relatively identical mental structures. Starting from the olfactory example and from a microsocial scale, our aim will be to discuss the foundations of an alternative model of cultural transmission that takes into account the distributed and situated character of the mechanisms of production, processing and storage of sensory information.
First, we will characterize the different forms of sensory "education" at work in family transmission. Second, we will argue that the sharing of (olfactory) mental representations is not the key to understanding the behavioural regularities observed; rather, this requires refocusing on the encounter between a structured environment (Odling-Smee et al. 2003), the development of attentional skills (Ingold 2000) and the negotiation between parties of the phenomenal qualities of perception (Candau 2001).
References cited:
Aunger, R. (ed.), Darwinizing Culture, Oxford: Oxford University Press, 2003
Berlin, B. & Kay, P., Basic Color Terms: Their Universality and Evolution, Berkeley: University of California Press, 1969
Candau, J., Mémoire et expériences olfactives, Paris: PUF, 2001
Cavalli-Sforza, L.L., Evolution biologique, évolution culturelle, Paris: Odile Jacob, 2005
Dawkins, R., Le gène égoïste, Paris: Odile Jacob, 2003 (1976)
Ingold, T., The Perception of the Environment, London: Routledge, 2000
Odling-Smee, F., Laland, K. & Feldman, M., Niche Construction: The Neglected Process in Evolution, Princeton: Princeton University Press, 2003
Rosch, E. & Lloyd, B., Cognition and Categorization, Hillsdale: Lawrence Erlbaum Associates, 1978
Rouby, C., Schaal, B., Dubois, D., Gervais, R. & Holley, A. (eds.), Olfaction, Taste, and Cognition, Cambridge: Cambridge University Press, 2002
Sperber, D., La contagion des idées, Paris: Odile Jacob, 1996
Weingart, P., Mitchell, S., Richerson, P. & Maasen, S. (eds.), Human Nature: Between Biology and the Social Sciences, London: Lawrence Erlbaum Associates, 1997
In biorobotics, autonomous agents have successfully been applied as empirical models of simple
behavioral patterns and of the influence of morphology on adaptive behavior. We propose an
extended methodology called “comparative cognitive robotics” to use autonomous mobile robots
as empirical models in the comparative psychology of learning and adaptation. One central idea
of this approach which is inherited from biorobotics is to test robot models and the animals to be
modeled in the same experiments within the same experimental environments, applying the same
means of behavioral analysis. In an interdisciplinary collaboration initiated by two PhD students
in cognitive science and animal learning (R. S. John and Christian W. Werner), an empirical
autonomous agent model of visual discrimination learning in chickens was constructed in a one-
year student project in cognitive science and is now constantly being refined by different graduate
students at the University of Osnabrück, Germany. The model is evaluated by comparing the
animal and robot learning behavior under the same experimental conditions. To control the robot,
an exemplar-based learning mechanism is used which is able to operate directly on unprocessed
sensory data that is not analyzed into features before entering the learning process. Our model
does show differentiated responses to different categories defined by different features, although
no abstracted representation of these features or categories is formed inside the agent. We call this
ability “categorization without categories”. The learning curves turn out to be fully explainable by
the interaction of our simple learning mechanism with a real-world environment. This influence
of real-world data on the emergence of learning phenomena can only be made visible through the
use of a robot-based model. As our autonomous agent model does not need to assume abstracted
features and representations of categories, it can be judged to be more parsimonious than earlier
models of discrimination learning.
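The exemplar-based mechanism described above can be sketched as a nearest-neighbour memory over raw sensory vectors. The vectors, rewards and distance metric below are illustrative assumptions, not the actual robot implementation; the point they illustrate is that no feature extraction or explicit category representation ever occurs:

```python
import math

class ExemplarLearner:
    """Stores raw sensory vectors with their outcomes and responds by
    similarity to stored exemplars; no category label is ever computed."""

    def __init__(self):
        self.exemplars = []  # list of (raw_vector, reward) pairs

    def learn(self, raw_vector, reward):
        self.exemplars.append((raw_vector, reward))

    def respond(self, raw_vector):
        """Approach (True) if the most similar stored exemplar was
        rewarded, avoid (False) otherwise."""
        if not self.exemplars:
            return True  # unbiased default before any experience
        nearest, reward = min(self.exemplars,
                              key=lambda e: math.dist(raw_vector, e[0]))
        return reward > 0

# "Training" on raw patterns from two stimulus classes:
learner = ExemplarLearner()
learner.learn([0.9, 0.1, 0.8], +1)   # rewarded pattern
learner.learn([0.1, 0.9, 0.2], -1)   # unrewarded pattern
print(learner.respond([0.85, 0.2, 0.7]))   # similar to the rewarded one
```

Differentiated responding to novel stimuli falls out of raw similarity alone, which is the sense in which the model exhibits "categorization without categories".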
John, R.S., Werner, C. W. (2004). Comparative cognitive robotics: Using autonomous robots as
empirical models of animal learning. In: Schaal, Stefan; Ijspeert, Auke; Billard, Aude,
Vijayakumar, Sethu; Hallam, John; Meyer, Jean-Arcady (eds.). From Animals to Animats 8:
Proceedings of the Eighth International Conference on the Simulation of Adaptive Behavior
(SAB’04), Cambridge, MA, London, UK: MIT Press, pp. 23-32.
The famous functionalist Hilary Putnam once said, "We could be made of Swiss cheese and it wouldn't matter." He used "copper, cheese, or soul" to demonstrate that it should not matter what sort of material realizes a mind, as long as its states are isomorphic to our own mental states. He later rejected much of his own functionalism and embraced a more pragmatist view, but he always retained the belief that any body could have a mind like ours. This paper argues that it is a mistake to think that any body can have a mind like ours. It isn't the copper, cheese, or soul that matters so much as the form those things take (with full acknowledgment that cheese is incapable of taking the necessary form). Starting from the American Pragmatists (specifically John Dewey) and moving through fields as different as the phenomenology of Maurice Merleau-Ponty and the neuroscience of Antonio Damasio, this paper ultimately argues that this body of embodiment theory culminates in Lakoff and Johnson's work on cognitive metaphor, which is mostly ignored in AI. Their work shows specifically why the types of bodies we have do matter. If AI were to pay more attention to this work on metaphor, the field would have to recognize not only that we can never achieve "human action" in intelligence without these specific bodily isomorphisms, but also that communication between our species and an artificial one would be impossible as a result of conceptual and experiential incompatibility. So, while a few people are working on embodiment in AI, most still fail to recognize why the artificial body must be non-trivially very much like our own. Without this recognition, AI will continue to fail even theoretically, as it has for the last 50 years.
As topics of research have become more concerned with embodied
cognition, there has been a proliferation of approaches for scientific
investigation in such domains. The sheer complexity of behaviour
inherent in any richly embodied cognitive situation is such that
standard methodologies become difficult to apply and are less likely
to lead to meaningful conclusions about the cognitive agent. In the work
described in this talk, we address this situation using two different
and complementary computational modelling techniques. First, we
develop a series of cognitive models of increasing complexity. This
process always starts with an exceedingly simplistic (and often highly
random) model. We demonstrate the importance of this approach in our
work modelling the development of social groups in children, where we
found that the primary measure used by child psychologists can be
adequately replicated by a completely random model. Second, we
address the oft-neglected problem of modelling the agent's
environment, especially when that environment includes other agents.
This has been of considerable importance in our investigations of
human game playing, where we have gone to considerable lengths to
ensure concordance between the environment experienced by the human
subjects and that 'experienced' by the cognitive model. Following
these guidelines allows us to draw more accurate and convincing
conclusions as to how well our models represent the real organism, and
what aspects of those models are responsible for this success.
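The "completely random model" strategy described for the social-groups work can be sketched as follows. The measure computed here (group sizes read off from random pairwise links between children) is an illustrative stand-in for the psychologists' actual measure, which is not specified above:

```python
import random

def random_group_sizes(n_children, p_link, seed=0):
    """Link pairs of children at random with probability p_link, then
    return the sizes of the resulting connected groups. This is the
    'null' model: no social mechanism at all, only chance."""
    rng = random.Random(seed)
    links = {i: set() for i in range(n_children)}
    for i in range(n_children):
        for j in range(i + 1, n_children):
            if rng.random() < p_link:
                links[i].add(j)
                links[j].add(i)
    seen, sizes = set(), []
    for i in range(n_children):
        if i in seen:
            continue
        stack, group = [i], set()  # depth-first search for one group
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(links[node])
        seen |= group
        sizes.append(len(group))
    return sizes

sizes = random_group_sizes(n_children=25, p_link=0.1)
print(sum(sizes) / len(sizes))  # mean group size under the null model
```

If the observed measure is statistically indistinguishable from this null distribution, the measure carries no evidence for any structured social mechanism, which is the methodological point the talk makes.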
The literature on the functional specialization of the hemispheres for emotion is often contradictory. To explain some of these contradictions, we studied the role of the hemispheres as a function of the valence (positive or negative) and the degree of embodiment (weak or strong bodily involvement) of French words. We expected an embodiment × valence × hemisphere interaction. For the evaluation of words with weak embodiment, it should show (a) a left-hemisphere advantage for negative words relative to positive ones and (b) a right-hemisphere advantage for positive words relative to negative ones. This valence × hemisphere interaction should reverse for the evaluation of words with strong embodiment (i.e., a left-hemisphere advantage for positive words and a right-hemisphere advantage for negative ones).
The forty participants in the experiment were asked to judge the affective valence of positive or negative words with weak or strong embodiment. These words were briefly presented to the right or the left hemisphere using the divided visual field method.
We obtained significant embodiment × valence and embodiment × valence × hemisphere interactions. The first shows that participants were slower to evaluate a negative word with weak embodiment than a negative word with strong embodiment, whereas they were slower to evaluate a positive word with strong embodiment than one with weak embodiment. The second shows that the valence × embodiment interaction was observed only for words presented to the right hemisphere.
These results indicate that the explicit evaluation of a stimulus's valence depends on the bodily dimension of the word. They also highlight the role of the right hemisphere in the link between valence and embodiment.
Within the scientific tradition of distributed cognition, it is widely accepted that technological tools do not merely serve as occasional supports for human cognitive processes but, depending on the actors' intentional actions, dynamically transform and reorganize these processes (Salomon, 1993; Rogers & Ellis, 1994). From this perspective, which regards actors and technical devices as inseparable entities in an instrumented situation (Hutchins, 1995; Rabardel, 1995), we will explore how the symbolic system of a technological sharing tool (a collaborative work platform) and the reflective abilities of participants in a pre-professional training programme mutually determined one another.
We propose to pursue the following question: how do the actors' abilities (in this case, students') and the platform's symbolic system interact, and in what way do they determine each other? More specifically, we will focus on two points:
A) how the "affordances" of the tools combine with the actors' will to structure and regulate their work;
B) the effects that objectification on screen had on the actors' modes of working.
Our field of analysis is an experimental ICT training programme for future language teachers (the project "le français en (première) ligne", coordinated by Université Stendhal Grenoble III and the École Normale Supérieure Lettres et Sciences Humaines de Lyon). While our reflection falls principally within the language sciences, we borrow our analytical model from distributed cognition in order to explore the instrumented social dynamics that emerged.
HOLLAN, J., HUTCHINS, E., KIRSH, D. Distributed Cognition: Towards a New Foundation for Human-Computer Interaction Research. ACM Transactions on Computer-Human Interaction, June 2000, vol. 7, no. 2, pp. 174-196.
HUTCHINS, E. Cognition in the Wild. Cambridge: MIT Press, 1995.
JOUËT, J. Retour critique sur la sociologie des usages. In FLICHY, P., QUÉRÉ, L. (eds.) Communiquer à l'ère des réseaux. Réseaux, 2000, no. 100, pp. 489-521. Paris: CNET/Hermès Science.
PERRIAULT, J. La logique de l'usage. Essai sur les machines à communiquer. Paris: Flammarion, 1989.
RABARDEL, P. Les hommes et les technologies. Approche cognitive des instruments contemporains. Paris: Armand Colin, 1995.
RESNICK, L.B. Shared Cognition: Thinking as Social Practice. In RESNICK, L.B., LEVINE, J.M., TEASLEY, S.D. (eds.) Perspectives on Socially Shared Cognition. Washington: American Psychological Association, 1991.
ROGERS, Y., ELLIS, J. Distributed Cognition: An Alternative Framework for Analysing and Explaining Collaborative Working. Journal of Information Technology, 1994, vol. 9, no. 2, pp. 119-128.
SALOMON, G. (ed.) Distributed Cognitions: Psychological and Educational Considerations. Cambridge: Cambridge University Press, 1993.
SPERBER, D. L'individuel sous l'influence du collectif. La Recherche, July-August 2001, no. 344, pp. 32-35.
VANDENDORPE, C. Du papyrus à l'hypertexte. Essai sur les mutations du texte et de la lecture. Paris: La Découverte, 1999.
A variety of embodied and situated approaches have been developed within
the disciplines of cognitive science which have been instrumental in
criticizing standard cognitive science. While Embodied Action and
Situated Cognition overlap significantly (e.g. in emphasizing
environmental interaction), I argue that it is useful to clearly
differentiate them and to treat them as complementary perspectives – two
sides of a coin.
At the core of the embodied approach is the physiological body and
bodily action: the physics, biomechanics, and neurophysiology of (human)
movement (e.g. Shadmehr & Wise 2005). At the core of the situated
approach is the ecological environment, including the social and
cultural environment in which (human) activities are situated (e.g. Clancey 2002).
To do justice to cognition, all three perspectives – the physiological,
psychological, and ecological – are necessary.
Integrating these perspectives raises significant practical challenges.
Among them is the Laboratory – Real Life tension: investigating the
complex physiological processes involved in human action requires a high
degree of control present only in restricted experimental paradigms. Yet
the ecological perspective requires far richer environments.
I will discuss the activity of tea-making as a “boundary object” for
relating these perspectives. Taking place in a semi-laboratory
environment (“natural task”), it can be adequately controlled but still
captures some of the complexity of ecologically situated socio-cultural
activities (Land et al. 1999).
The benefits of this analysis are twofold. First, tea-making provides a
framework for relating paradigms of the cognitive neuroscience of action
(reaching, grasping, tool-use) to an ecological context. This enables an
evaluation of the power and the limitations of underlying explanatory
principles (such as internal models) in terms of how they scale up to
more complex actions (involving, for example, bimanual or even whole-body movements).
Second, the rich interplay between embodied, cognitive, and ecological
processes in sequential actions becomes apparent thus leading to an
enriched view on planning and decision making.
Clancey, W. J. (2002). Simulating Activities: Relating Motives,
Deliberation, and Attentive Coordination. Cognitive Systems Research
Land, M. F., Mennie, N. and Rusted, J. (1999). The roles of vision and
eye movements in the control of activities of daily living. Perception
Shadmehr, R. & Wise, S. P. (2005). The Computational Neurobiology of
Reaching and Pointing: A Foundation for Motor Learning. Cambridge, MA:
Bradford / MIT Press.
The question of how "meaning" arises has always been a major challenge for a "science of the mind". Since the early days of psychology, in the struggle to define the subject matter of the discipline, meaning has appeared in various guises: meaningful thought (intentionality), meaningful behaviour (goal-directedness), and representational meaning (reference), to use Bühler's (1927) distinction. Thus the problem of meaning has been framed in many different ways by various approaches, reflecting the respective explanatory strategies employed.
After briefly sketching some of the approaches including the one taken by classical cognitive science and showing how it got into trouble (“symbol grounding problem”, Harnad 1990, Dupuy 1999), I would like to explore some new perspectives on the old question of meaning opened up by embodiment and situatedness.
I will do this by discussing some research on categorization and concepts. The field conventionally associated with these issues follows in the tradition of philosophy and psychology, additionally drawing from computer science, linguistics and anthropology (Murphy 2002). I will offer a somewhat complementary perspective and focus on two areas more remote from traditional cognitive science, which unfortunately still tend to go unnoticed by large parts of categorization research. These are the cognitive neuroscience of action - with an emphasis on simulation theory (Jeannerod 2001) - and a set of approaches in the tradition of ethology (neuro-/cognitive ethology, behavioural and sensory ecology).
Being prime examples of embodied action and environmental situatedness, these areas closely relate to recent trends in cognitive science. In particular, the closed-loop conception of behaviour that comes naturally to these approaches fits nicely with the current shift of thinking in cognitive science from mapping/encoding models to a control view of cognition. Placing cognition in the context of closed-loop processes of interaction dedicated to anticipation and control may provide a basic framework for a more integrative approach to meaning (Cisek 1999).
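The contrast between a one-shot input–output mapping and the closed-loop control view can be made concrete with a minimal example. The sketch below is our own illustration (not drawn from any of the works cited): a hypothetical agent repeatedly senses a variable, compares it against an anticipated target, and acts to reduce the discrepancy, so that behaviour emerges from the ongoing interaction loop rather than from a single encoding step.

```python
# Minimal closed-loop sketch (hypothetical illustration): behaviour as
# interaction rather than as a one-shot mapping from input to output.

def closed_loop(target, state, gain=0.5, steps=20):
    """Simple proportional controller: each cycle closes the loop
    perception -> comparison with anticipated value -> action ->
    changed environment -> new perception."""
    trajectory = [state]
    for _ in range(steps):
        error = target - state      # anticipated vs. currently sensed value
        action = gain * error       # action chosen to reduce the discrepancy
        state = state + action      # the environment changes; the loop closes
        trajectory.append(state)
    return trajectory

path = closed_loop(target=1.0, state=0.0)
assert abs(path[-1] - 1.0) < 0.01   # the loop converges on the target
```

The point of the sketch is only that the "meaning" of each action is fixed by its role in the anticipatory loop, not by a static input–output table.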
Bühler, K. (1927/2000). Die Krise der Psychologie, Werke Vol 4 (Weilerswist, Velbrück Wissenschaft).
Cisek, P. (1999). Beyond the Computer Metaphor: Behaviour as Interaction. In Reclaiming Cognition, R. Nunez, and W. J. Freeman, eds. (Thoverton, Imprint Academic).
Dupuy, J.-P. (1999). The Mechanization of the Mind: On the Origins of Cognitive Science (Princeton, Princeton University Press).
Harnad, S. (1990). The Symbol Grounding Problem. Physica D 42, 335-346.
Jeannerod, M. (2001). Neural Simulation of Action: A Unifying Mechanism for Motor Cognition. NeuroImage 14, 103–109
Murphy, G. L. (2002). The Big Book of Concepts (Cambridge, MA, MIT Press).
An experiment was carried out to examine the ‘action–sentence
compatibility effect’ (ACE) first reported by Glenberg and Kaschak
(2002). The effect is predicted by Barsalou's (1999) theory of
perceptual symbols, which states that the same areas of the brain
concerned with the planning of real-world actions are also critically
involved in the understanding of sentences describing an action.
Glenberg and Kaschak assumed and found that understanding a sentence
describing a directed action (e.g., 'You open the drawer') takes longer
when participants perform a contra-directed action during comprehension
(i.e., moving their hand away from the body, for the example sentence
above) than when they perform a congruent, non-conflicting action
(i.e., moving their hand towards the body). The
ACE was most prominent in the so-called 'transfer' conditions of the
study where two persons were involved and less prominent in the
'imperative' condition where only one person was involved.
Our experiment adds a dissociation between two conditions which were
confounded in Glenberg and Kaschak's 'transfer' conditions, namely
between actions being taken by the protagonist (the sentence's subject
referred to as ‘you’, which the participants reading the sentence were
expected to identify with) and actions being taken by another person
mentioned in the sentence.
It was found that the ACE was significant only in the condition where
someone other than the protagonist of the situation was performing the
action. This suggests that the ACE in Glenberg and Kaschak's 'transfer'
conditions is actually due to the significance of the effect in those
'transfer' sentences where a person other than the protagonist performs
the action, rather than to an overall significance across all of the
'transfer' sentences. It is argued that the measurability of the ACE
depends on the complexity of the situation model that must be built in
order to understand the sentences.
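To make the basic measure concrete: the ACE is quantified as the response-time cost of a mismatch between the response direction and the direction implied by the sentence. A minimal sketch of such a congruency contrast, with invented response times purely for illustration (these are not data from the experiment described), might look like this:

```python
from statistics import mean

# Toy illustration with invented numbers: the ACE is the response-time
# cost when the response direction conflicts with the direction implied
# by the sentence being understood.
rt_congruent   = [612, 598, 640, 605, 623]   # ms: response matches implied direction
rt_incongruent = [655, 671, 648, 690, 662]   # ms: response conflicts with it

ace = mean(rt_incongruent) - mean(rt_congruent)   # positive value = ACE present
assert ace > 0
```

In the actual design this contrast is computed per participant and condition (imperative vs. transfer; protagonist vs. other actor) and tested for significance, rather than read off raw means.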
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577-660.
Glenberg, A. M. and Kaschak, M. P. (2002). Grounding language in action.
Psychonomic Bulletin & Review, 9(3):558–565.
Recent theories in Philosophy of Mind and Cognitive Science suggest that our cognitive processes need not be limited to the body or the brain. These theories of “Extended Cognition” claim that there is no reason to deny that environmental tools which aid cognition are part of the cognitive process simply because they are located outside of our skull. It has been argued that if we can make these sorts of claims about our cognitive processes, then we can make them for certain mental states as well. Specifically, Andy Clark and David Chalmers argue in their paper "The Extended Mind" (1998) that our explicit non-occurrent beliefs can be stored in the external world. To support such a view, they must provide a functionalist account of beliefs on which we automatically endorse the contents of our beliefs whenever we access them from memory. In this paper, I detail the problems that emerge from such a functionalist model of beliefs and examine its implications for extended beliefs. If the theory presented by Clark and Chalmers is correct, then our beliefs are only endorsed when accessed and therefore remain unendorsed while non-occurrent. I show that this is an improbable account of how our beliefs function, on the grounds that we can, and often do, endorse our beliefs even when we are not accessing them. Much of our behaviour can only be explained by appealing to beliefs that we would be extremely hesitant to consider occurrent. It should be noted that this paper is not an attack on the thesis of extended cognition or on the thesis of extended beliefs. Its aim is merely to demonstrate the problems inherent in Clark and Chalmers’s functional theory of belief and to determine how these would impact their theory of extended beliefs.
Why are we moral? What compels us to set aside the purely egocentric pursuit of our own interests and care about the interests of others?
I argue that the answer lies in the nature of empathy. I define empathy on the basis of a somatic theory of emotions rooted in James, Lange, Damasio and Prinz. On this theory, empathy is an essential process that allows us to simulate the affective states of others, that is, their visceral bodily states. These bodily states, which are the emotions, ordinarily allow us to attach an affective weight, positive or negative (i.e., pleasant or unpleasant), to our own mental representations, thereby making decision-making possible in particular. When two mental representations conflict in a dilemma, for instance, it is by comparing their assigned weights that I naturally come to prefer the representation with the more positive weight, i.e., literally the one that feels more pleasant. In the case of empathy, the representation to which I attach such a weight is quite particular: it is a representation constructed from the other's point of view. Although this simulation is fallible, it is what allows me literally to take on the other's interests. Indeed, somatic weights are also what allow us to feel deeply concerned by what we imagine.
I further claim that emotions are what allow us to grasp values. This is why entering into an empathic process allows us to grasp the value of a situation that is not our own. Refusing to enter into such a process is a display of indifference, an indifference that can be criminal or can even constitute the foundation of racism.
By contrast, when I answer the call to empathy that another person can sometimes make, in cases of distress, I recognize his or her situation in a deeply visceral way and thereby take a necessary first moral step.
We will present our work on a tendon-driven robotic hand with 13 degrees of freedom,
complex actuation dynamics, and different types of tactile sensors, used to test the following:
A. Cheap grasping: we investigate shape adaptation and how morphology and
materials can be beneficially exploited as the hand interacts with an object’s shape. When
the hand is closed, the fingers will, owing to the hand's anthropomorphic morphology,
automatically come together. Because of this morphology, the elastic tendons, and the
deformable fingertips, the hand automatically self-adapts to the object it is grasping,
without needing to "know" beforehand what the shape of the to-be-grasped object will be.
Shape adaptation is thus taken over by morphological computation.
B. Adaptive learning: in biological systems, at all stages of development, the nervous
system must be able to innervate and adapt functionally to any changes in the body. As
not all possible changes can be anticipated by the designer, the system should be capable
of exploring its own movements and coherently adapting its behavior to new situations.
Aiming to endow our robots with such adaptivity, we present a common basis for
investigating the growth of a neural network, value systems, and learning mechanisms. The
proposed neural network allows the robotic hand to explore its own movement
possibilities, to interact with objects of different shape, size, and material, and to learn how to grasp them.
C. Information self-structuring: if the robotic hand actively manipulates an object in a
sensorimotor coordinated way, there are likely to be correlations in the sensorimotor
space (e.g., proprioception, motor activity, and different sensory channels) and causal
structure will be generated. Manipulation of objects will not only be important for the
learning of a multi-modal representation of the objects, but will allow one to draw
conclusions regarding the impact of morphology on building such a representation, given
the direct relation between the robot's morphology and its ability to manipulate objects.
The experiments are performed with two different prototypes of the robotic hand: the
first was built from aluminum and equipped with standard FSR pressure sensors, while the
second was built from industrial plastic and equipped with pressure-sensitive conductive
rubber. The second prototype is approximately half the weight of the first.
Furthermore, changes in the power of the servo motors and in the length of the tendons
made the second prototype not only lighter but also stronger.
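Point C above can be illustrated numerically. In a sensorimotor-coordinated interaction, motor commands and the tactile feedback they produce become statistically dependent, whereas an uncoordinated channel does not. The following sketch (our own toy illustration with schematic time series, not the authors' data or code) measures this structure with a plain Pearson correlation:

```python
# Toy illustration of information self-structuring: coordinated
# manipulation induces correlations between motor and sensory channels,
# which a simple Pearson correlation over the time series can detect.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Schematic series: tactile feedback tracks the grip commands,
# while the "noise" channel is unrelated to them.
motor   = [0.2, 0.8, 0.2, 0.8, 0.2, 0.8, 0.2, 0.8]   # grip commands
tactile = [0.3, 0.9, 0.3, 0.9, 0.3, 0.9, 0.3, 0.9]   # coupled pressure readings
noise   = [0.2, 0.2, 0.8, 0.8, 0.2, 0.2, 0.8, 0.8]   # uncoupled channel

assert pearson(motor, tactile) > 0.99        # coordinated channel: strong structure
assert abs(pearson(motor, noise)) < 0.01     # uncoordinated channel: no structure
```

In the robot experiments, the same idea is applied across proprioception, motor activity and the different tactile channels, where the amount of induced structure depends on the hand's morphology.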
Traditional models of mirrored self-recognition (i.e., the recognition of and response to one’s mirrored image as such) have regarded it as the culmination of inferential processes integrating knowledge of oneself and one’s appearance with an understanding of the reflective properties of mirroring surfaces (for reviews, see Mitchell (1993a, 1993b)). I argue against such models on various grounds: first, it seems false that this is how we usually identify ourselves in mirrors; second, it seems wrong to ascribe these sorts of complicated inferences to animals that lack linguistic capacities; third, this sort of inference-driven approach has trouble accounting for various data from developmental psychology (e.g. those of Courage et al. 2004) and clinical psychology (e.g. those reviewed in Binkofski et al. 1999 and Postal 2005). Hence in place of these “intellectualist” views, I propose an understanding of mirrored self-recognition that grounds it in implicit, sensorimotor skills for the direct and non-inferential employment of reflective surfaces to monitor objects (including oneself) that are normally not in immediate view (for related ideas, see Loveland (1987, 1993)).
If correct, this sort of result – that an important class of our abilities to perceive and refer to ourselves (and other objects) is grounded in practical, bodily skills (some of which directly involve the use of environmental objects as tools) rather than in concepts and inferences – will have important consequences for cognitive science and the philosophy of mind. Hence I go on to connect the view I propose here to work in the emerging “embodied and embedded (/situated)” and “dynamic systems” approaches to cognition and cognitive development, and to philosophical understandings of selfhood and first-personal thought.
Binkofski, F., G. Buccino, C. Dohle, F.J. Seitz, and H.J. Freund. 1999. ‘Mirror agnosia and mirror ataxia constitute different parietal lobe disorders’. Annals of Neurology 46: 51-61.
Courage, M., S. Edison, and M. Howe, 2004. ‘Variability in the early development of visual self-recognition’. Infant Behavior and Development 27: 509-532.
Loveland, K.A. 1987. ‘Discovering the affordances of a reflecting surface’. Developmental Review 6: 1-24.
————. 1993. ‘Autism, affordances, and the self’. In U. Neisser, ed., The Perceived Self: Ecological and Interpersonal Sources of Self-Knowledge (New York: Cambridge University Press), pp. 35-50.
Mitchell, R.W. 1993a. ‘Mental models of mirror-self-recognition: two theories’. New Ideas in Psychology 11: 295-325.
————. 1993b. ‘Recognizing oneself in a mirror? A reply to Gallup and Povinelli, de Lannoy, Anderson, and Byrne’. New Ideas in Psychology 11: 351-377.
Postal, K.S. 2005. ‘The mirror sign delusional misidentification syndrome’. In T.E. Feinberg and J.P. Keenan, eds., The Lost Self: Pathologies of the Brain and Identity (New York: Oxford University Press), pp. 131-146.
The human exploration mission to the planet Mars is a risky undertaking.
It requires a process of continuous safety improvement for the complex socio-technical system (CSTS) that will support the future Martian mission.
This means that, to have a sufficient chance of mission success, it is necessary to think more carefully about autonomy and the capacity for continuous learning.
These are two qualities that the CSTS will need as it evolves in a half-known, half-unknown environment.
The shared objective of our research, which is to think more adequately about the cognition and autonomy of the overall exploration system viewed as a multi-agent system, has two aspects, one theoretical and one practical.
Note, moreover, that the articulations of embodied (that is, situated and distributed) cognition, together with the distribution of roles between human and system, are two further key aspects taken into consideration well before the operational phase, and even longer before the construction phase.
On the basis of a 1000-day reference mission involving a crossing of at least 120 million km (described briefly), we will show the course of our epistemological reflection on improving the intrinsic safety of an open system [theoretical aspect].
We will then present an action-oriented model of cognition capable of improving design and decision-making in an uncertain environment [practical aspect]. We will illustrate our approach with a limit situation that puts the crew's survival at risk.
This limit situation, a critical degraded state, will be simulated using the model described above, in order to show the competence that the actors and agents of the system have attained:
* in identifying possible solutions in cases of dilemma,
* and also in feeding back into the design, in order to better distinguish the modes of articulation of (situated, distributed) cognition, in real or deferred time, as danger materializes.
Since the advent of embodied, situated (Clark, 1997) and, more generally, naturalistic approaches in cognitive science and philosophy, most proponents of the embodied approach have subscribed to a particular conception of classical models of practical rationality (decision theory and game theory). According to this conception, these models are too abstract, too "rational" (meaning conscious, attention-demanding, high-level processing) and too formal to account for embodied, situated actions. They are just "mathematics plus metaphor" (Lakoff & Johnson, 1999: 536), or an egoistic representation of human beings (Varela et al., 1991: 245-246). Drawing inspiration from theories of bounded (Tversky & Kahneman, 1981, 1986) and ecological rationality (Gigerenzer, 2000; Gigerenzer & Selten, 2001; Gigerenzer & Todd, 1999; Simon, 1982), embodied cognitive scientists often reject decision- or game-theoretic approaches to the mind because real biological systems are not "rational agents that take inputs, compute logically, and produce output" (Brooks, 1991: 14). Against these criticisms of practical rationality, we would like to propose another look at what rationality is about. First, if "rationality" is to be understood as the conscious and logical manipulation of propositional representations, we would be happy to reject that Cartesian picture of the mind; but the truth is that formal models are scientific tools that acquire meaning only once a set of auxiliary hypotheses relates them to real entities. Hence, if these models fail in the predictive and explanatory project of cognitive science, let's discard them. But what if, once properly connected to real biological systems, they are successful both as predictive and as explanatory devices? We argue that recent progress in neuroeconomics (Camerer et al., 2004; Glimcher, 2003), behavioural ecology (Krebs & Davies, 1997) and neuroethology (Schultz, 2004) shows another way to look at decision and game theory. Instead of being descriptions of rational thinking, they describe behavioural processes by which situated agents guide their behaviour. Integrating research on dopaminergic neurons (McCoy & Platt, 2004; Montague et al., 2004; Tobler et al., 2005) and motor control (Tin & Poon, 2005; Wolpert et al., 2004; Wolpert & Kawato, 1998), we propose a mechanical model of embodied rationality in which "rationality" supervenes on the network of interactions between the agent and its environment.
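One standard formalization behind the dopaminergic findings cited in this abstract (Schultz, 2004; Montague et al., 2004) is the temporal-difference account, on which dopamine activity tracks a reward-prediction error. The following minimal sketch, with illustrative parameters of our own choosing rather than fitted values, shows how such an error signal shrinks as an agent's expectation converges on the delivered reward:

```python
# Minimal temporal-difference sketch of a reward-prediction-error signal
# (illustrative parameters; cf. the TD interpretation of dopamine activity).

def td_learn(reward, trials=100, alpha=0.3):
    """Learn the expected value of a cue. On each trial the prediction
    error delta = r - V (the 'dopamine signal') drives the update, and
    it shrinks as the prediction improves."""
    v = 0.0
    errors = []
    for _ in range(trials):
        delta = reward - v      # reward-prediction error
        v += alpha * delta      # expectation moves toward delivered reward
        errors.append(delta)
    return v, errors

v, errors = td_learn(reward=1.0)
assert abs(v - 1.0) < 1e-3               # expectation converges on the reward
assert abs(errors[-1]) < abs(errors[0])  # the error signal declines with learning
```

On the view sketched in the abstract, it is this kind of valuation process, implemented in closed interaction with the environment, that decision- and game-theoretic models describe.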
Brooks, R. A. (1991). Intelligence without reason. MIT AI Lab Memo
Camerer, C. F., Loewenstein, G., & Prelec, D. (2004). Neuroeconomics:
Why economics needs brains. Scandinavian Journal of Economics, 106
Clark, A. (1997). Being there: Putting brain, body, and world
together again. Cambridge, Mass.: MIT Press.
Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real
world. New York: Oxford University Press.
Gigerenzer, G., & Selten, R. (2001). Bounded rationality: The
adaptive toolbox. Cambridge, Mass.: MIT Press.
Gigerenzer, G., & Todd, P. M. (Eds.). (1999). Simple heuristics that
make us smart. New York: Oxford University Press.
Glimcher, P. W. (2003). Decisions, uncertainty, and the brain: The
science of neuroeconomics. Cambridge, Mass.; London: MIT Press.
Krebs, J. R., & Davies, N. B. (1997). Behavioural ecology: An
evolutionary approach (4th ed.). Oxford, England; Malden, MA:
Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The
embodied mind and its challenge to western thought. New York: Basic Books.
McCoy, A. N., & Platt, M. L. (2004). Expectations and outcomes:
Decision-making in the primate brain. J Comp Physiol A Neuroethol
Sens Neural Behav Physiol.
Montague, P. R., Hyman, S. E., & Cohen, J. D. (2004). Computational
roles for dopamine in behavioural control. Nature, 431(7010), 760.
Schultz, W. (2004). Neural coding of basic reward terms of animal
learning theory, game theory, microeconomics and behavioural ecology.
Curr Opin Neurobiol, 14(2), 147.
Simon, H. A. (1982). Models of bounded rationality. Cambridge, Mass.:
Tin, C., & Poon, C.-S. (2005). Internal models in sensorimotor
integration: Perspectives from adaptive control theory. Journal of
Neural Engineering(3), S147.
Tobler, P. N., Fiorillo, C. D., & Schultz, W. (2005). Adaptive coding
of reward value by dopamine neurons. Science, 307(5715), 1642-1645.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the
psychology of choice. Science, 211, 453-458.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing
of decisions. The Journal of Business, 59(4), S251-S278.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind:
Cognitive science and human experience. Cambridge, Mass.: MIT Press.
Wolpert, D. M., Ingram, J. N., Howard, I. S., Fukunaga, I., &
Körding, K. P. (2004). A neuroeconomics approach to inferring utility
functions in sensorimotor control. PLoS Biology, 2(10), e330.
Wolpert, D. M., & Kawato, M. (1998). Multiple paired forward and
inverse models for motor control. Neural Networks, 11(7-8), 1317.
Recent work in cognitive psychology and linguistics supports the idea that language often invokes spatial representations, which are grounded in perception and action (Barsalou, 1999). Richardson et al. (2001, 2003) showed for 30 verbs that English speakers assigned consistent spatial associations (horizontal or vertical) to them, associations also activated during language comprehension.
We conducted a similar norming study for 160 English verbs that described concrete, abstract, or mental state processes. Thus, we tested verbs with and without any apparent spatiality in their lexical semantics. That study also yielded consistent directional associations with the verbs. Here, we investigate the hypothesis that some of these spatial associations might be the subtle product of two factors: the direction of a language’s orthography and any culturally entrenched metaphors in the language. For example, English speakers consistently interpret push as a left-to-right action while Arabic speakers, with a right-to-left writing system, interpret it in just the opposite way. Likewise, forget may be a downwards process for speakers of languages that associate under with unconsciousness, but an upwards process for speakers of languages with metaphors that conceptualize lost ideas as disappearing out the top of a person’s head.
We report results from several on-line and off-line experiments with English participants, investigating directional biases in general cognitive tasks (picture description and picture completion) and directional biases in mental images that speakers associate with concrete and abstract verbs when they either read or hear these verbs in isolation or in sentential contexts. These English experiments are part of a larger crosslinguistic study that investigates how metaphor and the direction of an orthographic system can interact with a verb’s cognitive representation and processing, and specifically, the direction in which associated events are imagined to unfold conceptually. We will discuss the results in relation to theories of embodied cognition of relational predications.
Barsalou, L.W. (1999). Perceptual symbol systems. Behavioral & Brain Sciences, 22, 577-660.
Richardson, D., Spivey, M., Edelman, S., & Naples, A. (2001). “Language is Spatial”: Experimental Evidence for Image Schemas of Concrete and Abstract Verbs. In: Proceedings of the 23rd Annual Meeting of the Cognitive Science Society (pp. 873–878). Mahwah, NJ: Erlbaum.
Richardson, D., Spivey, M., Barsalou, L. W., & McRae, K. (2003). Spatial representations activated during real-time comprehension of verbs. Cognitive Science, 27, 767-780.
Modeling the cognitive processes of learners is fundamental for building educational software
that is autonomous and that can provide highly tailored assistance during learning
(Anderson et al., 1995). Recently, we proposed our own knowledge representation
model, based upon several lines of research in cognitive psychology. This model represents
domain knowledge in three layers (Fournier-Viger et al., 2006). The first layer
describes the knowledge from a logical and ontological perspective. The second describes
the cognitive processes of learners. The third builds reusable units of teaching material based
upon the first two layers.
In this talk, we will focus on representing the knowledge handled in a complex and
demanding task: the manipulation of the International Space Station (ISS) robotic arm
CanadarmII. This is a demanding duty that involves accurately following an extensive
protocol; indeed, a single mistake can have catastrophic consequences. To
accomplish the tasks, astronauts need a good ability to build spatial representations
(spatial awareness) and to visualize them in a dynamic setting (situational awareness).
These representations are constructed from awareness of the position and motion of
each of the Arm’s parts with respect to the Station’s elements, and of the task’s progress.
The Arm’s movements can only be seen through three monitors that show views obtained
from cameras mounted at different locations on the ISS and on the Arm. Operational
protocols and security rules guide the astronaut and help him avoid improper
manipulations such as moving the Arm into close proximity to any of the Station’s
elements. Astronauts must also acquire the ability to select efficient sequences of cameras.
All these aspects create complex situations for a virtual tutor to analyze. In this talk,
we discuss the challenges that we have faced and the solutions that we have proposed for
representing the knowledge handled in software that trains users to achieve complex tasks with
CanadarmII, and we explain how the knowledge is efficiently organized to generate teaching material.
Simulating crowd and riot behavior concerns research that aims at
understanding crowd behavior and gaining knowledge of intervention
techniques through computer simulation. Situations where a crowd
develops into a riot are to be avoided. To enhance understanding of how
various police actions interfere with crowd behavior, it is necessary
to develop a better understanding of the behavioral dynamics in crowds.
This would enable the police to make better, well-considered choices in
their tactics, instead of relying on the current, more intuitive strategies.
Crowd behavior is a complex and dynamic phenomenon that arises from
the interaction of an individual with his/her (social and physical)
environment. This interaction is the key factor influencing the
behavior shown. To relate observable behavior to the interaction
process, it is necessary to include the study of internal (i.e.,
cognitive) processes. In practice this means understanding the effect
of an action performed by the police, and of the social surroundings,
on every individual at a physical location. To find answers to these
questions, a model of individual behavior in a crowd is developed.
Furthermore, several situations and hypotheses are analyzed by
performing experiments using computer simulation.
The model is formed out of relevant psychological theories that
describe physical influences on individual behavior (e.g., density;
cf. Summer), the influence of personal (cognitive) characteristics on
an individual's behavior (e.g., needs and goals; cf. Max-Neef), and
theories that relate the influence of others on an individual (e.g.,
norms; cf. Cialdini & Goldstein). This multi-agent simulation approach
enables us to follow the interaction processes by zooming in on the
inter- and intra-individual levels and relating them to behavior shown
at the group level. In sum, developing a (cognitive and social)
psychological model and simulation will yield more knowledge of crowd
and riot behavior, and thus bring us closer to practical insights for
dealing with crowds and riots.
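The multi-agent approach can be sketched schematically. The toy model below is our own illustration, not the authors' model: each agent's "tension" is updated from a police-pressure term and from the average tension of its neighbours, which stands in for the social-influence component the abstract emphasises. Even this minimal loop shows how sustained pressure can drive an initially calm crowd toward a high-tension state:

```python
import random

# Schematic multi-agent crowd sketch (hypothetical illustration).
# Each agent's tension mixes its own state with the crowd average
# (social influence) and is pushed up by a constant police-pressure term.

def step(tensions, police_pressure, social_weight=0.5):
    """One simulation tick: return the updated tension of every agent,
    capped at 1.0 (maximum arousal)."""
    avg = sum(tensions) / len(tensions)
    return [min(1.0, (1 - social_weight) * t
                     + social_weight * avg
                     + police_pressure)
            for t in tensions]

random.seed(1)
crowd = [random.random() * 0.3 for _ in range(50)]   # initially calm crowd

for _ in range(10):
    crowd = step(crowd, police_pressure=0.05)

assert all(0.0 <= t <= 1.0 for t in crowd)
assert sum(crowd) / len(crowd) > 0.3   # sustained pressure raises average tension
```

A full model would replace the constant pressure term with specific police actions, give agents heterogeneous needs and norms, and situate them on a physical map, but the loop structure (individual state, social influence, external intervention) is the same.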
Teamwork is often essential in complex and dynamic environments such as air traffic control, emergency rooms, and military command and control. In these situations, the expertise and resources required for the successful achievement of the task go beyond the capability of a single individual. However, the addition of people in the execution of the task represents in itself an element of complexity, and one issue that remains a matter of debate concerns which factors are critical in order for teams to perform efficiently. Researchers have struggled to identify and operationalise these variables (Paris et al., 2000) and consequently, numerous taxonomies of team processes have been proposed. Owing to that lack of consensus, there is a need to condense the number of factors that are crucial to teamwork in order for research findings to be easier to apply practically and to help research endeavours on teamwork be more focused (Salas et al., 2005). An extensive examination of the literature reveals that the various lists of team processes focus almost exclusively on the human aspect of team functioning. However, teamwork not only comprises interactions between team members, but also with the task and the tools available to support its execution. To answer the need to synthesise team functioning elements and to account for the interrelations between team members, tasks and tools, we offer a classification of factors critical for team effectiveness that is empirically testable. This taxonomy distinguishes between information sharing activities (which include communication between team members and information distribution from external sources to team members) and coordination activities (which consist in efficiently managing dependencies between subtasks, resources and individuals both a priori and online). We believe this taxonomy to be less redundant than most structures in the literature and generic enough to be applied to a variety of team situations.
It is trivial to say that intelligence is a social phenomenon and that, thanks to language, human cognition becomes collective. Indeed, language is what makes possible a division of labour in human societies and, most importantly, a sharing of the burden of information processing within institutions. One can claim, following John R. Searle (1995, 2005), that language is the first social institution, the one thanks to which all others become possible.
But what kind of language is needed to create institutions that allow for a real social distribution of cognition? At what moment in humans' evolutionary history did it evolve? Searle remains unclear on this point. Is it enough to symbolically refer to objects or do we need something more? Any serious answer to this question should take into consideration what we know about 1) primatology, 2) paleoanthropology, and 3) developmental psychology.
In this paper, I will argue that the capacity to refer symbolically to objects and the capacity to understand others' behavior in intentional terms are not sufficient to account for the emergence of institutions and the distribution of cognition within human societies. Following the work of Derek Bickerton (2000, 2003), I will explain that a language based on a hierarchical and recursive syntax is needed to explain the emergence of institutions. Then, on the basis of the work of Joëlle Proust (2002) and Charles Kalish (2005), I will claim that the capacity to interpret others' behavior in intentional terms should lead to a more complex theory of mind, one that allows humans to understand false beliefs and opaque contexts. It is only under these conditions that it becomes possible to fix, socially, a clear distinction between the things an individual does as the holder of different statuses and, therefore, to understand the fully conventional nature of institutions.
Joëlle Proust, "Can 'Radical' Simulation Theories Explain Psychological Concept Acquisition?", in J. Dokic and J. Proust (eds.), Simulation and Knowledge of Action, Amsterdam: John Benjamins, 2002, pp. 201-228.
Derek Bickerton, "Resolving Discontinuity: A Minimalist Distinction between Human and Non-human Minds", American Zoologist, Vol. 40, No. 6, 2000, pp. 862-873.
Derek Bickerton, "How Protolanguage Became Language", in C. Knight, M. Studdert-Kennedy and J. R. Hurford (eds.), The Evolutionary Emergence of Language, Cambridge University Press, 2000.
Derek Bickerton, "Symbol and Structure", in Language Evolution, Oxford University Press, 2003, p. 82.
Charles Kalish, "Becoming Status Conscious: Children's Appreciation of Social Reality", Philosophical Explorations, Vol. 8, No. 3, September 2005.
John R. Searle, "What Is an Institution?", Journal of Institutional Economics, 2005, 1:1, pp. 1-22.
John R. Searle, La construction sociale de la réalité, Gallimard, 1998, 303 pages.
The general topic of this paper is the first-person epistemology of action. I am interested in the question of how we come to have knowledge of our intentional actions. By intentional actions I mean those actions over which we have control. Prima facie, knowledge of our intentional actions seems to involve two distinct cognitive achievements. On the one hand, it involves knowing which actions we are performing. On the other hand, it also involves knowing that those actions are under our control.
The argument of this paper can be summarized as follows. First, I argue that there are not actually two distinct cognitive achievements involved in coming to know our intentional actions. That is, I argue that knowing what we are doing is, ipso facto, knowing whether we are doing it intentionally or not. This is a consequence of a more general thesis about the relation between the ways by which we come to know about our actions and those by which we exercise control over them.
Second, I argue that there are diverse ways by which we exercise control over our actions. These include our capacity for rational deliberation, on account of which we are able to act in diverse, adaptive ways. But they also include the operation of sub-personal mechanisms, which result in movements that exhibit certain stereotypic patterns.
From these two claims, I derive the conclusion that knowledge of our intentional actions is a complex phenomenon. The complexity results from the many ways in which we can control our actions. More precisely, the conclusion of the paper is that the many ways in which we can acquire knowledge of our intentional actions are a function of the many ways in which we can control them.
The argument of the paper combines traditional discussions in the philosophy of action with neuroscientific approaches to behavior, particularly, computational models in motor control theory (see attached bibliography). An important aim of the paper is to suggest how these diverse proposals might be integrated into a unified account of the phenomenon of human agency.
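The flavor of the motor-control models invoked here can be illustrated with one of the cited proposals, Hogan's (1984) minimum-jerk model, on which a point-to-point reach follows the fifth-order polynomial that minimizes integrated squared jerk. The following is only an illustrative sketch of that standard formula; the function and variable names are my own, not the paper's:

```python
def minimum_jerk(x0, xf, T, t):
    """Hand position at time t for a reach from x0 to xf lasting T seconds.

    Hogan (1984): the trajectory minimizing integrated squared jerk is a
    fifth-order polynomial in normalized time tau = t / T, which yields the
    smooth, bell-shaped velocity profile typical of natural reaches.
    """
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# The endpoints are reached exactly, and at the temporal midpoint the hand
# has covered exactly half the distance -- the model's symmetric signature.
```

Stereotypic regularities of this kind are what the paper treats as the signature of sub-personal control mechanisms, as opposed to control exercised through rational deliberation.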
Anscombe, G. E. M. 1958. Intention. Basil Blackwell: Oxford.
Blakemore, S.-J. et al. 2002. Abnormalities in the awareness of action. Trends in Cognitive Sciences, 6(6), 237-242.
Donnellan, K. S. 1963. Knowing what I am doing. Journal of Philosophy, 60(14), 401-409.
Falvey, K. 2000. Knowledge in intention. Philosophical Studies, 99, 21-44.
Fourneret, P. and Jeannerod, M. 1998. Limited conscious monitoring of motor performance in normal subjects. Neuropsychologia, 36, 1133-1140.
Haggard, P. and Clark, S. 2003. Intentional action: conscious experience and neural prediction. Consciousness and Cognition, 12, 695-707.
Harris, C. M. and Wolpert, D. M. 1998. Signal-dependent noise determines motor planning. Nature, 394, 780-784.
Hogan, N. 1984. An organizing principle for a class of voluntary movements. Journal of Neuroscience, 4, 2745-2754.
Jeannerod, M. and Pacherie, E. 2004. Agency, simulation and self-identification. Mind and Language, 19(2), 113-146.
Marcel, A. 2003. The sense of agency: awareness and ownership of action. In Roessler, J. and Eilan, N. (eds), Agency and Self-awareness. Clarendon Press: Oxford.
Moran, R. 2004. Anscombe on ‘Practical Knowledge’. In Hyman, J. and Steward, H. (eds), Agency and Action. Cambridge University Press: Cambridge.
Morasso, P. 1981. Spatial control of arm movements. Experimental Brain Research, 42, 223-227.
O’Shaughnessy, B. 1980. The Will: A Dual Aspect Theory. Cambridge University Press: Cambridge.
Slachevsky, A. et al. 2000. Preserved adjustment but impaired awareness in a sensory-motor conflict following prefrontal lesions. Journal of Cognitive Neuroscience, 13(3), 332-340.
Shadmehr, R. and Wise, S. P. 2005. The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. MIT Press: Cambridge, MA.
Uno, Y., Kawato, M. and Suzuki, R. 1989. Formation and control of optimal trajectories in human multijoint arm movements: minimum torque-change model. Biological Cybernetics, 61, 89-101.
Velleman, J. D. 2004. Précis of ‘The Possibility of Practical Reason’. Philosophical Studies, 121, 225-238.
von Holst, E. and Mittelstaedt, H. 1950. Das Reafferenzprinzip: Wechselwirkungen zwischen Zentralnervensystem und Peripherie. Naturwissenschaften, 37, 464-476.
Wilson, G. M. 1989. The Intentionality of Human Action. Stanford University Press: Stanford.
People spend hours playing computer games without getting bored, and mostly without even realising how much time has passed. The fact that people can get so deeply involved in this activity raises questions about the nature and development of skilled human-computer game interaction. Computer games and the activity of playing them are approached from many research directions in various fields, but coming from a background in cognitive science I would like to argue that human interaction with computer games is particularly interesting from a situated cognition perspective. In the real world, people constantly off-load cognitive workload onto the environment (e.g. Kirsh, 1995; Hutchins, 1995) and use environmental properties as organisers that help them structure their work and, on the social level, contribute to coordination, cooperation and structure (e.g. Rambusch, Susi, & Ziemke, 2004). Many computer games, on the other hand, offer few opportunities for off-loading, and the people playing them are often distributed over several locations and time zones. The interesting question is how people deal with at times static virtual environments, how they use (virtual) environmental properties as cognitive aids, and to what extent off-loading extends into the ‘real world’, which might also include other people. Many games, for instance, are team efforts, and teams can develop complicated strategies and advanced divisions of labor (e.g. StarCraft). Approaching computer games from a situated cognition perspective can broaden our understanding of how game play that appears individual on the surface, carried out in front of a single computer, is distributed across different places and persons; how people communicate with and learn from each other in spite of limited interaction techniques; and how they make sense of and solve problems in the virtual environments provided to them.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence, 73, 31–68.
Rambusch, J., Susi, T., & Ziemke, T. (2004). Artefacts as mediators of distributed social cognition: A case study. In Proceedings of the 26th Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum.
Ants excavate their subterranean nests using stigmergic rules.
Three-dimensional computer modeling of this process was conducted in
order to better understand the mechanisms that might underlie both the
digging and depositing of material. The approach allowed various
hypotheses to be tested, and conclusions to be drawn, with the goal of
establishing the minimal requirements for achieving life-like
excavations. The use of two distinct pheromones, in conjunction with a
carbon dioxide gradient present in the soil and a directional ‘inertia’
when digging, proved sufficient to produce coherent and naturalistic
shaft and chamber combinations. In addition, realistic chamber
placement was produced without the need to posit a carbon dioxide
effect on this behaviour. As such, this research takes steps towards an
understanding of how ant colonies are able to coordinate the excavation
of their nests through interactions with their environment and, more
broadly, how fairly simple stigmergic rules can lead to complex