Curriculum Vitae
Click Here
Grants:
By clicking here you can see an abstract of the project. If more information about the project is available,
you can get it by clicking on the project name above the abstract.
From Associations To Rules In The Development Of Concepts (FAR)
(European Commission Sixth Framework: NEST no. 516542)
Denis Mareschal, Project Coordinator
Robert French, Co-Coordinator
Total Amount of the Grant: 1.2 M euros
Duration: 3 years
Humans - The Analogy-Making Species (European Commission Sixth
Framework: NEST-2004-Path-HUM)
Boicho Kokinov, Project Coordinator
Robert French, Co-Coordinator
Total Amount of the Grant: 1.8 M euros
Duration: 3 years
The Limits of Co-occurrence Analyses (Région de Bourgogne)
Robert French & Valerie Camos, Project Co-coordinators
Total Amount of the Grant: 139,500 euros
Duration: 1 year
Basic Mechanisms
of Learning and Forgetting in Natural and Artificial Systems
(European Commission 5th Framework project HPRN-CT-1999-00065)
Robert M. French, Project Coordinator,
Denis Mareschal, Co-Coordinator,
Total Amount of the Grant: 980,000 euros,
Duration: 4 years + 1 year extension. Completed: 2005.
Workshops Organized:
April 12-14, 2007:
The Tenth Neural Computation and Psychology Workshop (NCPW10)
was held in Dijon, France, and was organized by Bob French and Xanthi Skoura-Papaxanthis.
The Proceedings of this Workshop (French, R. & Thomas, E. (eds.) (2008).
FROM ASSOCIATIONS TO RULES: Connectionist Models of Behavior and Cognition. Singapore: World Scientific) can
be seen by clicking here.
September 16-18, 2000:
The Sixth Neural Computation and Psychology Workshop (NCPW6)
was held in Liège, Belgium. The Proceedings of this Workshop
(French, R. and Sougné, J. (eds.) (2001). Connectionist models
of learning, development and evolution. Berlin: Springer-Verlag) can
be seen by clicking here.
March 28, 1998: IUAP Workshop "The role of implicit memory and implicit
learning in representing the world", Château de Colonster, University
of Liège, Liège, Belgium. This Workshop eventually led to
a book entitled Implicit Learning and Consciousness: An empirical, philosophical,
and computational consensus in the making?, published
by Psychology Press in 2002. Introduction: "The
study of consciousness spans a host of disciplines ranging from philosophy
to neuroscience, from psychology to computer modeling. Arguments about consciousness
run the gamut from tenuous, even ridiculous, thought experiments to the
most rigorous neuroscientific experiments. This book offers a novel perspective
on many fundamental issues about consciousness based on empirical, computational
and philosophical research on implicit learning - learning without conscious
awareness of having learned..." For the complete introduction to the book,
please
click here
.
Books:
- French, R. M. and Thomas, E. (2008). FROM ASSOCIATIONS TO RULES: Connectionist Models of Behavior and Cognition. Singapore: World Scientific. ISBN 978-981-279-731-5
- French, R. M. and Cleeremans, A. (2002). Implicit learning and consciousness: An empirical, philosophical, and computational consensus in the making. London, UK: Psychology Press. ISBN 1-841-69201-8
- French, R. M. and Sougné, J. (2001). Connectionist models of learning, development and evolution. Berlin: Springer-Verlag. ISBN 1-852-33354-5
- French, R. M. (1995). The Subtlety of Sameness. Cambridge, MA: The MIT Press. ISBN 0-262-06810-5. To view the Foreword (by Dan Dennett), the Introduction, Chapter 1 and the Conclusion of the book, click here.
- French, R. M. and Henry, J. (1985). Gödel, Escher, Bach: les Brins d'une Guirlande Eternelle. French translation of Gödel, Escher, Bach: an Eternal Golden Braid by Douglas R. Hofstadter. Paris, France: InterEditions. ISBN 2-10-005435-X.
Recent papers: Click on the name of the author to download the
full paper. All papers are in pdf format and require Acrobat Reader to
be read. (Acrobat Reader can be acquired free of charge at http://www.adobe.com/prodindex/acrobat/readstep.html).
Please be aware that all papers that have been made available electronically
are PRE-PRINTS and may contain differences with respect to their final,
published versions which, for reasons of copyright ownership, cannot be
put on this site. If the paper has appeared in print, the citation references
are given. Clicking on a paper to download it constitutes an explicit
request for a pre-print of that paper. For the final, published version
of a paper, please refer to the references included in the pre-print and
then acquire the paper through the journal in which it appeared.
Index of papers available on this site
By clicking here you can see the abstract of the paper. Then, if you
wish to download the pre-print of the paper, click on the author name
above the abstract.
Categorization in Infants and Children
- Cowell, R.A. & French, R. M. (2007). An unsupervised, dual-network connectionist model of rule emergence in category learning.
- Abreu, A., French, R. M., Cowell, R. A. & de Schonen, S. (2006). Local-Global visual deficits in Williams Syndrome: Stimulus presence contributes to diminished performance on image-reproduction.
- Delbé, C., Bigand, E., & French, R. M. (2006). Asymmetric Categorization in the Sequential Auditory Domain.
- Abreu, A. M., French, R. M., Annaz, D., Thomas, M., De Schonen, S. (2005). A "Visual Conflict" Hypothesis for Global-Local Visual Deficits in Williams Syndrome: Simulations and Data.
- French, R. M., Mareschal, D., Mermillod, M., & Quinn, P. C. (2004). The Role of Bottom-up Processing in Perceptual Categorization by 3- to 4-month-old Infants: Simulations and Data.
- Mermillod, M., French, R. M., Quinn, P. & Mareschal, D. (2003). The Importance of Long-term Memory in Infant Perceptual Categorization.
- French, R. M., Mermillod, M., Quinn, P., Chauvin, A., & Mareschal, D. (2002). The Importance of Starting Blurry: Simulating Improved Basic-Level Category Learning in Infants Due to Weak Visual Acuity.
- Mareschal, D., Quinn, P. & French, R. M. (2002). Asymmetric interference in 3- to 4-month-olds' sequential category learning.
- Labiouse, C., French, R. M. and Mermillod, M. (2002). Using autoencoders to model asymmetric category learning in early infancy: Insights from Principal Components Analysis.
- French, R. M., Mermillod, M., Quinn, P., & Mareschal, D. (2001). Reversing Category Exclusivities in Infant Perceptual Categorization: Simulations and Data.
- Mareschal, D. & French, R. M. (2000). Mechanisms of categorization in infancy.
- Mareschal, D. & French, R. M. (1999). A connectionist account of perceptual category-learning in infants.
- Mareschal, D., French, R. M. & Quinn, P. (1998). A Connectionist Account of Asymmetric Category Learning in Early Infancy.
- Mareschal, D. and French, R. M. (1997). A connectionist account of interference effects in early infant memory and categorization.
Catastrophic interference in connectionist networks
- French, R. M. (2003). Catastrophic Forgetting in Connectionist Networks (Encyclopedia review article).
- French, R. M. & Chater, N. (2002). Using noise to compute error surfaces in connectionist networks: A novel means of reducing catastrophic forgetting.
- Ans, B., Rousset, S., French, R. M., & Musca, S. (2002). Preventing Catastrophic Interference in Multiple-Sequence Learning Using Coupled Reverberating Elman Networks.
- Sougné, J. & French, R. M. (2001). Synfire Chains and Catastrophic Interference.
- French, R. M., Ans, B., & Rousset, S. (2001). Pseudopatterns and dual-network memory models: Advantages and shortcomings.
- French, R. M. (1999). A review of catastrophic forgetting in connectionist networks.
- French, R. M. & Ferrara, A. (1999). Modeling time perception in rats: Evidence for catastrophic interference in animal learning.
- French, R. M. (1997). Pseudo-recurrent connectionist networks: An approach to the "sensitivity-stability" dilemma.
- French, R. M. (1997). Using pseudo-recurrent connectionist networks to solve the problem of sequential learning.
- French, R. M. (1994). Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference.
- French, R. M. (1991). Using semi-distributed representations to overcome catastrophic interference in connectionist networks.
Neural Network modeling (general)
- Delbé, C., French, R.M., Bigand, E. (2008). Catégorisation asymétrique de séquences de hauteurs musicales.
- Delbé, C., Bigand, E., French, R. M. (2006). Asymmetric Categorization in the Sequential Auditory Domain.
- Van Rooy, D., Van Overwalle, F., Vanhoomissen, T., Labiouse, C., & French, R. M. (2003). A Recurrent Connectionist Model of Group Biases.
- Labiouse, C. & French, R. M. (2001). A Connectionist Model of Person Perception and Stereotype Formation.
- French, R. M. and Thomas, E. (2000). Why Localist Connectionist Models are Inadequate for Categorization.
- French, R. M. & Mareschal, D. (1998). Could Category-Specific Semantic Deficits Reflect Differences in the Distributions of Features Within a Unified Semantic Memory?
- French, R. M. (1997). Selective memory loss in aphasics: An insight from pseudo-recurrent connectionist networks.
- Sougné, J. & French, R. M. (1997). A Neurobiologically Inspired Model of Working Memory Based on Neuronal Synchrony and Rhythmicity.
Bilingual memory
- French, R. M. & Jacquet, M. (2004). All cases of word production are not created equal: a reply to Costa & Santesteban.
- French, R. M. & Jacquet, M. (2004). Understanding Bilingual Memory: Models and Data.
- Jacquet, M. & French, R. M. (2002). The BIA++: Extending the BIA+ to a dynamical distributed connectionist framework.
- French, R. M. (1998). A Simple Recurrent Network Model of Bilingual Memory.
- French, R. M. and Ohnesorge, C. (1997). Homographic self-inhibition and the disappearance of priming: More evidence for an interactive-activation model of bilingual memory.
- French, R. M. & Ohnesorge, C. (1996). Using interlexical nonwords to support an interactive-activation model of bilingual memory.
- French, R. M. and Ohnesorge, C. (1995). Using non-cognate interlexical homographs to study bilingual memory organization.
Analogy-making
- Thibaut, J.-P., French, R. M., Vezneva, M. (2009). Cognitive Load and Analogy-making in Children: Explaining an Unexpected Interaction.
- Thibaut, J.-P., French, R. M., Vezneva, M. (2008). Analogy-Making in Children: The Importance of Processing Constraints.
- French, R. M. (2008). BBS Commentary on Leech et al., Analogy as Relational Priming.
- French, R. M. (2007). The dynamics of the computational modeling of analogy-making.
- Kokinov, B. and French, R. M. (2003). Computational Models of Analogy-making.
- French, R. M. (2002). The Computational Modeling of Analogy-making.
- French, R. M. and Labiouse, C. (2001). Why co-occurrence information alone is not sufficient to answer subcognitive questions.
- See also: French, R. M. (1995). The Subtlety of Sameness. Cambridge, MA: The MIT Press. ISBN 0-262-06810-5
Foundations of cognitive science
- French, R. M. (2009). If it walks like a duck and quacks like a duck... The Turing Test, Intelligence and Consciousness.
- French, R. M. (2002). Natura non facit saltum: The need for the full continuum of mental representations.
- French, R. M. (2000). The Chinese Room: Just Say "No!"
- French, R. M. (2000). Peeking Behind the Screen: The Unsuspected Power of the Standard Turing Test.
- French, R. M. (2000). The Turing Test: the first fifty years.
- French, R. M. (2000). Creation and Discovery: Opposite Ends of a Continuum of Constraints.
- French, R. M. & Anselme, P. (1999). Interactively converging on context-sensitive representations: A solution to the frame problem.
- French, R. M. & Cleeremans, A. (1998). Function, Sufficiently Constrained, Implies Form.
- French, R. M. & Thomas, E. (1998). The Dynamical Hypothesis: One Battle Behind.
- French, R. M. and Weaver, M. (1997). New-feature learning: How common is it?
- French, R. M. (1997). When coffee cups are like old elephants or Why representation modules don't make sense.
- French, R. M. and Cleeremans, A. (1996). Probing and Prediction: A pragmatic view of cognitive modeling.
- French, R. M. (1996). The Inverted Turing Test: How a simple (mindless) program could pass it.
- French, R. M. (1995). Refocusing the Debate on the Turing Test.
- French, R. M. (1990). Subcognition and the Limits of the Turing Test.
Evolution/Artificial life
- French, R. M. (2009). The Red Tooth Hypothesis: A computational model of predator-prey relations, protean escape behavior and sexual reproduction.
- French, R. M. and Kus, E. (2008). KAMA: A Temperature-Driven Model of Mate Choice Using Dynamic Partner Representations.
- French, R. M., Kus, E. T. (2006). Modeling Mate-Choice using Computational Temperature and Dynamically Evolving Representations.
- French, R. M., Brédart, S., Huart, J., & Labiouse, C. (2000). The resemblance of one-year-old infants to their fathers: refuting Christenfeld & Hill (1995).
- Brédart, S. & French, R. M. (1999). Do Babies Resemble their Fathers More Than their Mothers? A Failure to Replicate Christenfeld and Hill (1995).
- French, R. M. & Messinger, A. (1994). Genes, Phenes and the Baldwin Effect: Learning and Evolution in a Simulated Population.
Machine Learning
Methodology
Textual Analysis
- Gill, A. J., French, R. M., Gergle, D., Oberlander, J. (2008). Identifying Emotional Characteristics from Short Blog Texts.
- Gill, A. J., Gergle, D., French, R. M., and Oberlander, J. (2008). Emotion Rating from Short Blog Texts.
- Gill, A. J., French, R. M. (2007). Semantic distance and author personality perception through texts.
- French, R. M. & Labiouse, C. (2002). Four Problems with Extracting Human Semantics from Large Text Corpora.
Reviews
- French, R. M. (2008). Review of Neuroconstructivism: A new manifesto for child development research.
- French, R. M. (2004). For historians of automated computing only: A review of Who Invented The Computer? The Legal Battle That Changed Computing History by Alice Rowe Burks. Endeavour.
- French, R. M. (2002). Review of Daniel Levine's Introduction to Neural and Cognitive Modeling.
- French, R. M. & Thomas, E. (2001). The Dynamical Hypothesis in Cognitive Science: A review essay of Mind As Motion.
- French, R. M. (1999). Review of Terry Regier's The Human Semantic Potential: Spatial language and constrained connectionism.
- French, R. M. (1996). Review of Paul M. Churchland, The Engine of Reason, the Seat of the Soul.
From Associations To Rules In The Development Of Concepts (FAR)
(European Commission Sixth Framework: NEST no. 516542)
Denis Mareschal, Project Coordinator
Robert French, Co-Coordinator
Total Amount of the Grant: 1.2 M euros
Duration: 3 years
Human adults appear to differ from other animals by their ability to use
language to communicate, their use of logic and mathematics to reason, and their
ability to abstract relations that go beyond perceptual similarity. These
aspects of human cognition have one important thing in common: they are all
thought to be based on rules. This apparent uniqueness of human adult cognition
leads to an immediate puzzle: WHEN and HOW does this rule-based system come into
being? Perhaps there is, in fact, continuity between the cognitive processes of
non-linguistic species and pre-linguistic children on the one hand, and human
adults on the other hand. Perhaps this transition is simply a mirage that
arises from the fact that Language and Formal Reasoning are usually described by
reference to systems based on rules (e.g., grammar or syllogisms).
We are studying the transition from associative to rule-based cognition within
the domain of concept learning. Concepts are the primary cognitive means by
which we organise things in the world. Any species that lacked this ability
would, no doubt, quickly become extinct. Conversely, differences in the way that
concepts are formed may go a long way in explaining the greater evolutionary
success that some species have had over others.
To address these issues, this project brings together 5 teams of leading
international researchers from 4 different countries, with combined and
convergent experience in Animal Cognition and Evolutionary Theory, Infant and
Child Development, Adult Concept Learning, Neuroimaging, Social Psychology,
Neural Network Modelling, and Statistical Modelling.
BACK
Humans - The Analogy-Making Species (European Commission Sixth
Framework: NEST-2004-Path-HUM)
Boicho Kokinov, Project Coordinator
Robert French, Co-Coordinator
Total Amount of the Grant: 1.8 M euros
Duration: 3 years
The ability to make analogies lies at the heart of human cognition and is a
fundamental mechanism that enables humans to engage in complex mental processes
such as thinking, categorization, and learning, and, in general, understanding
the world and acting effectively on it based on their past experience. This
project focuses on understanding these uniquely human mechanisms of
analogy-making, and exploring their evolution and development. A highly
experienced, interdisciplinary, and international team will study and compare
the performance of primates, infants, young children, healthy adults, as well as
children and adults with abnormal brain functioning. An interdisciplinary
methodology will be used to pursue this goal, one that includes computational
modeling, psychological experimentation, comparative studies, developmental
studies, and brain imaging.
The ability to see a novel experience, object, situation or action as being “the
same” as an old one, and then to act in an approximately appropriate manner (and
then fine-tuned to fit the novel experience), is, almost unquestionably, one of
the capacities that sets humans apart from all other animals. What are the
underlying mechanisms that allow us to do this? How did they evolve in the
population? How do they develop in an individual? How do they differ from “the
same” mechanisms in primates? The results from this project will contribute to a
better understanding of the mechanisms of analogy-making, their origin,
evolution and development and will lead to advances, not only in our basic
knowledge of human cognition, but also in the development of educational
strategies to help children and young people to be more efficient learners and
to achieve a better and deeper understanding of the world in which they live.
This project brings together the expertise of a consortium consisting of the
leading researchers in analogy-making in Europe — namely, the New Bulgarian
University (Coordinator: Boicho Kokinov), the LEAD-CNRS at the University of
Burgundy in France (Co-coordinator: Robert French), Cambridge University,
the University of Heidelberg, University College, Dublin, the Institute for
Cognitive Science and Technology (CNR) in Italy, Birkbeck College in London, and
the University of British Columbia in Canada.
BACK
The Limits of Co-occurrence Analyses (Région de Bourgogne)
Robert French & Valerie Camos, Project Co-coordinators
Total Amount of the Grant: 139,500 euros
Duration: 1 year
The overall objective of this project is to study new applications for, as
well as the limits of, text co-occurrence programs such as LSA (Landauer and
Dumais, 1997), HAL (Lund & Burgess, 1996), and PMI-IR (Turney, 2001). French &
Labiouse (2002) pointed out four problems with word co-occurrence programs —
namely, i) the difficulties caused by the intrinsic deformability of semantic
space, ii) the current inability of these programs to detect co-occurrences of
abstract/relational structure, especially distal relational
structure, iii) their lack of essential world knowledge (e.g., fathers are
always men, mothers always women), acquired by humans through learning or direct
experience with the world and, finally, iv) their assumption of the atomic
nature of words. We hope to use techniques drawn from statistics, from
linguistics and from analogy-making to explore these four problems and to
determine to what extent they can (or cannot) be handled by current
co-occurrence programs. In addition, within these limits, we hope to examine new
possibilities for the use of these types of programs.
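To make concrete the kind of analysis at issue, here is a minimal HAL-style co-occurrence sketch (the toy corpus, window size, and word choices are invented for illustration; this is not the actual LSA, HAL, or PMI-IR implementation):

```python
import numpy as np
from collections import defaultdict

def cooccurrence_vectors(tokens, window=2):
    """Represent each word by counts of the words appearing
    within `window` positions of it (HAL-style)."""
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    vecs = defaultdict(lambda: np.zeros(len(vocab)))
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vecs[w][index[tokens[j]]] += 1
    return vecs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

corpus = ("the father holds the baby and the mother holds the baby "
          "the dog chases the cat and the cat chases the mouse").split()
vecs = cooccurrence_vectors(corpus)
# Words sharing contexts get similar vectors: "father" is closer
# to "mother" than to "mouse" in this toy space.
print(cosine(vecs["father"], vecs["mother"]),
      cosine(vecs["father"], vecs["mouse"]))
```

Note how "father" and "mother" come out similar purely because they occur in similar contexts; nothing in the vectors encodes the world knowledge mentioned above (that fathers are always men), which is precisely problem (iii).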
BACK
Basic Mechanisms of Learning and Forgetting in
Natural and Artificial Systems
(European Commission 5th Framework project HPRN-CT-1999-00065)
Robert M. French, Project Director,
Total Amount of the Grant: 980,000 euros,
Duration: 4 years + 1 year extension. Completed: 2005.
This five-year project received a 980,000 euro grant from the European
Commission to carry out a highly multi-disciplinary theoretical, empirical, and
computational study of the basic mechanisms of learning and forgetting.
Participating members of the project were from the University of Warwick (Nick
Chater) and Birkbeck College (Denis Mareschal) in England, the Université Libre
de Bruxelles (Axel Cleeremans) and the Université de Liège (Bob French) in
Belgium and the Université de Grenoble (Bernard Ans/Stephane Rousset) in France.
The areas of inquiry that the project focused on ranged from theoretical
mathematics at Warwick to fMRI studies at Grenoble, from implicit learning at
Brussels to infant cognitive development at Birkbeck. The primary goal of the
project was to train young European researchers in a variety of cognitive
science research skills. Six Ph.D.s were awarded. Two post-docs in the project
were awarded permanent faculty positions upon conclusion of their work in the
project. In addition, approximately 80 publications and 4 books were produced by
members of the project.
BACK
French, R. M. (2009). The Red Tooth Hypothesis: A computational model of predator-prey relations, protean escape behavior and sexual reproduction.
Journal of Theoretical Biology. (in press)
This paper presents an extension of the Red Queen Hypothesis (hereafter, RQH) that we call the Red Tooth Hypothesis (RTH). This hypothesis suggests that predator-prey relations may play a role in the maintenance of sexual reproduction in many higher animals. RTH is based on an interaction between learning on the part of predators and evolution on the part of prey. We present a simple predator-prey computer simulation that illustrates the effects of this interaction. This simulation suggests that the optimal escape strategy from the prey's standpoint would be to have a small number of highly reflexive, largely innate (and, therefore, very fast) escape patterns that would nonetheless be unlearnable by the predator. One way to achieve this would be for each individual in the prey population to have a small set of hard-wired escape patterns that differed from individual to individual.
We argue that polymorphic escape patterns at the population level could be produced via sexual reproduction at little or no evolutionary cost and would be as, or potentially more, efficient than individual-level protean (i.e., random) escape behavior. We further argue that, especially under high predation pressure, sexual recombination would be a more rapid, and therefore more effective, means of producing highly variable escape behaviors at the population level than asexual reproduction.
BACK
Nair, S. S, French, R. M., Laroche, D., Ornetti, P., and Thomas, E. (2009). The Application of Machine Learning Algorithms to the Analysis of Electromyographic Patterns from Arthritic Patients.
IEEE Transactions on Neural Systems and Rehabilitation Engineering. (in press)
The main aim of our study was to investigate the possibility of applying machine learning techniques to the analysis of electromyographic (EMG) patterns collected from arthritic patients during gait. The EMG recordings were collected from the lower limbs of patients with arthritis and compared with those of healthy control subjects (CO) with no musculoskeletal disorder. The study involved subjects suffering from two forms of arthritis, viz., rheumatoid arthritis (RA) and hip osteoarthritis (OA). The analysis of the data was plagued by two problems which frequently render the analysis of this type of data extremely difficult. One was the small number of human subjects that could be included in the investigation based on the terms specified in the inclusion and exclusion criteria for the study. The other was the high intra- and inter-subject variability present in EMG data. We identified some of the muscles differently employed by the arthritic patients by using machine learning techniques to classify the two groups and then identified the muscles that were critical for the classification. For the classification we employed least-squares kernel (LSK) algorithms, and neural network algorithms such as the Kohonen self-organizing map, learning vector quantization and the multilayer perceptron. Finally, we also tested the more classical technique of linear discriminant analysis (LDA). The performance of the different algorithms was compared. The LSK algorithm showed the highest capacity for classification. Our study demonstrates that the newly developed LSK algorithm is adept for the treatment of biological data. The muscles that were most important for distinguishing the RA from the CO subjects were the soleus and biceps femoris. For separating the OA and CO subjects, however, it was the gluteus medialis muscle. Our study demonstrates how classification with EMG data can be used in the clinical setting. While such procedures are unnecessary for the diagnosis of the type of arthritis present, an understanding of the muscles which are responsible for the classification can help to better identify targets for rehabilitative measures.
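Of the classifiers listed, linear discriminant analysis is the most classical; a minimal two-class Fisher LDA sketch follows. The synthetic feature vectors and the single "affected muscle" dimension are invented stand-ins for the paper's per-muscle EMG features, not its actual data or algorithms:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction w = Sw^{-1} (m1 - m0):
    maximizes between-class separation over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Hypothetical features: 4 "muscles", 50 controls vs 50 patients,
# with one muscle activated differently by the patients.
controls = rng.normal(0.0, 1.0, size=(50, 4))
patients = rng.normal(0.0, 1.0, size=(50, 4))
patients[:, 2] += 2.5

w = fisher_lda_direction(controls, patients)
threshold = (controls.mean(axis=0) + patients.mean(axis=0)) @ w / 2
acc = np.mean([float(x @ w > threshold) for x in patients]
              + [float(x @ w <= threshold) for x in controls])
# The largest |w| component flags the most discriminative "muscle".
print(int(np.argmax(np.abs(w))), round(float(acc), 2))
```

Reading off the discriminant weights in this way mirrors the paper's second step: once the groups can be separated, the features driving the separation identify the muscles of interest.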
BACK
French, R. M. and Perruchet, P. (2009) Generating constrained randomized sequences: Item frequency matters. Behavior Research Methods. (in press).
All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items, but also with the number and distribution of transitions between items. The above algorithm provides no control over this. We therefore introduce a simple technique using transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties -- and how to overcome them -- with reference to a classic paper on infant word segmentation. Finally, we make available an Excel file that allows users to generate transition tables with up to ten different item types, and to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available at:
http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls
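The simple Monte Carlo technique described above can be sketched as rejection sampling (a hypothetical minimal version; the paper's transition-table method, which additionally controls item-pair frequencies, is not reproduced here):

```python
import random
from collections import Counter

def randomized_sequence(item_counts, max_tries=10_000, seed=42):
    """Shuffle the full item list until no item immediately repeats
    (Monte Carlo rejection sampling)."""
    rng = random.Random(seed)
    pool = [item for item, n in item_counts.items() for _ in range(n)]
    for _ in range(max_tries):
        rng.shuffle(pool)
        if all(a != b for a, b in zip(pool, pool[1:])):
            return list(pool)
    raise RuntimeError("no valid sequence found; counts may be too unbalanced")

seq = randomized_sequence({"A": 6, "B": 6, "C": 6})
print(seq)
# The transitions actually produced -- the quantity the paper's
# transition-table method controls directly:
print(Counter(zip(seq, seq[1:])))
```

As the abstract points out, rejection sampling of this kind controls item frequencies but leaves item-pair transition frequencies biased, which is why the transition-table technique is needed.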
BACK
French, R. M. (2009).
If it walks like a duck and quacks like a duck...The Turing Test, Intelligence and Consciousness. In P. Wilken, T. Bayne, A. Cleeremans (eds.). Oxford Companion to Consciousness, Oxford, UK: Oxford Univ. Press. 641-643.
We could use the Turing Test as a way of providing a graded assessment, rather than an all-or-nothing decision, on the intelligence of machines. Thus, the further the machine's answers were from average human answers on a series of questions that appeal to subcognitive associations derived from our interaction with the world, the less intelligent it would be (see: Subcognition and the Limits of the Turing Test). In like manner, the Turing Test could potentially be adapted to provide a graded test for human consciousness. The Interrogator would draw up a list of subcognitive questions that explicitly dealt with subjective perceptions, like the question about holding Coca-Cola in one's mouth, with sensations, and with a wide range of other subjective experiences. As before, the Interrogator would pose these questions to a large sample of randomly chosen people. And then, as for the graded Turing Test for intelligence, the divergence of the computer's answers with respect to the average answers of the people in the random sample would constitute a "measure of human consciousness" with respect to our own consciousness. In short, the Turing Test, with an appropriately tailored set of questions, given first to a random sample of people, could be used to provide an operational means of assessing consciousness.
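The graded measure described here can be sketched as follows (a toy illustration: the question labels, the ratings, and the 0-10 scale are all invented for demonstration, in the spirit of the subcognitive-question examples in French's Turing Test papers):

```python
def subcognitive_divergence(candidate_answers, human_answer_samples):
    """Mean absolute divergence between a candidate's ratings and the
    average human rating, question by question (0-10 rating scale)."""
    total = 0.0
    for q, answer in candidate_answers.items():
        samples = human_answer_samples[q]
        total += abs(answer - sum(samples) / len(samples))
    return total / len(candidate_answers)

# Hypothetical ratings from a small human sample:
humans = {"flugblogs_as_cereal_name": [2, 1, 3, 2],
          "coke_in_mouth_fizziness": [9, 8, 9, 10]}
person = {"flugblogs_as_cereal_name": 2, "coke_in_mouth_fizziness": 9}
machine = {"flugblogs_as_cereal_name": 7, "coke_in_mouth_fizziness": 3}

print(subcognitive_divergence(person, humans))   # -> 0.0 (human-like)
print(subcognitive_divergence(machine, humans))  # -> 5.5 (far from human norms)
```

The single divergence number is what makes the assessment graded rather than all-or-nothing: the lower the score, the closer the candidate's subcognitive associations are to the human sample's.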
BACK
Thibaut, J.-P., French, R. M., Vezneva, M. (2009). Cognitive Load and Analogy-making in Children: Explaining an Unexpected Interaction. Proceedings of the Thirty-First Annual Cognitive Science Society Conference. 1048-1053.
The aim of the present study is to investigate the performance of children of different ages on an analogy-making task involving semantic analogies in which there are competing semantic matches. We suggest that this can be best studied in terms of
developmental changes in executive functioning. We hypothesize that the selection of the common relational structure requires the inhibition of other salient features, such as semantically related matches. Our results show that children's performance in classic A:B::C:D analogy-making tasks seems to depend crucially on the nature of the distractors and the association strength between the A and B terms, on the one hand, and the C and D terms, on the other.
These results agree with an analogy-making account (Richland et al., 2006) based on varying limitations in executive functioning at different ages.
BACK
Thibaut, J.-P., French, R. M., Vezneva, M. (2008). Analogy-Making in Children:
The Importance of Processing Constraints. Proceedings of the Thirtieth Annual Cognitive Science Society Conference. 475-480.
The aim of the present study is to investigate children's performance in an analogy-making task involving competing perceptual and relational matches in terms of developmental changes in executive functioning. We hypothesize that the selection of the common relational structure requires the inhibition of more salient perceptual features (such as identical shapes or colors). Most of the results show that children’s performance in analogy-making tasks would seem to depend crucially on the nature of the distractors. In addition, our results show that analogy-making performance depends on the nature of the dimensions involved in the relations (shape or color). Finally, in simple conditions, performance was adversely affected by the presence of irrelevant dimensions. These results are compatible with an analogy-making account (Richland et al., 2006) based on varying limitations in executive functioning at different ages.
BACK
Gill, A. J., French, R. M., Gergle, D., Oberlander, J. (2008). Identifying Emotional Characteristics from Short Blog Texts. In Proceedings of the Thirtieth Annual Cognitive Science Conference, NJ: LEA.
Emotion is at the core of understanding ourselves and others, and the automatic expression and detection of emotion could enhance our experience with technologies. In this paper, we explore the use of computational linguistic tools to derive emotional features. Using 50 and 200 word samples of naturally-occurring blog texts, we find that some emotions are more discernible than others. In particular automated content analysis shows that authors expressing anger use the most affective language and also negative affect words; authors expressing joy use the most positive emotion words. In addition we explore the use of co-occurrence semantic space techniques to classify texts via their distance from emotional concept exemplar words: This demonstrated some success, particularly for identifying author expression of fear and joy emotions. This extends previous work by using finer-grained emotional categories and alternative linguistic analysis techniques. We relate our finding t!
o huma
n emotion perception and note potential applications.
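The exemplar-distance idea above can be sketched in miniature. This is only a toy illustration, not the authors' pipeline: a real system would measure distance in a co-occurrence semantic space built from a large corpus, whereas here texts are scored by bag-of-words cosine similarity to small, hypothetical exemplar word lists.

```python
from collections import Counter
import math

# Hypothetical exemplar words for two emotion categories.
EXEMPLARS = {
    "joy": ["happy", "delighted", "wonderful"],
    "fear": ["afraid", "scared", "terrified"],
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text: str) -> str:
    """Label a text with the emotion whose exemplars it is closest to."""
    words = Counter(text.lower().split())
    scores = {emo: cosine(words, Counter(ex)) for emo, ex in EXEMPLARS.items()}
    return max(scores, key=scores.get)

print(classify("I was so scared and afraid walking home"))   # fear
print(classify("what a happy wonderful day"))                # joy
```

With richer exemplar lists and corpus-derived vectors, the same distance-to-exemplar scheme scales to the finer-grained emotion categories discussed in the paper.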
BACK
Gill, A.J., Gergle, D., French, R.M., and Oberlander, J. (2008). Emotion Rating from Short Blog Texts. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2008). 1121-1124. New York: ACM Press.
Being able to automatically perceive a variety of emotions from text alone has potentially important applications in CMC and HCI that range from identifying mood from online posts to enabling dynamically adaptive interfaces. However, such ability has not been proven in human raters or computational systems. Here we examine the ability of naive raters of emotion to detect one of eight emotional categories from 50 and 200 word samples of real blog text. Using expert raters as a ‘gold standard’, naive-expert rater agreement increased with longer texts, and was high for ratings of joy, disgust, anger and anticipation, but low for acceptance and ‘neutral’ texts. We discuss these findings in light of theories of CMC and potential applications in HCI.
BACK
French, R. M. and Kus, E. (2008). KAMA: A Temperature-Driven Model of Mate Choice Using Dynamic Partner Representations. Adaptive Behavior, 16(1), 71-95.
KAMA is a model of mate choice based on a gradual, stochastic process of building up representations of potential partners through encounters and dating, ultimately leading to marriage. Individuals must attempt to find a suitable mate in a limited amount of time with only partial knowledge of the individuals in the pool of potential candidates. Individuals have multiple-valued character profiles, which describe a number of their characteristics (physical beauty, potential earning power, etc.), as well as preference profiles, which specify their degree of preference for those characteristics in members of the opposite sex. A process of encounters and dating allows individuals to gradually build up accurate representations of potential mates. Individuals each have a “temperature,” which is the extent to which they are willing to continue exploring mate-space and which drives individual decision making. The individual-level mechanisms implemented in KAMA produce population-level data that qualitatively matches empirical data. Perhaps most significantly, our results suggest that differences in first-marriage ages and hazard-rate curves for men and women in the West may to a large extent be due to the Western dating practice whereby males ask women out and women then accept or refuse their offer.
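The "temperature" mechanism described above can be illustrated with a minimal sketch. Everything here is hypothetical (the function names, the linear cooling schedule, the 0-10 rating scale); the published model is far richer. The point is only the core dynamic: a hot individual is choosy and keeps exploring mate-space, and as temperature cools, lower-rated partners become acceptable.

```python
import random

def acceptance_threshold(temperature: float, max_rating: float = 10.0) -> float:
    """Minimum partner rating this individual will currently accept."""
    return max_rating * temperature   # choosier while "hot"

def simulate_search(ratings, cooling: float = 0.9, t0: float = 0.95):
    """Return (step, rating) of the accepted partner, or None if none accepted."""
    temperature = t0
    for step, rating in enumerate(ratings):
        if rating >= acceptance_threshold(temperature):
            return step, rating        # search ends ("marriage")
        temperature *= cooling         # cool down: become less choosy
    return None

random.seed(0)
pool = [random.uniform(0, 10) for _ in range(50)]
print(simulate_search(pool))
```

Note how a stream of average partners (rating 5.0) is rejected at first and accepted only after several cooling steps, which is the kind of individual-level decision rule that produces hazard-rate curves at the population level.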
BACK
Delbé, C., French, R.M., Bigand, E. (2008). Catégorisation asymétrique de séquences de hauteurs musicales. Année Psychologique. (accepted, article in French).
An unusual visual category learning asymmetry in infants was observed by Quinn,
Eimas, & Rosenkrantz (1993). A series of experiments and simulations seemed to show that
this asymmetry was due to the perceptual inclusion of the cat category within the dog category
because of the greater perceptual variability of the distributions of the visual features of dogs
compared to cats (Mareschal & French, 1997; Mareschal, French, & Quinn, 2000; French,
Mermillod, Quinn, & Mareschal, 2001; French, Mareschal, Mermillod, & Quinn, 2004). In
the present paper, we explore whether this asymmetric categorization phenomenon
generalizes to the auditory domain. We developed a series of sequential auditory stimuli
analogous to the visual stimuli in Quinn et al. Two experiments on adult listeners using these
stimuli seem to demonstrate the presence of an identical asymmetric categorization effect in
the sequential auditory domain. Furthermore, connectionist simulations confirmed that purely
bottom-up processes were largely responsible for our behavioural results.
BACK
Cowell, R.A. & French, R. M. (2007). An unsupervised, dual-network connectionist model of rule emergence in category learning. In S. Vosniadou, D. Kayser, & A. Protopapas (eds.) Proceedings of the 2007 European Cognitive Science Society Conference. NJ:LEA. 318-323.
We develop an unsupervised "dual-network" connectionist model of category learning in which rules gradually emerge from a standard Kohonen network. The architecture is based on the interaction of a statistical-learning (Kohonen) network and a competitive-learning rule network. The rules that emerge in the rule network are weightings of individual features according to their importance for categorization. Once the combined system has learned a particular rule, it de-emphasizes those features that are not sufficient for categorization, thus allowing correct classification of novel, but atypical, stimuli, for which a standard Kohonen network fails. We explain the principles and architectural details of the model and show how it works correctly for
stimuli that are misclassified by a standard Kohonen network.
BACK
Gill, A. & French, R. M. (2007). Semantic distance and author personality perception through texts. In S. Vosniadou, D. Kayser, A. Protopapas (eds.) Proceedings of the 2007 European Cognitive Science Society Conference. NJ:LEA. 682-687.
BACK
French, R. M. (2008). Relational priming is to analogy-making as one-ball juggling is to seven-ball juggling. Behavioral and Brain Sciences. (in press)
Relational priming is argued to be a deeply inadequate model of analogy-making because of its intrinsic inability to do analogies where the base and target domains share no common attributes and the mapped relations are different. The authors rely on carefully handcrafted representations to allow their model to make a complex analogy, seemingly unaware of the debate on this issue 15 years ago. Finally, they incorrectly assume the existence of fixed, context-independent relations between objects.
BACK
French, R. M. (2007). The dynamics of the computational modeling of analogy-making. In The CRC Handbook of Dynamic Systems Modeling. Paul Fishwick (ed.), Boca Raton, FL: CRC Press LLC, ch. 2, 1-18.
In this paper we will begin by introducing a notion of analogy-making that is considerably
broader than the normal construal of this term. We will argue that analogy-making, thus defined,
is one of the most fundamental and powerful capacities in our cognitive arsenal. We will claim
that the standard separation of the representation-building and mapping phases cannot ultimately
succeed as a strategy for modeling analogy-making. In short, the context-specific representations
that we use in short-term memory — and that computers will someday use in their short-term
memories — must arise from a continual, dynamic interaction between high-level knowledge-based processes and low-level, largely unconscious associative memory-processes. We further
suggest that this interactive process must be mediated by context-dependent computational
temperature, a means by which the system dynamically monitors its own activity, ultimately
allowing it to settle on the appropriate representations for a given context.
BACK
Abreu, A., French, R. M., Cowell, R. A. & de Schonen, S. (2006). Local-Global visual deficits in Williams Syndrome: Stimulus presence contributes to diminished performance on image-reproduction. Psychologica Belgica, 46(4), 269-281.
Impairments in visuospatial processing exhibited by individuals with Williams
Syndrome (WS) have been ascribed to a local processing bias. The imprecise
specification of this local bias hypothesis has led to contradictions between different
accounts of the visuospatial deficits in WS. We present two experiments
investigating visual processing of geometric Navon stimuli by children with WS.
The first experiment examined image reproduction in a visuoconstruction task
and the second experiment explored the effect of manipulating global salience on
recognition of visual stimuli by varying the density of local elements possessed
by the stimuli. In the visuoconstruction task, the children with WS did show a
local bias with respect to controls, but only when the target being copied was present;
when drawing from memory, subjects with WS produced a heterogeneous
pattern of answers. In the recognition task, children with WS exhibited the same
sensitivity to global figures as matched controls, confirming previous findings in
which no local bias in perception was found in WS subjects. We propose that subjects
with WS are unable to disengage their attention from local elements during
the planning stage of image reproduction (a visual-conflict hypothesis).
BACK
Delbé, C., Bigand, E., & French, R. M. (2006). Asymmetric Categorization in the Sequential Auditory Domain. In Proceedings of the 28th Annual Cognitive Science Society Conference. NJ:LEA. 1210-1215.
An unusual category learning asymmetry in infants was observed by Quinn et al. (1993). Infants who were initially exposed to a series of pictures of cats and then were shown a dog and a novel cat, showed significantly more interest in the dog than in the cat. However, when the order of presentation was reversed — i.e., dogs were seen first, then a cat and a novel dog — the cat attracted no more attention than the novel dog. A series of experiments and simulations seemed to show that this asymmetry was due to the perceptual inclusion of the cat category within the dog category because of the greater perceptual variability of dogs compared to cats (Mareschal & French, 1997; Mareschal et al., 2000; French et al., 2001, 2004). In the present paper, we explore whether this asymmetric categorization phenomenon generalizes to the auditory domain. We developed a series of sequential auditory stimuli analogous to the visual stimuli in Quinn et al. Two experiments on adult listeners using these stimuli seem to demonstrate the presence of an identical asymmetric categorization effect in the sequential auditory domain. Furthermore, we simulated these results with a connectionist model of sequential learning. Together with the behavioral data, we can conclude from this simulation that, as in the infant visual categorization experiments, purely bottom-up processes were largely responsible for our results.
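The inclusion account above has a simple intuition that can be shown in a toy sketch (this is an illustration only, not the published connectionist simulations): one category ("dogs") has broadly distributed feature values, the other ("cats") is nested inside it, and novelty is scored as distance outside the familiarized range.

```python
import random

random.seed(1)
dogs = [random.uniform(0.0, 1.0) for _ in range(40)]   # broad distribution
cats = [random.uniform(0.4, 0.6) for _ in range(40)]   # narrow, nested inside

def novelty(item: float, familiar: list) -> float:
    """Distance of an item outside the range of familiarized values."""
    lo, hi = min(familiar), max(familiar)
    return max(lo - item, item - hi, 0.0)   # 0.0 if inside the range

# Familiarized with cats, an extreme dog probe falls outside the range: novel.
print(novelty(0.95, cats))
# Familiarized with dogs, a typical cat probe falls inside the range: not novel.
print(novelty(0.5, dogs))
```

The asymmetry follows directly from the nesting: cat values are contained in the dog range but not vice versa, which is the structural relationship the auditory stimuli were designed to reproduce.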
BACK
French, R. M. & Kus, E. T. (2006). Modeling Mate-Choice using Computational Temperature and Dynamically Evolving Representations. In Proceedings of the 28th Annual Cognitive Science Society Conference. NJ:LEA.
We present a model of mate-choice (KAMA) that is based on a gradual, stochastic process of representation-building leading to marriage. KAMA reproduces empirically verifiable population-level mate-selection behavior using individual-level mate-choice mechanisms. Individuals have character profiles, which describe a number of their characteristics (physical beauty, potential earning power, etc.), as well as preference profiles, that specify their degree of preference for those characteristics in members of the opposite sex. A process of encounters and dating serves to exchange information and allows accurate representations of potential mates to be gradually built up over time. Finally, individuals each have a "temperature", which is the extent to which they are willing to continue exploring mate-space. "Temperature" (the inverse of mate "choosiness") drives individual decision-making in this model. We show that the individual-level mechanisms implemented in the model produce population-level data that qualitatively matches empirical data.
BACK
Abreu, A. M., French, R. M., Annaz, D., Thomas, M., De Schonen, S. (2005). A "Visual Conflict" Hypothesis for Global-Local Visual Deficits in Williams Syndrome: Simulations and Data.
Individuals with Williams Syndrome demonstrate impairments in visuospatial
cognition. This has been ascribed to a local processing bias. More specifically,
it has been proposed that the deficit arises from a problem in disengaging
attention from local features. We present preliminary data from an integrated
empirical and computational exploration of this phenomenon. Using a
connectionist model, we first clarify and formalize the proposal that
visuospatial deficits arise from an inability to locally disengage. We then
introduce two empirical studies using Navon-style stimuli. The first explored
sensitivity to local vs. global features in a perception task, evaluating the
effect of a manipulation that raised the salience of global organization.
Thirteen children with WS exhibited the same sensitivity to this manipulation as
CA-matched controls, suggesting no local bias in perception. The second study
focused on image reproduction and demonstrated that in contrast to controls, the
children with WS were distracted in their drawings by having the target in front
of them rather than drawing from memory. We discuss the results in terms of an
inability to disengage during the planning stage of reproduction due to
over-focusing on local elements of the current visual stimulus.
BACK
French, R. M., Mareschal, D., Mermillod, M., & Quinn, P. C. (2004). The Role of Bottom-up Processing in Perceptual Categorization by 3- to 4-month-old Infants: Simulations and Data. Journal of Experimental Psychology: General.
Disentangling bottom-up and top-down processing in adult category learning
is notoriously difficult. Studying category learning in infancy provides
a simple way of exploring category learning while minimizing the contribution
of top-down information. Three- to four-month-old infants presented with
cat or dog images will form a perceptual category representation for Cat
that excludes dogs and for Dog that includes cats. We argue that an inclusion
relationship in the distribution of features in the images explains the
asymmetry. Using computational modeling and behavioral testing, we show
that the asymmetry can be reversed or removed by using stimulus images that
reverse or remove the inclusion relationship. The findings suggest that categorization
of non-human animal images by young infants is essentially a bottom-up process.
BACK
French, R. M. & Jacquet, M. (2004). All cases of word production are not created equal: a reply to Costa & Santesteban. Trends in Cognitive Sciences.
While we are not necessarily in disagreement with the comment by Costa
and Santesteban, neither are we as convinced as they are of the need for
two modalities, one for word production, the other for word recognition.
Their key claim is that “in word production, it is the speaker who intentionally
chooses the target language.” Perhaps at the moment of actually switching
languages, one could argue for the need for a top-down intentional switching mechanism. But during most language production, simpler, automatic mechanisms
of word activation - identical to those at work in word recognition - would
suffice to keep the bilingual in one or the other language. Each word in
a particular language whether it is spoken or heard, activates a halo of
other words -- virtually all of which are in the same language -- and,
as a result, it requires no particular intentional effort for a bilingual
to remain in that language.
BACK
French, R. M. (2008). Review of Neuroconstructivism: How the brain constructs cognition by Mareschal et al. (2007).
This book is an excellent manifesto for future work in child development. It presents a multidisciplinary approach that clearly demonstrates the value of integrating modeling, neuroscience, and behavior to explore the mechanisms underlying development and to show how internal context-dependent representations arise and are modified during development. Its only major flaw is to have given short shrift to the study of the role of genetics on development.
BACK
French, R. M. (2004). For historians of automated computing only: A review of Who Invented The Computer? The Legal Battle That Changed Computing History by Alice Rowe Burks. Endeavour.
John Mauchly and J. Presper Eckert are widely thought to have invented
the electronic computer as we know it today. Burks’s book describes in
(sometimes excruciating) detail the 1971 court case that established the
claim that John V. Atanasoff got there first with the invention of a special-purpose electronic computing device, many of whose key features were included in Mauchly and Eckert’s ENIAC without giving credit to Atanasoff. For this reason
the Mauchly and Eckert patent was invalidated. Unfortunately, the book
doesn’t really deal with the question in its title, but rather gives us
a blow-by-blow analysis of the 1971 court case. While this book provides
a thorough treatment of Honeywell vs. Sperry-Rand and Atanasoff’s
contribution to the modern computer, it is certainly not the place for a
lay reader to find out about who invented the computer.
BACK
French, R. M. & Jacquet, M. (2004). Understanding Bilingual Memory: Models and Data. Trends in Cognitive Sciences, 8(2), 87-93.
The first attempts to put the study of bilingual memory on a sound scientific
footing date only from the beginning of the 1950’s and only in the last
two decades has the field really come into its own. Our focus is primarily, if not exclusively, on experimental and computational studies of bilingual memory.
in the last five years, has developed the experimental, neuropsychological
and computational tools to allow researchers to begin to answer some of the
field’s major outstanding questions, thus effectively bringing an end to the endless thrust-counterthrust on these questions that previously characterized
the field. This article is organized along the lines of the conceptual division
suggested by François Grosjean: language knowledge and organization,
on the one hand, and the mechanisms that operate on that knowledge and organization,
on the other. Various connectionist models of bilingual memory that attempt
to incorporate both organizational and operational considerations will serve
to bridge these two divisions.
BACK
Kokinov, B. and French, R. M. (2003). Computational Models of Analogy-making. In Nadel, L. (Ed.) Encyclopedia of Cognitive Science. Vol. 1, pp. 113-118. London: Nature Publishing Group.
The field of computer modeling of analogy-making has moved from early models, which were intended mainly as existence proofs to demonstrate that computers could, in fact, be programmed to do analogy-making, to complex models that make nontrivial predictions of human behavior. Researchers
have come to appreciate the need for structural mapping of the base and
target domains, for integration of and interaction between representation-building,
retrieval, mapping and learning, and for building systems that can potentially
scale up to the real world.
BACK
Mermillod, M., French, R. M., Quinn, P. & Mareschal, D. (2003). The Importance of Long-term Memory in Infant Perceptual Categorization. Proceedings of the 25th Annual Conference of the Cognitive Science Society. NJ:LEA. 804-809.
Quinn and Eimas (1998) reported that young infants include non-human animals
(i.e., cats, horses, and fish) in their category representation for humans.
To account for this surprising result, it was proposed that the representation
of humans by infants functions as an attractor for non-human animals and
is based on infants’ previous experience with humans. We report three simulations
that provide a computational basis for this proposal. These simulations
show that a “dual-network” connectionist model that incorporates both
bottom-up (i.e., short-term memory) and top-down (i.e., long-term memory)
processing is sufficient to account for the empirical results obtained
with the infants.
BACK
Van Rooy, D., Van Overwalle, F., Vanhoomissen, T., Labiouse, C., & French, R. M. (2003). A Recurrent Connectionist Model of Group Biases. Psychological Review, 110, 536-563.
Major biases and stereotypes in group judgments are reviewed and modeled
from a recurrent connectionist perspective. These biases are in the areas
of group impression formation (illusory correlation), group differentiation
(accentuation), stereotype change (dispersed versus concentrated distribution
of inconsistent information), and group homogeneity. All these phenomena
are illustrated with well-known experiments, and simulated with an auto-associative
network architecture with linear activation update and delta learning algorithm
for adjusting the connection weights. All the biases were successfully
reproduced in the simulations. The discussion centers on how the particular
simulation specifications compare to other models of group biases and how
they may be used to develop novel hypotheses for testing the connectionist
modeling approach and, more generally, for improving theorizing in the field
of social biases and stereotype change.
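The modeling ingredients named in this abstract — an auto-associative architecture with linear activation and delta-rule weight adjustment — can be shown in a minimal sketch. This is not the published simulations; the pattern count, dimensionality, and learning rate are arbitrary choices, and the point is only the learning rule itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
patterns = rng.normal(size=(5, n))    # five patterns to be stored
W = np.zeros((n, n))                  # connection weights
lr = 0.05

for _ in range(200):                  # epochs of delta-rule learning
    for x in patterns:                # auto-association: input is its own target
        y = W @ x                     # linear activation update
        W += lr * np.outer(x - y, x)  # delta (Widrow-Hoff) rule on the error

# After training, the network reconstructs each stored pattern.
err = max(np.linalg.norm(W @ x - x) for x in patterns)
print(f"max reconstruction error: {err:.4f}")
```

In the paper, the patterns encode group members and their attributes, and biases such as illusory correlation emerge from how the delta rule distributes error across shared connections.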
BACK
French, R. M. (2003). Catastrophic Forgetting in Connectionist Networks. In Nadel, L. (Ed.) Encyclopedia of Cognitive Science. Vol. 1, pp. 431-435. London: Nature Publishing Group.
Unlike human brains, connectionist networks can forget previously learned information suddenly and completely (i.e., “catastrophically”) when learning new information. Various solutions for overcoming this problem are discussed. The article covers catastrophic forgetting vs. normal forgetting, measures of catastrophic interference, solutions to the problem, rehearsal and pseudorehearsal, and other techniques for alleviating catastrophic forgetting in neural networks.
BACK
French, R. M. (2002). Natura non facit saltum: The need for the full continuum of mental representations. Behavioral and Brain Sciences, 25(3), 339-340.
Our major disagreement with the SOC model proposed by Perruchet and Vinter
is that it requires conscious representations to emerge in a sudden, quantal
leap from the unconscious to the conscious. We suggest that the SOC model
might do well to turn to basic neural network principles that would allow
it, without difficulty, to encompass unconscious representations. These
“unconscious” representations (some of which may evolve into representations that, when activated, would be conscious) can affect conscious processing, but do so via the same basic associative, excitatory, and inhibitory
mechanisms that we observe in conscious representations. The inclusion of
this type of representation in no way requires the authors to also posit
sophisticated unconscious computational mechanisms.
BACK
Jacquet, M. & French, R. M. (2002). The BIA++: Extending the BIA+ to a dynamical distributed connectionist framework. Bilingualism, 5(3), 202-205.
Dijkstra and van Heuven have made an admirable attempt to develop a new
model of bilingual memory, the BIA+. The BIA+ is, as the name implies,
an extension of the Bilingual Interactive Activation (BIA) model (Dijkstra
& van Heuven, 1998; Van Heuven, Dijkstra & Grainger, 1998; etc),
which was itself an adaptation to bilingual memory of McClelland & Rumelhart’s
(1981) Interactive Activation model of monolingual memory. The authors
provide a wealth of background on bilingual memory, cross-lingual interference
and priming effects in what amounts to a veritable review of the literature
in this area. The model that they propose is designed to account for many
of these empirically observed effects. In this article we focus our discussion
around three points related to the design of their model. These issues
are:
- the use of modular vs. distributed representations and the necessity
of using representations that are not simply coded into the program by
hand;
- learning, and, crucially, its absence in the BIA+ model;
- emergence and self-organization of lexical items.
We hope that the ground-breaking work of these authors will naturally evolve
towards broader-based distributed connectionist network models and related
dynamical models of bilingual memory, capable of learning and being able
to incorporate both the bottom-up and the top-down processing that we know
to be an integral part of bilingual language processing.
BACK
Labiouse, C., French, R. M. and Mermillod, M. (2002). Using Autoencoders to Model Asymmetric Category Learning in Early Infancy: Insights from Principal Components Analysis. In J.A. Bullinaria & W. Lowe (Eds.), Connectionist Models of Cognition and Perception: Proceedings of the Seventh Neural Computation and Psychology Workshop. Singapore: World Scientific, 51-63.
Young infants exhibit intriguing asymmetries in the exclusivity of categories
formed on the basis of visually presented stimuli. For instance, infants
who have previously seen a series of cats show a surge of interest when
looking at dogs, this being interpreted as dogs being perceived as novel.
On the other hand, infants previously exposed to dogs do not exhibit such
an increased interest for cats. Recently, researchers have used simple autoencoders
to account for these effects. Their hypothesis was that the asymmetry effect
is caused by the smaller variances of cats’ features and an inclusion
of the values of the cats’ features in the range of dogs’ values. They
predicted, and obtained, a reversal of asymmetry by reversing dog-cat variances,
thereby inverting the inclusion relationship (i.e., dogs are now included
in the category of cats). This reversal reinforces their hypothesis. We
will examine the explanatory power of this model by investigating in greater
detail the ways by which autoencoders exhibit
such an asymmetry effect. We analyze the predictions made by a linear
Principal Components Analysis. We examine the autoencoder’s hidden-unit
activation levels and, finally, we emphasize various factors that affect
generalization capacities and may play key roles in the observed asymmetry
effect.
BACK
Ans, B., Rousset, S., French, R. M., & Musca, S. (2002). Preventing Catastrophic Interference in Multiple-Sequence Learning Using Coupled Reverberating Elman Networks. Proceedings of the 24th Annual Conference of the Cognitive Science Society. NJ:LEA.
Everyone agrees that real cognition requires much more than static pattern
recognition. In particular, it requires the ability to learn sequences
of patterns (or actions). But learning sequences really means being able
to learn multiple sequences, one after the other, without the most recently
learned ones erasing the previously learned ones. But if catastrophic interference
is a problem for the sequential learning of individual patterns, the problem
is amplified many times over when multiple sequences of patterns have to
be learned consecutively, because each new sequence consists of many linked
patterns. In this paper we will present a connectionist architecture that
would seem to solve the problem of multiple sequence learning using pseudopatterns.
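The pseudopattern idea behind this architecture can be sketched in a much-simplified static form (the paper itself uses coupled reverberating Elman networks for sequences). Everything below is an illustration with arbitrary sizes and learning parameters: before learning task B, the old network is probed with random inputs and its responses are recorded as "pseudopatterns"; interleaving them with task B protects task A from being erased.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 20
A_in, A_out = rng.normal(size=(5, d)), rng.normal(size=(5, d))  # task A
B_in, B_out = rng.normal(size=(5, d)), rng.normal(size=(5, d))  # task B
lr = 0.02

def train(W, pairs, epochs=300):
    """Single-layer linear network trained with the delta rule."""
    for _ in range(epochs):
        for x, t in pairs:
            W = W + lr * np.outer(t - W @ x, x)
    return W

W = train(np.zeros((d, d)), list(zip(A_in, A_out)))       # learn task A

# Pseudopatterns: random probes paired with the trained net's own responses.
probes = rng.normal(size=(50, d))
pseudo = [(p, W @ p) for p in probes]

W_plain = train(W.copy(), list(zip(B_in, B_out)))            # task B alone
W_resc = train(W.copy(), list(zip(B_in, B_out)) + pseudo)    # B + rehearsal

def task_A_error(M):
    return float(np.mean([np.linalg.norm(M @ x - t)
                          for x, t in zip(A_in, A_out)]))

print("task A error without pseudopatterns:", round(task_A_error(W_plain), 3))
print("task A error with pseudopatterns:   ", round(task_A_error(W_resc), 3))
```

Learning task B alone overwrites the shared weights and degrades task A; mixing in pseudopatterns anchors the network's old behavior, so task A survives with far less interference.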
BACK
French, R. M. & Labiouse, C. (2002). Four Problems with Extracting Human Semantics from Large Text Corpora. Proceedings of the 24th Annual Conference of the Cognitive Science Society. NJ:LEA.
We present four problems that will have to be overcome by text co-occurrence
programs in order for them to be able to capture human-like semantics.
These problems are: the intrinsic deformability of semantic space, the inability
to detect co-occurrences of (esp. distal) abstract structures, their lack
of essential world knowledge, which humans acquire through learning or
direct experience with the world, and their assumption of the atomic nature
of words. By looking at a number of very simple questions, based in part
on how humans do analogy-making, we show just how far one of the best of
these programs is from being able to capture real semantics.
French,
R. M., Mermillod, M., Quinn, P., Chauvin, A., & Mareschal, D. (2002)
. The Importance of Starting Blurry: Simulating Improved Basic-Level
Category Learning in Infants Due to Weak Visual Acuity. Proceedings
of the 24th Annual Conference of the Cognitive Science Society. NJ:LEA.
At the earliest ages of development, the immaturity of the perceptual
system is generally considered a functional constraint on recognizing
or categorizing stimuli in the environment. However, using a computer
simulation of retinal development in which Gabor wavelets simulate the
output of V1 complex cells (Jones & Palmer, 1987), we showed that reducing
the range of the spatial frequencies passed from the retinal map to V1
decreases the variance distribution within
a category. The consequence of this is to decrease the difference between
two exemplars of the same category, but to increase the difference between
exemplars from two different categories. These results show that reduced
perceptual acuity produces an advantage for differentiating basic-level categories.
Finally, we show that the present simulations using Gabor-filtered input
instead of feature-based input coding provide a pattern of statistical data
convergent with previously published results in infant categorization (e.g.,
Mareschal &
French, 1997
;
Mareschal et al, 2000
;
French et al, 2001
).
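The mechanism described in this abstract can be illustrated with a toy computation: removing high spatial frequencies shrinks within-category variance while leaving the between-category difference largely intact. The sketch below is entirely hypothetical stand-in material, with one-dimensional patterns instead of Gabor-filtered images and a moving-average low-pass filter instead of a reduced Gabor spatial-frequency range.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, k=9):
    """Moving-average low-pass filter: a crude stand-in for the
    reduced spatial-frequency range of the immature visual system."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# Two category prototypes; exemplars add high-frequency,
# exemplar-specific noise (invented stimuli, not the cat/dog images).
t = np.linspace(0, 2 * np.pi, 128)
proto_a, proto_b = np.sin(t), np.cos(t)
cat_a = [proto_a + 0.5 * rng.standard_normal(128) for _ in range(20)]

def within_var(exemplars):
    """Mean variance across exemplars at each input position."""
    return np.stack(exemplars).var(axis=0).mean()

sharp = within_var(cat_a)
blurred = within_var([smooth(x) for x in cat_a])

# Blurring strips exemplar-specific detail, so within-category
# variance drops sharply...
print(f"within-category variance: sharp={sharp:.3f}, blurred={blurred:.3f}")

# ...while the between-category (prototype) difference survives.
gap = np.abs(smooth(proto_a) - smooth(proto_b)).mean()
print(f"between-category gap after blurring: {gap:.3f}")
```

Blurring two exemplars of the same category thus makes them more alike, while the prototypes of the two categories remain well separated, as the abstract argues.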
Mareschal, D., Quinn, P., & French, R. M., (2002)
. Asymmetric interference in 3- to 4-month-olds’ sequential category
learning. Cognitive Science, 26, 377-389.
Three- to 4-month-old infants show asymmetric exclusivity in the acquisition
of Cat and Dog perceptual categories. We describe a connectionist autoencoder
model of perceptual categorization that shows the same asymmetries as infants.
The model predicts the presence of asymmetric catastrophic interference
(retroactive interference) when infants acquire Cat and
Dog categories sequentially. A subsequent study with 3- to 4-month-olds
verifies this predicted pattern of behavior. We argue that bottom-up, associative
learning systems with distributed representations are appropriate for
modeling the operation of short-term visual memory in early perceptual
category learning.
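A minimal sketch of the modelling idea described above, under invented assumptions: a tiny linear autoencoder is familiarized on one category, and its reconstruction error to a test item plays the role of infant looking time (high error suggests novelty). The feature vectors, dimensions, and learning parameters are all hypothetical, not the coded stimuli of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, HIDDEN, LR, EPOCHS = 20, 5, 0.01, 2000

# Hypothetical feature vectors: each category is a prototype plus noise.
proto_cat = rng.uniform(0, 1, DIM)
proto_dog = rng.uniform(0, 1, DIM)
cats = proto_cat + 0.1 * rng.standard_normal((12, DIM))

# A minimal linear autoencoder trained by gradient descent to
# reproduce its input on its output.
W_in = 0.1 * rng.standard_normal((DIM, HIDDEN))
W_out = 0.1 * rng.standard_normal((HIDDEN, DIM))
for _ in range(EPOCHS):
    for x in cats:                     # familiarization: Cats only
        h = x @ W_in                   # encode
        err = h @ W_out - x            # reconstruction error
        g_out = np.outer(h, err)
        g_in = np.outer(x, err @ W_out.T)
        W_out -= LR * g_out
        W_in -= LR * g_in

def recon_error(x):
    """Squared reconstruction error: the model's analogue of looking time."""
    return float(np.sum((x @ W_in @ W_out - x) ** 2))

novel_cat = proto_cat + 0.1 * rng.standard_normal(DIM)
novel_dog = proto_dog + 0.1 * rng.standard_normal(DIM)
e_cat, e_dog = recon_error(novel_cat), recon_error(novel_dog)
print(f"novel cat: {e_cat:.3f}  novel dog: {e_dog:.3f}")
```

After familiarization on Cats, a novel Cat should produce a lower error (a shorter "look") than a Dog, the model's analogue of a novelty preference.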
French,
R. M. (2002).
The Computational Modeling of Analogy-Making. Trends in Cognitive
Sciences, 6(5), 200-205.
Our ability to see a particular object or situation in one context as being
“the same as” another object or situation in another context is the essence
of analogy-making. It encompasses our ability to explain new concepts in
terms of already-familiar ones, to emphasize particular aspects of situations,
to generalize, to characterize situations, to explain or describe new phenomena,
to serve as a basis for how to act in unfamiliar surroundings, to understand
many types of humor, etc. Within this framework, the importance of analogy-making
in modeling cognition becomes clear. This article is a survey of (essentially)
all of the computational models of analogy-making developed over the course
of the last four decades.
French, R. M.
and Chater, N. (2002).
Using Noise to Compute Error Surfaces in Connectionist Networks: A
Novel Means of Reducing Catastrophic Forgetting. Neural Computation,
14 (7), 1755-1769.
In error-driven distributed feedforward networks new information typically
interferes, sometimes severely, with previously learned information. We
show how noise can be used to approximate the error surface of previously
learned information. By combining this approximated error surface with the error surface associated
with the new information to be learned, the network’s retention of previously
learned items can be improved and catastrophic interference significantly
reduced. Further, we show that the noise-generated error surface is produced
using only first-derivative information and without recourse to any explicit
error information.
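The core idea admits a compact sketch. Everything concrete here is a hypothetical stand-in (a single sigmoid layer trained with the delta rule on random binary associations, not the networks or parameters of the paper): noise inputs passed through the trained network, with the network's own responses taken as targets, approximate the previously learned error surface, and interleaving those noise items with the new items protects the old associations.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, LR, EPOCHS = 8, 0.2, 500

def forward(W, x):
    """Single sigmoid layer (hypothetical stand-in network)."""
    return 1.0 / (1.0 + np.exp(-np.clip(x @ W, -30, 30)))

def train(W, pairs):
    for _ in range(EPOCHS):
        for x, t in pairs:
            y = forward(W, x)
            W += LR * np.outer(x, (t - y) * y * (1 - y))  # delta rule
    return W

def sse(W, pairs):
    return sum(float(np.sum((forward(W, x) - t) ** 2)) for x, t in pairs)

def rand_pairs(n):
    return [(rng.integers(0, 2, DIM).astype(float),
             rng.integers(0, 2, DIM).astype(float)) for _ in range(n)]

old, new = rand_pairs(5), rand_pairs(5)
W = train(0.1 * rng.standard_normal((DIM, DIM)), old)

# Noise inputs pushed through the trained network: the network's own
# responses serve as targets, approximating the old error surface.
noise = [(x, forward(W, x)) for x in rng.uniform(0, 1, (20, DIM))]

W_plain = train(W.copy(), new)           # new items alone
W_noise = train(W.copy(), new + noise)   # new items + noise items

print("error on old items, no noise:  ", round(sse(W_plain, old), 3))
print("error on old items, with noise:", round(sse(W_noise, old), 3))
```

Under these toy conditions, interference with the old associations should be markedly smaller in the copy trained alongside the noise items.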
French,
R. M. and Labiouse, C. (2001)
. Why co-occurrence information alone is not sufficient to answer subcognitive
questions. Journal of Theoretical and Experimental Artificial Intelligence,
13(4), 419-429.
Turney (2001)
claims that a simple program, PMI-IR, that searches the World Wide
Web for co-occurrences, can be used to find human-like answers
to the type of “subcognitive” questions
French (1990)
claimed would invariably unmask computers (that had not lived life
as we humans had) in a Turing Test. In this paper, we show that there are
serious problems with Turney’s claim. We show by example that PMI-IR doesn’t
work for even simple subcognitive questions. We attribute PMI-IR’s failure
to its inability to understand the relational and contextual attributes
of the words/concepts in the queries. And finally, we show that, even if PMI-IR
were able to answer many subcognitive questions, a clever Interrogator
in the Turing Test would still be able to unmask the computer.
Labiouse, C. & French, R. M. (2001).
A Connectionist Model of Person Perception and Stereotype Formation.
In Connectionist Models of Learning, Development and Evolution.
(R. French & J. Sougné, eds.). London: Springer, 209-218.
Connectionist modeling has begun to have an impact on research in social
cognition. PDP models have been used to model a broad range of social psychological
topics such as person perception, illusory correlations, cognitive dissonance,
social categorization and stereotypes. Smith and DeCoster [28] recently
proposed a recurrent connectionist model of person perception and stereotyping
that accounts for a number of phenomena usually seen as contradictory or
difficult to integrate into a single coherent conceptual framework. While
their model is based on clearly defined and potentially far-reaching theoretical
principles, it nonetheless suffers from certain shortcomings, among them,
the use of misleading dependent measures and the incapacity of the network
to develop its own internal representations. We propose an alternative
connectionist model - an autoencoder - to overcome these limitations.
In particular, the development of stereotypes within the context of this
model will be discussed.
French, R. M., Ans, B., & Rousset, S. (2001).
Pseudopatterns and dual-network memory models: Advantages and shortcomings.
In Connectionist Models of Learning, Development and Evolution.
(R. French & J. Sougné, eds.). London: Springer, 13-22.
The dual-network memory model is designed to be a neurobiologically plausible
manner of avoiding catastrophic interference. We discuss a number of advantages
of this model and potential clues that the model has provided in the areas
of memory consolidation, category-specific deficits, anterograde and retrograde
amnesia. We discuss a surprising result about how this class of models
handles episodic ("snap-shot") memory - namely, that they seem to be able
to handle both episodic and abstract memory - and discuss two other promising
areas of research involving these models.
French,
R. M. & Thomas, E. (2001).
The Dynamical Hypothesis in Cognitive Science: A review essay of
Mind As Motion. In Minds and Machines, 11, 1, 101-111.
Sometimes it is hard to know precisely what to think about the Dynamical
Hypothesis, the new kid on the block in cognitive science and described
succinctly by the slogan "cognitive agents are dynamical systems." Is the
DH a radically new approach to understanding human cognition? (No.) Is it
providing deep insights that traditional symbolic artificial intelligence
overlooked? (Certainly.) Is it providing deep insights that recurrent connectionist
models, circa 1990, had overlooked? (Probably not.) Is time, as instantiated
in the DH, necessary to our understanding of cognition? (Certainly.) Is
time, as instantiated in the DH, sufficient for understanding cognition?
(Certainly not.) ... The DH is often touted as being a revolutionary alternative
to the traditional Physical Symbol System Hypothesis (PSSH, renamed in
Mind As Motion, the Computational Hypothesis) (Newell & Simon,
1976) that was the bedrock of artificial intelligence for 25 years. We disagree.
A reasonable evaluation of the contribution to the field
of Mind as Motion would be to say that, even though the contributors
to the book were not the first to attend to the issue of time in cognitive
modeling, their systematic emphasis on studying cognition as a phenomenon
that evolves over time has brought the issue of time to center stage in
cognitive modeling. Their application of dynamical systems tools has provided
the framework with which this approach can be carried out. And therein
lies the importance of this book.
French,
R. M. (2002).
Review of Daniel Levine's Introduction to Neural and Cognitive Modeling.
In Biological Psychology, 60(1). 69-73.
To call Daniel Levine's book an "introduction" to neural and cognitive
modeling is somewhat of a misnomer. In fact, the book is an admixture of
introductory material on neural network models and an overwhelming amount
of material about the particular class of neural network models that the
author favors - namely, those developed by Stephen Grossberg and his colleagues.
The book attempts to lay out a systematic development of neural network
modeling and is reasonably, if not completely, successful in the endeavor.
Many major types of neural network models are at least briefly discussed,
but an outsider reading this book would inevitably come away from it with
a highly nonstandard view of the field...
Sougné, J. & French, R. M. (2001)
. Synfire chains and catastrophic interference. Proceedings of the
23rd Annual Conference of the Cognitive Science Society. NJ: LEA, 270-275.
The brain must be capable of achieving extraordinarily precise sub-millisecond
timing with imprecise neural hardware. We discuss how this might be possible
using synfire chains (Abeles, 1991) and present a synfire chain learning
algorithm for a sparsely-distributed network of spiking neurons (Sougné,
1999). Surprisingly, we show that this learning is not subject to catastrophic
interference, a problem that plagues many standard connectionist networks.
We show that the forgetting of synfire chains in this type of network closely
resembles the classic forgetting pattern described by Barnes & Underwood
(1959).
French, R. M., Mermillod, M., Quinn, P., & Mareschal, D. (2001).
Reversing Category Exclusivities in Infant Perceptual Categorization:
Simulations and Data. In Proceedings of the 23rd Annual Conference of
the Cognitive Science Society, NJ:LEA, 307-312.
Three- to four-month-old infants presented with a series of cat or dog
photographs show an unusual asymmetry in the exclusivity of the perceptual
category representations formed. We have previously accounted for this
asymmetry in terms of an inclusion asymmetry in the distribution of features
present in the cat and dog images used during familiarization (Mareschal,
French, & Quinn, 2000). We use a combination of connectionist modeling
and experimental testing of infants to show that the asymmetry can be reversed
by an appropriate pre-selection and minor image modification of cat and
dog exemplars used for familiarization. The reversal of the asymmetry adds
weight to the feature distribution explanation put forward by Mareschal
et al. (2000).
French,
R. M. and Thomas, E. (2000).
Why Localist Connectionist Models are Inadequate for Categorization.
(Commentary). Behavioral and Brain Sciences, 23(4), August, 2000,
p. 477.
We claim that two categorization arguments pose particular problems for
localist connectionist models. The internal representations of localist
networks do not reflect the variability within categories in the environment,
whereas networks with distributed internal representations do reflect this
essential feature of categories. Secondly, we provide a real biological
example of perceptual categorization in the monkey that seems to require
population coding (i.e., distributed internal representations).
French,
R. M. (2000).
The Chinese Room: Just Say "No!" In Proceedings of the 22nd Annual
Cognitive Science Society Conference, 657-662.
It is time to view John Searle's Chinese Room thought experiment in a new
light. The main focus of attention has always been on showing what is wrong
(or right) with the argument, with the tacit assumption being that somehow
there could be such a Room. In this article I argue that the debate should
not focus on the question "If a person in the Room answered all the questions
in perfect Chinese, while not understanding a word of Chinese, what would
the implications of this be for strong AI?" Rather, the question should
be, "Does the very idea of such a Room and a person in the Room who is able
to answer questions in perfect Chinese while not understanding any Chinese
make any sense at all?" And I believe that the answer, in parallel with
recent arguments that claim that it would be impossible for any machine to
pass the Turing Test unless it had experienced the world as we humans have,
is no.
French, R. M.
(2000).
Peeking Behind the Screen: The Unsuspected Power of the Standard Turing
Test. Journal of Experimental and Theoretical Artificial Intelligence,
12, 331-340.
No computer that had not experienced the world as we humans had could pass
a rigorously administered standard Turing Test. We show that the use of
"subcognitive" questions allows the standard Turing Test to indirectly probe
the human subcognitive associative concept network built up over a lifetime
of experience with the world. Not only can this probing reveal differences
in cognitive abilities, but crucially, even differences in physical aspects
of the candidates can be detected. Consequently, it is unnecessary to propose
even harder versions of the Test in which all physical and behavioral aspects
of the two candidates had to be indistinguishable before allowing the machine
to pass the Test. Any machine that passed the "simpler" symbols-in/symbols-out
test as originally proposed by Turing would be intelligent. The problem
is that, even in its original form, the Turing Test is already too hard
and too anthropocentric for any machine that was not a physical, social,
and behavioral carbon copy of ourselves to actually pass it. Consequently,
the Turing Test, even in its standard version, is not a reasonable test for
general machine intelligence. There is no need for an even stronger version
of the Test.
French,
R. M. (2000).
The Turing Test: the first fifty years. Trends in Cognitive Sciences,
4(3), 115-121.
The Turing Test, originally proposed as a simple operational definition
of intelligence, has now been with us for exactly half a century. It is
safe to say that no other single article in computer science, and few other
articles in science in general, have generated so much discussion. The present
article chronicles the comments and controversy surrounding Turing's classic
article from its publication to the present. The changing perception of
the Turing Test over the last fifty years has paralleled the changing attitudes
in the scientific community towards artificial intelligence: from the unbridled
optimism of the 1960s to the current realization of the immense difficulties
that still lie ahead. I conclude with the prediction that the Turing Test
will remain important, not only as a landmark in the history of the development
of intelligent machines, but also with real relevance to future generations
of people living in a world in which the cognitive capacities of machines
will be vastly greater than they are now.
Mareschal, D. & French, R. M. (2000).
Mechanisms of categorization in infancy. Infancy (ex-Infant Behaviour
and Development), 1, 59-76.
This paper presents a connectionist model of correlation-based categorization
by 10-month-old infants (Younger, 1985). Simple autoencoder networks were
exposed to the same stimuli used to test 10-month-olds. The familiarization
regime was kept as close as possible to that used with the infants. The
model's performance matched that of the infants. Both infants and networks
used co-variation information (when available) to segregate items into separate
categories. The model provides a mechanistic account of category learning
within a test session. It demonstrates how categorization arises as the
product of an inextricable interaction between the subject (the infant)
and the environment (the stimuli). The computational characteristics of
both subject and environment must be considered in conjunction to
understand the observed behaviors.
French, R. M., Bredart, S., Huart, J. & Labiouse, C. (2000).
The resemblance of one-year-old infants to their fathers: refuting
Christenfeld & Hill (1995). In Proceedings of the 22nd Annual Cognitive
Science Society Conference, 148-153.
In 1995 Christenfeld and Hill published a paper that purported to show
that, at one year of age, infants resemble their fathers more than their mothers.
Evolution, they argued, would have produced this result because it would
ensure male parental resources, since the paternity of the infant would
no longer be in doubt. We believe this result is false. We present the results
of two experiments (and mention a third) which are very far from replicating
Christenfeld and Hill's data. In addition, we provide an evolutionary explanation
as to why evolution would not have favored the result reported by
Christenfeld and Hill.
Mareschal, D., French, R. M. & Quinn, P. (2000).
A Connectionist Account of Asymmetric Category Learning in Early Infancy.
Developmental Psychology, 36, 635-645.
Young infants show unexplained asymmetries in the exclusivity of categories
formed on the basis of visually presented stimuli. We describe a connectionist
model that shows similar exclusivity asymmetries when categorizing the
same stimuli presented to the infants. The asymmetries can be explained
in terms of an associative learning mechanism, distributed internal representations,
and the statistics of the feature distributions in the stimuli. We use
the model to explore the robustness of this asymmetry. The model predicts
that the asymmetry will persist when a category is acquired in the presence
of mixed category exemplars and when two categories are acquired sequentially.
Two studies with 3- to 4-month-olds show that asymmetric exclusivity continues
to persist in the presence of either a mixed familiarization set or sequential
acquisition of Cat and Dog categories, thereby corroborating the model's
prediction. By interpreting asymmetric exclusivity effects as manifestations
of interference in an associative memory system, the model can be extended
to also account for interference effects in early infant visual memory.
French, R.
M. (2000).
Creation and Discovery: Opposite Ends of a Continuum of Constraints
Robert Boyle, the great 17th century English chemist, discovered the law
of gases that bears his name; he did not create it. T. S. Eliot, the great
20th century American poet, created a poem called "The Waste Land";
he most assuredly did not discover it. Nothing could seem less controversial.
And yet, the twin notions of discovery and creation are nonetheless very
closely related in our minds. The main difference between them seems to
depend on the ineluctable nature of discovery compared to the unique character
of the creation. Scientific laws are thought of as being found, discovered,
stumbled on, or uncovered, whereas plays, poems, novels, paintings, and
symphonies are all thought of as having been created. However, in this essay
I hope to show that discovery and creation are, rather counter-intuitively,
largely manifestations of the same underlying cognitive mechanisms.
French, R. M.
(2000).
Review of Terry Regier's (1996) The Human Semantic Potential: Spatial
Language and Constrained Connectionism (The MIT Press, Cambridge,
MA). In Philosophical Psychology, 12, 515-523.
Taking to heart Massaro's (1988) criticism that multi-layer perceptrons
are not appropriate for modeling human cognition because they are too powerful
(i.e., they can simulate just about anything, which gives them little
explanatory power), Regier develops the notion of constrained connectionism
. The model that he discusses is a distributed network but with numerous
constraints added that are (more or less) motivated by real psychophysical
and neurophysical constraints. His model learns "static" prepositions
of spatial location such as in, above, to the left of, to the right
of, under, etc., as well as "dynamic" prepositions such as through
and the Russian iz-pod , meaning "out from under." The network
learns these prepositions by viewing a number of examples of them. Very
importantly, this book tackles -- and goes a long way towards resolving
-- the problem of the lack of negative exemplars (i.e., we are only very
rarely told when something is not above something else), which
should lead to overgeneralization, but does not. This book is a significant
contribution to connectionist literature.
French, R. M. & Ferrara, A. (1999).
Modeling time perception in rats: Evidence for catastrophic interference
in animal learning. In Proceedings of the 21st Annual Cognitive Science
Society Conference. NJ:LEA, 173-178.
For all intents and purposes, catastrophic interference, the sudden and
complete forgetting of previously stored information upon learning new
information, does not exist in healthy adult humans. But does it exist in other
animals? In light of recent research done by McClelland, McNaughton, &
O'Reilly (1995) and McClelland & Goddard (1996) on the role of the
hippocampal-neocortical interaction in alleviating catastrophic interference,
it is of particular interest to ascertain whether catastrophic interference
occurs in non-human higher animals, especially in those animals with a hippocampus
and a neocortex, such as the rat. In this paper, we describe experimental
evidence to support our claim that this type of radical forgetting does,
in fact, exist for certain types of learning in some higher animals, specifically,
in the rat's learning of time-durations. We develop a connectionist model
that could provide an insight into how the rat might be encoding time-duration
information.
Mareschal, D. & French, R. M. (1999).
A Connectionist Account of Perceptual Category-Learning in Infants.
In Proceedings of the 21st Annual Cognitive Science Society Conference
NJ:LEA, 337-342.
This is a short version of the paper "Mechanisms of categorization in
infancy" that appeared in Infancy.
French,
R. M. (1999).
Catastrophic Forgetting in Connectionist Networks. Trends in Cognitive
Sciences, 3(4), 128-135.
All natural cognitive systems, and, in particular, our own, gradually forget
previously learned information. Consequently, plausible models of human
cognition should exhibit similar patterns of gradual forgetting of old information
as new information is acquired. Only rarely does new learning in natural
cognitive systems completely disrupt or erase previously learned information.
In other words, natural cognitive systems do not, in general, forget catastrophically.
Unfortunately, however, this is precisely what occurs under certain circumstances
in distributed connectionist networks. It turns out that the very features
that give these networks their much-touted abilities to generalize, to function
in the presence of degraded input, etc., are the root cause of catastrophic
forgetting. The challenge is how to keep the advantages of distributed connectionist
networks while avoiding the problem of catastrophic forgetting. In this
article, we examine the causes, consequences and numerous solutions to the
problem of catastrophic forgetting in neural networks. We consider how the
brain might have overcome this problem and explore the consequences of this
solution.
French,
R. M. & Anselme, P. (1999).
Interactively converging on context-sensitive representations: A solution
to the frame problem. Revue Internationale de Philosophie, 3, 365-385.
While we agree that the frame problem, as initially stated by McCarthy
and Hayes (1969), is a problem that arises because of the use of representations,
we do not accept the anti-representationalist position that the way around
the problem is to eliminate representations. We believe that internal representations
of the external world are a necessary, perhaps even a defining feature,
of higher cognition. We explore the notion of dynamically created context-dependent
representations that emerge from a continual interaction between working
memory, external input, and long-term memory. We claim that only this
kind of representation, necessary for higher cognitive abilities such
as counterfactualization, will allow the combinatorial explosion inherent
in the frame problem to be avoided.
Bredart,
S. & French, R. M. (1999).
Do Babies Resemble their Fathers More Than their Mothers? A Failure
to Replicate Christenfeld and Hill (1995). Evolution and Human Behavior,
20(3) 129-135.
Contrary to Christenfeld & Hill (1995), we find that children aged
1, 3, and 5 do not appear to resemble their fathers significantly
more than their mothers. We provide an evolutionary explanation as to why
this is, in fact, as it should be. In addition, we note that any father-child
resemblance that does exist, while indeed better than chance, is far from
overwhelming.
French,
R. M. & Cleeremans, A. (1998).
Function, Sufficiently Constrained, Implies Form. Psycoloquy 9
(21) psyc.98.9.21.connectionist-explanation.18.french
Christopher Green's (1998) article is an attack on most current connectionist
models of cognition. Our commentary will suggest that there is an essential
component missing in his discussion of modeling - namely, the idea that
the appropriate level of the model needs to be specified. We will further
suggest that the precise form (size, topology, learning rules, etc.) of
connectionist networks will fall out as ever more detailed constraints are
placed on their function.
French, R. M. & Thomas, E. (1998).
The Dynamical Hypothesis: One Battle Behind. In Behavioral and
Brain Sciences, 21:5, 640-641.
What new implications does the dynamical hypothesis have for cognitive
science? The short answer is: none. The Behavioral and Brain Sciences target
article, "The dynamical hypothesis in cognitive science" by Tim Van Gelder
is basically an attack on traditional symbolic AI and differs very little
from prior connectionist criticisms of it. For the past ten years, the
connectionist community has been well aware of the necessity of using (and
understanding) dynamically evolving, recurrent network models of cognition.
French, R.
M. & Mareschal, D. (1998).
Could Category-Specific Semantic Deficits Reflect Differences in the
Distributions of Features Within a Unified Semantic Memory? In Proceedings
of the Twentieth Annual Cognitive Science Society Conference. NJ:LEA.
374-379.
Category-specific semantic deficits refer to the inability to name objects
from a particular category while the naming of words outside that category
remains relatively unimpaired. We suggest that such semantic deficits arise
from the random lesioning of a unified semantic network in which internal
category representations reflect the variability of the categories themselves.
This is demonstrated by lesioning networks that have learned to categorize
butterflies and chairs. The model shows category-specific semantic deficits
of the narrower category (butterfly) with the occasional reverse semantic
deficits of the relatively broader category (chair).
French,
R. M. (1998).
A Simple Recurrent Network Model of Bilingual Memory. In Proceedings
of the Twentieth Annual Cognitive Science Society Conference. NJ:LEA.
368-373.
This paper draws on previous research that strongly suggests that bilingual
memory is organized as a single distributed lexicon rather than as two
separately accessible lexicons corresponding to each language. Interactive-activation
models provide an effective means of modeling many of the cross-language
priming and interference effects that have been observed. However, one
difficulty with these models is that they do not provide a plausible way
of actually acquiring such an organization. This paper shows that a simple
recurrent connectionist network (SRN) (Elman, 1990) might provide an insight
into this problem. An SRN is first trained on two micro-languages and the
hidden-unit representations corresponding to those languages are studied.
A cluster analysis of these highly distributed and overlapping representations
shows that they accurately reflect the overall separation of the two languages,
as well as the word categories in each language. In addition, random and
extensive lesioning of the SRN hidden layer is shown, in general, to have
little effect on this organization. This is in agreement with the observation
that most bilinguals who suffer brain damage do not lose their ability to
distinguish their two languages. On the other hand, an example is given where
the removal of a single node does radically disrupt this internal representational
organization, similar to clinical cases of bilingual language mixing and
bilingual aphasia following brain trauma. The issue of scaling-up is also
discussed briefly.
French, R.
M. (1997).
Selective memory loss in aphasics: An insight from pseudo-recurrent
connectionist networks. In J. Bullinaria, G. Houghton, D. Glasspool (eds.).
Connectionist Representations: Proceedings of the Fourth Neural Computation
and Psychology Workshop. Springer-Verlag 183-195.
McClelland, McNaughton, & O'Reilly (1995) suggest that the brain's way of
overcoming catastrophic interference is by means of the hippocampus-neocortex
separation. French (1997) has developed a memory model incorporating this
separation into distinct areas, using pseudopatterns (Robins, 1995) to transfer
information from one area to the other of the memory. This network gradually
produces highly compact representations which, while they ensure efficient
processing, are also highly susceptible to damage. Internal representations
of categories must reflect the variance within the categories. Because the
variance within biological categories is, in general, smaller than that
in artificial categories and because memory compaction gradually makes all
representations proportionately less distributed, representations of low-variance
biological categories are likely to be the most adversely affected by random
damage to the network. This may help explain the selective memory loss in
aphasics of natural categories compared to artificial categories.
Sougné,
J. & French, R. M., (1997).
A Neurobiologically Inspired Model of Working Memory Based on Neuronal
Synchrony and Rhythmicity. In J. Bullinaria, G. Houghton, D. Glasspool (eds.).
Connectionist Representations: Proceedings of the Fourth Neural Computation
and Psychology Workshop. Springer-Verlag. 155-167.
The connectionist model of reasoning presented here, INFERNET, implements
a working memory that is the activated part of long-term memory. This is
achieved by making use of temporal properties of the node spikes. A particular
solution of the problem of multiple instantiation is proposed. This model
makes predictions that have been tested experimentally and the results
of these experiments are reported here. These results would seem to challenge
modular models of memory.
French, R. M.
(1997).
Pseudo-recurrent connectionist networks: An approach to the "sensitivity-stability"
dilemma. Connection Science, 9(4), 353-379.
In order to solve the "sensitivity-stability" problem - and its immediate
correlate, the problem of sequential learning - it is crucial to develop
connectionist architectures that are simultaneously sensitive to, but not
excessively disrupted by, new input. French (1992) suggested that to alleviate
a particularly severe form of this disruption, catastrophic forgetting,
it was necessary for networks to dynamically separate their internal representations
during learning. McClelland, McNaughton, & O'Reilly (1995) went even
further. They suggested that nature's way of implementing this obligatory
separation was the evolution of two separate areas of the brain, the hippocampus
and the neocortex. In keeping with this idea of radical separation, a
"pseudo-recurrent" memory model is presented here that partitions a connectionist
network into two functionally distinct, but continually interacting areas.
One area serves as a final-storage area for representations; the other
is an early-processing area where new representations are first learned
by the system. The final-storage area continually supplies internally generated
patterns (pseudopatterns, Robins (1995)), which are approximations of its
content, to the early-processing area, where they are interleaved with
the new patterns to be learned. Transfer of the new learning is done either
by weight-copying from the early-processing area to the final-storage
area or by pseudopattern transfer. A number of experiments are presented
that demonstrate the effectiveness of this approach, allowing, in particular,
effective sequential learning with gradual forgetting in the presence
of new input. Finally, it is shown that the two interacting areas automatically
produce representational compaction and it is suggested that similar representational
streamlining may exist in the brain.
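The pseudopattern-rehearsal idea can be illustrated with a minimal sketch (this is an illustration of the general mechanism, not the paper's implementation; the toy network, its sizes, and the random-probe counts are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pseudopatterns(forward_fn, n_inputs, n_patterns):
    """Approximate a trained network's contents by probing it with
    random binary inputs and recording its outputs (Robins, 1995)."""
    inputs = rng.integers(0, 2, size=(n_patterns, n_inputs)).astype(float)
    targets = forward_fn(inputs)  # the final-storage area's responses
    return inputs, targets

# Toy stand-in for the final-storage area: a fixed random map.
W = rng.normal(size=(8, 8))
final_storage = lambda x: (x @ W > 0).astype(float)

# New patterns to be learned in the early-processing area...
new_inputs = rng.integers(0, 2, size=(4, 8)).astype(float)
new_targets = rng.integers(0, 2, size=(4, 8)).astype(float)

# ...are interleaved with pseudopatterns, so that old knowledge is
# rehearsed alongside the new input instead of being overwritten.
pi, pt = generate_pseudopatterns(final_storage, n_inputs=8, n_patterns=16)
train_inputs = np.vstack([new_inputs, pi])
train_targets = np.vstack([new_targets, pt])
```

Training the early-processing area on this interleaved set, then transferring back to final storage, is what yields gradual rather than catastrophic forgetting.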
BACK
French,
R. M. and Weaver, M. (1997).
New-feature learning: How common is it? Behavioral and Brain
Sciences, 21:1, New Jersey: LEA, p. 26.
The fixed-feature viewpoint the authors are opposing is not a widely held
theoretical position, but rather a working assumption of cognitive psychologists
- and thus a straw man. We accept their demonstration of new-feature acquisition,
but question its ubiquity in category learning. We suggest that new-feature
learning (at least in adults) is rarer and more difficult than the authors
suggest.
BACK
French, R.
M. and Ohnesorge, C. (1997).
Homographic self-inhibition and the disappearance of priming: More
evidence for an interactive-activation model of bilingual memory. In
Proceedings of the 19th Annual Cognitive Science Society Conference
, New Jersey: LEA, 241-246.
This paper presents two experiments providing strong support for an interactive-activation
interpretation of bilingual memory. In both experiments French-English
interlexical noncognate homographs were used, i.e., words like fin (= "end"
in French), pain (= "bread" in French), that have a distinct meaning in
each language. An All-English condition, in which participants saw only
English items (word and non-words) and a Mixed condition, with half English
and half French items, were used. For a set of English target words that
were strongly primed by the homographs in the All-English condition (e.g.,
shark, primed by the homograph fin), this priming was found to disappear
in the Mixed condition. We suggest that this is because the English "component"
of the homograph is inhibited by the French component which only becomes
active in the Mixed condition. Further, recognition times for these homographs
as words in English were significantly longer in the Mixed condition and
the amount of this increase was related to the relative strength (in terms
of printed-word frequency) of the French meaning of the homograph. We see
no reasonable independent-access dual-lexicon explanation of these results,
whereas they fit easily into an interactive-activation framework.
BACK
Mareschal, D.
and French, R. M. (1997).
A connectionist account of interference effects in early infant memory
and categorization. In Proceedings of the 19th Annual Cognitive Science
Society Conference, New Jersey: LEA, 484-489.
An unusual asymmetry has been observed in natural category formation in
infants (Quinn, Eimas, and Rosenkrantz, 1993). Infants who are initially
exposed to a series of pictures of cats and then are shown a dog and a
novel cat, show significantly more interest in the dog than in the cat.
However, when the order of presentation is reversed - dogs are seen first,
then a cat and a novel dog - the cat attracts no more attention than the
dog. We show that a simple connectionist network can model this unexpected
learning asymmetry and propose that this asymmetry arises naturally from
the asymmetric overlaps of the feature distributions of the two categories.
The values of the cat features are subsumed by those of dog features, but
not vice-versa. The autoencoder used for the experiments presented in this
paper also reproduces exclusivity effects in the two categories as well
as the reported effect of catastrophic interference of dogs on previously learned
cats, but not vice-versa. The results of the modeling suggest connectionist
methods are ideal for exploring early infant knowledge acquisition.
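The subsumed-feature-distribution account can be sketched in a few lines (a deliberately simplified illustration, not the paper's autoencoder: the per-feature learned range stands in for the network's category representation, and the out-of-range fraction stands in for reconstruction error / looking time):

```python
import numpy as np

rng = np.random.default_rng(1)

# Dog feature values subsume cat feature values, but not vice versa:
# cats occupy a narrow sub-range of each dog feature dimension.
cats = rng.uniform(0.4, 0.6, size=(50, 10))
dogs = rng.uniform(0.0, 1.0, size=(50, 10))

def learned_range(exemplars):
    """Stand-in for the category representation the network forms:
    the per-feature range spanned by the training exemplars."""
    return exemplars.min(axis=0), exemplars.max(axis=0)

def novelty(item, lo, hi):
    """Fraction of an item's features outside the learned range
    (a proxy for reconstruction error, hence infant interest)."""
    return np.mean((item < lo) | (item > hi))

lo_c, hi_c = learned_range(cats)  # familiarized on cats first
lo_d, hi_d = learned_range(dogs)  # familiarized on dogs first

dog_after_cats = novelty(dogs[0], lo_c, hi_c)  # high: dog looks novel
cat_after_dogs = novelty(cats[0], lo_d, hi_d)  # low: cat is subsumed
```

Because every cat feature falls inside the dog range but not the reverse, the asymmetry in novelty scores falls out of the distributions alone, with no extra machinery.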
BACK
French, R. M.
(1997).
Using pseudo-recurrent connectionist networks to solve the problem
of sequential learning. In Proceedings of the 19th Annual Cognitive Science
Society Conference, New Jersey: LEA. 921.
A major problem facing connectionist models of memory is how to make them
simultaneously sensitive to, but not disrupted by, new input. This paper
describes one solution to this problem. The resulting connectionist architecture
is capable of sequential learning and exhibits gradual forgetting where
standard connectionist architectures may forget catastrophically. The proposed
architecture relies on separating previously learned representations from
those that are currently being learned. Crucially, a method is described
in which approximations of the previously learned data, called pseudopatterns,
will be extracted from the network and mixed in with the new patterns to
be learned, thereby alleviating sudden forgetting caused by new learning
and allowing the network to forget gracefully.
BACK
French, R.
M. (1997).
When coffee cups are like old elephants or Why representation modules
don't make sense. In Proceedings of the International Conference New
Trends in Cognitive Science, A. Riegler & M. Peschl (eds.), Austrian
Society for Cognitive Science, 158-163.
I argue against a widespread assumption of many current models of cognition
- namely, that the process of creating representations of reality can be
separated from the process of manipulating these representations. I hope
to show that any attempt to isolate these two processes will inevitably
lead to programs that are either basically guaranteed to succeed ahead of
time due to the (usually carefully hand-crafted) representations given to
the program or that would experience combinatorial explosion if they
were scaled up. I suggest that the way out of this dilemma is a process
of incremental representational refinement achieved by means of a continual
interaction between the representation of the situation at hand and the
processing that will make use of that representation.
BACK
French, R. M.
and Cleeremans, A. (1996).
Probing and Prediction: A pragmatic view of cognitive modeling.
This paper examines the role of computational modeling in psychology. Since
any model of a real world phenomenon will fail at some level of description,
we suggest that models can only be understood (and evaluated) with respect
to a given level of description and a specific set of criteria associated
with that level. We suggest a pragmatic view of the main advantages of
instantiating psychological theories as computer simulations. We suggest
that the main function of computational modeling is to support a process
of "probing and prediction" by which models can be interacted with in a
way that provides both guidance for empirical research as well as sufficient
depth to support interactive modification of the underlying theory. Within
this framework we briefly develop a way of comparing the quality of different
models of the same phenomenon. We argue that models gain explanatory power
as well as practical usefulness when they are emergent, that is, when they
provide an account of how the principles of organization at a given level
of description constrain and define structure at a higher level of description.
For this reason, connectionist models would appear to provide the most fruitful
modeling framework today.
BACK
French, R.
M. (1996).
The Inverted Turing Test: How a simple (mindless) program could pass
it. Psycoloquy, 7(39) turing-test.6.french.
This commentary attempts to show that the inverted Turing Test (Watt 1996)
could be simulated by a standard Turing test and, most importantly, claims
that a very simple program with no intelligence whatsoever could be written
that would pass the inverted Turing test. For this reason, the inverted
Turing test in its present form must be rejected. For related work on which
this commentary is based, see
French (1990)
.
BACK
French, R.
M. & Ohnesorge, C. (1996).
Using interlexical nonwords to support an interactive-activation model
of bilingual memory. In Proceedings of the Eighteenth Annual Cognitive
Science Society Conference, New Jersey: LEA. 318-323.
Certain models of bilingual memory based on parallel, activation-driven
self-terminating search through independent lexicons can reconcile both
interlingual priming data (which seem to support an overlapping organization
of bilingual memory) and homograph recognition data (which seem to favor
a separate-access dual-lexicon approach). But the dual-lexicon model makes
a prediction regarding recognition times for nonwords that is not supported
by the data. The nonwords that violate this prediction are produced by changing
a single letter of non-cognate interlexical homographs (words like appoint
and mince that are words in both French and English, but have completely
different meanings in each language), thereby producing regular nonwords
in both languages (e.g., appaint and monce). These nonwords are then classified
according to the comparative sizes of their orthographic neighborhoods
in each language. An interactive-activation model, unlike the dual-lexicon
model, can account for reaction times to these nonwords in a relatively
straightforward manner. For this reason, it is argued that an interactive-activation
model is the more appropriate of the two models of bilingual memory.
BACK
French, R.
M. (1996).
Review of Paul M. Churchland, The Engine of Reason, the Seat of the
Soul (The MIT Press, Cambridge, MA). In Minds & Machines, 6
(3), 416-421.
This review criticizes the overly evangelical tone of Churchland's paean
to connectionism. Churchland makes little attempt to present both the successes
as well as the failures of connectionist modeling. It is a good book if
one wants to know some of the areas in which connectionism has succeeded
well; it is certainly not an accurate picture of the field.
BACK
French, R.
M. and Ohnesorge C. (1995).
Using non-cognate interlexical homographs to study bilingual memory
organization. In Proceedings of the Seventeenth Annual Conference of
the Cognitive Science Society, NJ: LEA. 31-36.
Non-cognate French-English homographs, i.e., identical lexical items with
distinct meanings in French and in English, such as pain, four, main, etc.,
are used to study the organization of bilingual memory. Bilingual lexical
access is initially shown to be compatible with a parallel search through
independent lexicons, where the search speed through each lexicon depends
on the level of activation of the associated language. Particular attention
is paid to reaction times to "unbalanced" homographs, i.e., homographs
with a high frequency in one language and a low frequency in the other.
It is claimed that the independent dual-lexicon model is functionally equivalent
to an activation-based competitive-access model that can be used to account
for priming data that the dual-lexicon model has difficulty handling.
BACK
French, R.
M., (1995).
Refocusing the Debate on the Turing Test. Behavior and Philosophy,
23 (1), 61-62.
Dale Jacquette's "Who's Afraid of the Turing Test" (Jacquette, 1993) is
a criticism of my article "Subcognition and the Limits of the Turing Test"
(French, 1990
). Unfortunately, Jacquette transforms the main point of my article
into something that it was never meant to be and then directs his criticisms
against this interpretation of my arguments rather than against my arguments
as I meant them to be understood.
BACK
French, R. M.
& Messinger, A. (1994).
Genes, Phenes and the Baldwin Effect: Learning and Evolution in a
Simulated Population. In R. Brooks & P. Maes (eds.) Artificial Life
IV. 277-282. (Cambridge, MA: The MIT Press)
The Baldwin Effect is the gradual evolution of a population's genotypic
characteristics over many generations due to learning at the phenotypic
level. We show in this paper why this is absolutely not a form of discredited
Lamarckian evolution. We then empirically demonstrate a pronounced Baldwin
Effect in a simulated population of naturally evolving agents. In other
words, the ability to learn at the phenotypic level does have a significant
effect on the genotypic evolution of this population. In addition, certain
factors, in particular, the amount of phenotypic plasticity and the benefit
associated with the learned phenotypic behavior or characteristic, have a
significant influence on the amount of genotypic change produced. It also
appears that excessively high or low levels of phenotypic plasticity have
the same effect -- namely, they are significantly less successful at promoting
genotypic evolution than moderate levels of plasticity. Finally, we show
a more pronounced Baldwin Effect in sexually reproducing organisms than
in otherwise comparable asexually reproducing organisms.
BACK
French, R.
M., (1994).
Dynamically constraining connectionist networks to produce distributed,
orthogonal representations to reduce catastrophic interference. In
Proceedings of the 16th Annual Cognitive Science Conference. NJ:LEA.
335-340.
It is well known that when a connectionist network is trained on one set
of patterns and then attempts to add new patterns to its repertoire, catastrophic
interference may result. The use of sparse, orthogonal hidden-layer representations
has been shown to reduce catastrophic interference. The author demonstrates
that the use of sparse representations may, in certain cases, actually
result in worse performance on catastrophic interference. This paper argues
for the necessity of maintaining hidden-layer representations that are both
as highly distributed and as highly orthogonal as possible. The author presents
a learning algorithm, called context-biasing, that dynamically solves the
problem of constraining hidden-layer representations to simultaneously
produce good orthogonality and distributedness. On the data tested for this
study, context-biasing is shown to reduce catastrophic interference by
more than 50% compared to standard backpropagation. In particular, this
technique succeeds in reducing catastrophic interference on data where sparse,
orthogonal distributions failed to produce any improvement.
BACK
French, R.
M., (1991).
Using semi-distributed representations to overcome catastrophic interference
in connectionist networks. In Proceedings of the 13th Annual Cognitive
Science Society Conference. NJ:LEA, 173-178.
In connectionist networks, newly-learned information destroys previously-learned
information unless the network is continually retrained on the old information.
This behavior, known as catastrophic forgetting, is unacceptable both for
practical purposes and as a model of mind. This paper advances the claim
that catastrophic forgetting is a direct consequence of the overlap of
the system's distributed representations and can be reduced by reducing
this overlap. A simple algorithm called node-sharpening is presented that
allows a standard feedforward backpropagation network to develop semi-distributed
representations, thereby significantly reducing the problem of catastrophic
forgetting.
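The core of the node-sharpening idea can be sketched as follows (a minimal illustration only; the sharpening rate `alpha`, the choice of `k`, and the exact update rule are assumptions, not the paper's specification):

```python
import numpy as np

def sharpen(hidden, k, alpha=0.5):
    """Node sharpening: nudge the k most active hidden units toward 1
    and all others toward 0, yielding semi-distributed representations
    that overlap less from one pattern to the next."""
    target = np.zeros_like(hidden)
    top_k = np.argsort(hidden)[-k:]  # indices of the k most active nodes
    target[top_k] = 1.0
    return hidden + alpha * (target - hidden)

hidden = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
sharpened = sharpen(hidden, k=2)
# The two most active nodes move toward 1; the rest decay toward 0,
# so different patterns come to use largely disjoint hidden units.
```

Reduced overlap between hidden-layer representations means new learning perturbs fewer of the weights that encode old patterns, which is why the semi-distributed codes forget less catastrophically.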
BACK
French, R. M.,
(1990).
Subcognition and the Limits of the Turing Test. Mind 99(393)
53-65.
The imitation game originally proposed by Alan Turing provides a very powerful
means of probing human-like cognition. But when the Test is actually used
as a real test for intelligence, as certain philosophers propose, its very
strength becomes a weakness. Turing invented the imitation game only as
a novel way of looking at the question "Can machines think?". But it turns
out to be so powerful that it is really asking: "Can machines think exactly
like human beings?". As a real test for intelligence, the latter question
is significantly less interesting than the former. The Turing Test provides
a sufficient condition for human intelligence but does not address the more
important issue of intelligence in general. I have tried to show that only
a computer that had acquired adult human intelligence by experiencing the
world as we have could pass the Turing Test. In addition, I feel that any
attempt to "fix" the Turing Test so that it could test for intelligence in
general and not just human intelligence is doomed to failure because of
the completely interwoven and inter-dependent nature of the human physical,
subcognitive and cognitive levels. To gain insight into intelligence, we
will be forced to consider it in the more elusive terms of the ability
to categorize, to generalize, to make analogies, to learn, and so on. It
is with respect to these abilities that the computer will always be unmasked
if it has not experienced the world as a human being has.
BACK
Page last updated: September 21, 2009
For questions or comments, please contact:
robert dot french round-at-sign u-bourgogne dot fr no-spaces