Beyond transitional probability computations: Extracting word-like units when only statistical information is available
Category
Journal Article
Authors
Perruchet, P., Poulin-Charronnat, B.
Year
2012
Title
Beyond transitional probability computations: Extracting word-like units when only statistical information is available
Journal / book / conference
Journal of Memory and Language
Abstract
Endress and Mehler (2009) reported that when adult subjects are exposed to an unsegmented artificial language composed from trisyllabic words such as ABX, YBC, and AZC, they are unable to distinguish between these words and what they coined the "phantom-word" ABC in a subsequent test. This suggests that statistical learning generates knowledge about the transitional probabilities (TPs) within each pair of syllables (AB, BC, and AC), which are common to words and phantom-words, but, crucially, does not lead to the extraction of genuine word-like units. This conclusion is definitely inconsistent with chunk-based models of word segmentation, as confirmed by simulations run with the MDLChunker (Robinet, Lemaire, & Gordon, 2011) and PARSER (Perruchet & Vinter, 1998), which successfully discover the words without computing TPs. Null results, however, can be due to multiple causes, and notably, in the case of Endress and Mehler, to the reduced level of intelligibility of their synthesized speech flow. In three experiments, we observed positive results in conditions similar to those of Endress and Mehler after only 5 min of exposure to the language, hence providing strong evidence that statistical information is sufficient to extract word-like units.
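The phantom-word effect described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' materials: it uses illustrative syllable labels A, B, C, X, Y, Z standing in for the trisyllabic words ABX, YBC, and AZC, builds an unsegmented stream, and computes forward transitional probabilities. It shows that the pairs AB and BC that support the phantom-word ABC do occur in the stream, even though ABC itself never occurs as a contiguous unit.

```python
import random
from collections import Counter

# Illustrative stand-ins for the trisyllabic words ABX, YBC, AZC
# (hypothetical labels, not the actual synthesized stimuli).
words = [["A", "B", "X"], ["Y", "B", "C"], ["A", "Z", "C"]]

# Build an unsegmented stream by concatenating randomly chosen words.
random.seed(0)
stream = [syll for _ in range(2000) for syll in random.choice(words)]

# Forward transitional probability: TP(y | x) = count(xy) / count(x)
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(x, y):
    return pair_counts[(x, y)] / syll_counts[x]

# The pairs AB and BC have nonzero TPs, so a learner relying only on
# pairwise TPs credits the phantom-word ABC as much as the real words...
print(f"TP(B|A) = {tp('A', 'B'):.2f}, TP(C|B) = {tp('B', 'C'):.2f}")

# ...yet ABC never occurs as a contiguous unit anywhere in the stream.
abc_occurs = any(stream[i:i + 3] == ["A", "B", "C"]
                 for i in range(len(stream) - 2))
print("ABC occurs as a unit:", abc_occurs)
```

A chunk-based learner such as PARSER, by contrast, would only extract units it has actually encountered, so the absence of ABC in the stream is exactly what lets it separate words from phantom-words.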
Volume
66
Pages
807–818
Keywords
Statistical learning, Artificial language, Word segmentation, Chunking, Modeling