Beyond transitional probability computations: Extracting word-like units when only statistical information is available

Category

Journal Article

Authors

Perruchet, P., Poulin-Charronnat, B.

Year

2012

Title

Beyond transitional probability computations: Extracting word-like units when only statistical information is available

Journal / Book / Conference

Journal of Memory and Language

Abstract

Endress and Mehler (2009) reported that when adult subjects are exposed to an unsegmented artificial language composed of trisyllabic words such as ABX, YBC, and AZC, they are unable to distinguish, in a subsequent test, between these words and what they coined the "phantom-word" ABC. This suggests that statistical learning generates knowledge about the transitional probabilities (TPs) within each pair of syllables (AB, BC, and A_C), which are common to words and phantom-words, but, crucially, does not lead to the extraction of genuine word-like units. This conclusion is squarely inconsistent with chunk-based models of word segmentation, as confirmed by simulations run with the MDLChunker (Robinet, Lemaire, & Gordon, 2011) and PARSER (Perruchet & Vinter, 1998), which successfully discover the words without computing TPs. Null results, however, can have multiple causes, and notably, in the case of Endress and Mehler, the reduced intelligibility of their synthesized speech stream. In three experiments, we observed positive results in conditions similar to those of Endress and Mehler after only 5 min of exposure to the language, hence providing strong evidence that statistical information is sufficient to extract word-like units.
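To make the notion of transitional probabilities concrete, here is a minimal sketch of how forward TPs can be estimated from a syllable stream. This is an illustration only, not the paper's or any model's implementation; the syllable labels reuse the abstract's schematic A/B/C/X/Y/Z notation, and the toy stream is an assumption, not the actual stimuli.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward transitional probabilities
    TP(b | a) = count(ab) / count(a-as-first-element)
    over a flat sequence of syllables."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy unsegmented stream built by repeating the schematic words ABX, YBC, AZC
# (illustrative only; the real language used concatenated synthesized speech).
stream = ["A", "B", "X", "Y", "B", "C", "A", "Z", "C"] * 20
tps = transitional_probabilities(stream)
# In this stream "A" is followed by "B" and by "Z" equally often,
# so TP(B | A) = 0.5: the pair AB is shared by the word ABX and
# the phantom-word ABC, which is why TPs alone cannot separate them.
```

A chunk-based learner such as PARSER, by contrast, would store recurring multi-syllable units rather than pairwise probabilities, which is the distinction the abstract turns on.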

Volume

66

Pages

807-818

Keywords

Statistical learning, Artificial language, Word segmentation, Chunking, Modeling
