Most research on language development has focused on spoken word learning, with comparatively little work investigating the role of gesture and signed words. The current study examines how the modality of communication (spoken versus signed words) and the modality of presentation of semantic information (written definitions versus pictures) affect performance in a word learning task. This study aims to address the following research questions:
(1) How does the modality of word presentation (spoken word vs. signed word) impact the learning of an associated meaning?
(2) How does the modality of meaning presentation (picture vs. written definition) impact the learning of an associated word?
(3) How does crossing modalities (spoken word → picture, signed word → written definition) impact the speed and accuracy of learned associations?

Participants will watch short videos of a person saying a non-word or producing a sign, paired with either a picture or a written definition.
Outcome variables are accuracy and response times for recall of meanings. This study will address the relative speed of word learning when words are represented as signs versus phonological strings.