Triadic Ontogenetic Ritualization: An Overlooked Possibility

Ekaterina Abramova
Radboud University Nijmegen

Keywords: ontogenetic ritualization, gestures, protolanguage, tool use

Short description: Triadic ontogenetic ritualization could lead to gestural protolanguage in the context of joint tool use. No need for communicative intentions.

Abstract:

No abstract (two-page paper).

Citation:

Abramova E. (2016). Triadic Ontogenetic Ritualization: An Overlooked Possibility. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/37.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Abramova 2016

Brain Mechanisms Of Human Acoustic Communication: A Phylogenetic Approach And Its Ontogenetic Implications

Hermann Ackermann1 and Wolfram Ziegler2
1 Hertie Institute for Clinical Brain Research, University of Tuebingen
2 Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University Munich

Keywords: primate acoustic communication, evolution of articulate speech, ontogenetic vocal development, basal ganglia, corticostriatal loops

Abstract:

Human clinical data suggest that the basal ganglia provide a platform for the integration of primate-general mechanisms of acoustic communication with human-specific motor capacities during the production of articulate speech. Thus, besides monosynaptic refinement of the cortical projections to the brainstem nuclei that steer the laryngeal muscles (Kuypers/Jürgens hypothesis; Fitch et al., 2010), vocal-laryngeal elaboration of cortico-basal ganglia circuits, driven conceivably by human-specific FOXP2 mutations, has been considered a further prerequisite for the evolution of spoken language (Ackermann et al., 2014). This concept helps to elucidate the deep entrenchment of articulate speech within a nonverbal acoustic matrix of emotional prosody. Although the notion that ontogeny recapitulates the adult stages of phylogeny was refuted decades ago, constraints that canalized the evolution of spoken language must nevertheless be expected to have an impact upon speech acquisition (e.g., Bates et al., 1991). Indeed, the available data on the functional-neuroanatomic underpinnings of preverbal vocal communication support this notion. First, infants combine phylogenetically different circuits, i.e., mammalian-general and human-specific ones, in order to emulate the mature behavior of spoken language. Second, the maturation of the human-specific components appears to proceed across two levels: (i) neonates already master the operation of a glottal sound source; (ii) the subsequent myelogenetic and cytoarchitectural elaboration of corticobulbar tracts and corticostriatal networks then allows for the implementation of syllabic vocal tract movement sequences, based upon the precise adjustment of laryngeal functions and supralaryngeal articulatory excursions. Conceivably, canonical babbling, as the final stage of preverbal vocal development (see Oller, 2000, table 1.3), then helps children to segment the utterances of an ambient language into syllable sequences. These observations point to the tinkering nature of acoustic communication during infancy, i.e., the opportunistic deployment of the resources available at any given moment (Thelen, 1981).

Citation:

Ackermann H. and Ziegler W. (2016). Brain Mechanisms Of Human Acoustic Communication: A Phylogenetic Approach And Its Ontogenetic Implications. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/11.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ackermann, Ziegler 2016

Towards A Rigorous Motivation For Zipf's Law

Phillip M. Alday
University of South Australia

Keywords: empirical law, neurobiology, information theory

Short description: A principled, rigorous motivation for Zipf's Law via a processing strategy based on entropy maximization!

Abstract:

Language evolution can be viewed from two vantage points: the development of a communicative system and the biological adaptations necessary for producing and perceiving said system. The communicative-system vantage point has enjoyed a wealth of mathematical models based on simple distributional properties of language, often formulated as empirical laws. However, beyond vague psychological notions of "least effort", no principled explanation has been proposed for the existence and success of such laws. Meanwhile, psychological and neurobiological models have focused largely on the computational constraints presented by incremental, real-time processing. In the following, we show that information-theoretic entropy underpins successful models of both types and provides a more principled motivation for Zipf's Law.
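To make the information-theoretic link concrete, the following Python sketch (an illustration only; the vocabulary size and Zipf exponent are arbitrary assumptions, not values from the paper) computes the Shannon entropy of a Zipf-distributed lexicon:

import numpy as np

# Hypothetical illustration: a Zipfian rank-frequency distribution
# p(r) proportional to 1 / r**alpha over a finite vocabulary, and its entropy.
V = 10000        # vocabulary size (assumed)
alpha = 1.0      # Zipf exponent (assumed)

ranks = np.arange(1, V + 1)
p = ranks ** -alpha
p /= p.sum()     # normalize to a probability distribution

entropy = -(p * np.log2(p)).sum()   # Shannon entropy in bits per word
print(f"Entropy of a Zipfian lexicon with {V} types: {entropy:.2f} bits")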

Citation:

Alday P. M. (2016). Towards A Rigorous Motivation For Zipf's Law. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/178.html


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Alday 2016

Pre And Post Partum Whistle Production Of A Bottlenose Dolphin (Tursiops Truncatus) Mother-calf Dyad

Audra Ames1, Sara Wielandt2, Dianne Cameron2 and Stan Kuczaj1
1 University of Southern Mississippi
2 Six Flags: Discovery Kingdom

Keywords: vocal learning, dolphin communication, signature whistle

Abstract:

Vocal learning has been defined as the process by which an individual modifies its vocal repertoire as a result of its sound environment (Marler, 1976a; McCowan & Reiss, 1997). Vocal learning has been found in a number of species, including the Atlantic bottlenose dolphin (Tursiops truncatus) (see review: Janik & Slater, 1997). Dolphins mimic sounds heard in their sound environment and modify their vocal repertoire based on this mimicry. Much of the research on vocal learning in bottlenose dolphins has focused on whistle development in dolphin calves (Fripp et al., 2005; McCowan & Reiss, 1995; Morisaka, Shinohara, & Taki, 1995; Tyack & Sayigh, 1997). In particular, this literature focuses on the sounds calves may select as part of their signature whistles and has suggested that a calf's signature whistle is strongly influenced by the sounds it hears, especially those produced by conspecifics (Bojanowski, Veit, & Todt, 2000; Fripp et al., 2005; McCowan & Reiss, 1995; Sayigh, Tyack, Wells, & Scott, 1990; Sayigh, Tyack, Wells, Scott, & Irvine, 1995). Few studies have addressed the ontogeny of whistle development during a calf's early life (McCowan & Reiss, 1995; Morisaka et al., 1995), and those that have rarely address the role of adult signature whistles in this development. For example, bottlenose dolphin mothers increase their signature whistle rates around the birth of their calf (Fripp & Tyack, 2008; Gnone & Moriconi, 2010; Mello & Amundin, 2005), and it has been suggested that this increase could serve as a model for dolphin calves in the development of their own signature whistle (Fripp & Tyack, 2008). Our findings suggest that this may not be the case, and that increased signature whistle rates in bottlenose dolphin mothers must exist for some other reason. In this study, we investigate a calf's developing whistle repertoire as a function of the adult whistles it hears. The signature whistles of a group of five adult bottlenose dolphin females in managed care were recorded over a four-month period (two months prior to and two months after the birth of a calf to one of the group members). We gathered video recordings with hydrophone input, which allowed us to observe sound in the environment as it was produced simultaneously with behavior. As has been previously reported for other pregnant dolphins, the mother exhibited a significant increase (p<.05) in whistle production before the birth of the calf. The mother also produced whistles at a significantly elevated rate (p<.05) following the birth, but this rate decreased over time, consistent with findings from similar studies (Fripp & Tyack, 2008; Gnone & Moriconi, 2010). The remaining four adult females produced whistles at relatively low levels pre-partum, and their whistle rates increased as the mother gradually decreased her vocal production over the first two months of the calf's life. These findings are more consistent with alternative hypotheses regarding this phenomenon (e.g., imprinting; Mann & Smuts, 1998; Fripp & Tyack, 2008) than with whistle modeling. We used a discriminant analysis to determine which signature whistles present in the calf's environment, if any, existed in her early repertoire. Parameters from whistles identified as belonging to the calf, or as matching an adult signature contour, were extracted from the data using sound analysis software (RavenPro 1.5). These parameters included the beginning, end, minimum, maximum and delta frequencies of the whistle, along with whistle duration and inflection points. We did not find that the calf developed a predominant whistle type in the early months of her life; instead, she used each of the adults' signature whistles, in addition to several whistles that were dissimilar to the adults' signature sounds, throughout the course of the study. Based on our findings, repeated exposure to a sound does not appear to guarantee that the calf will use it. The adult contours most commonly mimicked by the calf were those of a female who produced moderate levels of her signature whistle during the study. This is consistent with findings suggesting that a calf may select its signature whistle from sounds that are not overabundant in its environment (Bojanowski et al., 2000; Fripp et al., 2005). The selection process for the sounds that calves include in their vocal repertoire is still largely unknown, but studies that address this gap may help us piece together how dolphins develop their communication system.
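One way such a discriminant analysis could be set up is sketched below; this is not the authors' pipeline, and the data, group labels, and use of scikit-learn are assumptions for illustration only.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: rows = whistles, columns = the seven parameters named above
# (begin, end, min, max and delta frequency, duration, inflection points).
rng = np.random.default_rng(0)
adult_whistles = rng.normal(loc=[8, 12, 6, 15, 9, 1.2, 3], scale=0.5, size=(40, 7))
adult_ids = rng.integers(0, 5, size=40)   # which of five adults produced each whistle

lda = LinearDiscriminantAnalysis()
lda.fit(adult_whistles, adult_ids)

# Assign each calf whistle to the adult signature contour it most resembles.
calf_whistles = rng.normal(loc=[8, 12, 6, 15, 9, 1.2, 3], scale=0.8, size=(10, 7))
print(lda.predict(calf_whistles))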

Citation:

Ames A., Wielandt S., Cameron D. and Kuczaj S. (2016). Pre And Post Partum Whistle Production Of A Bottlenose Dolphin (Tursiops Truncatus) Mother-calf Dyad. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/102.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ames, Wielandt, Cameron, Kuczaj 2016

Noise In Phonology Affects Encoding Strategies In Morphology

David Ardell, Noelle Anderson and Bodo Winter
University of California, Merced

Keywords: transmission noise, compositionality, structure preservation, iterated learning model, cultural evolution, population structure, morphology, phonology, Linguistic Niche Hypothesis

Short description: Noise in Phonology Affects Encoding Strategies in Morphology: exploring effects of noise, population size and structure using Iterated Learning Models

Abstract:

As with the evolution of a population’s genetic variability, the evolution of human linguistic variability must be shaped by multiple interacting forces. The iterated learning paradigm (for overview, see Kirby, Griffiths & Smith, 2014) demonstrates that languages can evolve compositional structure when there is a learning bottleneck: Learners infer a linguistic system from limited input, requiring them to generalize beyond what they observe. Through this, linguistic patterns that are systematically structured become more frequent in the process of cultural evolution.

Besides the ‘transmission bottleneck’ (Hurford, 2002), the social composition of languages has been argued to be another force acting upon language structure. The 'Linguistic Niche Hypothesis' (Lupyan & Dale, 2010) proposes that morphological complexity is inversely correlated with population size. The mechanism behind this correlation is commonly assumed to be a learning difficulty of adult second language learners in acquiring specifically morphology (Bentz & Winter, 2013; Trudgill, 2011). However, crucially, the major share of evidence for the Linguistic Niche Hypothesis is correlational, leaving the underlying mechanism underspecified (Nettle, 2012).

An additional mechanism explaining the loss of morphological complexity in larger populations may be phonological variability. Adult learners introduce heterogeneity (effectively noise) into the phonological system (Nettle, 2012: 1833-1835). Larger populations harbor more pronunciation variants, paralleling the higher ‘noise’ present in large populations in the form of stochastic genetic variation. In a large population of speakers, noise is incorporated via contact with other dialects or through second language learners with different accents. Because morphological markers generally rest on limited phonetic material, they are susceptible to ambiguity if phonological turnover in a population of speakers is high. Using a sequential strategy (i.e., different words or word-order changes) to mark the same contrast in meaning is thus a more robust encoding strategy in high-noise signaling channels (Nettle, 2012).

A signal space in an iterated learning framework has, in principle, multiple dimensions along which it could evolve to preserve the structure of a meaning space. We wish to demonstrate clearly that ILM chains evolve so as to be robust to transmission noise by allocating important differences in meaning to the most reliable dimensions of transmission in signal space. We argue that the presence of noise may cause the self-organization in encoding known as structure preservation, as is also seen in genetic codes (Sella & Ardell, 2002).

Although effects of dimensionality and noise have been discussed (e.g., Little, Eryilmaz & de Boer, 2015), a systematic quantitative study of how meanings get embedded in signal spaces of different sizes and structures in the ILM is still missing. Integrating ideas from the evolution of the genetic code, we propose a computational architecture that addresses the role of noise in the ILM framework when the dimensions of the signal space and the population size are modulated. We aim specifically to demonstrate the transition from a morphological/paradigmatic to a syntagmatic/sequential strategy as phonological turnover increases. We predict that within parameter regions without added noise, ILM chains break evenly across these two orthogonal dimensions of compositionality. Under our hypothesis, the introduction of noise into the transmission of one of these dimensions will disrupt the stability of induction and expression, and the languages will evolve robustness to this noise. We discuss our hypothesis in light of recent contradictory experimental results (Atkinson, Kirby & Smith, 2015). Through our model, we attempt to demonstrate that noise in phonology biases against paradigmatic systems whose morphological markers rely on minimal phonological elements. Rather than contradicting the Linguistic Niche Hypothesis, our proposed study would provide an alternative mechanism for population-dependent effects on the evolution of language structure.
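A minimal Python sketch of the moving parts described above (a transmission bottleneck, noise confined to one signal dimension, and iterated learning across generations); it illustrates the setup only and is not the authors' model or parameterization:

import random

# Meanings are (x, y) pairs; signals are two-letter strings; transmission noise
# corrupts only the SECOND letter, i.e. one unreliable signal dimension.
random.seed(1)
MEANINGS = [(x, y) for x in range(3) for y in range(3)]
LETTERS = "abc"

def noisy(signal, p=0.3):
    # noise hits only the second (unreliable) signal dimension
    if random.random() < p:
        return signal[0] + random.choice(LETTERS)
    return signal

def learn(observations):
    # the learner memorizes observed meaning-signal pairs and fills gaps randomly
    lexicon = dict(observations)
    for m in MEANINGS:
        lexicon.setdefault(m, random.choice(LETTERS) + random.choice(LETTERS))
    return lexicon

lexicon = {m: random.choice(LETTERS) + random.choice(LETTERS) for m in MEANINGS}
for generation in range(50):
    sample = random.sample(MEANINGS, 6)                      # transmission bottleneck
    observations = [(m, noisy(lexicon[m])) for m in sample]  # noisy production
    lexicon = learn(observations)

print(lexicon)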

References

Atkinson, M., Kirby, S. & Smith, K. (2015). Speaker Input Variability Does Not Explain Why Larger Populations Have Simpler Languages. PLoS One, 10(6), e0129463.

Bentz, C., & Winter, B. (2013). Languages with more second language learners tend to lose nominal case. Language Dynamics & Change, 3, 1-27.

Brighton, H., Smith, K., & Kirby, S. (2005). Language as an evolutionary system. Physics of Life Reviews, 2, 177-226.

De Boer, B., & Verhoef, T. (2012). Language dynamics in structured form and meaning spaces. Advances in Complex Systems, 15, 1150021.

Hurford, J.R. (2002). Expression/induction models of language evolution: dimensions and issues. In Briscoe, E.J. (ed.), Linguistic evolution through language acquisition (pp. 301–344). Cambridge: Cambridge University Press.

Kirby, S., Griffiths, T.L., & Smith, K. (2014). Iterated learning and the evolution of language. Current Opinion in Neurobiology, 28, 108-114.

Little, H., Eryilmaz, K. & de Boer. B. (2015). Linguistic Modality Affects the Creation of Structure and Iconicity in Signals. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1392-1398). Austin, TX: Cognitive Science Society.

Lupyan, G., & Dale, R. (2010). Language structure is partly determined by social structure. PloS one, 5(1), e8559.

Nettle, D. (2012). Social scale and structural complexity in human languages. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1597), 1829–1836.

Citation:

Ardell D., Anderson N. and Winter B. (2016). Noise In Phonology Affects Encoding Strategies In Morphology. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/165.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ardell, Anderson, Winter 2016

Evolution Of Language From The Aphasia Perspective

Alfredo Ardila
Florida International University

Keywords: Aphasia, Brain evolution, Lexicon, Grammar

Short description: Relating the origins of human language to contemporary cognitive neuroscience data, particularly data on the aphasias.

Abstract:

In spite of the potentially significant contribution that knowledge of aphasia can make towards understanding the origin of human language, there has been limited interest in using the aphasia model to approach language evolution. Some authors (e.g., Bickerton, 2007) have emphasized that there are two central issues in language evolution: (a) how did symbolic units (words or manual signs) evolve? (b) how did syntax evolve? It has been suggested that symbolic units (i.e., lexicon) and syntax (i.e., grammar) are the only real novelties in human communication systems, and are therefore the most important points to address in a theory of language evolution. That is, a theory of language evolution should explain how the lexicon and the grammar appeared in human history.



Aphasia is generally defined as the loss or impairment of language caused by brain damage. Different subtypes of aphasia syndrome are often mentioned in neurology and the cognitive neurosciences, including Broca’s aphasia, Wernicke’s aphasia, conduction aphasia, amnesic aphasia, transcortical aphasia, etc. The exact number of aphasia subtypes depends on the particular classification, but usually between four and seven different aphasic syndromes are distinguished. This apparent diversity of aphasic syndromes seems to have obscured the major and basic distinction in aphasia: there are only two major aphasic syndromes.

These two fundamental aphasic syndromes are associated with a disturbance either at the level of the language elements (lexical/semantic), in Wernicke’s aphasia, or at the level of the associations between the language elements (morphosyntactic/grammatical), in Broca’s aphasia. It has further been observed that these two basic dimensions of language (lexical/semantic and grammatical) are related to two basic linguistic operations: selecting (that is, language as paradigm) and sequencing (that is, language as syntagm). Lexicon and grammar not only depend on different brain circuitries and areas (temporal and frontal-subcortical) and are impaired in different types of brain pathology (Wernicke’s and Broca’s aphasia), but are also mediated by different types of learning (declarative and procedural).



It can be proposed that three stages of language evolution be distinguished: (a) primitive communication systems similar to those observed in other animals, including non-human primates; (b) initial communication systems using sound combinations (lexicon), probably appearing thousands or even millions of years ago, correlated with the enlargement of the temporal lobe; (c) complex communication systems including not only a lexicon but also word combinations (grammar). Most likely, this last stage of language evolution is observed only in Homo sapiens. Grammar probably originated from the internal representation of actions, resulting in the creation of verbs. Grammar, in turn, may represent the basic ability underlying the development of so-called metacognitive executive functions (such as abstracting, problem solving, temporality of behavior, etc.).



Bickerton, D. (2007). Language evolution: A brief guide for linguists. Lingua, 117, 510–526.

Citation:

Ardila A. (2016). Evolution Of Language From The Aphasia Perspective. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/8.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ardila 2016

Towards An Action-oriented Approach To The Evolution Of Language And Music

Rie Asano
University of Cologne

Keywords: Language, Music, Action, Broca's area, Dorsal auditory stream, Cognitive phylogeny, Musical rhythm, Motor control, Sensory-motor integration

Abstract:

Language and music, considered as cognitive systems, form a mosaic consisting of multiple components with different evolutionary origins (Boeckx, 2013; Fitch, 2006). From a comparative language-music perspective, some of these components might be shared and based on the same evolutionary genesis, while others might be distinct and have emerged independently in the course of evolution. To date, theoretical as well as empirical research has worked out several candidates for shared and distinct components (Jackendoff, 2009; Koelsch, 2012; Patel, 2008; Peretz, 2013). However, their evolutionary origins and the way these components work together are still unclear. In the current paper, this issue is discussed within an action-oriented framework. An action-oriented approach is promising for the following reasons: First, current findings suggest that shared structural and processing aspects are not specific to language and music, i.e., complex action is also organized in an asymmetrical hierarchy, comprises temporal integration processes, and involves neural resources (e.g., Broca’s area) shared with language and music (Jackendoff, 2009; Koelsch, 2012). Second, action-based research provides us with an opportunity to consider the issue of evolutionary continuity. Based on findings from cognitive neuroscience (Bornkessel-Schlesewsky & Schlesewsky, 2013; Hickok & Poeppel, 2007; Patel & Iversen, 2014; Rauschecker & Scott, 2009), and especially by focusing on the function of Broca’s area and the dorsal auditory stream, I suggest, in line with other hypotheses (Boeckx & Fujita, 2014; Fitch & Martins, 2014), that action constitutes the cognitive phylogeny of the capacity for processing hierarchical structures of temporal sequences in language and music. Moreover, the way this domain-general capacity is put to use in different cognitive systems is investigated within an action-based framework exploring language and music in terms of the goal of action, action planning, motor control, and sensory-motor integration. This framework enables us to investigate similarities and differences between cognitive systems at the same time. I focus on domain-general, action-based mechanisms for motor control and sensory-motor integration that play an important role in the evolution of musical rhythm (Merchant et al., 2015; Patel & Iversen, 2014) and discuss their relationship to the evolution of speech and language.



References

Boeckx, C. (2013). Biolinguistics: forays into human cognitive biology. Journal of Anthropological Sciences, 91, 1–28.

Boeckx, C., & Fujita, K. (2014). Syntax, action, comparative cognitive science, and Darwinian thinking. Frontiers in Psychology, 5, 627.

Bornkessel-Schlesewsky, I., & Schlesewsky, M. (2013). Reconciling time, space and function: a new dorsal-ventral stream model of sentence comprehension. Brain and Language, 125(1), 60–76.

Fitch, W. T. (2006). The biology and evolution of music: a comparative perspective. Cognition, 100(1), 173–215.

Fitch, W. T., & Martins, M. D. (2014). Hierarchical processing in music, language, and action: Lashley revisited. Annals of the New York Academy of Sciences, 1316(1), 87–104.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews. Neuroscience, 8, 393–402.

Jackendoff, R. (2009). Parallels and Nonparallels between Language and Music. Music Perception, 26(3), 195–204.

Koelsch, S. (2012). Brain and Music. Chichester, West Sussex; Hoboken, NJ: Wiley-Blackwell.

Merchant, H., Grahn, J., Trainor, L., Rohrmeier, M., & Fitch, W. T. (2015). Finding the beat: a neural perspective across humans and non-human primates.

Patel, A. D. (2008). Music, language, and the brain. Oxford, New York: Oxford University Press.

Patel, A. D., & Iversen, J. R. (2014). The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis. Frontiers in Systems Neuroscience, 8, 57.

Peretz, I. (2013). The Biological Foundations of Music: Insights from Congenital Amusia. In D. Deutsch (Ed.), The Psychology of Music (3rd ed., pp. 551–564). London: Academic Press.

Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nature Neuroscience, 12(6), 718–724.

Citation:

Asano R. (2016). Towards An Action-oriented Approach To The Evolution Of Language And Music. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/146.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Asano 2016

On A Music-ready Brain: Neural Basis, Mechanisms, And Their Contribution To The Language Evolution

Rie Asano1 and Edward Ruoyang Shi2
1 University of Cologne
2 Universitat de Barcelona

Keywords: Music-ready brain, Language-ready brain, Vocal learning, Social learning, Basal ganglia, Dorsal auditory pathway, Mirror neurons

Abstract:

In the past decades, the concept of a ‘language-ready’ brain has stimulated language research from the perspective of evolution and development. However, parallel research on the music-ready brain is still in its infancy (Arbib & Iriki, 2013; Seifert & Kim, 2006). To promote a comparison at the level of music- and language-readiness, we suggest beat induction as a promising starting point. In the current paper, we first propose four fundamental mechanisms for beat induction, namely hierarchical structure processing, auditory-motor coupling, prediction, and social interaction. Second, we discuss two approaches investigating the evolutionary origins and neurocognitive mechanisms of beat induction. Their relation to components of a language-ready brain is discussed in terms of two (out of seven) criteria introduced by Arbib (2005). One approach emphasizes the role of the basal ganglia and the dorsal pathway, as well as the motor cortico-basal ganglia-thalamo-cortical circuit, in giving rise to the domain-general properties involved in beat induction and vocal learning, for example prediction and auditory-motor coupling (Merchant et al., 2015; Merchant & Honing, 2014; Patel & Iversen, 2014; Patel, 2006). Concerning a component of the language-ready brain, these neural mechanisms are hypothesized to be involved in sequence processing, i.e., mapping hierarchical structure to temporal order. The other approach stresses mechanisms of social interaction as central to investigating the nature of beat induction (Fitch, 2012). We propose to extend the social approach by pointing out the relevance of social learning (Tomasello, 1996) with our Social Learning Hypothesis, which claims that imitation-based social learning mechanisms, which emerged on the scaffolding of mirror neuron systems shared with monkeys and apes, are involved in beat induction. Concerning another component of the language-ready brain, we suggest that complex imitation and its neural correlates in connection with mirror neuron systems should receive more attention in future research on beat induction. The integrative approach combining biological and social perspectives introduced in our paper provides important implications for the growing fields of social cognitive neuroscience and cultural neuroscience (Lieberman, 2007), which play a significant role in research on language and music evolution.



References

Arbib, M. A. (2005). From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behavioral and Brain Sciences, 28(02), 105–124.

Arbib, M. A., & Iriki, A. (2013). Evolving the Language- and Music-Ready Brain. In M. A. Arbib (Ed.), Language, Music, and the Brain (pp. 481–497). Cambridge, MA: The MIT Press.

Fitch, W. T. (2012). The biology and evolution of rhythm: unravelling a paradox. In P. Rebuschat, M. Rohrmeier, & I. Cross (Eds.), Language and music as cognitive systems (pp. 73–95). Oxford, New York: Oxford University Press.

Lieberman, M. D. (2007). Social Cognitive Neuroscience: A Review of Core Processes. Annual Review of Psychology, 58(1), 259–289.

Merchant, H., Grahn, J., Trainor, L., Rohrmeier, M., & Fitch, W. T. (2015). Finding the beat: a neural perspective across humans and non-human primates.

Merchant, H., & Honing, H. (2014). Are non-human primates capable of rhythmic entrainment? Evidence for the gradual audiomotor evolution hypothesis. Frontiers in Neuroscience, 7(274).

Patel, A. D. (2006). Musical rhythm, linguistic rhythm, and human evolution. Music Perception, 24(1), 99–104.

Patel, A. D., & Iversen, J. R. (2014). The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis. Frontiers in Systems Neuroscience, 8, 57.

Seifert, U., & Kim, J. H. (2006). Musical meaning : Imitation and empathy. In Proceedings of the 9th International Conference on Music Perception & Cognition (pp. 1061–1070). Bologna: ICMPC & ESCOM.

Tomasello, M. (1996). Do Apes Ape? In C. M. Heyes (Ed.), Social Learning in Animals: The Roots of Culture (pp. 319–346). London: Academic Press.

Citation:

Asano R. and Shi E. R. (2016). On A Music-ready Brain: Neural Basis, Mechanisms, And Their Contribution To The Language Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/200.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Asano, Shi 2016

Adult Language Learning And The Evolution Of Linguistic Complexity

Mark Atkinson1, Kenny Smith2 and Simon Kirby2
1 University of Stirling
2 University of Edinburgh

Keywords: linguistic complexity, language acquisition, language adaptation, cultural evolution, sociolinguistic typology, experiments

Abstract:

The pressures shaping languages may differ in different physical, demographic, and sociocultural environments. In other words, non-linguistic factors may systematically determine linguistic features. Identification of such factors, and the mechanisms by which they operate, will provide valuable insights into how languages evolve to exhibit differing degrees of grammatical complexity, and also shed light on the structural properties of the earliest languages. We present three experiments which investigate the mechanisms linking sociocultural factors and linguistic structure: specifically, we attempt to explain why languages of small social groups tend to be morphologically complex and opaque, while the languages of larger groups tend to be morphologically simpler, more regular and transparent.

Citation:

Atkinson M., Smith K. and Kirby S. (2016). Adult Language Learning And The Evolution Of Linguistic Complexity. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/48.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Atkinson, Smith, Kirby 2016

Evolution Towards An Optimal Management Of Linguistic Information

Lluis Barcelo-Coblijn
University of the Balearic Islands

Keywords: small-world networks, connectome, atypical development, language disorders, linguistic phenotype

Short description: Human brains evolved into small-world networks in order to deal with a huge quantity of linguistic information

Abstract:

Network science shows great applicability, also in linguistic areas. Network-based approaches to the brain are able to extract a pattern, called the connectome. Typically, humans develop a small-world pattern, an optimal network in terms of the management of information. However, in atypical development the pattern changes. Syntactic networks show that children develop their language capacity until reaching the stage of a small-world network. It is suggested that during evolution human brains evolved so as to develop an optimal brain able to deal with a huge quantity of linguistic information. However, there is still no information about atypical linguistic networks. The present work examines three different linguistic disorders (Down syndrome, hearing impairment and specific language impairment) and hence makes it possible to compare different biological conditions that affect the global patterns of the linguistic phenotypes.
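For readers unfamiliar with the network measures involved, the sketch below shows how a small-world coefficient can be computed with networkx; the toy Watts-Strogatz graph stands in for a real syntactic or brain network and is not the author's data or code.

import networkx as nx

# Toy network standing in for a syntactic or brain network (assumed parameters).
G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)

C = nx.average_clustering(G)              # local clustering
L = nx.average_shortest_path_length(G)    # global path length
print(f"clustering C = {C:.3f}, path length L = {L:.3f}")

# sigma > 1 is commonly taken as evidence of small-world organization
# (high clustering with short paths, relative to random and lattice baselines).
sigma = nx.sigma(G, niter=2, nrand=3, seed=42)
print(f"small-world coefficient sigma = {sigma:.2f}")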

Citation:

Barcelo-Coblijn L. (2016). Evolution Towards An Optimal Management Of Linguistic Information. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/148.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Barcelo-Coblijn 2016

A Lotka-volterra Model Of The Evolutionary Dynamics Of Compositionality Markers

Andreas Baumann1, Christina Prömer1, Kamil Kazmierski2 and Nikolaus Ritt1
1 University of Vienna
2 Adam Mickiewicz University in Poznan

Keywords: Compositionality, Population dynamics, Evolutionary invasion analysis, Phonotactics, Morphonotactics, Diachronic phonology

Abstract:

Morpho-syntactic boundaries can either be signaled by alignment to boundaries in regular prosodic patterns or by being ‘irregularly’ misaligned, in which case they are often signaled instead through highly dispreferred, or marked, structures such as consonant clusters. In some languages these structures additionally appear in simple forms, which compromises their compositionality-signaling function. This paper models the dynamics of such structures in complex and simple forms by means of a Lotka-Volterra model, which is analyzed evolutionarily. Finally, the evolutionary dynamics of the model are tested against diachronic language data.
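For orientation, a generic two-type Lotka-Volterra competition system of the kind invoked here can be written as follows (a textbook form, not necessarily the authors' exact parameterization), with $x$ and $y$ standing, say, for the frequencies of the marked structure in morphologically complex and in simple forms:

\dot{x} = x\,(r_x - a_{xx}\,x - a_{xy}\,y), \qquad \dot{y} = y\,(r_y - a_{yx}\,x - a_{yy}\,y),

where $r_x, r_y$ are growth rates and the $a_{ij}$ are competition coefficients. An evolutionary invasion analysis then asks whether a rare variant (e.g. $y \approx 0$) can increase in frequency at the resident equilibrium.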

Citation:

Baumann A., Prömer C., Kazmierski K. and Ritt N. (2016). A Lotka-volterra Model Of The Evolutionary Dynamics Of Compositionality Markers. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/17.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Baumann, Prömer, Kazmierski, Ritt 2016

The Antiquity Of Musicality And Its Role In Prehistoric Culture

Ted Bayne
independent researcher

Keywords: Musicality, Ritual, Hunter-gatherer, Dance-song, Semiosis

Short description: The antiquity of the underpinnings of musicality and the role of musicality in prehistoric hunter-gatherer culture.

Abstract:

Musicality consists of a distributed array of faculties and general substrates, each with its own deep history. In part one, a few key components of this array are reviewed in order to appreciate their antiquity, including the salient neurobiological precursors to the underlying sociality. In part two, the role of this musicality is examined in hunter-gatherer cultures, whose peoples also have deep histories and leverage these faculties in polyphonic-polyrhythmic proficiency (Arom, 2004). It is assumed that between 200 and 100 kya this array of faculties morphed into enculturated musical forms without leaving a trace. But in extant hunter-gatherers a semiotic plasticity is found in which spoken language is just one modality blended with others. Musicality is critical to rituals that form the praxis of social memory. In collective performance, song/dance integrates the worlds of the spirits, the forest, morality, the hunt, and social homeostasis. Words alone could not achieve the affective and symbolic efficacy required. Very old anthropomorphic structures coalesce the seen natural world and the unseen (but “experienced”) spirit world. Ritualized musical forms demand a semiotic channel to bind human care to this humanized cosmology. A final section considers the implications of these topics for language evolution.

Citation:

Bayne T. (2016). The Antiquity Of Musicality And Its Role In Prehistoric Culture. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/78.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Bayne 2016

Evolution Of What?

Christina Behme
Mount Saint Vincent University

Keywords: conceptual clarity, language organ, animal communication, analogy

Abstract:

In this paper I suggest that substantial progress eludes us because there is no broadly accepted consensus about the ‘object’ of evolution: language. While virtually everyone assumes something in our biology accounts for our ability to use language, the exact nature of the “language organ” or “language instinct” remains a matter of controversy and many questions about the ontological status of language and the exact relationship between language and biology still await satisfactory answers. I discuss three areas which could contribute greatly to overall progress.

Citation:

Behme C. (2016). Evolution Of What?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/140.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Behme 2016

The Low-complexity-belt: Evidence For Large-scale Language Contact In Human Prehistory?

Christian Bentz
University of Cambridge

Keywords: language complexity, entropy, information-theory, geography, language contact

Short description: Languages around the equator tend to have lower information-theoretic complexity.

Abstract:

The quantitative measurement of language complexity has witnessed a recent rise of interest, not least because language complexities reflect the learning constraints and pressures that shape languages over historical and evolutionary time. Here, an information-theoretic account of measuring language complexity is presented. Based on the entropy of word frequency distributions in parallel text samples, the complexities of 646 languages overall are estimated. A large-scale finding of this analysis is that languages just above the equator exhibit lower complexity than languages further away from the equator. This geo-spatial pattern is here referred to as the Low-Complexity-Belt (LCB). The statistical significance of the positive latitude/complexity relationship is assessed in a linear regression and a linear mixed-effects regression, suggesting that the pattern holds between different families and areas, but not within different families and areas. The lack of systematic within-family effects is taken as potential evidence for a phylogenetically "deep" explanation. The pressures shaping language complexities probably pre-date the expansion of language families from their proto-languages. Large-scale prehistoric contact around the equator is tentatively given as a possible factor involved in the evolution of the LCB.
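As a simplified illustration of the measure (the study itself uses parallel texts across hundreds of languages and refinements not shown here; the sentence below is a made-up stand-in), unigram word entropy can be estimated as follows:

import math
from collections import Counter

# Toy text sample; a real estimate would use a parallel text of substantial size.
text = "the dog saw the cat and the cat saw the dog".split()

counts = Counter(text)
total = sum(counts.values())
H = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"unigram entropy: {H:.3f} bits per word")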

Citation:

Bentz C. (2016). The Low-complexity-belt: Evidence For Large-scale Language Contact In Human Prehistory?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/93.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Bentz 2016

Redundant Features Are Less Likely To Survive: Empirical Evidence From The Slavic Languages

Aleksandrs Berdicevskis and Hanne Eckhoff
UiT The Arctic University of Norway

Keywords: language change, linguistic complexity, redundancy, functionality, Slavic languages, computational modeling

Short description: We predict the survival and death of morphological features across modern Slavic languages by measuring their redundancy in Common Slavic

Abstract:

We test whether the functionality (non-redundancy) of morphological features can serve as a predictor of the survivability of those features in the course of language change. We apply to the Slavic language group a recently proposed method that measures the functionality of a feature by estimating its importance for the performance of an automatic parser. We find that the functionality of a Common Slavic grammeme, together with the functionality of its category, is a significant predictor of its survivability in the modern Slavic languages. The least functional grammemes within the most functional categories are the most likely to die out.
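The importance measure can be sketched schematically as below; evaluate_parser and strip_feature are hypothetical placeholders (for training and scoring a dependency parser on a treebank, and for deleting one grammeme from its annotation), not the authors' code.

# Schematic sketch of the feature-importance idea, with hypothetical helpers.
def functionality(treebank, grammeme, evaluate_parser, strip_feature):
    """Functionality = drop in parsing accuracy when the grammeme is removed."""
    full_score = evaluate_parser(treebank)
    reduced_score = evaluate_parser(strip_feature(treebank, grammeme))
    return full_score - reduced_score

# On the paper's hypothesis, grammemes with low functionality (little or no
# accuracy drop) are redundant and are the most likely to be lost over time.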

Citation:

Berdicevskis A. and Eckhoff H. (2016). Redundant Features Are Less Likely To Survive: Empirical Evidence From The Slavic Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/85.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Berdicevskis, Eckhoff 2016

A Scientometric Analysis Of Evolang: Intersections And Authorships

Till Bergmann and Rick Dale
University of California, Merced

Keywords: scientometrics, content analysis, network analysis

Abstract:

Research on the evolution of language has grown rapidly, and is now a large and diverse field. Because of this growing complexity as a scientific domain, seeking new methods for exploring the field itself may help synthesize knowledge, compare theories, and identify conceptual intersections. Using computational methods, we analyze the scientific content presented at EvoLang conferences. Drawing on 365 abstracts, we quantify publication patterns using Latent Dirichlet Allocation (LDA), which extracts a semantic summary from each abstract. We then cluster these semantic summaries to reveal the frameworks and different domains present at EvoLang. Our results show that EvoLang is an interdisciplinary field, attracting research from various fields such as linguistics and animal studies. Furthermore, we show that the framework of iterated learning and cultural evolution is among the most influential topics at EvoLang. An interactive website of our results is available here.
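A minimal sketch of an LDA pipeline of the kind described (illustrative only; the toy abstracts, vectorizer settings, and number of topics are assumptions, not the authors' choices):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Dummy abstracts standing in for the 365 EvoLang abstracts.
abstracts = [
    "iterated learning and the cultural evolution of compositional structure",
    "vocal learning and signature whistles in bottlenose dolphins",
    "basal ganglia circuits and the neural control of articulate speech",
]

X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

doc_topics = lda.transform(X)   # per-abstract topic mixtures ("semantic summaries")
print(doc_topics.round(2))      # these vectors can then be clustered by similarity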

Citation:

Bergmann T. and Dale R. (2016). A Scientometric Analysis Of Evolang: Intersections And Authorships. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/182.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Bergmann, Dale 2016

Spontaneous Dialect Formation In A Population Of Locally Aligning Agents

Richard A. Blythe1, Alistair H. Jones2 and Jessica Renton2
1 School of Physics and Astronomy and Centre for Language Evolution, University of Edinburgh
2 School of Physics and Astronomy, University of Edinburgh

Keywords: Modeling, Biases in language acquisition and use, Social interactions, Variation in language use, Dialect formation

Abstract:

See attached PDF

Citation:

Blythe R. A., Jones A. H. and Renton J. (2016). Spontaneous Dialect Formation In A Population Of Locally Aligning Agents. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/19.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Blythe, Jones, Renton 2016

How The Brain Got Grammaticalized: Globularization And (self-)domestication

Cedric Boeckx1, Constantina Theofanopoulou2 and Antonio Benítez-Burraco3
1 ICREA/Universitat de Barcelona
2 Universitat de Barcelona
3 University of Huelva

Keywords: language-ready brain, cultural evolution, grammaticalization, domestication, neural crest cells

Short description: The molecular changes bringing about a globularized (and a language-ready) brain also contributed to a domestic phenotype and thus to full-fledged languages.

Abstract:

1. Introduction

This paper seeks to explore a potential connection between two evolutionary hypotheses recently put forward linking the language phenotype and genotype. Both aim to cast light on the prerequisites for more complex communication systems; in the case of our species, grammatical systems. One (Boeckx & Benítez-Burraco, 2014 et seq.; Theofanopoulou, 2015) is the idea that the globularization of the braincase that characterizes our species is the reflex of a specific, genetically regulated brain growth pattern that provided the neural scaffolding for "cognitive modernity", most distinctively our 'language-readiness'. This distinctive growth process is well established (Hublin et al., 2015), and clinical evidence suggests that deviations from this growth trajectory entail cognitive/language deficits (see, e.g., Knight et al., 2014). The other idea concerns (self-)domestication. As Thomas (2014) discussed extensively, self-domestication in our species can prove extremely valuable in understanding the central role played by cultural learning, giving rise to the grammaticalization of our mind. Cultural learning now appears to be key in capturing all the grammatical paraphernalia that was usually (and misleadingly) assigned to "Universal Grammar". But as Thomas (2014) points out, a major problem facing any attempt to account for language structure through a cultural mechanism is that the required processes are only possible if we assume the existence of a range of preconditions, which we may call the "cultural niche" (the 'cooperative' niche, as Tomasello would call it). Thomas (2014) suggests that this niche may have been formed by the behavioral, cognitive and temperamental outcomes of self-domestication. Interestingly, Wilkins et al. (2014) have recently put forth the hypothesis that hypofunction of the neural crest cells (NCCs) during embryonic development, in response to external stimuli, may result in a constellation of distinctive traits (the "domestication syndrome").

2. Our hypothesis

Our hypothesis is that the genetic changes that have been claimed to bring about globularization affected the NCCs too, thereby fueling the emergence of the (self-)domestication syndrome in our species. To test this hypothesis, we carried out an exhaustive literature search to determine whether (some of) the "domestication syndrome" genes highlighted by Wilkins et al. are also important for globularization and/or have changed in our lineage compared to Neanderthals and Denisovans (see, e.g., Pääbo, 2014). We have also proceeded the other way around: we made an extensive search of the literature to learn how many of the candidates for globularization (and to what extent) are involved in the development and function of the neural crest and could also be regarded as "neural crest genes". The intersection of the two sets of genes (encompassing SOX10, SOX9, SOX2, MTIF, MAGOH, FGF8, EDNRB, RET, TCOF1, BMP7, BMP2, CDC42, CTNNB1, DLX5, DLX6, FGFR1, PAX6, SHH, VCAN among others) strikes us as particularly promising, given that most of these genes have been implicated in aspects of language and cognition.

3. Conclusions

The data we have gathered suggest to us that a globularized brain and braincase may be intimately connected to the developmental/genetic context for a domestic phenotype, which could then have been selected for the reasons Thomas (2014) discussed. Put another way, the language-ready brain to which a globularized brain(case) gave rise led to full-fledged modern linguistic behavior: the grammaticalization and (self-)domestication of the mind.

References

Boeckx, C., & Benítez-Burraco, A. (2014). The shape of the human language-ready brain. Frontiers in Psychology, 5, 282.

Hublin, J.-J., Neubauer, S., & Gunz, P. (2015). Brain ontogeny and life history in Pleistocene hominins. Philosophical Transactions of the Royal Society B, 370, 20140062.

Knight, S., Anderson, V., Spencer-Smith, M., & Da Costa, A. (2014). Neurodevelopmental outcomes in infants and children with single-suture craniosynostosis: A systematic review. Developmental Neuropsychology, 39, 159-186.

Pääbo, S. (2014). The human condition – A molecular approach. Cell, 157, 216-226.

Theofanopoulou, C. (2015). Brain asymmetry in the white matter making and globularity. Frontiers in Psychology, 6,1355.

Thomas, J. G. (2014). Self-domestication and Language Evolution. Ph.D. Thesis. University of Edinburgh.

Wilkins, A. S., Wrangham, R. W., & Fitch, W. T. (2014). The "domestication syndrome" in mammals: a unified explanation based on neural crest cell behavior and genetics. Genetics, 197, 795–808.

Citation:

Boeckx C., Theofanopoulou C. and Benítez-Burraco A. (2016). How The Brain Got Grammaticalized: Globularization And (self-)domestication. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/41.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Boeckx, Theofanopoulou, Benítez-Burraco 2016

Signature Whistles In An Introduction Context

Megan Broadway1, Jamie Klaus2, Billie Serafin2 and Heidi Lyn3
1 Institute for Marine Mammal Studies, University of Southern Mississippi
2 Institute for Marine Mammal Studies
3 University of Southern Mississippi

Keywords: signature whistle, bottlenose dolphin, communication, introduction

Short description: Monitoring the use of bottlenose dolphin signature whistles (individualized calls) during the introduction of a new dolphin to an existing group.

Abstract:

One key distinction that has been drawn between human language and animal communication concerns the increased flexibility of human language. In general, communicative systems in animals are considered associative and tied to specific contexts (Scott-Phillips, 2015). Specifically, complex and dynamic vocal communication systems are rare in the animal kingdom, being limited primarily to humans, birds, and delphinids (Janik, 2009). Because these flexible systems are so rare, comparisons between the taxa are important for understanding the evolutionary pressures that have led to them. Traits present across species, such as vocal learning or the ability to reference the self and others, both of which are shared by humans and dolphins, may be key factors in the evolution of more complex communication systems. Most of what we know about the communication system of delphinids comes from the study of bottlenose dolphins (Tursiops truncatus). Although researchers have attempted to decode the communication system of these animals for more than 60 years, the discovery of signature whistles (Caldwell, Caldwell, & Tyack, 1990) has been one of the most promising findings. Signature whistles are distinctive calls that are unique to each individual and, like human language, are a product of vocal learning. Signature whistles primarily act as cohesion calls and are used in sophisticated contexts, such as when groups of dolphins encounter one another in the wild (Quick & Janik, 2012). These calls likely developed due to the limited visibility of the underwater environment and the highly social nature of these animals. It has been suggested that signature whistles may be used self-referentially and to reference others (King & Janik, 2013), much as humans use names (see Janik & Sayigh, 2013). If so, this would be one of the only species to use names to identify individuals, and it would allow researchers to study the conditions under which reference to self and others arises. Still, the nuances of signature whistle usage remain largely unknown, with only a few, un-replicated experimental studies.

One context where these whistles are likely to be used is during the introduction of a new dolphin to an established group. For this study, a new dolphin was introduced to two established residents over an extended period of time by first adding the new individual to an adjoining pool where he was housed for several months and then allowing all three dolphins to swim together freely. Vocalizations and behavioral data were collected before, during, and after the introduction. Underwater vocalizations were recorded using an array of hydrophones to determine if and when signature whistles were used over the course of the extended introduction period (Fig. 1). These data will later be compared to a follow-up study where an additional dolphin was introduced to this group. Further studies based on context dependent interpretations of signature whistles will help to clarify the social and environmental factors that contribute to the evolution of flexible communication systems like human language.

Citation:

Broadway M., Klaus J., Serafin B. and Lyn H. (2016). Signature Whistles In An Introduction Context. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/166.html


This work is licensed under a Creative Commons Attribution 4.0 International License.
© Broadway, Klaus, Serafin, Lyn 2016

How Do Laughter And Language Interact?

Greg Bryant
UCLA

Keywords: laughter, pragmatics, indirect speech, vocal communication

Short description: Studies examining the interaction of human laughter with spontaneous language use reveal aspects of laughter's evolved function

Abstract:

1. Introduction

Laughter is a universal social vocalization characterized by rhythmic laryngeal and supralaryngeal activity. The sound of laughter varies within and between speakers, but maintains a reasonably stereotyped form and follows rather specific production rules (Provine, 2000). Acoustic analyses of play vocalizations across several primate species suggest that human laughter is derived from a homolog dating back at least 20 MYA (Davila-Ross et al., 2009). Human laughter has evolved increased proportions of voiced components, and these features contribute to perceptual judgments of affiliation between speakers and of positive affect (Bachorowski & Owren, 2001). But rhythmic characteristics also play an important role in judgments of spontaneity and playfulness (Bryant & Aktipis, 2014). Laughter features might interact with language use in ways that reveal important aspects of laughter's evolved function.

People laugh in conversation to achieve a variety of pragmatic goals (Flamson & Bryant, 2013), and laughter plays a complex role in negotiating relationships that goes well beyond its connection to humor (Provine, 2000). But spontaneous laughter is likely generated by an emotional vocal system that is separate from the system controlling the articulators during speech production—the so-called dual pathway model of vocal production.



2. Studies exploring the interaction of laughter and language



2.1. Laughter signals play in discourse

People tend to laugh immediately before and after using indirect speech in which speaker intentions are not explicitly stated but rich meaning is strategically conveyed. Here I will describe recent research documenting the effect of laughter on the interpretation of verbal irony, a common form of indirect speech. Verbal irony utterances that included adjacent laughter were culled from natural conversations between friends, and were then manipulated to either include the laughter or not. These utterances were played for listeners (no listener heard the same utterance twice) and they were asked to rate the indirectness of the speakers’ meaning. The presence of laughter increased listeners’ judgments of indirectness (Exp. 1). The isolated laughs from these recordings were then played to a different group of listeners and rated for playfulness (Exp. 2). Judgments of playfulness were positively associated with the degree to which laughter increased judgments of indirectness across utterances in the first experiment. These data suggest that spontaneous laughter functions to signal play in social interaction, and shed light on the relationship between pragmatics and nonhuman animal communication.



2.2. Laughter and speech production



During conversation, the relationship between interlocutors shapes the way people laugh. For example, the interaction between speech production and laughter production is affected by affiliative status. I will describe recent work using the same corpus of spontaneous conversation recordings showing that, compared to established friends, people who have just recently met will embed laughter into their speech much more frequently, suggesting a greater tendency to produce laughs from the speech system as opposed to the phylogenetically older vocal emotion system. The speech system generates laughs with highly recognizable features, and their production is potentially indicative of social manipulation.



3. Conclusion

These studies represent attempts to explore the role of laughter in signaling social intentions, and potentially cueing social manipulation. The function of human laughter is clearly connected to homologs in other primate species, and its incorporation into human linguistic communication, including pragmatic signaling, provides a fascinating example of how an ancestrally old trait can be integrated with more recent communicative abilities.



References



Bachorowski, J. A., & Owren, M. J. (2001). Not all laughs are alike: Voiced but not unvoiced laughter readily elicits positive affect. Psychological Science, 12(3), 252-257.

Bryant, G. A., & Aktipis, C. A. (2014). The animal nature of spontaneous human laughter. Evolution and Human Behavior, 35(4), 327-335.

Davila-Ross, M., Owren, M., & Zimmermann, E. (2009). Reconstructing the evolution of laughter in great apes and humans. Current Biology, 19, 1106–1111.

Flamson, T. J., & Bryant, G. A. (2013). Signals of humor: Encryption and laughter in social interaction. In M. Dynel (Ed.). Developments in linguistic humour theory (pp. 49-74). Amsterdam: John Benjamins.

Provine, R. R. (2000). Laughter: A scientific investigation. New York, NY: Penguin Press.

Citation:

Bryant G. (2016). How Do Laughter And Language Interact?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/203.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Bryant 2016

Cultural Evolution And Communication Yield Structured Languages In An Open-ended World

Jon W. Carr1 , Kenny Smith1 , Hannah Cornish2 and Simon Kirby1
1 University of Edinburgh
2 University of Stirling

Keywords: categorization, communication, compositionality, continuous, cultural evolution, expressivity, iterated learning, meaning space, morphology, open-ended, shape bias, sound symbolism, structure

Short description: Iterated learning + communication in open-ended meaning space gives rise to languages with sublexical structure like that in real languages

Abstract:

Language maps signals onto meanings through two types of structure. Firstly, the space of meanings is structured into shared categories. Secondly, the signals employed by a language are structured such that the meaning of the whole is a function of the meanings of its parts and the way in which those parts are combined. Previous work has demonstrated that structured categories (e.g., Xu, Dowman, and Griffiths 2013) or structured signals (e.g., Kirby, Cornish, and Smith 2008) can arise through iterated learning. However, the simultaneous emergence of these two types of structure has not been shown experimentally, leading to concerns that one type of emergent structure is simply an artefact of the other.

To explore this issue, we conducted a series of iterated learning experiments using a vast, open-ended, continuous meaning space. The first participant in a transmission chain was trained on 48 randomly generated signals paired with 48 triangles generated by selecting three coordinates at random in a 480×480-pixel box, yielding a space of 6 × 10^15 possible stimuli. In a test phase, the participant was asked to label 96 novel triangles, none of which had been seen during training. The output from this test phase became the input to the next generation.
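
As a rough illustration of this stimulus design (not the authors' actual software), the sketch below generates random triangle stimuli in a 480×480-pixel box and pairs them with random holistic labels; the signal alphabet and label lengths are illustrative assumptions.

import random
import string

BOX = 480       # triangles are drawn inside a 480x480-pixel box (from the abstract)
N_TRAIN = 48    # training items per generation (from the abstract)

def random_triangle(box=BOX):
    # Three vertices, each an (x, y) pixel coordinate chosen uniformly.
    return tuple((random.randrange(box), random.randrange(box)) for _ in range(3))

def random_signal(min_len=4, max_len=9, alphabet=string.ascii_lowercase):
    # A random holistic label; the length range and alphabet are assumptions.
    return "".join(random.choice(alphabet) for _ in range(random.randint(min_len, max_len)))

# Generation-0 input: 48 randomly generated signal-triangle pairs.
training_language = {random_triangle(): random_signal() for _ in range(N_TRAIN)}

# Raw count of ordered vertex triples (the abstract reports roughly 6 x 10^15 possible stimuli).
print((BOX * BOX) ** 3)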

In the first experiment, we ran four chains, each with ten generations. Over time, the space of triangles was categorized into an increasingly small number of discrete regions, on which consecutive participants increasingly aligned. These emergent categories, labelled with holistic signals, typically discretized the space of triangles based on their shape and size, ignoring features such as location and orientation (cf. Landau, Smith, and Jones 1988). There was a cumulative increase in structure, showing that category systems can arise through iterated learning.

Our second experiment used the same experimental design, except at each generation a pair of participants were trained on the language separately and then entered a communication phase in which they took turns to communicate about triangles to each other. One participant was presented with a triangle and was asked to label it for their partner; the communicative partner then had to select the correct target triangle from a selection of six. The output from this communication phase became the input to the next generation in the chain.

The languages in this experiment, where there is a natural pressure for expressivity (Kirby et al. 2015), contained more unique signals. Despite these higher levels of expressivity, there was also a cumulative increase in structure in the languages. Furthermore, we found evidence that two of the four chains contained sublexical structure in addition to the categorical structure we observed in the non-communicative experiment. This sublexical structure was driven by shape-based sound symbolism and had morphological features similar to those found in natural languages (such as cranberry morphs; Aronoff 1976).

Whereas previous iterated learning experiments have been limited to two types of result — categorical structure in meanings or compositional structure in signals — these experiments demonstrate that an alternative is possible. When the space of meanings is open-ended, and lacks clear pre-existing boundaries, then more subtle morphological structure, lacking straightforward compositionality, may evolve as a solution to the joint pressures from learning and communication.

Citation:

Carr J. W., Smith K., Cornish H. and Kirby S. (2016). Cultural Evolution And Communication Yield Structured Languages In An Open-ended World. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/74.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Carr, Smith, Cornish, Kirby 2016

Lasting Impacts Of The Code Model On Primate Communication Research

Erica Cartmill
UCLA

Keywords: Ostension, Primate, Gesture

Abstract:

The ability to mark a particular behavior as a communicative act, rather than relying on a small set of phylogenetically-shaped signals, exponentially expands the potential of a communicative system. Essentially any behavior could be made communicative through ostension. When paired with the ability to infer meaning from novel contexts and behaviors, this generates a powerful communicative engine. Human language is arguably built upon just such an ostensive-inferential engine (Sperber & Wilson, 1995; Origgi and Sperber 2000; Scott-Phillips, 2014). The ability to take an action or sound and imbue it with meaning through “performing” it as a signal is undoubtedly an integral part of modern human language. But is it uniquely human?

Recently, Scott-Phillips extended the discussion of the O-I system by systematically contrasting the communication systems of great apes and humans with regard to their properties as codes (Scott-Phillips, 2014, 2015). He argues that ape communication is a “natural code,” relying on associative mechanisms and expanded by metapsychological abilities. Human language is a “conventional code,” built upon metapsychological abilities (the O-I system) and made more powerful by associative mechanisms. This contrast between natural and conventional codes makes important predictions about the communicative behavior of apes and humans, particularly that only humans possess and recognize communicative intentions (i.e., acts that provide the information that they are communicative, signaling their own signalhood). This capacity, in turn, lies at the heart of the O-I system. However, the evidence that ape communication is a natural code (and not based on communicative intentions) comes from the published literature on ape communication. This is problematic, because the code model itself has had a dramatic impact on the work that is conducted (and published) in primate communication.

Primate communication studies search for and highlight predictable forms and contingencies that might be interpretable as codes. Ape gestural communication is less predictable and more flexible than the communication systems of many other animals, but while the presence of communicative flexibility is used as evidence of intentionality in the ape gesture literature (Call & Tomasello, 2007), the ambiguities themselves are often discarded or overlooked. Apes use their gestures flexibly, modifying them in response to their communicative partners: they direct their signals towards others, account for their partner’s gaze, and wait for a response after gesturing. Yet the majority of published papers focus on the predictability of signal-to-response mappings (i.e., searching for codes).

I will review common data analysis procedures in ape gesture research (e.g., Cartmill & Byrne, 2010) and discuss the ways in which they influence the perception that gestures are natural communicative codes. I will present gestures that are typically deemed “unanalyzable.” Many of these are rare or ambiguous gestures that do not show a simple one form to one meaning mapping. These examples have the greatest potential to demonstrate ostensive communication in great apes. The theory that human communication is built on a framework of ostension and inference is compelling, but to determine whether humans are unique in these abilities we must assess the lasting impact of the code model framework on studies of primate communication. Primatologists should tackle this challenge head on. Emerging meta-analytic tools may facilitate this analysis by pooling rare events across studies and detecting complex regularities. These approaches would make significant advances in our understanding of the relationship between primate communication and human language.

References

Call J, Tomasello M (2007) The gestural communication of apes and monkeys. Lawrence Erlbaum Associates, Inc, Mahwah.

Cartmill, E., & Byrne, R. (2010). Semantics of primate gestures: intentional meanings of orangutan gestures. Animal cognition, 13(6), 793-804.

Origgi, G., and D. Sperber. (2000). Evolution, communication and the proper function of language. In Evolution and the human mind: language, modu- larity and social cognition. P. Carruthers and A. Chamberlain, eds. Cambridge: Cambridge University Press. Pp. 140–169.

Scott-Phillips, T. (2014). Speaking Our Minds: Why human communication is different, and how language evolved to make it special. Palgrave MacMillan.

Scott-Phillips, T. (2015). Nonhuman primate communication, pragmatics, and the origins of language. Current Anthropology, 56(1), 56-80.

Sperber, D., and D. Wilson. (1995). Relevance: communication and cognition. 2nd edition. Oxford: Blackwell.

Citation:

Cartmill E. (2016). Lasting Impacts Of The Code Model On Primate Communication Research. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/170.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Cartmill 2016

Are Emotional Displays An Evolutionary Precursor To Compositionality In Language?

Federica Cavicchio , Livnat Leemor , Simone Shamay-Tsoory and Wendy Sandler
University of Haifa

Keywords: Compositionality, Emotion expression, Body and face

Abstract:

Compositionality is a basic property of language, spoken and signed, according to which the meaning of a complex structure is determined by the meanings of its constituents and the way they combine (e.g., Jackendoff, 2011 for spoken language; Sandler & Lillo-Martin, 2006 for sign language; Kirby & Smith, 2012 for experimental results). Here we seek the foundations of this property in a more basic, and presumably prior, form of communication: the spontaneous expression of emotion. To this end, we ask whether features of facial expressions and body postures are combined and recombined to convey different complex meanings in extreme displays of emotions. Our study, based on detailed coding and analysis of spontaneous responses of athletes to victory or loss, isolates specific features typical of each, as well as features commonly present in both. We suggest that these features contribute to the interpretation of the complex emotions experienced in these contexts. Our findings are compatible with a compositional model of communicative emotional displays.

Citation:

Cavicchio F., Leemor L., Shamay-Tsoory S. and Sandler W. (2016). Are Emotional Displays An Evolutionary Precursor To Compositionality In Language?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/132.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Cavicchio, Leemor, Shamay-Tsoory, Sandler 2016

Functionally Flexible Vocalizations In Wild Bonobos (Pan paniscus)

Zanna Clay1 , Jahmaira Archbold2 and Klaus Zuberbuhler2
1 University of Birmingham
2 University of Neuchatel

Keywords: functional flexibility, speech evolution, great ape, pre-linguistic infant, protophone, context, emotional valence

Short description: A hallmark of human speech is the ability to produce vocalizations independent of context and emotional state. We show that wild bonobos also possess this capacity.

Abstract:

1. Introduction

A core component underlying the evolution of language and the development of speech in human infants is the emergence of functional flexibility, the capacity to produce vocalizations independent of a fixed function in order to express a full range of emotional content across different situations (Griebel & Oller, 2008). Research has demonstrated that, even before speech emerges in infancy, 3-to-4 month old human infants produce a class of vocalizations—known as ‘protophones’— in functionally flexible ways to express a full range of emotional content (Oller et al., 2013). This finding has been contrasted with an apparent absence of evidence for this capacity in non-human primates, leading to the conclusion that the functional flexibility of human infant vocalizations marks an evolutionary transition between primate vocal communication and human speech (Oller et al., 2013). Here, we addressed this proposed evolutionary discontinuity by examining evidence for functional flexibility in the vocalizations of wild bonobos (Pan paniscus), one of our closest living relatives. We focussed on the ‘peep’, a commonly-produced vocalization specific to bonobos. The ‘peep’ is a closed mouth vocalization that is high in frequency, short in duration and flat in acoustic form.

2. Methods & Results

We conducted behavioral observations and recorded vocalizations of wild adult bonobos at Lui Kotale in DR Congo using focal animal sampling. We analysed the acoustic structure of peeps produced in different behavioural contexts relating to the three principal valence dimensions (positive-neutral-negative) to explore acoustic cues associated with the inferred affective valence. We used Discriminant Function Analyses to examine whether peep structure varied across valence contexts and caller identity.
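
The kind of discriminant analysis described here could be sketched as follows, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the Discriminant Function Analysis; the acoustic feature set and the load_peep_measurements() helper are hypothetical placeholders, not the study's actual pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def load_peep_measurements():
    # Placeholder: would return per-call acoustic measurements (e.g. duration,
    # mean F0, bandwidth) plus valence-context and caller labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))                               # 300 calls x 3 acoustic features
    valence = rng.choice(["positive", "neutral", "negative"], size=300)
    caller = rng.choice(["A", "B", "C", "D"], size=300)
    return X, valence, caller

X, valence, caller = load_peep_measurements()
lda = LinearDiscriminantAnalysis()

# Can valence context be predicted from call structure better than chance?
print("valence accuracy:", cross_val_score(lda, X, valence, cv=5).mean())

# Same question for caller identity.
print("caller accuracy:", cross_val_score(lda, X, caller, cv=5).mean())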

Acoustic analyses revealed that wild bonobos produce a specific call type —the ‘peep’— across the full valence range in every major aspect of their daily lives, including feeding, travel, rest, aggression, alarm, nesting and grooming. Despite differences in the eliciting contexts, peep acoustic structure did not vary between contexts associated with neutral and positive valence. However, peeps produced in negative valence contexts were acoustically distinct, suggesting that vocal flexibility is more constrained by vocal production mechanisms in negatively charged situations. Peeps could be distinguished based on caller identity alone.

3. Discussion

In contrast to earlier conclusions (Oller et al., 2013), our results indicate that functionally flexible vocal signaling is a capacity shared with our closest living ape relatives, demonstrating its deep evolutionary roots. The finding of greater flexibility in some contexts than in others suggests an evolutionary transition in hominids from functionally fixed to functionally flexible vocalizations. Identifying non-human primate vocalizations that are used in flexible ways, rather than being tied to a fixed biological function, can provide relevant insights into the evolution of human speech. We will discuss these results in light of ongoing analyses examining the pragmatic responses of receivers to peeps when combined in sequences with other calls.

Acknowledgments

We thank Gottfried Hohmann for supporting our research. We are grateful to the Institut Congolaise pour la Conservation de la Nature (ICCN) for granting permission to conduct research at the Salonga National Park (MIN.0242/ICCN/DG/GMA/013/2013). Our methods comply with the requirements and guidelines of the ICCN and legal requirements of the DR Congo. We are grateful to Isaac Schamberg for his support, to Barbara Fruth and to all local staff supporting Lui Kotale. We thank members of the Department of Comparative Cognition at the University of Neuchatel, especially Christof Neumann. This research was financially supported by the L.S.B. Leakey Foundation, the National Geographic Society: Committee for Research and Exploration Grant, the British Academy Small Research Grant, the European Union Seventh Framework Programme for research, technological development, and demonstration under grant agreement 283871 and private donors associated with the British Academy and the Leakey Foundation.

References

Griebel, U., & Oller, D. (2008). Evolutionary forces favoring communicative flexibility. In D. K. Oller and U. Griebel (Eds.), Evolution of communicative flexibility: complexity, creativity, and adaptability in human and animal communication, (pp. 9–40). Cambridge: MIT Press.

Oller, D. K., Buder, E. H., Ramsdell, H. L., Warlaumont, A. S., Chorna, L., Bakeman, R. (2013). Functional flexibility of infant vocalization and the emergence of language. Proceedings of the National Academy of Sciences of the United States of America, 110(16), 6318–6323.

Citation:

Clay Z., Archbold J. and Zuberbuhler K. (2016). Functionally Flexible Vocalizations In Wild Bonobos (Pan paniscus). In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/73.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Clay, Archbold, Zuberbuhler 2016

Relationship Between Nonverbal Social Skills And Language Development

Hélène Cochet1 and Richard Byrne2
1 University Toulouse Jean Jaures
2 University of St Andrews, School of Psychology and Neuroscience

Keywords: child development, language acquisition, nonverbal communicative abilities, communicative gestures

Abstract:

See the 2-page-abstract.

Citation:

Cochet H. and Byrne R. (2016). Relationship Between Nonverbal Social Skills And Language Development. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/32.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Cochet, Byrne 2016

Dwarf Mongooses Combine Meaningful Alarm Calls

Katie Collier1 , Andrew N. Radford2 , Balthasar Bickel3 , Marta B. Manser1 and Simon W. Townsend4
1 Institute of Evolutionary Biology and Environmental Studies, University of Zurich
2 School of Biological Sciences, University of Bristol
3 Department of Comparative Linguistics, University of Zurich
4 Department of Psychology, University of Warwick

Keywords: Animal communication, Call combinations, Syntax, Comparative approach

Abstract:

Syntax, the combination of meaningful words into larger meaningful structures, is a key feature of language that is responsible for much of language’s generative power. Comparative data from animal communication studies can help unpack the evolution of syntax, which in turn is a necessary step towards better understanding the evolution of language as a whole. While syntax is present in all human languages, it is rare in animal communication, though examples of call combinations that can be described as rudimentary syntax exist (Collier et al., 2014). These syntax-like combinations can be compositional, where the meaning of the combination is derived from the meaning of the component calls, as seen in Campbell’s monkey (Cercopithecus campbelli campbelli) alarm calls (Ouattara et al., 2009), or idiomatic (combinatorial), where the meaning of the combination is not related to the meaning of the component calls, as for putty-nosed monkey (Cercopithecus nictitans) alarm calls (Arnold & Zuberbühler, 2006).

Whilst comparative data outside of primates remains scarce, it can provide insights into the ecological or social factors that may be important in promoting the emergence of syntax. In this study we investigated alarm call combinations in dwarf mongooses (Helogale parvula), small, socially living, cooperatively breeding mammals. As in other terrestrial mammals, recordings of natural predator encounters and experimental predator presentations suggest that dwarf mongooses produce, among others, one type of alarm call to aerial predators and another structurally distinct variant to terrestrial predators. Interestingly, dwarf mongooses also seem to combine these aerial and terrestrial calls into a third combination alarm, consisting of an aerial alarm followed immediately by a terrestrial alarm. Contextual data suggest these combination alarms are more often produced after the group has already been alerted to the presence of an aerial predator by an aerial alarm, and so the function of the combination does not seem to be directly related to the independent functions of its component aerial and terrestrial calls.

In order to verify whether the combination alarm really is composed of independent aerial and terrestrial alarms, we first used acoustic analysis to test whether there were structural differences between the alarm calls occurring alone and those comprising the combination. Secondly, we implemented playback experiments of synthetically constructed combination alarms (aerial + terrestrial alarm) and assessed receiver responses in relation to playbacks of naturally produced combination alarms.
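
A minimal sketch of how such a synthetic combination stimulus could be assembled by concatenating recorded calls is given below; the filenames, the assumption of mono recordings, and the 50 ms silent gap are illustrative choices, not the study's actual parameters.

import numpy as np
import soundfile as sf   # third-party library for reading and writing audio

aerial, sr = sf.read("aerial_alarm.wav")             # hypothetical mono recording
terrestrial, sr2 = sf.read("terrestrial_alarm.wav")  # hypothetical mono recording
assert sr == sr2, "recordings must share a sample rate"

gap = np.zeros(int(0.05 * sr))                       # assumed 50 ms of silence between calls
synthetic_combination = np.concatenate([aerial, gap, terrestrial])

sf.write("synthetic_combination.wav", synthetic_combination, sr)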

Acoustic analyses revealed that the aerial component of the combination was not structurally different from the independently occurring aerial alarm, whereas the terrestrial component of the combination, on average, differed from the terrestrial alarm. However, the mongooses demonstrated similar behavioural reactions when hearing playbacks of both natural and synthetic combination alarm stimuli.

Thus, dwarf mongooses combine two meaningful alarm calls into a third alarm call whose meaning is not, a priori, a function of the meanings of its component calls, making this a potential example of a combinatorial call combination in a non-primate species. Interestingly, we did find acoustic variation between the terrestrial call within the combination alarm and the independent terrestrial alarm. Given these acoustic differences, but also the similar behavioural responses to artificial and natural versions of the combination alarm, it is possible that these subtle structural variations are not perceived by the mongooses or are not relevant for their communication. The second half of the combination alarm and the terrestrial alarm call could therefore represent “allomorphs”: two acoustically distinct variants of the same call type that are perceived identically by the mongooses but used in different contexts.

In conclusion, dwarf mongooses combine meaningful alarm calls suggesting that concatenation of semantic units may be more widespread in animal communication than previously thought. Given the relatively large phylogenetic distance between dwarf mongooses and humans, these data can begin to unpack candidate selective pressures driving the emergence of a syntactical combinatorial level.

Citation:

Collier K., Radford A. N., Bickel B., Manser M. B. and Townsend S. W. (2016). Dwarf Mongooses Combine Meaningful Alarm Calls. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/114.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Collier, Radford, Bickel, Manser, Townsend 2016

Word Order Universals Reflect Cognitive Biases: Evidence From Silent Gesture

Jennifer Culbertson , Simon Kirby and Marieke Schouwstra
University of Edinburgh

Keywords: cognitive biases, word order, language universals, silent gesture

Short description: word order universals are reflected in the spontaneous gestures of non-signers

Abstract:

Research in linguistic typology has identified many cases in which particular patterns appear to be over- or under-represented in the world's languages. However, the extent to which these so-called typological universals reflect universal properties of human cognition remains heavily debated. In this paper, we provide empirical evidence connecting universals of word order to cognitive biases using a silent gesture experiment. The silent gesture paradigm allows us to capture spontaneous, untrained responses in a modality distinct from participants’ previous language experience (Goldin-Meadow et al., 2008).

Citation:

Culbertson J., Kirby S. and Schouwstra M. (2016). Word Order Universals Reflect Cognitive Biases: Evidence From Silent Gesture. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/39.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Culbertson, Kirby, Schouwstra 2016

The Emergence Of Rules And Exceptions In A Population Of Interacting Agents

Christine Cuskley1 and Vittorio Loreto2
1 Institute for Scientific Interchange
2 University of Rome La Sapienza

Keywords: language dynamics, irregularity, nativeness, language structure

Short description: Investigating the effect of nativeness on the emergence of (ir)regularity using an agent-based model

Abstract:

Recent studies in language evolution have identified important roles for frequency (Cuskley et al., 2014), phonology (Bybee, 2001), and speaker population (Lupyan & Dale, 2010) in the dynamics of linguistic regularity. We present a model which integrates frequency, phonology, and speaker demographics to investigate how and why regularity and irregularity persist together given the general bias to eliminate unpredictable variation (i.e., irregularity), especially in experimental contexts (e.g., Hudson Kam & Newport, 2005; Smith & Wonnacott, 2010, among others). Kirby (2001) points out that while many models aim to represent how regular structure emerges in language, very few models explain how irregularity emerges. Using the iterated learning framework, Kirby (2001) showed that a skewed frequency of meanings and a general pressure for least effort in production can lead to the emergence of both stable regulars and irregulars in a vocabulary.

The current work aims to extend this finding by investigating the role of non-native speakers and phonological similarity in regularity dynamics. A recent study showed that non-native speakers irregularize novel forms more than native speakers. For example, non-native speakers are more likely than native speakers to apply ‘rules’ inferred from existing irregulars with a high token frequency (i.e., to provide the past tense of spling as splung, by analogy with spring; Cuskley et al., 2015). A potential mechanism underlying this result is that native and non-native speakers extend rules in different ways, depending on how rules are represented in their input. In other words, since native speakers have more experience with the ‘long-tail’ of regular verb types (Cuskley et al., 2014), they are more likely to extend the ‘regular’ rule. On the other hand, non-natives’ input is skewed towards irregular types with high token frequency, and thus they are more likely to extend quasiregularity when inflecting novel forms, especially when novel forms exhibit phonological similarity with existing irregulars (Cuskley et al., 2015).

We model the dynamics of regularity in a language evolving among a population of agents engaging in repeated communicative interactions (modelled after the Naming Game, hereafter NG; Loreto & Steels, 2007). The model broadly consists of repeated speaker (S)–hearer (H) interactions. Unlike the NG, agents do not evolve labels for meanings, but inflections for forms: instead of naming meanings, the task of the S within the communicative interaction is to inflect an existing form, and success of the interaction is evaluated depending on whether the H shares the same inflection for the same form (see also Colaiori et al., 2015).

Agents begin with no inflections, but have an inventory of shared meanings labelled by strings randomly generated from a set of 10 characters. Meanings are chosen for each interaction based on a skewed, pre-determined frequency distribution. In early interactions, speaker agents choose a random two-character string as an inflection; thus, at the outset, success is low, but agents nonetheless store inflections with weighted success (number of interactions/number of successes). Once agents acquire some inflections in their vocabulary as a result of interaction, they choose inflections for uninflected meanings in their vocabulary based on different “native” and “non-native” strategies. Both agent types have a first preference for extending inflections based on phonological similarity above a certain threshold: in other words, if the label for meaning A has a highly weighted inflection and an edit distance of ≤ 0.5 from the label for meaning B, they will generalise the inflection for meaning A to meaning B. Where this strategy fails, natives extend inflections based on type frequency (i.e., apply the inflection used across most items in the vocabulary), while non-natives extend inflections based on token frequency (i.e., apply the inflection from the most frequent item in the vocabulary).
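
A compressed sketch of the inflection-choice step described above is given below; it illustrates the stated strategies rather than reproducing the authors' model code, and the vocabulary format, similarity measure, and scoring of candidate analogies are assumptions.

from collections import Counter
from difflib import SequenceMatcher
import random
import string

def similarity(a, b):
    # Normalised string similarity in [0, 1]; stands in for the normalised
    # edit-distance criterion (<= 0.5) mentioned in the abstract.
    return SequenceMatcher(None, a, b).ratio()

def choose_inflection(vocab, label, native, sim_threshold=0.5):
    # vocab maps label -> {"inflection": str or None, "weight": float, "tokens": int}
    inflected = {l: d for l, d in vocab.items() if d["inflection"]}
    if not inflected:
        # Early interactions: invent a random two-character inflection.
        return "".join(random.choice(string.ascii_lowercase) for _ in range(2))
    # 1) Shared first preference: phonological analogy with a similar,
    #    highly weighted label (combining the two scores is an assumption here).
    best_label = max(inflected, key=lambda l: similarity(label, l) * inflected[l]["weight"])
    if similarity(label, best_label) >= sim_threshold:
        return inflected[best_label]["inflection"]
    if native:
        # 2a) Native fallback: the inflection with the highest TYPE frequency.
        type_counts = Counter(d["inflection"] for d in inflected.values())
        return type_counts.most_common(1)[0][0]
    # 2b) Non-native fallback: the inflection of the highest TOKEN-frequency item.
    return max(inflected.values(), key=lambda d: d["tokens"])["inflection"]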

Populations arrive at stable inflectional paradigms which include both regular and irregular forms. By altering the proportion of type-preference and token-preference agents in different iterations of the model, we are able to examine how these different strategies, and changing mixtures of the two agent types, affect the structure of language over long timescales. Results from this framework support recent theories that the relative proportion of native and non-native speakers in a population has the potential to affect the structure of language.

Citation:

Cuskley C. and Loreto V. (2016). The Emergence Of Rules And Exceptions In A Population Of Interacting Agents. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/119.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Cuskley, Loreto 2016

The Evolution Of Collaborative Stories

Christine Cuskley1 , Bernardo Monechi1 , Pietro Gravino2 and Vittorio Loreto3
1 Institute for Scientific Interchange
2 University of Rome La Sapeinza
3 University of Rome La Sapienza

Keywords: narrative, stories, collaboration, selection, experimental games

Short description: How do stories evolve and what makes them successful? Using an experimental game to investigate the evolution of stories.

Abstract:

Studies in literature and narrative have begun to argue more forcefully for considering human evolution as central to understanding stories and storytelling more generally (Sugiyama, 2001; Hernadi, 2002). However, empirical studies in language evolution have focused primarily on language structure or the language faculty, leaving the evolution of stories largely unexplored (although see Von Heiseler, 2014). Stories are unique products of human culture enabled principally by human language. Given this, the dynamics of creativity in stories, and the traits which make successful stories, are of crucial interest to understanding the evolution of language in the context of human evolution more broadly.

The current work aims to illuminate how stories emerge, evolve, and change in the context of a collaborative cultural effort. We present results from a novel experimental paradigm centered around a story game where players write short continuations (between 60 and 120 characters) of existing stories. These continuations then become open to other players to continue in turn. Stories are subject to player selection, allowing for variation and ‘speciation’ of the resulting narratives, and evolve as a result of collaborative effort between players.

The game starts with a seed of over 60 potential stories, and players choose which stories to continue, providing a player-driven story selection mechanism. In this way, stories which are creative, intriguing, and open ended spawn more stories, and eventually lead to longer story paths as play continues. The game also introduces further limitations by constraining a player’s view of the story path: players have access only to a story and its parent, meaning knowledge of the existing narrative is limited. We present data from hundreds of players and stories, creating large ‘story trees’ which explore the space of different possible narratives which grow out of a confined set of starting points.

These data allow us to investigate several aspects of the growing story trees to illuminate not only what makes a story successful, but how creative stories trigger new stories, and what makes individual storytellers successful. Given the selection mechanism central to game play, we identify the most successful stories by their number of offspring. Particularly successful stories emerge, as measured both by how many children they have spawned and by how long their story paths extend. Coherent stories often emerge, despite the fact that they are authored by several different players, and any given player only sees a limited snapshot of the story path.
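
The two success measures mentioned here can be illustrated on a toy story tree stored as child-to-parent links; the tree below is made up and the representation is an assumption, not the game's actual data model.

from collections import Counter

# Each story points to the story it continues (None marks a seed story).
parent = {"s1": None, "s2": "s1", "s3": "s1", "s4": "s2", "s5": "s4"}

# Number of direct continuations ("offspring") per story.
offspring = Counter(p for p in parent.values() if p is not None)

def depth(story):
    # Length of the path from a story back to its seed.
    d = 0
    while parent[story] is not None:
        story = parent[story]
        d += 1
    return d

print(offspring.most_common())          # e.g. s1 has spawned two continuations
print(max(depth(s) for s in parent))    # length of the longest story path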

We contextualise the results of the game and connect them to language evolution using quantitative and qualitative analysis. We look for detectable triggers of innovation and creativity within the story trees, and identify these as expanding the ‘adjacent possible’ (e.g., new adaptations open the space of other possible adaptations in the future; Tria, Loreto, Servedio, & Strogatz, 2014). We argue that this concept can be extended to stories, using evidence from the game bolstered by evidence from more traditional literature (the Gutenberg Corpus). We frame a more qualitative analysis of the results in terms of recurring themes found in storytelling cross-culturally (Tehrani, 2013). We suggest that the most successful triggers of innovation in stories combine original novelty and a firm grounding in existing recurring story frameworks in human culture. This indicates that, much like other cultural and biological systems, stories are subject to competing pressures for stability and conservation on the one hand, and innovation and novelty on the other.

Citation:

Cuskley C., Monechi B., Gravino P. and Loreto V. (2016). The Evolution Of Collaborative Stories. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/133.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Cuskley, Monechi, Gravino, Loreto 2016

Empirically Assessing Linguistic Ability With Stone Tools

Cory Cuthbertson
Centre for the Archaeology of Human Origins, University of Southampton

Keywords: stone tools, theory of mind, cultural transmission, archaeology, cognition

Short description: Stone tool variability indicates cultural transmission style, indicating theory of mind level, indicating necessary linguistic abilities.

Abstract:

[Submitted version is abstract]

Citation:

Cuthbertson C. (2016). Empirically Assessing Linguistic Ability With Stone Tools. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/26.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Cuthbertson 2016

Anatomical Biasing Of Click Learning And Production: An MRI And 3d Palate Imaging Study

Dan Dediu and Scott Moisik
Max Planck Institute for Psycholinguistics

Keywords: anatomical biases, phonetic learning, clicks, speech production, vocal tract imaging

Abstract:

The current paper presents results on click learning obtained from a larger imaging study (using MRI and 3D intraoral scanning) designed to quantify and characterize intra- and inter-population variation of vocal tract structures and the relation of this to speech production. The aim of the click study was to ascertain whether and to what extent vocal tract morphology influences (1) the ability to learn to produce clicks and (2) the productions of those who successfully learn to produce these sounds. The results indicate that the presence of an alveolar ridge certainly does not prevent an individual from learning to produce click sounds (1). However, the subtle details of how clicks are produced may indeed be driven by palate shape (2).

Citation:

Dediu D. and Moisik S. (2016). Anatomical Biasing Of Click Learning And Production: An MRI And 3d Palate Imaging Study. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/57.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Dediu, Moisik 2016

Using Causal Inference To Detect Directional Tendencies In Semantic Evolution

Johannes Dellert
Universität Tübingen

Keywords: Semantic Evolution, Semantic Shift, Isolectic Sets, Colexification, Causal Inference

Short description: How causal inference can be used to extract hypotheses about semantic evolution from synchronic polysemies.

Abstract:

This paper proposes a novel application of causal inference in the area of semantic language evolution, which attempts to infer unidirectional trends of lexical change exclusively from massively cross-linguistic dictionary data.

First, we show how colexification between concepts can be modeled mathematically as mutual information between concept variables. Core notions of causal inference (most prominently, the unshielded collider criterion) are then applied to predict the dominant directionality in pathways of semantic change. The paper concludes by revisiting a few well-known examples of synchronic polysemies, and illustrating how the method succeeds in building hypotheses about their historical development.
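
One way the colexification-as-mutual-information idea could be cashed out is sketched below, treating each concept as a binary indicator over dictionary word entries; the indicator formulation and the toy data are assumptions made for illustration, not necessarily the paper's exact definition.

from collections import Counter
from math import log2

# Toy dictionary data: each entry is one word in some language, represented
# by the set of concepts it expresses (entirely made up for illustration).
entries = [
    {"MOON", "MONTH"}, {"MOON", "MONTH"}, {"MOON", "MONTH"}, {"MOON", "MONTH"},
    {"SUN"}, {"SUN", "DAY"}, {"DAY"},
    {"TREE"}, {"TREE", "WOOD"}, {"WOOD"},
]

def mutual_information(concept_a, concept_b, entries):
    # MI (in bits) between the binary variables "word expresses concept_a"
    # and "word expresses concept_b", estimated over all word entries.
    n = len(entries)
    pairs = [(concept_a in e, concept_b in e) for e in entries]
    p_xy = Counter(pairs)
    p_x = Counter(p[0] for p in pairs)
    p_y = Counter(p[1] for p in pairs)
    return sum((c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

print(mutual_information("MOON", "MONTH", entries))  # strongly colexified pair: high MI
print(mutual_information("MOON", "TREE", entries))   # unrelated pair: much lower MI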

Citation:

Dellert J. (2016). Using Causal Inference To Detect Directional Tendencies In Semantic Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/139.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Dellert 2016

The Fidelity Of Iterated Vocal Imitation

Pierce Edmiston , Marcus Perlman and Gary Lupyan
University of Wisconsin-Madison

Keywords: imitation, nonverbal, iteration

Short description: We implemented the childhood game of telephone online to conduct large experiments on the process by which iterated imitation becomes language.

Abstract:

How are spoken words created from scratch? At least a subset of words found across all languages – directly iconic words for sounds (“onomatopoeia”) – appears to originate from the imitation of environmental sounds (Dingemanse, 2012). Presumably, over time and repetition, these imitations become increasingly word-like, as they take on phonological and syntactic properties of their associated language and become less faithful to their original source (Perlman, Dale, & Lupyan, 2015). Yet, the process of word formation is not easily observed, and theories of precisely how it happens are largely speculative.

To study spoken word formation, we developed a web-based “telephone” application to conduct iterative vocal imitation experiments.

Citation:

Edmiston P., Perlman M. and Lupyan G. (2016). The Fidelity Of Iterated Vocal Imitation. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/189.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Edmiston, Perlman, Lupyan 2016

Meaningful Call Combinations And Compositional Processing In A Social Bird

Sabrina Engesser1 , Amanda R. Ridley2 and Simon W. Townsend3
1 Department of Evolutionary Biology and Environmental Studies, University of Zurich
2 School of Animal Biology, The University of Western Australia
3 Department of Psychology, University of Warwick Coventry

Keywords: animal communication, call combinations, syntax, compositionality, language evolution

Abstract:

A defining feature of language is its generative nature, but elucidating how this capacity evolved is a non-trivial task (Christiansen & Kirby, 2003). Language derives its expressive power from its combinatorial nature: meaningless acoustic elements are phonologically combined into meaningful words, which at a higher syntactic layer can be assembled into phrases, where the meaning of the whole is a product of its parts (Hockett, 1960). While recent work on birds has provided evidence for the phonological level (Engesser et al., 2015; Lachlan & Nowicki, 2015), evidence for basic compositional syntax outside of humans is less clear (Arnold & Zuberbühler, 2008; Hurford, 2011; Ouattara et al., 2009). In particular, experimental data demonstrating a compositional understanding of information are rare (Collier et al., 2014). Here we provide strong evidence for compositionality in the discrete vocal system of the cooperatively breeding pied babbler (Turdoides bicolor). Natural observations revealed pied babblers produce acoustically distinct alert-calls in response to close, low urgency threats, and recruitment calls when recruiting group members during locomotion. Upon encountering terrestrial predators, babblers combine both vocalizations into a sequence (hereafter ‘mobbing-sequence’), potentially to recruit group members in a dangerous situation. To investigate whether babblers process these mobbing-sequences in a compositional way, we conducted systematic playback manipulations, playing back the individual calls in isolation, as well as naturally occurring and artificial sequences. Our results show babbler groups reacted most strongly to mobbing-sequence playbacks, showing a greater attentiveness and a quicker approach to the sound source, compared to individual calls or control sequences. We conclude pied babbler mobbing-sequences communicate information on both the context and the requested action, with receivers computing the combination of the two, functionally distinct, calls in a compositional way. Given the babblers’ constrained vocal repertoire, paired with the extensive number of social and ecological contexts that require communication (Ridley & Raihani, 2007), such compositional production and processing of vocalizations is likely adaptive for pied babblers, allowing them to coordinate more key events than would otherwise be possible with a non-syntactic system (Arnold & Zuberbühler, 2008). Ultimately, our work indicates that the ability to combine and process meaningful vocal structures, a basic syntax, may be more widespread than previously thought.

References

Arnold, K., & Zuberbühler, K. (2008). Meaningful call combinations in a non-human primate. Current Biology, 18, R202-R203.

Christiansen, M. H., & Kirby, S. (2003). Language evolution: consensus and controversies. Trends in Cognitive Sciences, 7, 300-307.

Collier, K., Bickel, B., van Schaik, C. P., Manser, M. B., & Townsend, S. W. (2014). Language evolution: syntax before phonology? Proceedings of the Royal Society B: Biological Sciences, 281, 20140263.

Engesser, S., Crane, J. M., Savage, J. L., Russell, A. F., & Townsend, S. W. (2015). Experimental Evidence for Phonemic Contrasts in a Nonhuman Vocal System. PloS Biology, 13, e1002171.

Hockett, C. F. (1960). The Origin of Speech. Scientific American, 203, 88-111.

Hurford, J. (2011). The origins of grammar. Oxford: Oxford University Press.

Lachlan, R. F., & Nowicki, S. (2015). Context-dependent categorical perception in a songbird. Proceedings of the National Academy of Sciences, 112, 201410844.

Ouattara, K., Lemasson, A., & Zuberbühler, K. (2009). Campbell's Monkeys Use Affixation to Alter Call Meaning. PLoS ONE, 4, e7808.

Ridley, A. R., & Raihani, N. J. (2007). Variable postfledging care in a cooperative bird: causes and consequences. Behavioral Ecology, 18, 994-1000.

Citation:

Engesser S., Ridley A. R. and Townsend S. W. (2016). Meaningful Call Combinations And Compositional Processing In A Social Bird. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/3.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Engesser, Ridley, Townsend 2016

The Emergence Of The Progressive To Imperfective Diachronic Cycle In Reinforcement-learning Agents

Dankmar Enke1 , Roland Mühlenbernd2 and Igor Yanovich3
1 Department of German Philology, University of Munich
2 Universität Tübingen
3 Universität Tübingen / Carnegie Mellon University

Keywords: semantic change, reinforcement learning, game-theoretic modeling

Abstract:

Deo (2015) offers a model within the framework of evolutionary game theory for the analysis of an attested phenomenon in semantic change: the progressive to imperfective cycle of shifts. While Deo studies the evolutionary dynamics of four preselected types of progressive-imperfective grammars, we investigate which types of grammars would emerge from first principles in a population of agents under reinforcement learning. In our model, the actual progressive-to-imperfective cycle arises from atomic interactions between learner agents after the addition of several simple assumptions to the basic game-theoretic model. The most important such addition concerns the problem of why the progressive but never the habitual generalizes to the broad imperfective. Deo (2015) conjectured that this might be due to children being more frequently exposed to progressive-type contexts than habitual-type ones. Our model vindicates Deo’s conjecture: early asymmetrical exposure derives the asymmetry between the progressive and the habitual, wherein only the former gives rise to a diachronic cycle.
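
A minimal sketch of the kind of Roth-Erev-style reinforcement set-up such a model could use is given below; the markers, readings, payoff scheme, and exposure probability are illustrative assumptions, not the paper's actual model.

import random

MARKERS = ["PROG", "GENERAL"]               # e.g. an overt progressive vs. a general/imperfective form
READINGS = ["event-in-progress", "habitual"]

class Learner:
    def __init__(self):
        # accumulated reward ("urn weight") per (reading, marker) pair
        self.weights = {r: {m: 1.0 for m in MARKERS} for r in READINGS}

    def produce(self, reading):
        markers, w = zip(*self.weights[reading].items())
        return random.choices(markers, weights=w, k=1)[0]

    def reinforce(self, reading, marker, reward):
        self.weights[reading][marker] += reward

def interact(speaker, hearer, p_progressive=0.8):
    # Asymmetric exposure: event-in-progress readings are sampled more often,
    # reflecting the assumption the abstract credits to Deo (2015).
    reading = "event-in-progress" if random.random() < p_progressive else "habitual"
    marker = speaker.produce(reading)
    # Communication succeeds if the hearer would itself use that marker here.
    reward = 1.0 if hearer.produce(reading) == marker else 0.0
    speaker.reinforce(reading, marker, reward)
    hearer.reinforce(reading, marker, reward)

population = [Learner() for _ in range(20)]
for _ in range(20000):
    speaker, hearer = random.sample(population, 2)
    interact(speaker, hearer)

# Inspect the emergent mapping: which marker dominates each reading for one agent?
for reading in READINGS:
    w = population[0].weights[reading]
    print(reading, max(w, key=w.get))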

Citation:

Enke D., Mühlenbernd R. and Yanovich I. (2016). The Emergence Of The Progressive To Imperfective Diachronic Cycle In Reinforcement-learning Agents. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/191.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Enke, Mühlenbernd, Yanovich 2016

Using HMMs To Attribute Structure To Artificial Languages

Kerem Eryilmaz , Hannah Little and Bart de Boer
VUB

Keywords: artificial language, combinatorial structure, hidden markov model, iconicity, communication game

Short description: We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals from artificial languages in order to infer their building blocks.

Abstract:

We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is considerable room for refinement, such as explicit duration modeling, incorporation of autoregressive elements, and relaxation of the Markovian assumption, in order to accommodate specific details.
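
The kind of HMM fit described here could be set up as follows with the third-party hmmlearn library; the number of hidden states, the feature dimensions, and the load_signals() helper are placeholder assumptions, not the authors' settings.

import numpy as np
from hmmlearn import hmm

def load_signals():
    # Placeholder: would return one 2-D array (frames x features, e.g. a
    # pitch/amplitude trajectory) per recorded signal.
    rng = np.random.default_rng(0)
    return [rng.normal(size=(rng.integers(30, 80), 2)) for _ in range(50)]

signals = load_signals()
X = np.concatenate(signals)            # hmmlearn expects stacked frames...
lengths = [len(s) for s in signals]    # ...plus the length of each sequence

model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
model.fit(X, lengths)

# The decoded state sequence of a signal is one candidate segmentation of it
# into recurring "building blocks".
states = model.predict(signals[0])
print(states)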

Citation:

Eryilmaz K., Little H. and de Boer B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/125.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Eryilmaz, Little, de Boer 2016

Stick Or Switch: A Simple Selection Heuristic May Drive Adaptive Language Evolution

Nicolas Fay1 and Shane Rogers2
1 University of Western Australia
2 Edith Cowan University

Keywords: Selection Dynamics, Perspective-Taking, Egocentric Communication, Egocentric-bias, Content-bias, Interpersonal Communication, Cumulative Cultural Evolution

Short description: Stick or Switch: A Simple Selection Heuristic Drives Adaptive Language Evolution

Abstract:

If you describe shape (h) from Figure 1 as “the arrow”, but your addressee describes it as the “sleepwalker”, will this information change how you communicate the shape to your addressee? Will you stick with your original shape description, or switch to your addressee’s description? The experiment reported here forces participants to choose between the two competing shape descriptions (personal or addressee), and uses participants’ ratings of description informativeness to predict their choice (stick or switch).

Classic theories, which emphasize the role of audience design in effective interpersonal communication, predict that people will adopt their addressee’s perspective. By contrast, minimalist theories suggest egocentric communication is common (for reviews see Brennan & Hanna, 2009; Shintel & Keysar, 2009). Tamariz et al. (2014), modeling Fay et al.’s (2010) empirical data, show that the spread of communication variants in a population can be explained via the interplay between an egocentric-bias and a content-bias. When people encounter a new sign-to-meaning mapping they tend to reuse the sign they had used before (egocentric-bias) unless the newly encountered sign is perceived to be superior (content-bias).
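
The selection heuristic described by Tamariz et al. can be written out as a toy decision rule: stick with your own description unless the partner's is judged more informative. The function below is an illustration only, and the margin parameter is an assumption.

def stick_or_switch(own_rating, partner_rating, margin=0.0):
    # Keep the own description (egocentric bias) unless the partner's is
    # rated clearly more informative (content bias).
    return "switch" if partner_rating > own_rating + margin else "stick"

print(stick_or_switch(own_rating=4.2, partner_rating=6.1))  # 'switch'
print(stick_or_switch(own_rating=5.5, partner_rating=5.0))  # 'stick'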

The present study empirically tests this simple selection heuristic. It also sheds light on the situational factors that cause people to take their addressee’s perspective or communicate egocentrically.

Citation:

Fay N. and Rogers S. (2016). Stick Or Switch: A Simple Selection Heuristic May Drive Adaptive Language Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/66.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Fay, Rogers 2016

Processing Preferences Shape Language Change

Maryia Fedzechkina1 , Becky Chu2 , T. Florian Jaeger2 and John Trueswell1
1 University of Pennsylvania
2 University of Rochester

Keywords: Language acquisition, Language processing, Linguistic universals, Language change, Miniature artificial language learning

Abstract:

Why do languages share structural commonalities? One long-standing tradition has argued that linguistic universals reflect pressures on language use: languages have evolved to better suit the needs of human information processing and communication (Bates & MacWhinney, 1982). By what means these pressures come to shape language evolution, however, remains unknown. In a series of experiments, we explore the possibility that processing pressures operate during language acquisition, biasing learners to deviate from the input they receive, thus changing the input to the subsequent generation of learners and ultimately causing a shift towards a linguistic system that explicitly expresses these biases.



We modeled the situation of language change in the laboratory using a miniature language learning paradigm (Hudson Kam & Newport, 2005; Kirby et al., 2008). In all experiments, we exposed participants (adult monolingual native speakers of English) to miniature languages with several competing forms that expressed the same meaning. In training (administered in three one-hour sessions on consecutive days), participants heard utterances in a novel language paired with videos depicting simple transitive actions performed by male actors. Participants first learned novel nouns, and then heard sentences using these nouns along with novel verbs. At the end of each session, learners described novel videos in the new language. We studied the deviations from the input in learners’ productions.



Experiment 1 tested whether learners are biased against longer dependencies (words that depend on each other for interpretation), since longer dependencies are associated with greater processing difficulty. Different groups of learners were exposed to two miniature languages that were either head-initial (VSO/VOS word order) or head-final (SOV/OSV word order). All utterances were disambiguated through obligatory case-marking on objects (never subjects). In exposure sentences, subjects and objects were either both long (i.e., modified by a prepositional phrase in the head-initial language or by a postpositional phrase in the head-final language, as cross-linguistically common) or both short (no modification). Balanced word order (SO/OS 50/50%) was maintained in all sentence types. Videos in the production test manipulated constituent length by requiring modification of one constituent (subject or object) or neither of the constituents. We find that despite receiving only unbiased (short-short, long-long) input, learners of the head-initial language followed the short-before-long ordering (p<0.05), but learners of the head-final language showed the inverse long-before-short preference (p<0.001). These results suggest that learners are indeed biased towards shorter linguistic dependencies.



Experiment 2 explored whether learners are biased to provide informative cues early, because doing so permits faster parsing decisions. The two miniature languages had SOV/OSV word order variation (50/50%) and optional case-marking (present 67%), but differed in the locus of that marking. In the subject-marking language, subjects but not objects were optionally case-marked independently of word order. In the object-marking language, objects (but never subjects) were optionally case-marked independently of word order. Thus, the languages differed in the word order that allowed earliest disambiguation in case-marked sentences (SOV in the subject-marking and OSV in the object-marking language). We found that only learners of the object-marking language preferentially used case-marking at the earliest point of disambiguation in OSV sentences (p<0.001). Learners of the subject-marking language marked both SOV and OSV orders equally often (p>0.7) and significantly more frequently than the input on the final day of training (p<0.05). We argue that this behavior is indicative of two preferences influencing language production – a bias to provide informative cues early and a bias to case-mark the less expected order (i.e., the non-English object-before-subject order) – since the two pressures work in the same direction for the object-marking language and in opposite directions for the subject-marking language.



Our results suggest that biases in acquisition are reflected in typologically frequent patterns and can account for cross-linguistic structural similarities in natural languages. At least some of these biases stem from pressures of incremental processing: Even though our languages allowed several alternatives, learners consistently preferred structures that increased processing efficiency.



References

Bates, E., & MacWhinney, B. (1982). Functionalist approaches to grammar. In E. Wanner & L. Gleitman (Eds.), Language Acquisition: the State of the Art (pp. 173-218). Cambridge: Cambridge University Press.

Hudson Kam, C., & Newport, E. (2005). Regularizing unpredictable variation: The roles of adult and child learners in language formation and change. Lang Learn Dev, 1(2), 151-195.

Kirby, S., Cornish, H., & Smith, K. (2008). Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proc Natl Acad Sci USA, 105(31), 10681.

Citation:

Fedzechkina M., Chu B., Jaeger T. F. and Trueswell J. (2016). Processing Preferences Shape Language Change. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/101.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Fedzechkina, Chu, Jaeger, Trueswell 2016

Communicative Interaction Leads To The Elimination Of Unpredictable Variation

Olga Feher1 , Kenny Smith1 , Elizabeth Wonnacott2 and Nikolaus Ritt3
1 University of Edinburgh
2 University College London
3 University of Vienna

Keywords: unpredictable variation, communicative interaction, artificial language paradigm, convergence, alignment

Abstract:

We report experimental studies of the regularization of unpredictable variation during interaction in artificial languages.

Citation:

Feher O., Smith K., Wonnacott E. and Ritt N. (2016). Communicative Interaction Leads To The Elimination Of Unpredictable Variation. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/137.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Feher, Smith, Wonnacott, Ritt 2016

Word Learners Regularize Synonyms And Homonyms Similarly

Vanessa Ferdinand1 and Matt Spike2
1 Santa Fe Institute
2 University of Edinburgh

Keywords: linguistic regularization, cognitive biases, variation within lexicon

Abstract:

Unpredictable variation is rare in language. Explanations for this include a language-specific regularization bias (e.g. Reali & Griffiths, 2009), general constraints on memory (e.g. Hudson Kam & Newport, 2005), or both (e.g. Perfors, 2012; Ferdinand, Thompson, Kirby, & Smith, 2013). Experiments on lexical regularization typically study how words in free synonymous relationships become increasingly deterministic through use (e.g. some words drop from use). There is also experimental evidence that learners regularize homonymous relationships (Vouloumanos, 2010), but to date no experimental design has directly compared the relative regularization of synonyms versus homonyms. This is an important comparison to make because synonyms and homonyms have asymmetrical functional roles in communication (Hurford, 2003) and the jury is still out as to which of these two regularization biases is better for evolving effective communication systems. On the one hand, Hurford proposes that there is less bias against homonyms because they are more common in language than synonyms, and Piantadosi, Tily, and Gibson (2012) argue for the communicative function of ambiguous lexicons. On the other hand, Doherty (2004) demonstrates children’s difficulty in learning homonyms and Spike, Stadler, Kirby, and Smith (2013) show that self-organizing novel lexicons require a bias against homonymy but not synonymy.

We extend the experimental paradigm of Ferdinand et al. (2013) to investigate the relative regularization of synonyms versus homonyms. 128 participants were trained on one of two artificial mini-languages with identically matched distributions of variation. In the synonyms condition, this variation was over word forms and in the homonyms condition it was over referents. Regularization is quantified by the drop in entropy of the words and referents that participants produced when tested on their mini-language. Participants regularized 67% of the variation among homonyms (t(63) = −12.8169, p < .001) and 56% of the variation among synonyms (t(63) = −10.5526, p < .001). However, there was no significant difference between these conditions (t(126) = 1.3518, p = 0.18), suggesting that learners compress synonymous and homonymous variation similarly.
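As an illustration of the entropy-drop measure described above, the following sketch (not the authors' analysis code; the word forms are invented) shows how the proportion of input variation eliminated by a learner can be computed.

```python
# Illustrative sketch of quantifying regularization as a drop in entropy between
# the training distribution and the distribution of a participant's productions.
from collections import Counter
from math import log2

def entropy(items):
    """Shannon entropy (bits) of the empirical distribution over `items`."""
    counts = Counter(items)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

def regularization(training, produced):
    """Proportion of input entropy eliminated in a participant's productions."""
    h_in = entropy(training)
    h_out = entropy(produced)
    return (h_in - h_out) / h_in if h_in > 0 else 0.0

# Hypothetical example: three synonyms used 3/2/1 times in training,
# but the learner produces only the most frequent form.
training = ["wug", "wug", "wug", "dap", "dap", "kem"]
produced = ["wug"] * 6
print(round(regularization(training, produced), 2))  # -> 1.0 (fully regularized)
```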

This experiment was repeated with non-linguistic stimuli, where participants learned the mappings between marbles and the different containers they were drawn from. Participants also regularized the non-linguistic stimuli, eliminating 42% of the variation among containers (t(63) = −7.277, p < .001) and 32% of the variation among marbles (t(63) = −6.6908, p < .001), again with no significant difference between conditions (t(126) = 1.5049, p = 0.13). This suggests a domain-general component to linguistic regularization. However, participants in the linguistic conditions regularized significantly more than those in the non-linguistic conditions (F(252) = 11.259,p < .001). We conclude that regularization results from general-purpose compression during learning, which can be ramped up for effective communication with linguistic stimuli, and operates similarly on synonyms and homonyms.

Citation:

Ferdinand V. and Spike M. (2016). Word Learners Regularize Synonyms And Homonyms Similarly. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/82.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ferdinand, Spike 2016

Kauffman's Adjacent Possible In Word Order Evolution

Ramon Ferrer-I-Cancho
Universitat Politècnica de Catalunya

Keywords: word order evolution, typology, Kauffman's adjacent possible

Short description: A new model of the variation and evolution of word order beats the traditional dual two-way approach of standard typology.

Abstract:

Word order evolution has been hypothesized to be constrained by a word order permutation ring: transitions involving orders that are closer in the permutation ring are more likely. The hypothesis can be seen as a particular case of Kauffman's adjacent possible in word order evolution.

Here we consider the problem of how the six possible orders of S, V and O associate to yield a pair of primary alternating orders, as a window onto word order evolution. We evaluate the suitability of various competing hypotheses to predict one member of the pair from the other with the help of information-theoretic model selection. Our ensemble of models includes a six-way model based on the word order permutation ring (Kauffman's adjacent possible) and another model based on the dual two-way approach of standard typology, which reduces word order to preferences for basic orders (e.g., a preference for SV over VS and another for SO over OS). Our analysis indicates that the permutation ring yields the best model when parsimony is favored strongly, providing support for Kauffman's general view and for a six-way typology.
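For readers unfamiliar with the permutation ring, the sketch below illustrates the distance notion behind the six-way model, under the assumption that the ring is the adjacent-transposition cycle over the six orders of S, V and O.

```python
# Minimal sketch of the permutation-ring distance: transitions between orders
# that are closer on the ring are hypothesized to be more likely.

RING = ["SOV", "SVO", "VSO", "VOS", "OVS", "OSV"]  # each neighbour differs by one adjacent swap

def ring_distance(a, b):
    """Number of steps between two orders along the permutation ring."""
    i, j = RING.index(a), RING.index(b)
    d = abs(i - j)
    return min(d, len(RING) - d)

print(ring_distance("SOV", "SVO"))  # -> 1 (adjacent on the ring)
print(ring_distance("SOV", "VOS"))  # -> 3 (maximally distant)
```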

Citation:

Ferrer-I-Cancho R. (2016). Kauffman's Adjacent Possible In Word Order Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/83.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ferrer-I-Cancho 2016

Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody

Piera Filippi1 , Sebastian Ocklenburg2 , Daniel Liu Bowling3 , Larissa Heege4 , Albert Newen2 , Onur Güntürkün2 and Bart de Boer5
1 Vrije Universiteit Brussel, Ruhr University Bochum
2 Ruhr University Bochum
3 University of Vienna
4 University of Wuppertal
5 Vrije Universiteit Brussel

Keywords: emotion, prosody, multimodal communication

Short description: Prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions.

Abstract:

Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g. body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntarily biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources, within a Stroop interference task (i.e. a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we used synonyms of “happy” and “sad” spoken with happy and sad prosody, while a happy or sad face was displayed. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task and less accurately when the two non-focused channels were expressing an emotion that was incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent with verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere with emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice – which appears to be dominant over verbal content – is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010).
This hypothesis fits with quantitative data suggesting that prosody has a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within visual and auditory domains, providing new insights for models of language evolution. Further work examining how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real-life interactions.

Citation:

Filippi P., Ocklenburg S., Bowling D. L., Heege L., Newen A., Güntürkün O. and de Boer B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/90.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Filippi, Ocklenburg, Bowling, Heege, Newen, Güntürkün, de Boer 2016

Humans Recognize Vocal Expressions Of Emotional States Universally Across Species

Piera Filippi1 , Jenna V. Congdon2 , John Hoang2 , Daniel Liu Bowling3 , Stephan Reber3 , Andrius Pašukonis3 , Marisa Hoeschele3 , Sebastian Ocklenburg4 , Bart de Boer5 , Christopher B. Sturdy2 , Albert Newen4 and Onur Güntürkün4
1 Vrije Universiteit Brussel, Ruhr University Bochum
2 University of Alberta
3 University of Vienna
4 Ruhr University Bochum
5 Vrije Universiteit Brussel

Keywords: arousal perception, prosody, animal communication

Short description: Humans recognize emotion in the vocalizations of mammals, birds, reptilians and amphibians using a frequency-related acoustic parameter.

Abstract:

The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the ability to process meaningful prosodic modulations in emerging linguistic utterances.

Research has shown that humans are able to recognize different levels of arousal in mammalian calls. However, to our knowledge, no study has ever examined humans’ cross-cultural ability to identify different arousal levels in calls of species belonging to several phylogenetically distant taxa, including, for instance, mammals and birds. We addressed this issue by developing a task in which human participants from three different cultures (Canadian, German, Mandarin) listened to ten pairs of vocalizations for each of nine different vertebrate taxa. We used amplitude-controlled calls from the following species: hourglass treefrog, American alligator, black-capped chickadee, common raven, domestic pig, giant panda, African elephant, Barbary macaque, and human. Calls within each pair differed in arousal level, which was assessed based on the behavioral context of call production (Avey et al., 2011; Bowling et al., 2012; Fischer et al. 1995; Linhart et al., 2015; Reichert 2013; Stoeger et al., 2011, 2012). For each pair of vocalizations, participants were asked to identify the call with the higher level of arousal. Accuracy in identifying arousal in each species was higher than expected by chance in all three cultures. No significant differences were observed between cultures. This finding provides empirical support for Darwin’s hypothesis on the universality of vocal emotional communication. In order to better understand the mechanisms underlying emotional intensity recognition in our set of calls, we investigated which acoustic parameters correlate with participants’ correct responses. We performed this analysis in two steps. First, we identified two acoustic features measurable in all calls of our stimulus set: duration and a frequency-related measure, the spectral center of gravity. Second, we calculated the duration ratio and the spectral center of gravity ratio for each pair of calls and correlated these two feature comparisons with the percentage of correct responses across pairs. Our data indicate that, of these features, only the spectral center of gravity significantly correlates with the ability to discriminate high-arousal calls across our set of animal species. Further work within this research paradigm will provide quantitative data on shared mechanisms involved in the production and perception of emotional vocalizations across animal taxa, investigating the perception of arousal in nonhuman species. This may improve our understanding of the semantic value of prosody in animal communication, and of its role in the emergence of human language.
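The following sketch illustrates, in simplified form, the two-step acoustic analysis described above; it is not the authors' code, and the FFT-based centroid used here merely stands in for whatever spectral estimator was actually applied.

```python
# Illustrative sketch: for each pair of calls, compute the ratio of spectral
# centres of gravity, then correlate the ratios with the percentage of correct
# responses across pairs. Requires numpy and scipy.
import numpy as np
from scipy.stats import pearsonr

def spectral_center_of_gravity(signal, sr):
    """Amplitude-weighted mean frequency (Hz) of a call's magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def scg_ratio(high_call, low_call, sr):
    """Spectral centre of gravity ratio between the high- and low-arousal call of a pair."""
    return spectral_center_of_gravity(high_call, sr) / spectral_center_of_gravity(low_call, sr)

# Hypothetical usage: `ratios` and `pct_correct` would hold one value per call pair.
# r, p = pearsonr(ratios, pct_correct)
```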

Citation:

Filippi P., Congdon J. V., Hoang J., Bowling D. L., Reber S., Pašukonis A., Hoeschele M., Ocklenburg S., de Boer B., Sturdy C. B., Newen A. and Güntürkün O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/91.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Filippi, Congdon, Hoang, Bowling, Reber, Pašukonis, Hoeschele, Ocklenburg, de Boer, Sturdy, Newen, Güntürkün 2016

Do Lab Attested Biases Predict The Structure Of A New Natural Language?

Molly Flaherty , Katelyn Stangl and Susan Goldin-Meadow
The University of Chicago

Keywords: sign language, learner biases, harmonic ordering, Nicaraguan Sign Language, natural language emergence

Abstract:

Typological analysis clearly shows that the world’s languages are not evenly distributed among all logically possible patterns. Recent studies (e.g., Culbertson, Smolensky, & Legendre, 2012; Fedzechkina, Jaeger, & Newport, 2012; Culbertson & Newport, 2015) on the emergence of language structure in the lab find that the most common typological patterns in languages around the world are generally the patterns adults prefer when learning an artificial language. Accordingly, the researchers conclude that these most common patterns are the product of learner biases (cognitive or communicative) toward certain types of structure. Here we explore this question in a new natural language: Nicaraguan Sign Language (NSL). We investigate whether signers of this new language will use the most typologically common orders for the elements of a noun phrase.

NSL, one of the youngest languages known to science, was born in the late 1970s with the founding of a new school for special education. The first students to enter the school were homesigners: isolated deaf individuals who develop their own gesture systems in order to communicate with the individuals around them. When these homesigners came together in the 1970s, the stage was set for the creation of a new language, and the first cohort of NSL was formed. Though instruction was in written and spoken Spanish, students soon began to communicate with one another manually. As succeeding cohorts of students learn NSL, the language itself is changing rapidly.

Following Culbertson et al. (2012) and Culbertson & Newport (2015), we examine the ordering of noun, adjective, and number elements within noun phrases in NSL. Culbertson and colleagues found that harmonic orders (in which the adjective and number are either both prenominal or both postnominal) were preferred over non-harmonic orders (in which the noun comes between the other two elements), consistent with the typological pattern reported by Dryer (2008). We showed participants a series of cards depicting a set of objects (e.g., dogs or cars); set size varied from 1 to 4, and objects were either large or small. We asked participants to describe the content of each card, and determined the ordering of noun phrase elements produced by signers in three successive age cohorts of NSL: Cohort 1 (n=9) who came together in the 1970s and formed NSL; Cohort 2 (n=9) and Cohort 3 (n=6) who were exposed to NSL upon school entry between the early 1980s and early 2000s. NSL signers have been shown to build increasingly complex linguistic structure over successive cohorts (Senghas & Coppola, 2001). Data collection was carried out in 2009 and again in 2015. The 2009 data collection included 6 participants (3 from Cohort 1, and 3 from Cohort 2); data were collected from all 24 participants in 2015, including the original 6.
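The harmonic/non-harmonic distinction at issue can be made concrete with a small sketch (illustrative only, not part of the study's coding scheme):

```python
# An order of noun (N), adjective (Adj) and numeral (Num) is harmonic when Adj
# and Num fall on the same side of the noun, and non-harmonic when the noun
# sits between them.

def is_harmonic(order):
    """`order` is a tuple like ('N', 'Adj', 'Num'); returns True if harmonic."""
    n = order.index("N")
    return n == 0 or n == len(order) - 1  # noun first or last: modifiers on one side

print(is_harmonic(("N", "Adj", "Num")))   # True  (both postnominal)
print(is_harmonic(("Num", "N", "Adj")))   # False (noun between the modifiers)
```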

In 2009 (Figure 1), we found that Cohort 1 signers and Cohort 2 signers preferentially produced harmonic orders (either noun-adjective-number or noun-number-adjective), with no significant difference between cohorts (β=-.90, p=.45). This pattern is in keeping with Culbertson et al.’s predictions: individuals creating a new language prefer harmonic orders, potentially reflecting the same biases that have shaped the attested typological pattern.

Interestingly, in 2015 (Figure 2), the pattern we observed was the same for Cohort 1, but not for Cohorts 2 and 3. Cohort 1 signers still preferred harmonic orders. In fact, all three Cohort 1 signers tested at both time points preferred harmonic orders in both 2009 and 2015. However, Cohort 2 signers now more often used non-harmonic number-noun-adjective order, and for Cohort 3 signers this was the most preferred order. The preference for non-harmonic order increases significantly with later cohort (β=-5.24, p<.02). Strikingly, this means that individual signers in the second cohort moved away from the harmonic pattern. Of the 3 Cohort 2 signers tested in both 2009 and 2015, all preferred harmonic orders in 2009 but only 1 of 3 still showed a harmonic preference in 2015.

We thus see harmonic ordering in the earliest stages of this new language, as the typological and experimental data would predict. The intriguing result is the relatively quick transition from a harmonic pattern to a non-harmonic pattern in Cohorts 2 and 3. Future work is needed to explore pressures leading NSL away from the typologically robust harmonic pattern (e.g., influence from Spanish, which has a non-harmonic pattern and might be transmitted through co-speech gesture).

Citation:

Flaherty M., Stangl K. and Goldin-Meadow S. (2016). Do Lab Attested Biases Predict The Structure Of A New Natural Language?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/96.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Flaherty, Stangl, Goldin-Meadow 2016

Phoneme Inventory Size Distributions And The Origins Of The Duality Of Patterning

Luke Fleming
Département d'anthropologie, Université de Montréal

Keywords: duality of patterning, out-of-Africa, phonology

Short description: Extra-large phoneme inventories of southern African languages are the trace of the origin of the duality of patterning.

Abstract:

Atkinson (2011) claims that phoneme inventories are largest in Africa and smaller elsewhere, and that this clinal distribution reflects a ‘founder-effect’ of human migrations ‘out-of-Africa’. Because of the way in which velaric ingressive and pulmonic egressive airstream mechanisms combine to create extra-large consonant inventories, click languages have the largest phoneme inventories of all. Critics question why phoneme inventory size, but not other properties of language, should leave a trace of the origin and dispersal of natural language. This paper argues that large phoneme inventories would likely have been characteristic of the first fully modern languages if we assume, following Hockett (1960), that duality of patterning was the last ‘design feature’ of language to emerge. The diachronic trajectories of sign languages and writing systems illustrate that dually patterned phonologies, in which minimal units of linguistic form (or phonemes) capable of distinguishing semantic units (or morphemes) are not meaningful in themselves, are often preceded by a stage in which minimal units of form map directly onto semantic functions. Click articulations would have been essential in elaborating large inventories, and thus large vocabularies, in spoken languages lacking duality of patterning. The contemporary distribution of phonemic clicks offers support for the hypothesis, as genetic studies increasingly point to an eastern or southern African origin for modern humans.

Citation:

Fleming L. (2016). Phoneme Inventory Size Distributions And The Origins Of The Duality Of Patterning. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/12.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
© Fleming 2016

Cooperative Communication And Communication Styles In Bonobos And Chimpanzees In The Wild: Same Same But Different?

Marlen Fröhlich1 , Paul H Kuchenbuch1 , Gudrun Müller1 , Barbara Fruth2 , Takeshi Furuichi3 , Roman M Wittig4 and Simone Pika1
1 Max Planck Institute for Ornithology
2 Ludwig Maximilian University of Munich
3 Kyoto University
4 Max Planck Institute for Evolutionary Anthropology

Keywords: Pan troglodytes, Pan paniscus, bonobo-chimpanzee dichotomy, gesture, cooperation, conversation analysis

Abstract:

Human language is manifested in fast-paced and extensive social interactions, thereby representing an essentially cooperative endeavour. It has been repeatedly claimed that the cognitive skills related to participation in cooperative communication are unique to the human species (Levinson, 1995; Tomasello, 2008). One way to enable a better understanding of the factors and pressures triggering the evolution of language is the comparative approach, which uses empirical evidence from living species to draw inferences about communicative abilities in our ape-like ancestors. Rossano (2013) recently provided evidence that the structure of communicative interactions between mother-infant dyads of captive bonobos is strikingly similar to the sequential structure of social action in human conversation. Using parameters established in human conversation analysis, he found that two dyads frequently established participation frameworks, engaged in cooperative adjacency-pair structures, and communicated at a pace that strongly resembled the timing of ordinary human conversation (Stivers et al., 2009). In the present study, we aimed to investigate and expand some of the parameters used by Rossano (2013) in situ, that is, in mother-infant dyads of chimpanzees (Pan troglodytes) and bonobos (Pan paniscus) living in their natural environments. Although previous behavioural comparisons of the two sister species revealed a remarkable dichotomy in crucial aspects of their social matrix, a direct systematic comparison of their communicative skills is to date non-existent.

Since true differences in communicative abilities between two species can only be proposed if within-species variability is taken into account (Boesch, 2007), we compared communicative interactions of 25 mother-infant dyads in two different chimpanzee and two different bonobo communities: Taï South in Taï National Park, Côte d’Ivoire (Pan t. verus), Kanyawara in Kibale National Park, Uganda (P. t. schweinfurthii), Wamba in the Luo Scientific Reserve, DRC, and LuiKotale in Salonga National Park, DRC. We focused on the single communicative function of mother-infant joint travel, since previous studies suggested that this is a fruitful context enabling the observation of frequent communicative exchanges in mother-infant dyads about a distinct goal: leaving a location (Rossano, 2013). The following criteria of human communicative interactions were analysed: (i) formation of participation frameworks before signal production, by analysis of gaze, body orientation and initiation distance; (ii) adjacency pair-like sequences, by analysis of gestural pursuits and response waiting after each pursuit; and (iii) the timing between signal and response. We analysed a total of 415 chimpanzee and 316 bonobo joint-travel interactions filmed during 2200 hours of observation. Overall, our results showed that both bonobo and chimpanzee mother-infant dyads have the capacity and motivation to engage in cooperative communication. Moreover, the two species differed significantly in terms of all three investigated criteria. While gaze, close initiation distance and fast-paced responses were features of bonobo mother-infant interactions, chimpanzees performed a larger number of gestural pursuits, more response waiting and more ‘delayed’ responses. Notably, none of these findings could be explained by mere within-species variability.

Taken together, we provided compelling evidence that our two closest living relatives differ regarding temporal patterns and styles of their gestural communication. Bonobos seem to anticipate and respond to signals before they have even been entirely executed, while chimpanzees frequently engage in more prolonged communicative negotiations. Nevertheless, both Pan species use sequentially organised, cooperative social interactions to achieve a mutual goal: leaving together for another location. Communicative interactions of bonobos and chimpanzees thus reflect crucial features of human social action during conversation, implying that cooperative communication emerged as a means to efficiently coordinate collaborative activities. Our study thus corroborates the hypothesis that the cognitive prerequisites for human language as a collaborative enterprise must have evolved in the primate lineage long before speech arose in modern humans (Levinson, 2006; Seyfarth & Cheney, 2008). Hence, our findings add a further facet to the Pan dichotomy and, as such, aid in pinpointing some of the crucial factors influencing language evolution.

References

Boesch, C. (2007). What makes us human (homo sapiens)? The challenge of cognitive cross-species comparison. Journal of Comparative Psychology, 121(3), 227-240.

Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.

Levinson, S. C. (2006). On the human “interaction engine”. In N. J. Enfield & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 39-69). Oxford: Berg.

Rossano, F. (2013). Sequence organization and timing of bonobo mother-infant interactions. Interaction Studies, 14(2), 160-189.

Seyfarth, R. M., & Cheney, D. L. (2008). Primate social knowledge and the origins of language. Mind & Society, 7(1), 129-142.

Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., & Heinemann, T. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 106(26), 10587-10592.

Tomasello, M. (2008). Origins of human communication. Cambridge, MA: MIT Press.

Citation:

Fröhlich M., Kuchenbuch P. H., Müller G., Fruth B., Furuichi T., Wittig R. M. and Pika S. (2016). Cooperative Communication And Communication Styles In Bonobos And Chimpanzees In The Wild: Same Same But Different?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/75.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Fröhlich, Kuchenbuch, Müller, Fruth, Furuichi, Wittig, Pika 2016

Integration Or Disintegration?

Koji Fujita and Haruka Fujita
Kyoto University

Keywords: Integration Hypothesis, Disintegration Hypothesis, motor control origin of Merge, separation of affect, exocentric compounds

Short description: Human language evolved by disintegrating one system into two, not by integrating two systems into one.

Abstract:

Contra Miyagawa et al.’s (2013, 2014) Integration Hypothesis, we propose the Disintegration Hypothesis of human language evolution, which says that the E and L systems are mixed together in nonhuman animal communication systems and became separated only in human language. We support our hypothesis by carefully examining and answering three relevant questions. (1) Do other animals really have E and L systems? (2) If yes, are these nonhuman versions exactly the same as the human counterparts? (3) How did these two systems get combined only in the human lineage?

Citation:

Fujita K. and Fujita H. (2016). Integration Or Disintegration?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/16.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Fujita, Fujita 2016

Migration As A Window Into The Coevolution Between Language And Behavior

Victor Gay*1 , Daniel Hicks*2 and Estefania Santacreu-Vasut*3
* These authors contributed equally to the work
1 University of Chicago
2 University of Oklahoma
3 ESSEC Business School and THEMA

Keywords: Language, Epidemiological approach, Migrants, Correlations, Cultural change

Short description: We propose migrations to the same country as a microevolutionary step that may allow us to uncover whether language must have evolved partly as a result of cultural change.

Abstract:

Understanding the causes and consequences of language evolution in relation to social factors is challenging as we generally lack a clear picture of how languages coevolve with historical social processes. Research analyzing the relation between language and socio-economic factors relies on contemporaneous data. Because of this, such analyses may be plagued by spurious correlations arising from the historical co-evolution of language and behavior and from the dependency of their relationship on the institutional environment. To solve this problem, we propose migrations to the same country as a microevolutionary step that may uncover constraints on behavior. We detail strategies available to other researchers by applying the epidemiological approach to study the correlation between sex-based gender distinctions and female labor force participation. Our main finding is that language must have evolved partly as a result of cultural change, but also that it may have directly constrained the evolution of norms. We conclude by discussing implications for the coevolution of language and behavior, and by comparing different methodological approaches.

Citation:

Gay V., Hicks D. and Santacreu-Vasut E. (2016). Migration As A Window Into The Coevolution Between Language And Behavior. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/120.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Gay, Hicks, Santacreu-Vasut 2016

Effects Of Task-specific Variables On Auditory Artificial Grammar Learning And Generalization

Andreea Geambasu1 , Michelle J. Spierings2 , Carel ten Cate2 and Clara C. Levelt1
1 Leiden University Centre for Linguistics
2 Institute for Biology Leiden

Keywords: acquisition, artificial grammar learning, attention, feedback, artificial language learning

Abstract:

Extraction and generalization of rules from stimuli that share an underlying structure is one of the bedrocks of language acquisition. This rule-learning ability has been shown time and again in adults using both simple and complex grammars in the auditory, visual, and tactile domains. Understanding the conditions under which simple rule learning can occur and to what extent learning is implicit or explicit is essential for understanding what the fundamentals of language acquisition are and whether language acquisition may have evolved from simpler pattern-extraction mechanisms. Inconsistencies in the experimental methodologies used to demonstrate rule learning make it important to explore the precise conditions that influence how well learners perform in such tasks. To this end, we conducted four auditory artificial grammar learning experiments with 12 conditions (n=192) using XYX and XXY grammars similar to those used in Marcus et al. (1999). In Experiments 1-3, ten participant groups received passive familiarization with one of the two grammars and were tested with a yes/no paradigm. In Experiment 4, two groups were exposed to one of the two grammars via reinforced training until criterion and were tested in a go-left/go-right task. Across these four experiments, we manipulated the following experimental factors: vagueness of instructions, input variety, presence or absence of feedback, and types of testing items.
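For concreteness, a minimal sketch of the two triplet grammars follows; the syllable inventory shown is hypothetical and is not the experimental stimulus set.

```python
# Minimal sketch of XYX and XXY triplet grammars in the style of Marcus et al.
# (1999): XYX repeats the first syllable in final position, XXY in medial position.
import random

SYLLABLES = ["ga", "li", "ni", "ta", "wo", "fe"]  # hypothetical syllable set

def make_triplet(grammar, syllables=SYLLABLES):
    """Generate one triplet consistent with 'XYX' or 'XXY'."""
    x, y = random.sample(syllables, 2)  # two distinct syllables
    return (x, y, x) if grammar == "XYX" else (x, x, y)

print(make_triplet("XYX"))  # e.g. ('ta', 'wo', 'ta')
print(make_triplet("XXY"))  # e.g. ('li', 'li', 'ni')
```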

In Experiment 1, instructions were “undirected,” not directing participants’ attention to the underlying structure. Instructions asked participants to indicate whether the test items were part of the same “language” or “group” as in the listening phase. To study the effect of variety in the input on generalization, participants were further divided into groups exposed to either 3 or 15 triplets, for a total of 45 trials in both cases. Test items followed either the consistent or an inconsistent grammar and were made up of either familiar or novel syllables, constituting “undirected” testing in which the test items could not direct participants to what they should attend to. In Experiment 2, instructions were “directed,” telling participants that the exposure sounds followed a certain “pattern,” and that they should indicate whether test items followed that same pattern. As in Experiment 1, participants either heard 3 or 15 triplets during the familiarization phase, and testing was “undirected,” consisting of both familiar and unfamiliar sounds. In Experiment 3, we again compared the role of instruction and of number of familiarization triplets, but now used “directed” testing, meaning only novel sounds were used (directing participants’ attention away from processing at the sound level). Finally, in Experiment 4, we again varied the number of exposure triplets and used undirected testing, but now trained participants, without instruction, in a reinforced go-left/go-right task. When they reached criterion, they continued (now without feedback) with the same procedure of categorizing the test items as either a left-side sound or a right-side sound, where each grammar corresponded with one of the sides.

Our results show that participants were able to apply the rule to test items composed of previously heard sounds, independently of our experimental manipulations, discriminating the two grammars significantly above chance in all conditions. However, they were not able to generalize the rule to novel sounds if they were not somehow “directed,” either through directed instruction, directed testing, or feedback training. Notably, variety in number of exposure triplets during familiarization did not affect generalization, with no significant difference in performance between participants exposed to 3 or 15 triplets. It thus seems that in order to generalize simple rules beyond their surface form, participants require their attention to be directed, supporting recent findings in a dual-mechanism account of AGL (Opitz & Hofmann, 2015). These results have implications for the design of future AGL experiments and for theories of implicit vs. explicit AGL. A comprehensive understanding of language learning must integrate the evolution of a primary similarity-detection and an attention-based rule-detection mechanism.

References

Marcus, G.F., Vijayan, S., Bandi Rao, S. and Vishton, P.M. (1999). Rule learning in 7-month-old infants. Science, 283, 77–80.

Opitz, B. and Hofmann, J. (2015). Concurrence of rule- and similarity-based mechanisms in artificial grammar learning. Cognitive Psychology, 77, 77-99.

Citation:

Geambasu A., Spierings M. J., ten Cate C. and Levelt C. C. (2016). Effects Of Task-specific Variables On Auditory Artificial Grammar Learning And Generalization. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/161.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Geambasu, Spierings, ten Cate, Levelt 2016

Intentional Meaning Of Bonobo Gestures

Kirsty Graham , Catherine Hobaiter and Richard Byrne
University of St Andrews

Keywords: bonobo, great ape, gesture, meaning

Short description: A bonobo-chimp dictionary: a look at the meanings of wild bonobo gestures, and how they differ from chimpanzees.

Abstract:

Unlike linguists, animal communication researchers cannot ask their subjects what they mean. Most animal communication is non-intentional, and the function of a signal can be assessed by looking at the outcome (Seyfarth, Cheney, & Marler, 1980). However, great ape gestural communication is intentional. Great apes direct their gestures towards a specific recipient; check the attention of that recipient; wait for that recipient to respond; and, if the recipient does not respond, the signaller persists and elaborates (Call & Tomasello, 2007; Cartmill & Byrne, 2007; Leavens & Hopkins, 1998; Tomasello, George, Kruger, Farrar, & Evans, 1985). These behaviours show that the signaller begins with an intended goal and uses gestures in order to achieve that goal; the signal therefore has meaning, in the sense of Gricean first order intentional meaning (Grice, 1969). To determine a signal’s intentional meaning, we cannot just look at the outcome, as with its biological function. Rather, we see which outcome satisfies the signaller, showing that the “apparently satisfactory outcome” (ASO) matched the signaller’s original intended goal (Cartmill & Byrne, 2011; Hobaiter & Byrne, 2014). Under the natural conditions likely to elicit a full range of intended meanings, this method of defining the meaning of great ape gestures has so far been used only for wild chimpanzees (Hobaiter & Byrne, 2014). In our current research, we apply the same method to the gestural communication of wild bonobos. Bonobos are chimpanzees’ closest living relatives, having diverged approximately 0.8-0.9 MYA (Becquet & Przeworski, 2007; Won & Hey, 2005). Despite genetic closeness, their social systems are remarkably different. Bonobo females form the centre of parties and high-ranking females outrank males; they engage in frequent genito-genital rubbing and other forms of non-conceptive copulation; and they interact peacefully with neighbouring communities (Furuichi, 2011; Idani, 1990; Kano, 1980). We already know that the chimpanzee and bonobo gestural repertoires overlap significantly in gesture form (Graham, Hobaiter, & Byrne, 2015), but not whether these shared gestures also have the same meanings. The data for this paper come from 900 hours of focal individual data and 4381 video clips from focal behaviour filming, collected during two six-month field seasons at Wamba, DR Congo. In order to catalogue the bonobo repertoire and examine the meaning of their gestures, we extracted gestures that met criteria for intentionality, in particular those that allowed us to recognize ASOs of the signaller. This paper will concentrate on differences in gesture form and meaning between bonobos and chimpanzees, and relate these differences to the strikingly different social and sexual behaviour of the two species.

Citation:

Graham K., Hobaiter C. and Byrne R. (2016). Intentional Meaning Of Bonobo Gestures. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/22.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Graham, Hobaiter, Byrne 2016

The Impact Of Communicative Network Structure On The Conventionalization Of Referring Expressions In Gesture

Matt Hall , Russell Richie and Marie Coppola
University of Connecticut

Keywords: conventionalization, network structure, referring expressions, lexicon, sign language, gesture

Short description: How do people agree on what to call things? A gesture-based experiment finds it's not just who you talk to, it's who talks to each other.

Abstract:

The emergence of referring expressions is a critical component of the evolution of any linguistic system. Building on evidence from naturally-emerging sign languages as well as computational simulations, we use a behavioral experiment to test the hypothesis that the structure of a communicative network influences the processes of conventionalization for referring expressions. We ask hearing non-signers to engage in a gestural communication task, and randomly assign them to either a sparsely-connected or richly-connected network. Pairwise conventionalization is consistent in both conditions, but network-wide conventionalization is greater in the richly-connected network. This is the first time this effect has been demonstrated in a controlled experiment in which humans communicate in a natural linguistic modality (i.e. gesture). Differences in the number of communicative interactions may account for the network effect in the present data; results in the literature are mixed on this point.
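The abstract does not specify the exact topologies, but the contrast between the two conditions can be illustrated with a simple sketch in which the sparsely-connected network is a ring and the richly-connected network is a complete graph (an assumption for illustration only):

```python
# Hypothetical sketch of the two network conditions contrasted above.

def sparse_pairs(n):
    """Ring lattice: participant i interacts only with participant (i+1) mod n."""
    return [(i, (i + 1) % n) for i in range(n)]

def rich_pairs(n):
    """Complete graph: every pair of participants interacts."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

print(sparse_pairs(4))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
print(rich_pairs(4))    # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```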

Citation:

Hall M., Richie R. and Coppola M. (2016). The Impact Of Communicative Network Structure On The Conventionalization Of Referring Expressions In Gesture. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/134.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Hall, Richie, Coppola 2016

Plain Simple Complex Structures: The Emergence Of Overspecification In An Iterated Learning Setup

Stefan Hartmann1 , Peeter Tinits2 , Jonas Nölle3 , Thomas Hartmann4 and Michael Pleyer5
1 Johannes Gutenberg-Universität Mainz
2 Tallinn University
3 Aarhus University
4 Karlsruhe Institute of Technology
5 Universität Heidelberg

Keywords: Iterated Learning, Grammatical Complexity, Overspecification

Short description: Overspecification makes languages more complex - but given contextual pressures, complex structures can sometimes be the simpler solution.

Abstract:

see pdf

Citation:

Hartmann S., Tinits P., Nölle J., Hartmann T. and Pleyer M. (2016). Plain Simple Complex Structures: The Emergence Of Overspecification In An Iterated Learning Setup. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/144.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Hartmann, Tinits, Nölle, Hartmann, Pleyer 2016

Language Origins In Light Of Neuro-atypical Cognition And Speech Profiles

Wolfram Hinzen1 and Joana Rosselló2
1 ICREA & Universitat de Barcelona
2 Universitat de Barcelona

Keywords: Language evolution, speech, thought, schizophrenia, autism

Short description: Conditions like autism and schizophrenia suggest that speech is highly significant for thought, which may shed light on the nature of language and its evolution.

Abstract:

Language is not the expression of thought. Schizophrenia and ASD (Autism Spectrum Disorder) indicate the opposite. Speech in particular is shown to play a key role in the integration of communication and cognition that human language brings about, both in ontogeny and phylogeny.

Citation:

Hinzen W. and Rosselló J. (2016). Language Origins In Light Of Neuro-atypical Cognition And Speech Profiles. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/172.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Hinzen, Rosselló 2016

Deictic Tools Can Limit The Emergence Of Referential Symbol Systems

Elizabeth Irvine1 and Sean Roberts2
1 University of Cardiff
2 Max Planck Institute for Psycholinguistics, Nijmegen

Keywords: pointing, symbols, co-operation, gesture

Short description: Pointing & sequence organisation decreases need for symbols in co-operative tasks, symbols costly to set up so may emerge late

Abstract:

Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy with symbols only emerging in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.

Citation:

Irvine E. and Roberts S. (2016). Deictic Tools Can Limit The Emergence Of Referential Symbol Systems. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/99.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Irvine, Roberts 2016

Inferring The World Tree Of Languages From Word Lists

Gerhard Jaeger1 and Soeren Wichmann2
1 University of Tuebingen
2 Leiden University & Kazan Federal University

Keywords: world tree of languages, phylogenetic inference, cultural language evolution

Abstract:

Since its launch in 2007, the Automated Similarity Judgment Program has collected basic vocabulary lists from more than 6,000 languages and dialects, covering close to two thirds of the world’s languages. Using these data and techniques from computational biology, such as weighted sequence alignment and phylogenetic inference, we computed a phylogenetic language tree covering all continents and language families. Our method relies on word lists in phonetic transcription only, i.e. it does not rely on expert cognacy judgments. This decision enabled us to perform inference across the boundaries of language families. The world tree of languages thus obtained largely recaptures the established classification of languages into families and their sub-groupings. Additionally, it reveals intriguing large-scale patterns pointing to a statistical signal from deep time.
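A much-simplified sketch of a word-list-based language distance of the kind that feeds such phylogenetic inference; the actual method uses weighted sequence alignment, for which plain length-normalized Levenshtein distance stands in here, and the two-concept word lists are invented.

```python
# Illustrative sketch: mean normalized edit distance between two word lists.

def levenshtein(a, b):
    """Edit distance between two phonetically transcribed word forms."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def language_distance(list_a, list_b):
    """Mean length-normalized edit distance across shared concepts."""
    shared = set(list_a) & set(list_b)
    dists = [levenshtein(list_a[c], list_b[c]) / max(len(list_a[c]), len(list_b[c]))
             for c in shared]
    return sum(dists) / len(dists)

# Hypothetical two-concept word lists in rough phonetic transcription
german = {"hand": "hant", "water": "vas3r"}
english = {"hand": "h3nd", "water": "wat3r"}
print(round(language_distance(german, english), 2))
```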

Citation:

Jaeger G. and Wichmann S. (2016). Inferring The World Tree Of Languages From Word Lists. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/147.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Jaeger, Wichmann 2016

Effort Vs. Robust Information Transfer In Language Evolution

T. Florian Jaeger1 and Maryia Fedzechkina2
1 University of Rochester
2 University of Pennsylvania

Keywords: Language learning, Language production, Language universals, Efficient information transfer, Morpho-syntax, Miniature artificial language learning

Abstract:

In his seminal work, Zipf (1949) popularized the hypothesis that languages are shaped by a trade-off between production effort and robust message transfer. It is hard to overestimate the influence this idea has had in functional linguistics and related approaches. Yet, to this day, there is little direct (rather than correlational) evidence for this trade-off.



Recent large-scale quantitative typological studies have shown that lexicon structure in a variety of languages exhibits properties that are consistent with the hypothesized trade-off (e.g., Piantadosi, Tily, & Gibson, 2011). Iterated miniature language learning studies have identified a potential cause for these patterns: biases during learning and communication cause learners to deviate from the input towards languages that conserve effort while still guaranteeing robust communication (e.g., Kirby, Tamariz, Cornish, & Smith, 2015).



While this work has identified patterns consistent with the trade-off hypothesis, it has not manipulated effort or the chance of communicative success to directly test the presence of a trade-off. Here we present a crowdsourcing-based miniature language learning experiment that directly assesses whether learners trade off the probability of successful message transmission against the effort associated with producing the message.

We ask in particular whether the inverse correlation between word order (WO) flexibility and the presence of a case system in a language is shaped by this trade-off.



In the experiment (administered in 2x45min sessions over 2 consecutive days via Amazon Mechanical Turk), different groups of participants learned miniature artificial languages by watching short videos and hearing their descriptions. All videos depicted human actors performing simple transitive events. Participants first learned the names of the actors and then learned the grammar through sentence exposure. At the end of each session, participants were shown the entire lexicon of the language at the top of the screen and asked to describe previously unseen scenes by clicking on the corresponding lexical items. All languages had optional case-marking (present on 67% of objects; never on subjects). The languages differed in the amount of WO flexibility: The fixed WO language used SOV 100% of the time, while the flexible WO language used SOV and OSV equally frequently. Thus, the uncertainty about the intended message was low in the fixed and high in the flexible WO language. The critical manipulation was the amount of effort required to produce case. During the production test, participants in the low-effort condition were shown a case-marked and non-case-marked variant of every noun (along with all the verbs). Case production, thus, required the same number of clicks as production of bare nouns. In the high-effort condition, participants saw non-case-marked variants of all nouns along with the two free case-markers. Case production, thus, took 2 additional clicks compared to bare nouns.
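
A minimal sketch of the 'uncertainty about the intended message' in the two input languages, using only the proportions stated above (50/50 SOV vs. OSV, case marking on 67% of objects); treating role assignment as a binary choice is a simplification of our own, not part of the experimental design.

```python
from math import log2

def role_entropy(p_sov):
    """Entropy (bits) over agent/patient assignment when no case marker is present."""
    if p_sov in (0.0, 1.0):
        return 0.0
    return -(p_sov * log2(p_sov) + (1 - p_sov) * log2(1 - p_sov))

P_CASE = 0.67  # proportion of objects carrying the optional case marker

for name, p_sov in [("fixed WO", 1.0), ("flexible WO", 0.5)]:
    # With a case marker the roles are unambiguous; without one, word order must decide.
    expected_uncertainty = (1 - P_CASE) * role_entropy(p_sov)
    print(f"{name}: expected residual role uncertainty = {expected_uncertainty:.2f} bits")
```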



If production effort is indeed traded off against robust message transfer, we would expect learners in the high-effort condition to use more case in the language with higher uncertainty about the intended meaning (flexible WO) compared to the language with lower uncertainty (fixed WO). Since there is no difference in effort associated with case use in the low-effort condition, differential case use would not be expected here. The results support our hypothesis. We observed differential case use only in the high-effort condition: Learners tended to maintain case only in the flexible WO language. In contrast, in the low-effort condition, learners of both languages produced the same amount of case, equal to the input proportion.



Our findings suggest that some cross-linguistic patterns are shaped by a trade-off between production effort and robust message transmission. Even though the difference in uncertainty about the message between the flexible and fixed WO languages was equal across the two effort conditions, learners restructured the input language to more closely resemble naturally occurring types only when case production required a substantial effort increase. Our results also highlight the potential of web-based multi-day experiments as an alternative to substantially more expensive and time-consuming lab paradigms.



References

Kirby, S., Tamariz, M., Cornish, H., & Smith, K. (2015). Compression and communication in the cultural evolution of linguistic structure. Cognition, 141, 87-102.

Piantadosi, S., Tily, H., & Gibson, E. (2011). Word lengths are optimized for efficient communication. Proc Natl Acad Sci USA, 108(9), 3526.

Zipf, G. K. (1949). Human Behavior and the Principle of Least Effort. Cambridge, MA: Addison-Wesley.

Citation:

Jaeger T. F. and Fedzechkina M. (2016). Effort Vs. Robust Information Transfer In Language Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/100.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Jaeger, Fedzechkina 2016

Nonlinear Biases In Articulation Constrain The Design Space Of Language

Rick Janssen1, Bodo Winter2, Dan Dediu1, Scott Moisik1 and Sean Roberts1
1 Max Planck Institute for Psycholinguistics
2 University of California, Merced

Keywords: iterated learning, nonlinear biases, slide whistle experiment

Short description: Subjects converge on stable regions in iterated learning experiment using nonlinear flutes. Anatomical biases might constrain the design space of language.

Abstract:

In Iterated Learning (IL) experiments, a participant’s learned output serves as the next participant’s learning input (Kirby et al., 2014). IL can be used to model cultural transmission and has indicated that weak biases can be amplified through repeated cultural transmission (Kirby et al., 2007). So, for example, structural language properties can emerge over time because languages come to reflect the cognitive constraints in the individuals that learn and produce the language. Similarly, we propose that languages may also reflect certain anatomical biases. Do sound systems adapt to the affordances of the articulation space induced by the vocal tract?

The human vocal tract has inherent nonlinearities which might derive from acoustics and aerodynamics (cf. quantal theory, see Stevens, 1989) or biomechanics (cf. Gick & Moisik, 2015). For instance, moving the tongue anteriorly along the hard palate to produce a fricative does not result in large changes in acoustics in most cases, but for a small range there is an abrupt change from a perceived palato-alveolar [ʃ] to alveolar [s] sound (Perkell, 2012). Nonlinearities such as these might bias all human speakers to converge on a very limited set of phonetic categories, and might even be a basis for combinatoriality or phonemic ‘universals’.

While IL typically uses discrete symbols, Verhoef et al. (2014) have used slide whistles to produce a continuous signal. We conducted an IL experiment with human subjects who communicated using a digital slide whistle for which the degree of nonlinearity is controlled. A single parameter (α) changes the mapping from slide whistle position (the ‘articulator’) to the acoustics. With α=0, the position of the slide whistle maps Bark-linearly to the acoustics. As α approaches 1, the mapping gets more double-sigmoidal, creating three plateaus where large ranges of positions map to similar frequencies. In more abstract terms, α represents the strength of a nonlinear (anatomical) bias in the vocal tract.
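
The abstract does not give the exact form of the α-controlled mapping, so the following is only a schematic sketch of how a single parameter could interpolate between a linear and a double-sigmoidal position-to-pitch function with three plateaus; the transition points and steepness value are assumptions made for illustration.

```python
import math

def position_to_pitch(x, alpha, steepness=30.0):
    """Map a normalised slide position x in [0, 1] to a normalised pitch in [0, 1].

    alpha = 0 gives a linear mapping; as alpha approaches 1 the curve becomes
    double-sigmoidal, with three plateaus separated by two steep transitions
    (placed here at x = 1/3 and x = 2/3, an illustrative choice).
    """
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Two stacked sigmoids produce a three-plateau curve in [0, 1].
    nonlinear = (sigmoid(steepness * (x - 1/3)) + sigmoid(steepness * (x - 2/3))) / 2.0
    return (1 - alpha) * x + alpha * nonlinear

for alpha in (0.0, 0.5, 1.0):
    samples = [round(position_to_pitch(x / 10, alpha), 2) for x in range(11)]
    print(f"alpha={alpha}: {samples}")
```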

Six chains (138 participants) of dyads were tested, each chain with a different, fixed α. Participants had to communicate four meanings by producing a continuous signal using the slide-whistle in a ‘director-matcher’ game, alternating roles (cf. Garrod et al., 2007).

Results show that for high αs, subjects quickly converged on the plateaus. This quick convergence is indicative of a strong bias, repelling subjects away from unstable regions already within a single participant. Furthermore, high αs led to the emergence of signals that oscillate between two (out of three) plateaus. Because the sigmoidal spaces are spatially constrained, participants increasingly used the sequential/temporal dimension. As a result, the average duration of signals with high α was ~100 ms longer than with low α. These oscillations could be an expression of a basis for phonemic combinatoriality.

We have shown that it is possible to manipulate the magnitude of an articulator-induced non-linear bias in a slide whistle IL framework. The results suggest that anatomical biases might indeed constrain the design space of language. In particular, the signaling systems in our study quickly converged (within-subject) on the use of stable regions. While these conclusions were drawn from experiments using slide whistles with a relatively strong bias, weaker biases could possibly be amplified over time by repeated cultural transmission, and likely lead to similar outcomes.

Citation:

Janssen R., Winter B., Dediu D., Moisik S. and Roberts S. (2016). Nonlinear Biases In Articulation Constrain The Design Space Of Language. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/86.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Janssen, Winter, Dediu, Moisik, Roberts 2016

Simple Agents Are Able To Replicate Speech Sounds Using 3d Vocal Tract Model

Rick Janssen1, Dan Dediu1 and Scott Moisik2
1 Max Planck Institute for Psycholinguistics
2 Max Planck Institute for Psycholinguistics

Keywords: agent modelling, anatomical biasing, evolutionary computation, neural networks

Short description: Simple neural network agents are able to replicate speech sounds using a 3D vocal tract model. Investigation of anatomical biases in population is now feasible.

Abstract:

Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural, cognitive, and environmental factors (e.g., Everett et al., 2015) to anatomical ones (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. In a population of speakers in which a proportion of individuals share certain anatomical properties, even subtle VT biases might become expressed at the population level (through, e.g., bias amplification; Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue of a human speaker: an agent. Our agent is designed to replicate speech sounds using a production module and a cognition module in a computationally tractable manner.

Previous agent models have often used more abstract (e.g., symbolic) signals (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005) to which we made numerous adjustments. Specifically, we used a 4th-order Bezier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits with as few as two parameters. Finally, we show that the samples map onto a well-dispersed region of the parameter space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate existing biases or introduce novel ones.
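
As an illustration of the kind of curve involved, the sketch below evaluates a 4th-order (five-control-point) Bezier curve for a toy mid-sagittal palate profile; the control-point layout and the two shape parameters ('doming' and 'anteriority') are hypothetical stand-ins, not the fitted model from the paper.

```python
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve (degree = len(control_points) - 1) at t in [0, 1]."""
    n = len(control_points) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * px for i, (px, _) in enumerate(control_points))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * py for i, (_, py) in enumerate(control_points))
    return x, y

def palate_profile(doming, anteriority, samples=9):
    """Toy 4th-order (five control point) mid-sagittal palate curve.

    'doming' and 'anteriority' stand in for the two free parameters mentioned in
    the abstract; the control-point layout is purely illustrative.
    """
    control_points = [
        (0.0, 0.0),                      # alveolar end
        (0.25 * anteriority, doming),    # anterior palate
        (0.5, doming),                   # palatal dome
        (0.75, doming * 0.8),
        (1.0, 0.0),                      # velar end
    ]
    return [bezier(control_points, i / (samples - 1)) for i in range(samples)]

for x, y in palate_profile(doming=0.6, anteriority=0.8):
    print(f"x={x:.2f}  height={y:.2f}")
```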

Our agent is able to control the VT model using the cognition module.

Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights, e.g., neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question of whether a very simple cognition module is able to replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is one worthy of attention in its own right.

Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter-space the agent has to explore. The NN we used consists of a triple-layered (fully-connected), directed graph. The input layer (three neurons) receives the formant frequencies of a target sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (first three formants) between target and replication is used as the fitness measure. Results show that sound replication is indeed possible, with Euclidean distance quickly approaching a close-to-zero asymptote.
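
A minimal sketch of this setup, assuming tanh units, no bias terms, and a random linear articulator-to-formant map in place of the actual 3D vocal tract model; the population size, mutation scheme, and all numeric values are illustrative, not those of the study.

```python
import math
import random

random.seed(0)

N_IN, N_HID, N_OUT = 3, 7, 12    # formant inputs, hidden units, articulator outputs

def new_network():
    """Random fully connected 3-7-12 feed-forward network (weights only, tanh units)."""
    return {
        "w1": [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)],
        "w2": [[random.uniform(-1, 1) for _ in range(N_HID)] for _ in range(N_OUT)],
    }

def forward(net, formants):
    hidden = [math.tanh(sum(w * x for w, x in zip(row, formants))) for row in net["w1"]]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in net["w2"]]

# Stand-in for the 3D vocal tract: a fixed random linear articulator-to-formant map
# (the real production module is an articulatory synthesizer, not this).
VT = [[random.uniform(-0.3, 0.3) for _ in range(N_OUT)] for _ in range(3)]
def produce_formants(articulators):
    return [sum(w * a for w, a in zip(row, articulators)) for row in VT]

def fitness(net, target):
    """Negative Euclidean distance between target formants and the replication."""
    replica = produce_formants(forward(net, target))
    return -math.sqrt(sum((t - r) ** 2 for t, r in zip(target, replica)))

def mutate(net, sigma=0.1):
    return {k: [[w + random.gauss(0, sigma) for w in row] for row in m] for k, m in net.items()}

# Simple truncation-selection evolutionary loop over a population of networks.
target = [0.5, -0.2, 0.8]                       # normalised stand-in formant values
population = [new_network() for _ in range(30)]
for generation in range(50):
    population.sort(key=lambda net: fitness(net, target), reverse=True)
    parents = population[:10]                   # keep the 10 fittest networks
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

population.sort(key=lambda net: fitness(net, target), reverse=True)
print("distance of best replication to target:", round(-fitness(population[0], target), 3))
```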

Statistical analysis should reveal whether the agent can also: a) Generalize: Can it replicate sounds it was not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.

Citation:

Janssen R., Dediu D. and Moisik S. (2016). Simple Agents Are Able To Replicate Speech Sounds Using 3d Vocal Tract Model. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/97.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Janssen, Dediu, Moisik 2016

Protolanguage Possibilities In A Construction Grammar Framework

Sverker Johansson
Dalarna University

Keywords: protolanguage, Construction Grammar, evolvability

Short description: Construction Grammar is evolvable, and a suitable framework for protolanguages.

Abstract:

Identifying possible stages of protolanguage critically depends on the underlying nature of language. Theories of language differ in evolvability, and in whether they permit protolanguage stages. In this presentation, I will study the protolanguage potential and evolvability of Construction Grammar. Postulating that CG is a biologically real description of language, I investigate its evolvability through a sequence of intermediate protolanguages.

Citation:

Johansson S. (2016). Protolanguage Possibilities In A Construction Grammar Framework. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/149.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Johansson 2016

Modeling Language Change Triggered By Language Shift

Anna Jon-And1 and Elliot Aguilar2
1 Centre for the Study of Cultural Evolution, Stockholm University
2 Department of Biology, University of Pennsylvania

Keywords: agent-based model, language change, language shift, language contact, Portuguese, Mozambique

Short description: We build an agent-based model of how language changes due to an influx of new speakers and test its predictions using data on Mozambican Portuguese.

Abstract:

Language shift is widely believed to accelerate change in the target language, an effect which is generally attributed to innovations introduced by new speakers during the second language acquisition (SLA) process (Thomason & Kaufman, 1988). If this hypothesis is correct, then the rate of contact-induced language change in a language shift context should be related to the rate at which second language (L2) speakers enter the population. Unfortunately, little diachronic data exists to test this hypothesis. The aim of the present paper is to model the mechanism that makes SLA accelerate language change on a population level and compare its predictions to a rare diachronic data set from the ongoing language shift in Maputo, Mozambique.

To model linguistic interaction, we adapted Jansson et al.’s model of creole formation (Jansson et al. 2015). At each time step, all speakers met in pairwise interactions and chose to utter one of n variants of a linguistic feature based on their probability distribution of usage. Each agent then modified their distribution of usage based on what they heard by using a linear updating rule with parameter l. After a round of interactions, population turnover occurred with some individuals dying and new L1 and L2 speakers entering the population with rates b and r, respectively. Newborn L1s chose two linguistic `parents’ at random and averaged their usage distributions to initialize their own. L2 individuals started with the population mean frequencies of usage. However, with probability μ, a newly recruited L2 speaker could assign all the probability mass to a `mutant’ variant. We explored the general behavior of the model in both fixed and expanding populations for 100 years, with 365 rounds of interaction per year. We then ran a specific set of runs parameterized by demographic data (number of L1 and L2 speakers) from Maputo over a thirty-two-year period (1975-2007). We compared our model runs with diachronic data on innovative preposition use and reduced verbal morphology in Maputo Portuguese from two time points (1993 & 2007), presuming that the use of the innovative forms was zero in 1975, as the spread of Portuguese through massive L2 acquisition started only after this year. The datasets comprise 12 hours of recordings with 20 participants in similar circumstances from each time point, where variation between innovative and conservative forms is quantified.
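
A minimal sketch of this interaction-and-turnover loop, under simplifying assumptions of our own: a fixed population size, only two variants, and illustrative values for l, b, r, and μ rather than the parameter settings explored in the paper.

```python
import random

random.seed(1)

N = 100                        # population size (kept fixed here; deaths balance recruitment)
L = 0.05                       # linear updating parameter l             (illustrative value)
B, R, MU = 0.05, 0.10, 0.10    # per-round L1 birth prob., L2 recruitment prob., mutation prob. mu

def normalise(p):
    total = sum(p)
    return [x / total for x in p]

def hear(hearer, variant):
    """Linear update: shift the hearer's usage distribution towards the variant just heard."""
    return normalise([(1 - L) * p + (L if i == variant else 0.0) for i, p in enumerate(hearer)])

# Two variants of one linguistic feature: index 0 = conservative, index 1 = innovative.
population = [[1.0, 0.0] for _ in range(N)]      # everyone starts fully conservative

for year in range(32):                           # 1975-2007, as in the Maputo runs
    for _ in range(365):
        # One round of random pairwise interactions.
        order = random.sample(range(N), N)
        for a, b in zip(order[::2], order[1::2]):
            va = random.choices((0, 1), weights=population[a])[0]
            vb = random.choices((0, 1), weights=population[b])[0]
            population[b] = hear(population[b], va)
            population[a] = hear(population[a], vb)
        # Population turnover.
        if random.random() < B:                  # newborn L1: average of two random 'parents'
            p1, p2 = random.sample(population, 2)
            population[random.randrange(N)] = normalise([(x + y) / 2 for x, y in zip(p1, p2)])
        if random.random() < R:                  # incoming L2 speaker
            mean = normalise([sum(p[i] for p in population) for i in range(2)])
            newcomer = [0.0, 1.0] if random.random() < MU else mean
            population[random.randrange(N)] = newcomer

usage = sum(p[1] for p in population) / N
print(f"usage of the innovative variant after 32 years: {usage:.3f}")
```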

As predicted, our results show that the rate of increase in usage of the novel variant was most strongly dependent on the rate at which L2 speakers entered the population, r, as well as the mutation rate, μ. In the Maputo runs, however, our data points did not fall within the 95% confidence intervals of any of our parameter groupings. We then modified the model to allow the L2 speakers to continue to introduce variation for the first five years they were in the population, to represent the fact that the SLA process occurs over time. Using the same criterion we found agreement between the simulation and the preposition data, while the verb data continued to diverge from model predictions. Importantly, our model assumed neutral evolution of the linguistic features. The departure of the verb data from our model predictions may indicate the presence of selection pressures or biases, for instance, the new verb forms being more economical.

Agent-based models have been successfully used in the field of cultural language evolution for explaining the emergence of linguistic structure (e.g. Kirby 2001), whereas change in already established structures seems to be more difficult to account for. Recent theoretical papers (Blythe & Croft, 2012; Pierrehumbert et al., 2014) have aimed at modeling the propagation of a single innovation (introduced by one speaker) in a population, thus accounting for language change with no pressure from contact. In these models, conditions such as biases and/or innovator network position are required for the novel variant to be successful. Our simulations demonstrate how, with minimal assumptions, novel variants can be introduced and spread in a population due to multiple introductions by different individuals. We thus suggest that this may be a basic typological difference between contact-induced and non-contact-induced language change, which would explain how SLA may increase language change in shift situations.

Citation:

Jon-And A. and Aguilar E. (2016). Modeling Language Change Triggered By Language Shift. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/156.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Jon-And, Aguilar 2016

The Evolution Of Zipf’s Law Of Abbreviation

Jasmeen Kanwal, Kenny Smith, Jennifer Culbertson and Simon Kirby
University of Edinburgh

Keywords: zipf's law of abbreviation, lexical change, artificial language task

Abstract:

As Zipf observed in 1935, human languages appear to exhibit an inverse relationship between word length and word frequency; the higher the frequency of a word, the shorter it tends to be. Since then, due to the increasing availability of large corpora, this inverse relationship (Zipf’s Law of Abbreviation, or ZLA) has been observed in a wide range of languages. We ask what causes so many languages to align length and frequency in this way, and investigate one explanatory hypothesis through an artificial language experiment.
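
As a concrete illustration of what the law amounts to quantitatively, the sketch below computes the correlation between (log) frequency and word length on an invented word-frequency list; the counts are made up for illustration, and any real test would of course use corpus frequencies.

```python
import math

# Toy illustration of Zipf's Law of Abbreviation: frequent words tend to be short.
toy_counts = {"the": 5000, "of": 4200, "and": 3900, "to": 3000, "a": 2800,
              "is": 2500, "information": 40, "university": 35,
              "communication": 25, "approximately": 12}

xs = [math.log(c) for c in toy_counts.values()]   # log frequency
ys = [len(w) for w in toy_counts]                 # word length in characters

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"correlation between log frequency and length: {pearson(xs, ys):.2f}")
```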

Citation:

Kanwal J., Smith K., Culbertson J. and Kirby S. (2016). The Evolution Of Zipf’s Law Of Abbreviation. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/64.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Kanwal, Smith, Culbertson, Kirby 2016

The Spontaneous Emergence Of Linguistic Diversity In An Artificial Language

Deborah Kerr and Kenny Smith
University of Edinburgh

Keywords: linguistic diversity, artificial language learning, social selection

Short description: Artificial language learning in social groups, evolution of linguistic diversity even with no functional pressure for social differentiation

Abstract:

We present an experimental paradigm, combining artificial language learning with the Minimal Group method borrowed from social psychology, and demonstrate the spontaneous emergence of linguistic diversity despite the absence of functional pressures for social differentiation.

Citation:

Kerr D. and Smith K. (2016). The Spontaneous Emergence Of Linguistic Diversity In An Artificial Language. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/112.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Kerr, Smith 2016

Evolution Of The Language-ready Brain: Warfare Or ‘mother Tongues’?

Chris Knight1 and Camilla Power2
1 University College London
2 University of East London

Keywords: cooperation, matrilocal, patrilocal, alloparenting, childcare, mother-tongues, warfare

Short description: Recent findings in population genetics lend support to the 'mother tongues' hypothesis

Abstract:

For language to evolve, group-level normativity, cooperation and mutual understanding must have intensified beyond the range of variation permitted by non-human primate social life. Although this general assumption is broadly shared, recent theoretical models to explain the necessary group-level cooperation have clustered around two poles. At one extreme, theorists have traditionally invoked inter-group conflict including warfare. At the other, scholars have invoked grandmothering and coalitionary alliances between females to share the burdens of childcare. These competing approaches make divergent predictions testable in the light of recently available evidence from population genetics.

Citation:

Knight C. and Power C. (2016). Evolution Of The Language-ready Brain: Warfare Or ‘mother Tongues’?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/77.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Knight, Power 2016

A General Auditory Bias For Handling Speaker Variability In Speech? Evidence In Humans And Songbirds.

Buddhamas Kriengwatana1, Paola Escudero2, Anne Kerkhoven1 and Carel ten Cate1
1 Institute for Biology Leiden
2 University of Western Sydney

Keywords: vowel normalization, zebra finch, vowel categorization, speech perception, comparative cognition

Abstract:

Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. A compelling example of our ability to distinguish speech sounds despite enormous variability arising from speaker, gender, and age differences is in the case of vowels. Despite the large between-speaker variation within a vowel category and striking overlap between vowel categories, human adults, pre-linguistic infants, and even nonhuman animals are able to classify vowels of different speakers and genders. How is this achieved, and is it achieved in the same way by human adults, infants, and nonhuman animals?

Perceptual adjustments to accommodate speaker differences in vowels may possibly be achieved pre-attentively via low-level processing mechanisms. Combined with findings suggesting that nonhuman animals also adjust for speaker differences, this raises the intriguing possibility that the vertebrate auditory system has a tendency to automatically accommodate speaker differences in vowel production. If this is the case, then exposure to speaker variability in vowel production need not be necessary for listeners to compensate for speaker and gender differences.

The aim of this study was to compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability.

Using a behavioral Go/No-go task and identical stimuli, we compared Australian English adults’ (naïve to Dutch) and zebra finches’ (naïve to human speech) ability to categorize /I/ and /ε/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. The Go/No-go task requires subjects to make a response towards vowel stimuli assigned to one category (Go) and to inhibit responses toward vowel stimuli assigned to the other category (No-go). Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. If experience with speaker variability in vowel production is necessary for successful normalization to occur, then we predicted that zebra finches and humans would not be able to discriminate the vowels of the second, new speaker.

Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. This study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. The role of experience with speaker-related variability may be to tune the auditory system to the most relevant acoustic parameters that define phonetic categories. Our results do not seem to be adequately explained by existing vowel normalization algorithms (e.g. formant ratios, exemplar-based models). Future investigations of alternative accounts of vowel normalization should incorporate the possibility of an auditory bias for disregarding between-speaker variability, and bear in mind that there are many similarities between humans and zebra finches in vocal production, characteristics of the acoustic vocal signal, auditory perception, and need for accurate perceptual categorization. Thus, it may not be so surprising that there are also parallels in perceptual mechanisms that allow both species to overcome the problem of separating variability associated with content of the signal from variability arising from the individual signaler.
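
For readers unfamiliar with the normalization algorithms mentioned above, the sketch below shows a generic formant-ratio scheme (expressing F1 and F2 relative to F3); the formant values are rough, invented numbers for a male and a female speaker, and this is not the specific algorithm evaluated in the study.

```python
# Generic formant-ratio normalization: dividing F1 and F2 by F3 roughly factors
# out overall vocal tract length differences between speakers. All values below
# are illustrative approximations, not measured data.
vowels = {
    ("male",   "I"): (400, 1900, 2500),
    ("male",   "E"): (550, 1750, 2450),
    ("female", "I"): (480, 2300, 3000),
    ("female", "E"): (660, 2100, 2950),
}

def formant_ratio(f1, f2, f3):
    """Express F1 and F2 relative to F3."""
    return round(f1 / f3, 2), round(f2 / f3, 2)

for (speaker, vowel), formants in vowels.items():
    print(speaker, vowel, "raw:", formants[:2], "normalized:", formant_ratio(*formants))
```

Running this shows the raw F1/F2 values of the two speakers diverging while the normalized values largely coincide per vowel, which is the kind of speaker compensation the bias account would make unnecessary as a learned step.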

Citation:

Kriengwatana B., Escudero P., Kerkhoven A. and ten Cate C. (2016). A General Auditory Bias For Handling Speaker Variability In Speech? Evidence In Humans And Songbirds.. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/29.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Kriengwatana, Escudero, Kerkhoven, ten Cate 2016

Cumulative Vocal Cultures In Orangutans And Their Ontogenetic Origin

Adriano Lameira1, Jeremy Kendal1 and Marco Gamba2
1 Durham University
2 University of Torino

Keywords: vocal cultures, great ape call, vocal learning, infant behaviour, cumulative culture

Short description: Cumulative vocal cultures in orangutans and their ontogenetic origin

Abstract:

Recently, several lines of evidence indicate that orangutans (Pongo sp.) – the earliest diverging great ape lineage – are capable of expanding their species-specific vocal repertoire with new (voiced and voiceless) calls. These calls are shared and learned between individuals of the same cultural community. Orangutans represent, thus, a desirable model species for the study of language and speech precursors within the human lineage, since human spoken languages are fundamentally characterized by being learned. In the first section of this talk, based on the largest and most comprehensive call database ever assembled in orangutans (currently comprising 9 wild populations across Sumatra and Borneo), and perhaps among any great ape species, we show that orangutan vocal repertoires across populations show a nested structure – a signature of cultural build up that indicates that orangutan vocal cultural repertoires emerge and culturally evolve through a process of “sound upon sound”. The identification of these orangutan vocal cultures raises, however, questions about their ontogenetic origins. In the second section of this talk, we present a case of extreme vocal malleability in a wild Sumatran orangutan infant (approximately 5 years old), who exhibits a repertoire 2- to 4-fold the size of that of adults. A modest dataset of fewer than 150 recordings collected from this young individual increased the hitherto described orangutan vocal repertoire by more than 20%. This flexibility verifies that assemblages of cultural calls may indeed be acquired by infant orangutans and subsequently passed on through generations. Altogether, our findings indicate that vocal cultures in orangutans are real and may ontogenetically emerge through similar mechanisms as human vocal cultures. Like children, orangutan infants exhibit a latent degree of vocal malleability that expressively surpasses that of adults, and they experience a process of cultural trimming within their “linguistic” community during the development of the adult repertoire. Once present in our last great ape common ancestor, similar vocal skills would have allowed the rise and preservation of vocal cultures comparable at a basic level with modern human spoken languages.

Citation:

Lameira A., Kendal J. and Gamba M. (2016). Cumulative Vocal Cultures In Orangutans And Their Ontogenetic Origin. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/60.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Lameira, Kendal, Gamba 2016

The Emergence Of Argument Marking

Sander Lestrade
CLS/RU

Keywords: grammatical argument marking, role marking, person indexing, cultural evolution, computer simulation

Short description: The emergence of grammatical role marking and person indexing from lexical ad hoc solutions is modeled in a multi-agent computer simulation.

Abstract:

The emergence of grammatical role marking and person indexing is modeled in a cognitively motivated, multi-agent computer simulation of language change. As the forms of frequently used words erode and their meanings desemanticize, they develop into maximally short forms with maximally general meanings, which eventually can no longer be used as referring expressions. Using an artificial language that initially does not have any grammatical argument-marking strategy whatsoever, it can be shown how lexical ad hoc solutions for event-role ambiguity develop into case marking, while referring expressions develop into verb indexes.

Citation:

Lestrade S. (2016). The Emergence Of Argument Marking. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/36.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Lestrade 2016

Learnability Pressures Influence The Encoding Of Information Density In The Lexicon

Molly Lewis and Michael C. Frank
Stanford University

Keywords: uniform information density, lexicon, word length, learnability

Short description: Learnability pressures influence the encoding of information density in the lexicon: Languages with more speakers have less uniform lexica

Abstract:

Learnability pressures influence the encoding of information density in the lexicon

Citation:

Lewis M. and Frank M. C. (2016). Learnability Pressures Influence The Encoding Of Information Density In The Lexicon. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/154.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Lewis, Frank 2016

A Developmental Perspective On Language Origin: Children Are Old Hands At Gesture

Casey Lister, Tiarn Burtenshaw, Nicolas Fay, Bradley Walker and Jeneva Ohan
University of Western Australia

Keywords: Language, Gesture, Evolution, Iconicity, Vocalisation, Development

Short description: Do children communicate more successfully through gestures or sounds? A developmental perspective on sign creation and iconicity among 6-12 year olds.

Abstract:

The capacity for language is a distinguishing feature of our species. A problem for those studying its origin is that our pre-linguistic ancestors, who used language in its earliest forms, no longer exist. Instead, we must draw conclusions based on studies of modern humans with fully fledged languages. This makes it difficult to assess the impact of culture and convention on the creation of novel sign systems. The current study addresses this issue through a referential communication task that examines how participants aged 6-12 years create novel sign systems using gestures or vocalisations (sounds that are not words). As children have less developed linguistic systems, and less exposure to conventionalised signs, they offer a new perspective on how people create novel sign systems when prevented from using their pre-existing language.

Citation:

Lister C., Burtenshaw T., Fay N., Walker B. and Ohan J. (2016). A Developmental Perspective On Language Origin: Children Are Old Hands At Gesture. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/95.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Lister, Burtenshaw, Fay, Walker, Ohan 2016

Emergence Of Signal Structure: Effects Of Duration Constraints

Hannah Little, Kerem Eryılmaz and Bart de Boer
Vrije Universiteit Brussel

Keywords: Signal Duration, Emergence of Speech, Artificial Language Experiments

Short description: Signal duration affects signal structure. This has implications for both speech evolution & the design of speech evolution experiments.

Abstract:

Recent work has investigated the emergence of structure in speech using experiments which use artificial continuous signals. Some experiments have had no limit on the duration which signals can have (e.g. Verhoef et al., 2014), and others have had time limitations (e.g. Verhoef et al., 2015). However, the effect of time constraints on the structure in signals has never been experimentally investigated.



Physical, functional or cultural pressures will affect how long signals in the real world can be. Obviously, speech is constrained by breath. Social and functional pressures for transmitting information quickly, succinctly and with little effort will also create pressures for signals to be shorter (Piantadosi et al., 2011).



Signal duration will affect signal structure. Having shorter signals may limit redundancy and influence how quickly signal units are discretised and reused.



We carried out a signal creation experiment. Participants created continuous signals using a Leap Motion. The pitch of signals could be manipulated by the position of a participant’s hand in relation to the Leap Motion (see Little et al. (2015) for a summary of the paradigm). Participants created signals for a set of meanings. No two meanings had any features (shape, colour or texture) which were shared. Participants took part in two conditions. In the unconstrained condition, signals did not have a limit on duration (signals remained quite short, with an average length of approximately 2.3 s). In the constrained condition, signals could only be 1 second long. The experiment had 3 phases, with the meaning space expanding in each phase: 5, 10, and 15 meanings in phases 1, 2, and 3, respectively. Each phase consisted of a practice session, a signal creation task (participants created signals for each randomly selected meaning), and a signal recognition task (participants heard their own signals and chose between 4 possible meanings for each).



In the constrained condition, participants were worse at recognising their signals (mean = 64% correct) than in the unconstrained condition (mean = 86%). Success levels were not significantly affected by the growth of the meaning space. This discrepancy in success indicates that in the constrained condition, the participants had a much harder time creating distinct signals. In the constrained condition, signals were much simpler, with many participants relying on static pitch, rather than on patterns and pitch changes. We were able to measure the amount of movement within signals by calculating the variance of the signal trajectory coordinate values, and showed that the amount of movement in trajectories was significantly lower in the constrained condition than in the unconstrained condition (we compared a mixed linear model with a null model, χ²(1) = 9, p < 0.001).
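
A minimal sketch of that movement measure, assuming the trajectory is available as a list of sampled pitch (or hand-position) values; the two example trajectories are invented, not experimental data.

```python
def movement(trajectory):
    """Variance of the sampled trajectory values, used as a proxy for within-signal movement."""
    mean = sum(trajectory) / len(trajectory)
    return sum((v - mean) ** 2 for v in trajectory) / len(trajectory)

static_signal = [0.52, 0.50, 0.51, 0.50, 0.52, 0.51]    # near-constant pitch
dynamic_signal = [0.20, 0.45, 0.80, 0.55, 0.25, 0.70]   # rises and falls

print("static  signal movement:", round(movement(static_signal), 4))
print("dynamic signal movement:", round(movement(dynamic_signal), 4))
```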



We also found that in the unconstrained condition, there was a significant downward trend in the amount of systematicity in signals (measured by trajectory predictability given the rest of the signal repertoire) as the meaning space expanded. Signals for meanings introduced later were less predictable than those in earlier phases (we compared a mixed linear model with a null model, χ²(1) = 4, p < 0.05). This trend did not occur in the constrained condition, perhaps suggesting that the limited signal duration stopped participants from creating new strategies for new meanings, or constrained the use of redundant features in new signals, both of which would make signals less predictable.



Our results highlight why experimental studies need to consider the effects which time constraints will have on structure, systematicity and redundancy in artificial signals. Further, our time constraints impeded the production of distinct signals, generating a pressure for more efficient strategies for differentiating signals. One potential strategy, which accommodates the crowding of signal spaces, is the use of combinatorial structure. However, further experimental work needs to be done to see if this is the case.

Citation:

Little H., Eryılmaz K. and de Boer B. (2016). Emergence Of Signal Structure: Effects Of Duration Constraints. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/25.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Little, Eryılmaz, de Boer 2016

Differing Signal-meaning Dimensionalities Facilitates The Emergence Of Structure

Hannah Little, Kerem Eryılmaz and Bart de Boer
Vrije Universiteit Brussel

Keywords: Iconicity, Combinatorial structure, Evolution of Speech, Artificial Languages

Short description: A signal space having fewer dimensions than a meaning space facilitates the emergence of structure in non-speech-like continuous signals.

Abstract:

Structure of language is not only caused by cognitive processes, but also by physical aspects of the signalling modality. We test the assumptions surrounding the role which the physical aspects of the signal space will have on the emergence of structure in speech. Here, we use a signal creation task to test whether a signal space and a meaning space having similar dimensionalities will generate an iconic system with signal-meaning mapping and whether, when the topologies differ, the emergence of non-iconic structure is facilitated. In our experiments, signals are created using infrared sensors which use hand position to create audio signals. We find that people take advantage of signal-meaning mappings where possible. Further, we use trajectory probabilities and measures of variance to show that when there is a dimensionality mismatch, more structural strategies are used.

Citation:

Little H., Eryılmaz K. and de Boer B. (2016). Differing Signal-meaning Dimensionalities Facilitates The Emergence Of Structure. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/4.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Little, Eryılmaz, de Boer 2016

Correlated Evolution Or Not? Phylogenetic Linguistics With Syntactic, Cognacy, And Phonetic Data

Giuseppe Longobardi1, Armin Buch2, Andrea Ceolin3, Aaron Ecay4, Cristina Guardiano5, Monica Irimia6, Dimitris Michelioudakis6, Nina Radkevich6 and Gerhard Jaeger2
1 University of York/Università di Trieste
2 Universitaet Tuebingen
3 University of Pennsylvania
4 University of York
5 Università di Modena e Reggio Emilia
6 University of York

Keywords: cultural language evolution, linguistic phylogenetic inference, generative grammar, phonetic alignment

Abstract:

In this work we compare, on the well-explored domain of Indo-European languages, the phylogenetic outputs of three different sets of linguistic characters: traditional etymological judgments, a system for phonetic alignment of lists of cognates, and a set of values for generative syntactic parameters. The correlation and relative informativeness of distances and phylogenies generated by the three types of characters can thus be accurately evaluated for the first time, and the degree of success of the latter two, innovative alternatives to the classical comparative method, can thereby be assessed.

Citation:

Longobardi G., Buch A., Ceolin A., Ecay A., Guardiano C., Irimia M., Michelioudakis D., Radkevich N. and Jaeger G. (2016). Correlated Evolution Or Not? Phylogenetic Linguistics With Syntactic, Cognacy, And Phonetic Data. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/162.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Longobardi, Buch, Ceolin, Ecay, Guardiano, Irimia, Michelioudakis, Radkevich, Jaeger 2016

The Evolution Of Redundancy In A Global Language

Gary Lupyan and Justin Sulik
University of Wisconsin-Madison

Keywords: language change, cultural evolution, ngram models, linguistic niche hypothesis

Short description: What makes American English different from British English? Kids have something to do with it!

Abstract:

Why are there different languages? Religious mythology aside, the usual story is that languages diverge when an initial group of speakers disperses, allowing their ways of speaking to begin to drift independently, instead of together (Sapir, 1921). But consider the explanatory inadequacy of such a neutral drift account if applied to a biological organism. Why do birds have different beaks? Because there is random drift in beak shapes, and once a population of birds disperses, their beaks drift independently, instead of together. When explaining animal morphology, we often appeal to adaptive fit: some beaks are better suited for some environments than others. Might similar logic apply to languages as well? Might linguistic diversity reflect, in part, the adaptation of languages to different environments in which they are learned and used?

Citation:

Lupyan G. and Sulik J. (2016). The Evolution Of Redundancy In A Global Language. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/197.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Lupyan, Sulik 2016

Nonhuman Animals’ Use Of Ostensive Cues In An Object Choice Task

Heidi Lyn1, Stephanie Jett2, Megan Broadway1 and Mystera Samuelson1
1 University of Southern Mississippi
2 University of South Alabama

Keywords: Ostensive acts, Gricean Communication, pointing comprehension, apes, dogs, gesture

Short description: Dogs and bonobos use ostensive cues to follow human gestures, potentially clarifying part of the evolutionary path to language.

Abstract:

One recent argument concerning the evolution of language centers on the ability of the last common ancestor of apes and humans to engage in Gricean communication (that is, communication in which the speaker has the clear intent to produce a response, but also the intent that the hearer recognize the communicative intent of the speaker; e.g. Moore, 2015; Scott-Phillips, 2015; Tomasello, 2008). The standard argument (see Moore, 2015) has been that true Gricean communication requires fourth-order meta-representation (the speaker intends for the hearer to understand that the speaker intends for the hearer to understand the message) and is therefore unique to humans. In this conception, animal communication is limited to strict associations (coded communication), and Gricean or ostensive communication was the key innovation in the evolution of language.



An alternative view suggests that the representation required for ostensive communication is much simpler (see Moore, 2015 for an explanation) – the speaker intends to communicate a message to the hearer and the speaker also intends for that message to represent something in the world (two first-order meta-representations). On the receiving end, then, the hearer must both understand that the speaker is intending to communicate (through overt or covert intentional cues) and must decipher the message (the signal). The intent to communicate would itself be communicated through an ostensive act or cue – an act designed to draw attention to and facilitate the receipt of a communicative signal. Ostensive cues can include eye contact and shifting of joint attention, among other behaviors. These ostensive cues remain separate from the communicative signal, which could include gestures, verbalizations, etc. Under the standard view, nonhuman animals should rely entirely on associative learning to follow a signal such as pointing. In this scenario, nonhumans should perform at the same level when a communicator uses ostensive cues as when those cues are eliminated.

To evaluate this theory, we tested 3 bonobos (Pan paniscus) from the Ape Cognition and Conservation Initiative (ACCI) in Des Moines, IA on the object choice task (point following) both with and without ostensive cues (in this case, ostensive cues included gaze alternation between the gesture and the recipient). When ostensive cues were removed, the apes’ performance fell from almost perfect to chance levels, indicating that ostensive cues are vital for the performance of bonobos in this task (p<.01, binomial tests).

A new study has been initiated with domestic dogs (Canis familiaris) at the Humane Society of South Mississippi in Gulfport, MS. A total of forty dogs will be tested on a variety of ostensive cues to determine which, if any, are most salient. To date, twenty-five dogs have begun testing, but only twelve passed the initial evaluation of following eye gaze. Of those twelve, eight could follow a distal point, but only two could follow a cross-body point (required for our study). In this stage, the ostensive cues tested were gaze alternations among the three points of the joint attention triad – gesture and recipient, gesture and referent, and recipient and referent. Neither dog could follow points with all ostensive cues removed, although one had no difficulty when alternation to any one point of the triad was eliminated (e.g. eye gaze only moved between gesture and recipient, but not to the referent).

These findings strongly indicate that nonhuman animals utilize overt ostensive cues to recognize gestural communication from humans, suggesting that the development of ostensive cues was not the key innovation that triggered the evolution of human language. Rather, the dog data reinforce the likely crucial social component of gesture comprehension in nonhumans, as the dogs in this study almost certainly had less human interaction and performed less well than pet dogs in other studies, again indicating more than strict associative learning.

Citation:

Lyn H., Jett S., Broadway M. and Samuelson M. (2016). Nonhuman Animals’ Use Of Ostensive Cues In An Object Choice Task. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/173.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Lyn, Jett, Broadway, Samuelson 2016

Language Adapts To Signal Disruption In Interaction

Vinicius Macuch Silva1 and Sean Roberts2
1 Radboud University Nijmegen
2 Max Planck Institute for Psycholinguistics

Keywords: linguistic structure, interactive pressures, structural adaptation, signal disruption, conversational repair

Short description: Interactive pressures help shape the emergence of linguistic structure

Abstract:

Linguistic traits are often seen as reflecting cognitive biases and constraints (e.g. Christiansen & Chater, 2008). However, language must also adapt to properties of the channel through which communication between individuals occurs. Perhaps the most basic aspect of any communication channel is noise. Communicative signals can be blocked, degraded or distorted by other sources in the environment. This poses a fundamental problem for communication. On average, channel disruption accompanies problems in conversation every 3 minutes (27% of cases of other-initiated repair, Dingemanse et al., 2015). Linguistic signals must adapt to this harsh environment. While modern language structures are robust to noise (e.g. Piantadosi et al., 2011), we investigate how noise might have shaped the early emergence of structure in language.

The obvious adaptation to noise is redundancy. Signals which are maximally different from competitors are harder to render ambiguous by noise. Redundancy can be increased by adding differentiating segments to each signal (increasing the diversity of segments). However, this makes each signal more complex and harder to learn. Under this strategy, holistic languages may emerge. Another strategy is reduplication - repeating parts of the signal so that noise is less likely to disrupt all of the crucial information. This strategy does not increase the difficulty of learning the language - there is only one extra rule which applies to all signals. Therefore, under pressures for learnability, expressivity and redundancy, reduplicated signals are expected to emerge.

However, reduplication is not a pervasive feature of words (though it does occur in limited domains like plurals or iconic meanings). We suggest that this is due to the pressure for redundancy being lifted by conversational infrastructure for repair. Receivers can request that senders repeat signals only after a problem occurs. That is, robustness is achieved by repeating the signal across conversational turns (when needed) instead of within single utterances.

As a proof of concept, we ran two iterated learning chains with pairs of individuals in generations learning and using an artificial language (e.g. Kirby et al., 2015). The meaning space was a structured collection of unfamiliar images (3 shapes x 2 textures x 2 outline types). The initial language for each chain was the same written, unstructured, fully expressive language. Signals produced in each generation formed the training language for the next generation. Within each generation, pairs played an interactive communication game. The director was given a target meaning to describe, and typed a word for the matcher, who guessed the target meaning from a set. With a 50% probability, a contiguous section of 3-5 characters in the typed word was replaced by ‘noise’ characters (#). In one chain, the matcher could initiate repair by requesting that the director type and send another signal. Parallel generations across chains were matched for the number of signals sent (if repair was initiated for a meaning, then it was presented twice in the parallel generation where repair was not possible) and noise (a signal for a given meaning which was affected by noise in one generation was affected by the same amount of noise in the parallel generation).

For the final set of signals produced in each generation we measured the signal redundancy (the zip compressibility of the signals), the character diversity (entropy of the characters of the signals) and systematic structure (z-score of the correlation between signal edit distance and meaning Hamming distance). In the condition without repair, redundancy increased with each generation (r = 0.97, p = 0.01) and the character diversity decreased (r = -0.99, p = 0.001), which is consistent with reduplication.
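
A minimal sketch of how these three measures might be computed is given below (Python). The permutation-based z-score is one common way to operationalize systematic structure and is not necessarily the authors' exact procedure:

    import math
    import random
    import zlib
    from collections import Counter
    from itertools import combinations

    def redundancy(signals):
        """Zip compressibility: 1 - (compressed size / raw size); higher means
        more internal repetition in the set of signals."""
        raw = "".join(signals).encode()
        return 1 - len(zlib.compress(raw)) / len(raw)

    def char_diversity(signals):
        """Shannon entropy (bits) of the character distribution across signals."""
        counts = Counter("".join(signals))
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def edit_distance(a, b):
        """Levenshtein distance between two signals."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def structure_z(meanings, signals, n_perm=1000):
        """z-score of the correlation between pairwise signal edit distances
        and meaning Hamming distances, against a permutation baseline."""
        pairs = list(combinations(range(len(signals)), 2))
        sig_d = [edit_distance(signals[i], signals[j]) for i, j in pairs]

        def corr(ms):
            mean_d = [sum(x != y for x, y in zip(ms[i], ms[j])) for i, j in pairs]
            mx, my = sum(sig_d) / len(pairs), sum(mean_d) / len(pairs)
            cov = sum((a - mx) * (b - my) for a, b in zip(sig_d, mean_d))
            vx = math.sqrt(sum((a - mx) ** 2 for a in sig_d))
            vy = math.sqrt(sum((b - my) ** 2 for b in mean_d))
            return cov / (vx * vy)

        veridical = corr(meanings)
        perms = []
        for _ in range(n_perm):
            shuffled = list(meanings)
            random.shuffle(shuffled)
            perms.append(corr(shuffled))
        mu = sum(perms) / n_perm
        sd = math.sqrt(sum((p - mu) ** 2 for p in perms) / n_perm)
        return (veridical - mu) / sd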

Linear regressions revealed that generations with repair had higher overall systematic structure (main effect of condition, t = 2.5, p < 0.05), increasing character diversity (interaction between condition and generation, t = 3.9, p = 0.01), and a slower increase in redundancy (interaction between condition and generation, t = -2.5, p < 0.05).

That is, the ability to repair counteracts the pressure from noise, and facilitates the emergence of compositional structure. Therefore, just as systems to repair damage to DNA replication are vital for the evolution of biological species (O’Brien, 2006), conversational repair may regulate replication of linguistic forms in the cultural evolution of language. Future studies should further investigate how evolving linguistic structure is shaped by interaction pressures, drawing on experimental methods and naturalistic studies of emerging languages, both spoken (e.g. Botha, 2006; Roberge, 2008) and signed (e.g. Senghas, Kita, & Ozyurek, 2004; Sandler et al., 2005).

Citation:

Macuch Silva V. and Roberts S. (2016). Language Adapts To Signal Disruption In Interaction. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/20.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Macuch Silva, Roberts 2016

Biological Systems Of Interest To Researchers Of Cultural Evolution

Luke Mccrohon
DMM.com

Keywords: cultural evolution, biological models, evolutionary theory

Short description: What insights can Jellyfish, prions, Tasmanian devil cancers, air plants and Oxytricha trifallax give us into the evolution of culture?

Abstract:

Cultural evolution has played an important role in the evolution of language (e.g. see Kirby & Hurford, 2002; Kirby, Cornish, & Smith, 2008). This cultural process is however far less well understood than its biological equivalent, which has led to the proposal of various analogies between biological and cultural evolution (Sereno, 1991). These analogies, however, have been rightfully criticised as misleading (e.g. Smith, 2012). Despite this, we argue much insight can still be gained from the study of biology, and in this paper survey several lesser-known biological systems that are informative for the study of cultural evolution.



The first class of systems we discuss are species which undergo an alternation of generations between two distinct reproductive forms as part of their life cycles. Examples of such species include those of the phylum Cnidaria (Collins, 2002), which includes jellyfish and corals, and the parasitic fungi of the genus Gymnosporangium best known for causing cedar-apple rust (Petersen, 1974). A parallel is drawn with the inherently two-stage replication of cultural information from brains to the environment and then back from the environment to brains.



Second, we consider prions, misfolded variants of the mammalian PrP protein which can cause transmissible neurological diseases. Li, Browning, Mahal, Oelschlegel, and Weissmann (2010) have shown that, without changes in the genetic encoding of the base protein, changes in the secondary (folding) structure of prions can be selected and evolve via a darwinian process. We argue prions are therefore interesting from a cultural evolution perspective due to their replication via direct copying of form, without the separation of genotype and phenotype found in other biological systems (which is also absent in cultural transmission).



Next we introduce clonal transmissible cancers: infectious cancers evolved from a species’ own cells (Murchison, 2008; Metzger, Reinisch, Sherry, & Goff, 2015). Both the inter-cellular selection process leading to the emergence of these cancers and their subsequent evolution as pathogens (and the host response) are suggested as a model for the often proposed evolution of maladaptive cultural variants. This host-parasite relationship is briefly contrasted with the symbiosis between ants and certain epiphytes (plants that grow on other plants), which are known to grow structures specifically to house ant colonies (Huxley, 1980). It is suggested that, in the general case, this is likely a better model for thinking about linguistic gene-culture interactions.



Finally, we discuss the species Oxytricha trifallax, which is claimed to have the most complex genome architecture of any known eukaryote (Chen et al., 2014). Possessing two nuclei per cell, and undergoing large-scale genome remodeling during reproduction (including deletions, rearrangements and inversions), it is argued to provide a good test ground for theories concerning the nature and necessary properties of generalised darwinian replicators.

Citation:

Mccrohon L. (2016). Biological Systems Of Interest To Researchers Of Cultural Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/194.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Mccrohon 2016

Preliminary Results From A Computational Multi Agent Modelling Approach To Study Humpback Whale Song Cultural Transmission

Michael Mcloughlin*1, Luca Lamoni*2, Ellen Garland2, Simon Ingram1, Alexis Kirke1, Michael Noad3, Luke Rendell2 and Eduardo Miranda1
* These authors contributed equally to the work
1 Plymouth University
2 University of St. Andrews
3 The University of Queensland

Keywords: Multi Agent Modelling, Artificial life, Animal vocalisations, Cultural Transmission, Humpback Whale

Short description: Multi Agent Modelling of Humpback whale song cultural transmission

Abstract:

Humpback whale (Megaptera novaeangliae) songs are a striking example of cultural transmission in non-humans (Garland et al., 2011). During the migration and mating season of this species, males produce complex, stereotyped sound sequences defined as ‘songs’ (Payne & McVay, 1971). Within a population, males conform to a common yet slowly evolving song. Change can also occur more rapidly when a completely new song is adopted by the entire population in a relatively short time (termed ‘revolution’) (Noad, Cato, Bryden, Jenner, & Jenner, 2000). These phenomena can only occur if the whales are learning song from each other. While it is possible to record the shared song within a population and how this evolves in time, the individual mechanisms and learning strategies behind the cultural transmission of song remain unknown. Furthermore, it is not clear how populations maintain conformity in songs that change over variable timescales (evolution vs. revolution). This paper presents a spatially explicit multi-agent model designed to investigate humpback whale song learning and transmission. Models with an emphasis on cultural evolution have previously been used to describe the emergence of genetic diversity in whales, and these models have been adapted to demonstrate how cultural dynamics can have the same impact on genetic diversity in humans (Whitehead, Richerson, & Boyd, 2002). In these studies, however, the exact nature of the cultural evolution is deliberately left vague. Our model extends that developed by Kirke, Miranda, Rendell and Ingram (2015) to study specifically the cultural transmission of humpback whale song, and may prove a valuable reference point for future studies of the early evolution of human language. In detail, the model simulates both the movement and acoustic behavior of humpback whales. The migratory movement of whales between feeding and breeding grounds is enabled using flocking algorithms and movement rules that also govern the interactions among agents. Agents in the model are also equipped with a first-order Markov model to generate songs (lists of symbols). The transition matrix, or ‘grammar’, can be updated by ‘learning’ from other singing agents; the influence of a song on a listener agent’s grammar is determined by the distance between the listener and the singer (Kirke, Miranda, Rendell, & Ingram 2015). Each agent is initialized with a randomly generated grammar. This modelling architecture enables us to study how songs are transmitted within and between populations and to record the population convergence on one or multiple song grammars. Modelling results are compared qualitatively to known song evolution patterns and specifically validated against real song data recorded in the South Pacific during the last 11 years. The model was run with varying values of spatial parameters, namely the size of the feeding ground, the minimum distance between agents, the size of the acoustic active space, and the size of the breeding ground(s). In total, 56 runs were implemented to explore this parameter space. Four main scenarios emerged. Firstly, in 34% of the experiments the majority of agents converged on one or multiple song grammars, depending primarily on the formation of discrete, spatially segregated groups. This result echoes what is commonly observed in the wild, where spatially segregated populations generally sing different song grammars at any given time. In the second scenario (12%), the agents’ convergence was more variable compared to scenario 1 due to a combination of widely spaced breeding grounds and weak attraction between agents. Thirdly, 20% of the runs showed the highest variability in final song grammar due to strong convergence on grammars characterized by lower transition matrix probabilities. Finally, 34% of the runs showed no sign of song learning, as grammars did not converge. Across scenarios, song grammars tended to decrease in size/length over each run, resulting in short and simple songs. Future work will include equipping agents with different learning strategies, a more realistic representation of humpback whale song structure, and the ability to innovate song.
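
A toy sketch of the kind of architecture described above (a first-order Markov 'grammar' that generates songs and is nudged by hearing other singers, with the update weighted by distance) is given below. This is an illustration only, not the model of Kirke et al. (2015): the sound alphabet, learning rate and distance decay are all invented.

    import random

    SOUNDS = ["A", "B", "C", "D"]  # illustrative sound units

    def random_grammar():
        """A first-order Markov grammar: transition probabilities between sounds."""
        g = {}
        for s in SOUNDS:
            weights = [random.random() for _ in SOUNDS]
            total = sum(weights)
            g[s] = {t: w / total for t, w in zip(SOUNDS, weights)}
        return g

    def sing(grammar, length=10):
        """Generate a song (list of symbols) by walking the transition matrix."""
        song = [random.choice(SOUNDS)]
        for _ in range(length - 1):
            probs = grammar[song[-1]]
            song.append(random.choices(SOUNDS, weights=[probs[t] for t in SOUNDS])[0])
        return song

    def learn(listener_grammar, song, distance, active_space=10.0):
        """Nudge the listener's grammar toward the heard song; the update size
        decays with singer-listener distance (one plausible scheme)."""
        w = max(0.0, 1.0 - distance / active_space)
        for prev, nxt in zip(song, song[1:]):
            row = listener_grammar[prev]
            for t in SOUNDS:
                target = 1.0 if t == nxt else 0.0
                row[t] += w * 0.1 * (target - row[t])
            norm = sum(row.values())
            for t in SOUNDS:
                row[t] /= norm

    singer, listener = random_grammar(), random_grammar()
    learn(listener, sing(singer), distance=2.0)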

Acknowledgements

The authors wish to thank The Leverhulme Trust for funding this project.



References

Garland, E. C., Goldizen, A. W., Rekdahl, M. L., Constantine, R., Garrigue, C., Hauser, N. D., Poole, M. M., Robbins, J., Noad, M. J. (2011). Dynamic horizontal cultural transmission of humpback whale song at the ocean basin scale. Curr Biol, 21(8), 687-691. doi: 10.1016/j.cub.2011.03.019

Noad, M. J., Cato, D. H., Bryden, M. M., Jenner, M. N., & Jenner, K. C. S. (2000). Cultural revolution in whale songs. Nature, 408(6812), 537-537. doi: 10.1038/35046199

Payne, R. S., & McVay, S. (1971). Songs of Humpback Whales. Science, 173(3997), 585-597. doi: 10.1126/science.173.3997.585

Kirke, A., Miranda, E., Rendell, L., Ingram, S. (2015). Towards Modelling Humpback Whale Song Evolution using Multi-agent Systems, Proceedings of Transdisciplinary Approaches to Cognitive Innovation (Off the Lip 2015), 7-11 September, Plymouth (UK)

Whitehead, H., Richerson, P. J., & Boyd, R. (2002). Cultural Selection and Genetic Diversity in Humans. Selection, 3(1), 115–125.

Citation:

Mcloughlin M., Lamoni L., Garland E., Ingram S., Kirke A., Noad M., Rendell L. and Miranda E. (2016). Preliminary Results From A Computational Multi Agent Modelling Approach To Study Humpback Whale Song Cultural Transmission. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/131.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Mcloughlin, Lamoni, Garland, Ingram, Kirke, Noad, Rendell, Miranda 2016

Human-like Brain Specialization In Baboons: An In Vivo Anatomical MRI Study Of Language Areas Homologs In 96 Subjects

Adrien Meguerditchian1, Damien Marie2, Konstantina Margiotoudi2, Scott A. Love2, Alice Bertello2, Romain Lacoste3, Muriel Roth4, Bruno Nazarian4, Jean-Luc Anton4 and Olivier Coulon4
1 (1) Laboratoire de Psychologie Cognitive UMR7290, Brain and Language Research Institute, Aix-Marseille Univ./CNRS; (2) CNRS UPS 846 Station de Primatologie
2 (1) Laboratoire de Psychologie Cognitive UMR7290, Aix-Marseille Univ./CNRS; (2) CNRS UPS 846 Station de Primatologie
3 CNRS UPS 846 Station de Primatologie
4 Institut des Neurosciences de la Timone, Aix-Marseille Univ./CNRS, Marseille

Keywords: Brain specialization, Anatomical Magnetic Resonance Imaging, Language, Handedness, Nonhuman primates

Abstract:

Language is a unique system of communication in humans and involves complex hemispheric specialization of the brain (Vigneau et al., 2006, 2011). Brain regions such as the motor cortex, Broca’s area and the planum temporale play key roles within the language network. Given the phylogenetic proximity between humans and nonhuman primates, the investigation of cortical organization in apes and monkeys within a comparative approach might enable detecting potential precursors of hemispheric specialization for language processing. Most comparative studies have focused on great apes, particularly chimpanzees (Hopkins & Cantalupo, 2008). Similarly to humans, leftward asymmetries of the planum temporale (Gannon et al., 1998; Hopkins & Nir, 2010) and rightward asymmetries of the superior temporal sulcus (Leroy et al., 2015) have been documented in chimpanzees, but not in non-hominidae species. The aim of the present study is to investigate the neuroanatomical asymmetries of some of these key cortical regions for language in a non-hominidae Old World monkey species. T1-weighted anatomical images were acquired in vivo at the Centre IRMf (Institut de Neurosciences de la Timone) from 96 anesthetized olive baboons (Papio anubis) housed in social groups at the Station de Primatologie CNRS. The depths of the central sulcus (CS), which follows the motor cortex, and of the superior temporal sulcus (STS) were quantified in both hemispheres in each subject using semi-automatic procedures in the free software BrainVisa. For the planum temporale (PT), the surface area was manually traced on a computer in both hemispheres (Analyze 11.0 software). We found, for the first time in a non-hominidae species, human-like significant neuroanatomical asymmetries in favor of the left hemisphere for the PT surface and in favor of the right hemisphere for the STS depth. Interestingly, inter-hemispheric asymmetries of the CS depth were significantly driven by the contralateral direction of hand preference (i.e., left- or right-hand), which was previously assessed in those individuals using a bimanual coordination task. These collective findings suggest that the continuity of hemispheric specialization between apes and humans extends to baboons for key structures of language and handedness. They also suggest that prerequisites of hemispheric specialization for language and handedness might date back not to the common ancestor of hominidae 14-17 million years ago but to the common ancestor of Catarrhini 30-40 million years ago.
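
One standard way to quantify and test such inter-hemispheric differences (not necessarily the authors' analysis) is an asymmetry quotient per subject followed by a non-parametric test; the values below are hypothetical:

    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical per-subject measurements (e.g. PT surface area in mm^2);
    # real values would come from the MRI processing pipeline described above.
    left = np.array([420.0, 390.5, 455.2, 402.8, 431.6])
    right = np.array([380.1, 360.0, 410.7, 395.4, 388.2])

    # A common asymmetry quotient: positive = leftward, negative = rightward.
    aq = (left - right) / ((left + right) / 2)

    # Test whether the population-level asymmetry differs from zero.
    stat, p = wilcoxon(aq)
    print(f"mean AQ = {aq.mean():.3f}, Wilcoxon p = {p:.3f}")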

Citation:

Meguerditchian A., Marie D., Margiotoudi K., Love S. A., Bertello A., Lacoste R., Roth M., Nazarian B., Anton J. and Coulon O. (2016). Human-like Brain Specialization In Baboons: An In Vivo Anatomical MRI Study Of Language Areas Homologs In 96 Subjects. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/167.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Meguerditchian, Marie, Margiotoudi, Love, Bertello, Lacoste, Roth, Nazarian, Anton, Coulon 2016

Linking The Processes Of Language Evolution And Language Change: A Five-level Hierarchy

Jérôme Michaud
University of Edinburgh

Keywords: language evolution and language change, timescale and social scale, Living fossil of language, Creoles and pidgins, Protolanguage

Short description: We use a usage-based definition of language to identify five systems relevant to language evolution and change and apply them to Bickerton's living fossil.

Abstract:

The class of problems that can be categorized under "language evolution and change" is very heterogeneous and involves many different timescales and spatial/social scales. In order to better understand the underlying evolutionary processes, one has to identify the relevant systems involved. In this paper, we propose a hierarchy of five interconnected systems as a tool to systematically analyse the structure and evolutionary mechanisms of language evolution and change problems. In particular, this hierarchy is well-adapted to identify the most relevant social structures involved.

We then apply this new tool to the study of Bickerton's "living fossils" of language (Bickerton, 1995) and argue that their explanatory power for understanding the protolanguage-to-language transition is limited. In particular, we highlight the importance of the social structures of the speech communities involved. Our approach shows that living fossil candidates are probably not very relevant for language emergence, but are crucial for the study of cultural language change.

Citation:

Michaud J. (2016). Linking The Processes Of Language Evolution And Language Change: A Five-level Hierarchy. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/52.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Michaud 2016

Interaction For Facilitating Conventionalization: Negotiating The Silent Gesture Communication Of Noun-verb Pairs

Ashley Micklos
UCLA

Keywords: Interaction, Silent Gesture, Negotiation, Repair, Conventionalization

Abstract:

This study demonstrates how interaction – specifically negotiation and repair – facilitates the emergence, evolution, and conventionalization of a silent gesture communication system. In a modified iterated learning paradigm, partners communicated noun-verb meanings using only silent gesture. The need to disambiguate similar noun-verb pairs drove these "new" language users to develop a morphology that allowed for quicker processing, easier transmission, and improved accuracy. The specific morphological system that emerged came about through a process of negotiation within the dyad, namely by means of repair. By applying a discourse analytic approach to the use of repair in an experimental methodology for language evolution, we are able to determine not only if interaction facilitates the emergence and learnability of a new communication system, but also how interaction affects such a system.

Citation:

Micklos A. (2016). Interaction For Facilitating Conventionalization: Negotiating The Silent Gesture Communication Of Noun-verb Pairs. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/143.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Micklos 2016

The Evolution Of Repair: Evidence From Online Conversations

Gregory Mills
University of Groningen, Netherlands

Keywords: Dialogue, Miscommunication, Conventionalization, CMC, Reddit, FTFY

Abstract:

According to Dingemanse et al. (2013), the first word a baby learns is not “Mama” or “Papa” but the single-syllable “Huh?”. Already in early infancy children use “Huh?” to divert the flow of the interaction and elicit a response from their caregiver. The development of such forms of coordinated joint action is a prerequisite for the ontogeny of language (Clark, 2009) and is equally important in adult language use. Studies of utterances such as “Huh?” have shown that they constitute a large family of interactive repair mechanisms that are used by interlocutors to deal with problems of intersubjectivity (Schegloff, 1992). Repair consists of two main components: (1) mechanisms for initiating repair, i.e. signaling to others that there is some “trouble”; (2) mechanisms for performing the repair, i.e. resolving the problem via elaboration or reformulation. Crucially, (1) and (2) can be performed by the same or by different people. For example, the utterance “The next shape is the green one, oops I meant the red one” is a self-initiated self-repair, whereas in the exchange below B initiates repair with “huh?”, but the correction is performed by A.

A: The next shape is the green one

B: huh?

A: oops I meant the red one

One important dimension along which repair mechanisms differ is their ability to locate and diagnose the problem. For example, “Huh?” and “What?” do not specify the nature of the problem, whereas “when?”, “where?” and “who?” diagnose the problem as concerning a time, place, or person. Even more specific are partial repeats such as “the what?”, which partially diagnose the problem by locating the troublesome element.

Studies on the emergence of referring conventions have demonstrated the importance of repair: If participants are able to repair each other’s referring expressions, this leads to quicker convergence on more systematized, abstract representations (Galantucci and Garrod, 2011). This is a recurrent finding which occurs across modalities (Healey et al., 2007).

To investigate repair in closer detail, this talk presents an analysis of interactions on the social media site reddit.com. Currently 7% of all US adults use reddit, and the archive, consisting of billions of messages from 2005-2015, is freely available from archive.org. In addition to other-initiated repair, since each message specifies whether it was edited by the user, this corpus also allows automated identification and analysis of a large subset of self-repair mechanisms.

By examining users’ conversations throughout this 10 year period, we show how the community conventionalizes its own repair mechanisms via (1) repurposing of existing mechanisms, and (2) the development of novel mechanisms. Examples of both are given below:

1. Repurposing self-repair markers as other-repair

Garcia and Jacobs (2014) showed that people append asterisks to their turns in order to perform self-repair, e.g. “Let’s grab a beere. *beer”. Over a period of 6 years the asterisk was gradually repurposed to perform other-repair, e.g.

A: Let’s go for a drink

B: *drink

2. Developing novel forms of repair-initiation

During the same period, users developed the convention of using FTFY (Fixed That For You) to perform other-repair, a mechanism that became increasingly honed for targeting the preceding turn, e.g.

A: When is it you’re free? Tomorrow let’s go get a pizza.

B: “get some dim sum and a beer” FTFY. It’s been a while.
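
As a rough illustration of how such conventions might be detected automatically in the corpus, the sketch below matches the two surface patterns discussed above. The patterns and function names are invented, and real corpus work would need to handle quoting, markup and many edge cases:

    import re

    ASTERISK_REPAIR = re.compile(r"^\*\s*\S+")            # a reply beginning with "*word"
    FTFY_REPAIR = re.compile(r"\bFTFY\b", re.IGNORECASE)  # "Fixed That For You"

    def classify_turn(text):
        """Very rough classification of a reply as a candidate other-repair."""
        if ASTERISK_REPAIR.match(text.strip()):
            return "asterisk-repair"
        if FTFY_REPAIR.search(text):
            return "ftfy-repair"
        return "other"

    print(classify_turn("*drink"))                               # asterisk-repair
    print(classify_turn('"get some dim sum and a beer" FTFY.'))  # ftfy-repair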

By tracing the development of repair in the corpus, we argue that in addition to referring conventions, interlocutors also develop community-specific routines for identifying, signaling and correcting problems in the interaction.

References:

Clark, E. V. (2009). First language acquisition. Cambridge University Press.

Dingemanse, M., Torreira, F., & Enfield, N. J. (2013). Is “Huh?” a universal word? PLoS ONE, 8(11), e78273. doi: 10.1371/journal.pone.0078273

Galantucci, B., & Garrod, S. (2011). Experimental semiotics: a review. Frontiers in human neuroscience, 5, 11.

Healey, P. G., Swoboda, N., Umata, I., & King, J. (2007). Graphical language games: Interactional constraints on representational form. Cognitive Science, 31(2), 285-309.

Jacobs, J. B., & Garcia, A. C. (2013). Repair in chat room interaction. Handbook of pragmatics of computer-mediated communication, 565-588.

Schegloff, E. A. (1992). Repair after next turn: The last structurally provided defense of intersubjectivity in conversation. American Journal of Sociology, 97(5), 1295-1345.

Citation:

Mills G. (2016). The Evolution Of Repair: Evidence From Online Conversations. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/181.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Mills 2016

Arbitrary Hierarchy: A Precedent For Language?

Dominic Mitchell
University of Bath

Keywords: hierarchy, RHP, winner-loser, arbitrary, social coordination, representation

Short description: Many species seem able to coordinate on arbitrary agreement but are unable to re-present the agreement in another context. Why?

Abstract:

Many species seem able to maintain a hierarchy despite the arbitrary nature of its formation and the fact that its rankings contradict individuals' physical abilities, which are perceptually evident. We examine in a model how this may be possible and discuss the relevance of such social organization to the question of the evolution of language.

Citation:

Mitchell D. (2016). Arbitrary Hierarchy: A Precedent For Language?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/160.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Mitchell 2016

How Selection For Language Could Distort The Dynamics Of Human Evolution

William Mitchener
College of Charleston, Math Department

Keywords: artificial life, gene regulatory network, molecular clock, evolutionary dynamics

Short description: A-life simulation shows how the language-faculty gene network might distort the molecular clock

Abstract:

The human language faculty is supported by a gene regulatory network that influences the development and operation of the motor, sensory, and cognitive systems. Consequently, this network must be fairly large and complex, and probably includes genes scattered throughout the genome. It is under selective pressure to continue functioning. Many gene loci are therefore in linkage disequilibrium with some element of this network, potentially disrupting their evolutionary dynamics. In the interest of exploring how significant this effect could be, we consider artificial life simulations, in which agents are required to perform an information coding task analogous to the replay of a memorized gesture. The task requires a network of interacting genes. The population is then branched, and phylogenetic trees are constructed based on genetic distances between leaf populations. Distances are determined by comparing genes for a simple task unrelated to the coding task. The process is then repeated without the coding task. Sample runs from these two simulations are compared to data from a neutral drift model. Both a-life simulations result in lower edge weights than the neutral model, giving the appearance of a slower molecular clock. Furthermore, the simulation with the coding task shows even lower edge weights, even though the mutation rate is the same. Therefore, the presence of a large gene network such as the one for language could distort the evolutionary trajectories of unrelated genes.
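
The core intuition (that purifying selection on a linked gene network can make unrelated, neutral loci appear to diverge more slowly) can be illustrated with a toy background-selection simulation. This is emphatically not the authors' gene-regulatory-network model: the genome size, selection strength and mutation rate are invented, and a single run may be noisy.

    import random

    GENOME_LEN = 100
    NETWORK = set(range(40))   # loci standing in for the selected "language network"
    MU = 2e-3                  # per-locus mutation probability per generation
    POP = 100

    def evolve(generations, selection):
        """Wright-Fisher population of asexual bitstring genomes (no recombination).
        If selection is on, mutations at NETWORK loci reduce fitness, so lineages
        carrying them (and any linked neutral mutations) tend to be purged."""
        pop = [[0] * GENOME_LEN for _ in range(POP)]
        for _ in range(generations):
            if selection:
                weights = [0.5 ** sum(g[i] for i in NETWORK) for g in pop]
            else:
                weights = [1.0] * POP
            parents = random.choices(pop, weights=weights, k=POP)
            pop = [[b ^ 1 if random.random() < MU else b for b in p] for p in parents]
        return pop

    def neutral_distance(pop_a, pop_b, samples=200):
        """Mean per-locus difference between two populations at loci outside the
        selected network (the 'unrelated genes')."""
        neutral = [i for i in range(GENOME_LEN) if i not in NETWORK]
        total = 0.0
        for _ in range(samples):
            a, b = random.choice(pop_a), random.choice(pop_b)
            total += sum(a[i] != b[i] for i in neutral) / len(neutral)
        return total / samples

    # Two populations branching from a common ancestor, with and without selection;
    # averaged over runs, the selected condition is expected to show lower distances.
    for sel in (False, True):
        a, b = evolve(300, sel), evolve(300, sel)
        print("selection" if sel else "neutral  ", round(neutral_distance(a, b), 4))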

Citation:

Mitchener W. (2016). How Selection For Language Could Distort The Dynamics Of Human Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/108.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Mitchener 2016

Make New With Old: Human Language In Phylogenetically Ancient Brain Regions

Marie Montant1, Johannes Ziegler1, Benny Briesemeister2, Tila Brink3, Bruno Wicker1, Aurélie Ponz1, Mireille Bonnard1, Arthur Jacobs2 and Mario Braun4
1 Aix-Marseille University and CNRS
2 Free University of Berlin
3 Freie Universität
4 Salzburg University

Keywords: human language, fMRI, TMS, neural re-use, insula, emotion, reading

Short description: Human language network has evolved through the re-use of phylogenetically ancient cortical and subcortical brain regions: evidence from disgust words.

Abstract:

A common view in cognitive science is that human language consists of computations and transformations of mainly abstract symbolic information that take place in various (mostly left) neocortical regions. According to this view, when hearing or reading emotionally loaded words, like “vomit” or “love”, the emotional content of these words is accessed through the activation of the perisylvian language network in charge of processing symbolic meaning. Alternatively, because language is one of the most recent cognitive products of human evolution, it has been argued that the language network has evolved through the re-use of already existing cortical and subcortical brain regions (Anderson, 2010). Reading or hearing emotionally loaded words should therefore activate phylogenetically ancient and heteromodal brain structures that are in charge of emotions. In a functional magnetic resonance imaging (fMRI) experiment, we show that the same brain region is activated whether people observe facial expressions of disgust or whether they read words that refer to core disgust. This particular region corresponds to a portion of the insula (i.e. the anterior part) that is also known to be involved in the perception of disgusting odors (Wicker et al., 2003). In a subsequent transcranial magnetic stimulation (TMS) experiment, we show that transient disruption of the anterior insula affects the processing of core disgust words in a reading task. Participants are much slower to recognize visually presented disgust words, compared to neutral words, when TMS is applied over the anterior insula rather than the vertex. Altogether, these results are compatible with theories of embodied emotion (Niedenthal, 2007) and neural re-use (Anderson, 2010), according to which phylogenetically ancient brain structures that process basic emotions in all mammals actively participate in high-level cognitive skills, such as language.



Anderson ML (2010) Neural reuse: a fundamental organizational principle of the brain. Behav Brain Sci 33(4):245-266; discussion 266-313.

Niedenthal PM (2007) Embodying emotion. Science 316(5827):1002-1005.

Wicker B, et al. (2003) Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust. Neuron 40(3):655-664.

Citation:

Montant M., Ziegler J., Briesemeister B., Brink T., Wicker B., Ponz A., Bonnard M., Jacobs A. and Braun M. (2016). Make New With Old: Human Language In Phylogenetically Ancient Brain Regions. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/201.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Montant, Ziegler, Briesemeister, Brink, Wicker, Ponz, Bonnard, Jacobs, Braun 2016

Frequency-dependent Regularization In Iterated Learning

Emily Morgan and Roger Levy
UCSD

Keywords: regularization, iterated learning models, frequency, word order, binomial expressions, compositionality

Short description: Frequency-dependent regularization emerges from frequency-independent regularization plus cultural transmission.

Abstract:

Binomial expressions are more regularized the higher their frequency: their ordering preferences (e.g. “bread and butter” vs. “butter and bread”) are more extreme. Although standard iterated-learning models of language evolution can encode overall regularization biases, the stationary distributions in these standard models do not exhibit a relationship between expression frequency and regularization. Here we show that introducing a frequency-independent regularization bias into the data-generation stage of a 2-Alternative Iterated Learning Model yields frequency-dependent regularization in the stationary distribution. We also show that this model accounts for the distribution of binomial ordering preferences seen in corpus data.
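
A schematic sketch of the mechanics described (a regularization bias applied during data generation, iterated over generations, with each expression's token frequency controlling how much data the next learner sees) is given below. The learning rule here is a crude smoothed estimate rather than the authors' model, so the numbers are illustrative only:

    import random

    def regularize(theta, strength=2.0):
        """Push an ordering preference toward 0 or 1: a frequency-independent
        bias applied in the data-generation (production) stage."""
        return theta ** strength / (theta ** strength + (1 - theta) ** strength)

    def iterate(freqs, generations=500, seed_theta=0.55):
        """Toy 2-alternative iterated learning chain: each expression has a token
        frequency per generation; the learner estimates the ordering preference
        from the previous generation's (regularized) productions."""
        thetas = {f: seed_theta for f in freqs}
        for _ in range(generations):
            new = {}
            for f, theta in thetas.items():
                p = regularize(theta)
                tokens = sum(random.random() < p for _ in range(f))
                new[f] = (tokens + 1) / (f + 2)  # add-one smoothed estimate
            thetas = new
        return thetas

    # Under this kind of model, higher-frequency expressions tend to end up with
    # more extreme (more regularized) preferences than low-frequency ones.
    print(iterate(freqs=[5, 50, 500]))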

Citation:

Morgan E. and Levy R. (2016). Frequency-dependent Regularization In Iterated Learning. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/193.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Morgan, Levy 2016

The Effect Of Modality On Signal Space In Natural Languages

Hope Morgan
University of California San Diego

Keywords: phonological space, language modality, combinatorics, sign language, duality of patterning

Abstract:

Natural language in the visual-gestural modality provides an opportunity to discover which aspects of sub-lexical structure are common to the two language modalities (spoken language and signed language) and which aspects result from emergent properties grounded in the signaling and perceptual systems. The project reported here finds that there are fundamental differences in how words are constructed in each modality, offering evidence that contrastive features in both types of phonological system are emergent, not innate. What is shared are more general design features, such as those proposed by Hockett (1960): i.e., productivity, arbitrariness, discreteness, etc. However, the findings suggest that the principle of duality of patterning is in need of refinement if it is to apply across modalities.



A dataset of 1,868 signs in Kenyan Sign Language (KSL) was coded for 39 formational characteristics, such as number of hands, handshapes, palm orientation, finger orientation, type of movement, manner of movement, 1st major area, 2nd major area, 1st minor area, etc. During coding and analysis, minimal pairs were gathered that differed by the narrowest possible degree, i.e., by only one formational feature.



Previous researchers have mentioned that sign languages have few minimal pairs (Sandler 1996: 202; Brentari 1998: 4; Kooij 2002: 160), but a comprehensive account of phonological contrasts has not been available until now. The current study finds that there are around 370 true minimal pairs in the dataset, with only 40 signs that contrast with more than one other sign. Conservatively calculating from the set of recombinable elements in the language indicated by minimal pairs, there are millions of possible signs: a very large combinatoric space for a primarily monosyllabic language, and much larger than the space of monosyllables in spoken languages (Kirby & Yu 2007).
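
For illustration, both the minimal-pair search and the conservative combinatoric estimate can be sketched as below. The feature names, values and sign labels are invented for the example and bear no relation to the actual KSL coding scheme:

    from itertools import combinations

    # Toy feature-coded signs: each sign is a dict of formational features.
    lexicon = {
        "SIGN-A": {"hands": 1, "handshape": "B", "location": "chin", "movement": "tap"},
        "SIGN-B": {"hands": 1, "handshape": "B", "location": "chin", "movement": "circle"},
        "SIGN-C": {"hands": 2, "handshape": "5", "location": "chest", "movement": "tap"},
    }

    def minimal_pairs(lex):
        """Pairs of signs differing in exactly one formational feature."""
        pairs = []
        for (n1, f1), (n2, f2) in combinations(lex.items(), 2):
            diffs = [k for k in f1 if f1[k] != f2[k]]
            if len(diffs) == 1:
                pairs.append((n1, n2, diffs[0]))
        return pairs

    def combinatoric_space(lex):
        """A conservative estimate of possible signs: the product of the number
        of attested values per feature."""
        size = 1
        for feature in next(iter(lex.values())):
            size *= len({f[feature] for f in lex.values()})
        return size

    print(minimal_pairs(lexicon))       # [('SIGN-A', 'SIGN-B', 'movement')]
    print(combinatoric_space(lexicon))  # 2*2*2*2 = 16 for this toy lexicon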



Why do sign languages have such a large phonological space? A likely explanation relates to the fact that signs are not composed of strings of segments as words are in spoken languages, and therefore cannot use sequential, syntagmatic structure to construct words and create distinct form-meaning mappings. Instead, as the present study confirms, signed words depend on a multitude of simultaneous features for boundless productivity in word creation.



These features are also discrete and have arbitrary properties, but they do not fully conform to Hockett’s view of duality of patterning, which appears to rest on syntagmatic contrast such that segments can be reordered (e.g., ‘tack’, ‘cat’, and ‘act’ [Hockett 1960: 92]). Thus, some refinement of Hockett’s principle is necessary to accommodate languages without strings of segments.



References

Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.

Edwards, T. (2014). Language Emergence in the Seattle DeafBlind Community. PhD Thesis. University of California Berkeley.

Hockett, C. F. (1960). The Origin of Speech. Scientific American, 203(3), 88–96.

Kirby, J. P. and A. C. L. Yu. (2007). Lexical and phonotactic effects on wordlikeness judgments in Cantonese. Proceedings of the 16th International Congress of Phonetics Science: 1389–1392.

Kooij, E. van der. (2002). Phonological Categories in Sign Language of the Netherlands. LOT.

Sandler, W. (1996). Phonological features and feature classes: The case of movements in sign language. Lingua 98: 197–220.

Citation:

Morgan H. (2016). The Effect Of Modality On Signal Space In Natural Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/192.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Morgan 2016

Linguistic Structure Emerges In The Cultural Evolution Of Artificial Sign Languages

Yasamin Motamedi, Marieke Schouwstra, Kenny Smith and Simon Kirby
University of Edinburgh

Keywords: gesture, sign language, iterated learning, cultural evolution, learning, communication

Short description: iterated learning of silent gesture leads to language-like segmentation

Abstract:

The growing body of research into homesign and emerging sign languages offers insight into languages at their earliest stages of creation and development. The study of such languages allows us to monitor the types of structures that emerge and how they develop through the first generations of a language; for example, although evidence of lexical categories in Nicaraguan Sign Language and spatial grammar in Al-Sayyid Bedouin Sign Language appear in initial generations, these structures have been shown to take time to conventionalize and become systematized (Goldin-Meadow et al, 2014; Padden et al, 2010). Furthermore, recent laboratory experiments in which hearing participants are asked to communicate using gesture can be used to test the factors that shape languages, such as cross-linguistic word order preferences (Goldin-Meadow et al., 2008; Schouwstra and de Swart, 2014), while minimizing interference from participants’ native languages. Because of this, results can be compared to data from natural emerging sign languages.

Here, we present an iterated learning study (Kirby, Griffiths and Smith, 2014) that uses the silent gesture paradigm to investigate how the use and transmission of manual communication systems drives the emergence of systematic structure. Pairs of participants take part in an artificial language learning experiment in which they are first trained on a set of gestures and then must communicate with a partner using only gesture. In the training stage, participants are shown videos of a previous participant gesturing a concept taken from a meaning space of 24 concepts. These concepts are presented orthographically and share either a functional association (person, location, object or action) or a semantic association (based on six professions) with other items in the meaning space. For example, “hairdresser” and “hair salon” share a semantic but not a functional association, and “hairdresser” and “police officer” share a functional but not a semantic association. In the testing stage, pairs of participants take it in turns to be director (the gesturer) and matcher (the interpreter). The director is presented with a concept from the meaning space and must communicate that concept to their partner using only gesture (presented via video streaming between computers in two separate experiment booths). The matcher then attempts to match their partner’s gesture to the correct item from the meaning space, presented as a grid of lexical items.

We use both a gesture coding system and direct video frame analysis to produce a set of measures capturing the presence of systematic structure in the sets of gestures our participants produce. Our data show three main results concerning the structures that emerge: 1. The entropy of gesture shapes used by participants reduces over time, suggesting that participants increasingly re-use and re-combine gestures from a smaller pool of gesture shapes; 2. The gestural systems become more efficient over time as the range of movement used by participants reduces; 3. Markers for functional categories in the meaning space emerge over generations in the evolution of the gestural systems, such as a roof gesture used to signal the location category, or a point at the director’s body to signal the person category. These results suggest that, as the systems are used in communication and transmitted through generations, gestures develop from pantomimes to conventionalized signs that demonstrate language-like segmentation through the marking of functional categories. Our results also indicate that the gestures produced by participants become more learnable as the systems are transmitted to naïve learners, and that participants in later generations become increasingly aligned with their communication partner. We suggest that the need for learnable and efficient communicative systems may drive the emergence of structure in the gestures our participants produce.
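
The first measure (entropy of gesture shapes within a generation) can be sketched as below. The gesture codes and the two example generations are hypothetical, chosen only to show the expected drop in entropy as shapes are re-used:

    import math
    from collections import Counter

    def shape_entropy(gesture_shapes):
        """Shannon entropy (bits) of the distribution of coded gesture shapes in
        one generation; lower entropy = a smaller, more re-used pool of shapes."""
        counts = Counter(gesture_shapes)
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    gen1 = ["roof", "scissors", "point-self", "drive", "mime-cut", "stir", "wave"]
    gen4 = ["roof", "roof", "point-self", "point-self", "scissors", "roof", "scissors"]
    print(shape_entropy(gen1), ">", shape_entropy(gen4))  # entropy drops over generations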



References



Goldin-Meadow, S., Brentari, D., Coppola, M., Horton, L., & Senghas, A. (2014). Watching language grow in the manual modality: Nominals, predicates, and handshapes. Cognition, 136C, 381–395.

Goldin-Meadow, S., So, W. C., Ozyürek, A., & Mylander, C. (2008). The natural order of events: how speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the United States of America, 105(27), 9163–8.

Kirby, S., Griffiths, T., & Smith, K. (2014). Iterated learning and the evolution of language. Current Opinion in Neurobiology, 28C, 108–114.

Padden, C., Meir, I., Aronoff, M., & Sandler, W. (2010). The grammar of space in two new sign languages. Sign Languages: A Cambridge Language Survey. Cambridge University Press, Cambridge, UK, 570–592.

Schouwstra, M., & de Swart, H. (2014). The semantic origins of word order. Cognition, 131(3), 431–6.

Citation:

Motamedi Y., Schouwstra M., Smith K. and Kirby S. (2016). Linguistic Structure Emerges In The Cultural Evolution Of Artificial Sign Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/27.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Motamedi, Schouwstra, Smith, Kirby 2016

Self-organization In Sound Systems: A Model Of Sound Strings Processing Agents

Roland Mühlenbernd and Johannes Wahle
Eberhard Karls Universität Tübingen

Keywords: computational model, multi-agent system, imitation game, emergence of human sound systems

Abstract:

A number of typological universals of sound systems of human languages are most likely a result of self-organization in a population's communication performance. This has been shown for human vowel systems in a number of studies (cf. de Boer, 2000b; Jäger, 2008). In those studies, computational models were designed in which agents communicate with single vowel sounds. This is a noteworthy abstraction from realistic language use, where individuals communicate with expressions realized as strings of vowels and consonants. In our study we present a computational model in which agents communicate with whole expressions, formed by concatenating single sounds. The goals of this study are (i) to examine decisive factors that contribute to the emergence of realistic sound systems in artificial societies of interacting agents, and (ii) to discuss ways to evaluate such artificially emerged sound systems.
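
As a rough illustration of the kind of string-based interaction involved (not the authors' model), an imitation-game round over concatenated sounds might look like the sketch below; the sound inventory, noise model and adoption rule are all invented:

    import random

    VOWELS = "aeiou"
    CONSONANTS = "ptkbdg"

    def random_word(length=3):
        """A toy expression: a string of CV syllables."""
        return "".join(random.choice(CONSONANTS) + random.choice(VOWELS)
                       for _ in range(length))

    def perceive(word, noise=0.1):
        """Noisy perception: each sound may be confused with a similar one."""
        out = []
        for ch in word:
            pool = VOWELS if ch in VOWELS else CONSONANTS
            out.append(random.choice(pool) if random.random() < noise else ch)
        return "".join(out)

    def imitation_round(speaker_lexicon, listener_lexicon):
        """One interaction: the speaker utters a word; the listener matches it to
        the closest item in its own lexicon and adopts the form if nothing is close."""
        heard = perceive(random.choice(speaker_lexicon))
        closest = min(listener_lexicon,
                      key=lambda w: sum(a != b for a, b in zip(w, heard))
                      + abs(len(w) - len(heard)))
        if sum(a != b for a, b in zip(closest, heard)) > 2:
            listener_lexicon.append(heard)  # adopt a new form
        return heard, closest

    speaker = [random_word() for _ in range(5)]
    listener = [random_word() for _ in range(5)]
    print(imitation_round(speaker, listener))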

Citation:

Mühlenbernd R. and Wahle J. (2016). Self-organization In Sound Systems: A Model Of Sound Strings Processing Agents. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/42.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Mühlenbernd, Wahle 2016

A Social Dimension Of Language Evolution

Albert Naccache
Lebanese University (retired)

Keywords: Group Size, Language Evolution, Social Evolution, Speech community

Short description: Plots evolution of Homo speech community size + points out main inflection points correlated with development of human communication system

Abstract:

Since language emerged, its specific implementations have varied among the groups, or speech communities, making up our species. We focused on the most basic marker of the speech community, its size, which can be roughly estimated for the whole of the history of Homo. Using estimates of the global Homo population since its emergence and present-day distribution of speech communities as mooring points, together with archaeologically-based estimates of group size and population density, we plotted the curve of the size of speech communities over the whole of Homo’s history, identifying its main inflection points indicative of social development of language.

Citation:

Naccache A. (2016). A Social Dimension Of Language Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/30.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Naccache 2016

Edward Sapir And The Origin Of Language

Albert Naccache
Lebanese University (retired)

Keywords: Culture, Edward Sapir, Language Origin, Social consensus, Thought

Short description: Sapir’s 1921 analysis of language origins still carries insightful diagnostic and heuristic value for Language Evolution researchers today

Abstract:

The field of Language Evolution is at a stage where its speed of growth and diversification is blurring the image of the “prime problem” at its heart, the origin of language. To help focus on this central issue, we take a step back in time and look at the logical analysis of it that Edward Sapir presented nearly a century ago. Starting with Sapir’s early involvement with the problem of language origin, we establish that his analysis of language is still congruent with today’s thinking, and then show that his insights into the origin of language still carry diagnostic and heuristic value today.

Citation:

Naccache A. (2016). Edward Sapir And The Origin Of Language. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/31.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Naccache 2016

Shared Basis For Language And Mathematics Revealed By Cross-domain Syntactic Priming

Tomoya Nakai and Kazuo Okanoya
The University of Tokyo

Keywords: Syntax, Mathematics, Priming

Abstract:

It has been proposed that the capacity for recursive computation plays a central role in language evolution, and that such capacity provides a basis not only for syntax in language, but also for mathematics (Hauser et al., 2002). It is possible that an evolutionarily older linguistic function is “recycled” in novel cultural inventions such as mathematics (Dehaene & Cohen, 2007). Recently, Scheepers et al. (2011) found that the syntactic structures of prime mathematical expressions affected the processing of subsequent target linguistic stimuli. In a subsequent study, Scheepers and Sturt (2014) revealed that this cross-domain syntactic priming effect was bidirectional.

Although these priming studies were of critical importance, their questionnaire-based experimental settings had limitations in evaluating on-line syntactic processing. In the present study, we created a new task that focused on on-line syntactic priming between language and mathematics. We recruited 34 college students (all native Japanese speakers, aged 18–26). Participants were asked to perform a calculation task and a semantic decision task for consecutively presented mathematical expressions and sentences (in Japanese), respectively. For both domains, we created stimuli with left-branching (“4*3+8” and “kuroi neko-ga hashiru” [a black cat runs]) and right-branching structures (“8+3*4” and “neko-ga hayaku hashiru” [a cat runs fast]). Linguistic stimuli were created by inserting either an adjective or an adverb into a phrase composed of an NP and an intransitive verb. This experimental setting allowed us to examine the implicit structural priming effect between the two domains. To consider the influence of participants’ sensitivity to structural information in mathematics, we recruited students from both scientific and non-scientific departments.

By using two-way repeated measures analysis of variance (ANOVA), we found a significant main effect of congruency (structural priming effect) only for students in scientific departments (P < 0.05). We found no significant main effect of modality (language to math/math to language) or interactions. Structurally congruent stimuli induced lower error rates compared to incongruent stimuli, both from language to mathematics, and from mathematics to language. Our results support the idea that language and mathematics have a shared basis in their syntactic structure, with individual variability related to the environment. We also suggest a putative application of current task settings in neuroimaging experiments.
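
For readers unfamiliar with the analysis, a two-way repeated-measures ANOVA of this kind can be run as sketched below; the data, column names and factor labels are hypothetical and are not the authors' variables:

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one error rate per subject per condition.
    df = pd.DataFrame({
        "subject":    [1, 1, 1, 1, 2, 2, 2, 2],
        "congruency": ["congruent", "congruent", "incongruent", "incongruent"] * 2,
        "direction":  ["math_to_lang", "lang_to_math"] * 4,
        "error_rate": [0.10, 0.12, 0.18, 0.20, 0.08, 0.11, 0.15, 0.19],
    })

    # Two-way repeated-measures ANOVA with congruency and priming direction
    # as within-subject factors.
    result = AnovaRM(df, depvar="error_rate", subject="subject",
                     within=["congruency", "direction"]).fit()
    print(result)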

Acknowledgements

This work is supported by a Grant-in-Aid for JSPS Fellows (No. 26-9945) from the Japan Society for the Promotion of Science.

References

Dehaene, S., & Cohen, L. (2007). Cultural recycling of cortical maps. Neuron, 56(2), 384–398.

Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569–1579.

Scheepers, C., Sturt, P., Martin, C. J., Myachykov, A., Teevan, K., & Viskupova, I. (2011). Structural priming across cognitive domains from simple arithmetic to relative-clause attachment. Psychological Science, 22(10), 1319–1326.

Scheepers, C., & Sturt, P. (2014). Bidirectional syntactic priming across cognitive domains: From arithmetic to language and back. The Quarterly Journal of Experimental Psychology, 67(8), 1643–1654.

Citation:

Nakai T. and Okanoya K. (2016). Shared Basis For Language And Mathematics Revealed By Cross-domain Syntactic Priming. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/163.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Nakai, Okanoya 2016

Measuring Conventionalization In The Manual Modality

Savithry Namboodiripad, Daniel Lenzen, Ryan Lepic and Tessa Verhoef
University of California, San Diego

Keywords: gesture, conventionalization, quantitative measures, sign language, language emergence, social coordination, laboratory experiments

Short description: Recording a communication game with Kinect, we measured changes in gesture that are the hallmarks of conventionalization in sign language

Abstract:

Gestures produced by users of spoken languages differ from signs produced by users of sign languages in that gestures are more typically ad hoc and idiosyncratic, while signs are more typically conventionalized and shared within a language community. To study how gestures may change over time as a result of the process of conventionalization, we designed a social coordination game to elicit repeated silent gestures from hearing nonsigners, and used Microsoft Kinect to unobtrusively track the movement of their bodies as they gestured (following Lenzen, 2015). Our approach follows both a tradition of lab experiments designed to study social coordination and transmission in the emergence of linguistic structure (Schouwstra et al., 2014) and insights from sign language research on language emergence. Newly emerging sign languages are being discovered, and even established sign languages are relatively young; it is therefore possible to observe linguistic conventionalization as it happens naturally (Senghas et al., 2005). Working with silent gesture, we were able to simulate and quantify effects of conventionalization that have been described for sign languages (Frishberg, 1975), including changes in efficiency of communication and size of articulatory space, in the laboratory.

Participants took turns either giving clues about (the Communicator) or guessing (the Guesser) items from a set of English nouns. Items were presented on a screen visible only to the Communicator, and once the Communicator confirmed that the Guesser had guessed correctly, the Guesser pressed a button to advance to the next item. Trial length was recorded as the time (ms) between button presses. Participants switched roles halfway through each of four rounds. Each item appeared once per round. In Round 1, the Communicator could use gesture and speech to ensure that both participants were familiar with the entire set of items going into Rounds 2-4, which were gesture-only. The Communicator’s movements were recorded as sequences of locations in XYZ-space using the Kinect. 10 pairs of undergraduates received course credit for participating in the study. Participant pairs had never met before, and no participant reported knowledge of a sign language.

We examined Rounds 2-4, in which the Communicator was gesturing without speech about a set of items known to both participants. We observed rapid alignment between participant pairs across the rounds; as participants became familiar with the items in the game, they correctly guessed the items at faster rates. Trial lengths (s) started longer in Round 2 (M=11.54), and became shorter in Round 3 (M=5.65) and Round 4 (M=4.39). A linear mixed-effects model showed that ROUND significantly affected TRIAL LENGTH (𝜒2=87.09, p < 0.0001), reducing trial length by about 3.57s (S.E. +/-0.37) each round.

Two additional analyses concerned volume of gesture space and distance traveled by the hands. The Kinect measurements showed that gesture spaces started larger (m3) in Round 2 (M=0.15) and became smaller in Round 3 (M=0.11) and Round 4 (M=0.10). A linear mixed-effects model showed that ROUND significantly affected GESTURE SPACE (𝜒2=51.01, p < 0.0001), reducing the volume of the gesture space by about 0.03m3 (S.E. +/- 0.004) each round. The total distance traveled by the hands also started longer (m) in Round 2 (M=11.58) and became shorter in Round 3 (M=6.22) and Round 4 (M=5.14). A linear mixed-effects model showed that ROUND significantly affected HAND TRAVEL DISTANCE (𝜒2=75.85, p < 0.0001), reducing the distance that the hands traveled by about 3.22m (S.E. +/- 0.36) each round. (Figures associated with trial length, volume of gesture space, and distance traveled by the hand are included in the supplementary materials.)
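
One plausible operationalization of these two Kinect-based measures (an axis-aligned bounding-box volume and a summed path length over frames) is sketched below; the authors' exact definitions may differ, and the sample coordinates are invented:

    import numpy as np

    def gesture_space_volume(joint_positions):
        """Volume (m^3) of the axis-aligned bounding box enclosing the tracked
        joint positions for one trial."""
        pts = np.asarray(joint_positions)          # shape: (frames, 3), in metres
        extent = pts.max(axis=0) - pts.min(axis=0)
        return float(np.prod(extent))

    def hand_travel_distance(hand_positions):
        """Total path length (m) of a hand across frames."""
        pts = np.asarray(hand_positions)
        return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

    # Hypothetical Kinect samples (XYZ in metres) for a short trial:
    right_hand = [[0.0, 1.0, 2.0], [0.1, 1.1, 2.0], [0.2, 1.0, 1.9]]
    print(gesture_space_volume(right_hand), hand_travel_distance(right_hand))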

We chose an experimental setup known to result in rapid conventionalization (Scott-Phillips & Kirby, 2010), and with Kinect we were able to measure changes in gesture that are also hallmarks of conventionalization in sign language. This approach opens the door for more direct future comparisons between ad hoc gestures produced in the lab and natural sign languages in the world. By operationalizing concepts like reduction and articulatory space, which, out of necessity, have typically been discussed in vague terms, we anticipate that this approach will also be beneficial for future studies of (sign) language emergence.

Citation:

Namboodiripad S., Lenzen D., Lepic R. and Verhoef T. (2016). Measuring Conventionalization In The Manual Modality. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/107.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Namboodiripad, Lenzen, Lepic, Verhoef 2016

Quantifying The Semantic Value Of Words

Dillon Niederhut
University of California

Keywords: semantics, semiotics, natural language processing

Short description: Objective method for quantifying the semantic content in an n-gram.

Abstract:

Intuitively, words that are used less frequently provide a richer set of inferences to a competent speaker who hears them. The relative magnitude of the information contained in a word can be estimated using the relative proportions of the words surrounding it. A test produced by comparing these relative proportions to those of the entire language gives high values at low word frequencies and low values at high word frequencies. Future work could use this test to distinguish between social and cognitive factors in linguistic change.
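
One plausible way to operationalize this idea, sketched below purely for illustration, is to compare the distribution of words found near a target word with the corpus-wide distribution, for example via KL divergence; the abstract does not specify the test, so the details here are assumptions rather than the author's method.

# Compare the distribution of words in a target word's context windows with the
# corpus-wide distribution; rarer targets tend to yield larger divergences.
from collections import Counter
import math

def context_divergence(tokens, target, window=2):
    corpus_counts = Counter(tokens)
    context_counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            context_counts.update(tokens[lo:i] + tokens[i + 1:hi])
    n_corpus = sum(corpus_counts.values())
    n_context = sum(context_counts.values())
    if n_context == 0:
        return 0.0
    # KL(context || corpus), summed over words attested in the context
    return sum((c / n_context) * math.log((c / n_context) / (corpus_counts[w] / n_corpus))
               for w, c in context_counts.items())

tokens = "the cat sat on the mat while the dog chased the quick cat".split()
print(context_divergence(tokens, "quick"))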

Citation:

Niederhut D. (2016). Quantifying The Semantic Value Of Words. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/204.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Niederhut 2016

The Arbitrariness Of The Sign Revisited: The Role Of Phonological Similarity

Alan Nielsen1 , Dieuwke Hupkes2 , Simon Kirby1 and Kenny Smith1
1 University of Edinburgh
2 Universiteit van Amsterdam

Keywords: Language Learning, Computational Modeling, Arbitrariness, Systematicity, Structure

Short description: The results of a learning experiment suggest that learners benefit from novel systematic associations that are not structured phonologically

Abstract:

Recent research has suggested that the structure of the lexicon bears the hallmarks of an adaptation to support language learning (Monaghan et al., 2014). It has been suggested that systematically structured languages (i.e. where some feature of meanings is related to a feature of words) might aid in bootstrapping language acquisition; thus, explorations of how different types of systematic structure affect learnability might answer important questions and generate further testable predictions about the origins of language. In 2011, Monaghan, Christiansen, & Fitneva reported the results of a series of experiments and computational models of language learning that were designed to test the effect of systematicity on learning. In their study, they used a feed-forward neural network model and an artificial language learning paradigm with human participants to explore the differences in learnability between languages where the relationships between forms and meanings were either systematic or arbitrary (i.e. where no feature of meaning is reliably associated with any feature of words).



In Monaghan et al.’s study, systematic associations between words and meanings are based on there being phonological similarities within a group of words (e.g. the fricative phonemes /f/ and /ʒ/ being associated with similar meanings), and also phonological dissimilarity between groups (e.g. the plosive phonemes /g/ and /k/ being associated with a second group of meanings). Here, we extend the findings of Monaghan et al. (2011) using a new experimental methodology and a number of computational simulations. In addition to systematic associations between words and meanings that are based on phonological similarity, we explore the learnability of systematic languages that are phonologically dispersed.



We replicated the model described by Monaghan et al. (2011), instantiating a version using a 2x2 design with systematicity (arbitrary vs. systematic) as one factor and phonology (clustered vs. dispersed) as a second factor. In the clustered condition of the simulation (which directly replicates Monaghan et al.), labels with similar phonemes (e.g. /f/ and /ʒ/) were used to create one set of labels, with a set of dissimilar phonemes (e.g. /g/ and /k/) used in a second set of labels. In the newly added dispersed conditions, the coupling of phonemes based on their featural similarity was broken (pairing, for example, /f/ and /g/).
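
For illustration, the sketch below shows one way the clustered and dispersed label sets could be constructed; the phoneme pairings follow the examples in the text, but the label shape and generation procedure are assumptions, not the actual materials used in the study.

# Illustrative construction of systematic labels under 'clustered' vs. 'dispersed'
# phonology. In the clustered case, each meaning category draws its consonants from
# featurally similar phonemes; in the dispersed case that pairing is broken.
import random

random.seed(1)
VOWELS = ["a", "i", "o"]

def make_labels(consonants, n=6):
    # Build simple CVCV labels from a given consonant set (an assumed label shape).
    return [random.choice(consonants) + random.choice(VOWELS) +
            random.choice(consonants) + random.choice(VOWELS) for _ in range(n)]

# Clustered: category A uses two similar phonemes, category B the other similar pair.
clustered = {"category_A": make_labels(["f", "ʒ"]), "category_B": make_labels(["g", "k"])}
# Dispersed: the featural pairing is broken, e.g. f with g, ʒ with k.
dispersed = {"category_A": make_labels(["f", "g"]), "category_B": make_labels(["ʒ", "k"])}

print(clustered)
print(dispersed)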



Where Monaghan et al. (2011) contrasted fricative and plosive phonemes, our experiment used a set of phonemes that differed in plosivity (plosive vs. continuant consonants), as in Nielsen & Rendall (2012). Additionally, our experiment moved from an alternative forced-choice task to a signal detection protocol: after training, participants were presented with trials where they were shown a single image with a single label and tasked with responding whether the pairing was one that they had been trained on before. As with the model, the experiment was a 2 (systematic vs. arbitrary) x 2 (clustered vs. dispersed) design.



Our results suggest that human language learners learn systematic languages better than arbitrary ones, regardless of their degree of phonological dispersion. This stands in contrast to the results of the model, which overestimates the importance of phonological dispersion for learning, confusing similar phonemes at higher rates than do human participants. These results suggest that the types of systematic structures we might expect to see in real languages might not always be neatly phonologically clustered, but that systematic structure in its most general form is adaptive for the process of language learning.



Monaghan, P., Christiansen, M.H., & Fitneva, S.A. (2011). The Arbitrariness of the sign: Learning advantages from the structure of the vocabulary. Journal of Experimental Psychology: General, 140, 325-347.

Monaghan, P., Shillcock, R.C., Christiansen, M.H., & Kirby, S. (2014). How arbitrary is language? Philosophical Transactions of the Royal Society B, 369.

Nielsen, A. & Rendall, D. (2012). The source and magnitude of sound-symbolic biases in processing artificial word material and their implications for language learning and transmission. Language and Cognition, 4, 115-125.

Citation:

Nielsen A., Hupkes D., Kirby S. and Smith K. (2016). The Arbitrariness Of The Sign Revisited: The Role Of Phonological Similarity. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/126.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Nielsen, Hupkes, Kirby, Smith 2016

Semantic Approximation And Its Effect On The Development Of Lexical Conventions

Bill Noble1 and Raquel Fernández2
1 University of Amsterdam
2 University of Amsterdam

Keywords: signaling games, dialogue interaction, semantic approximation

Short description: Signaling games simulations suggest that discourse-level semantic approximation helps to explain community-level lexical ambiguity.

Abstract:

We define a signaling games setting for investigating how short- and long-term conventions are established in a community of interacting speakers. Using simulations, we model a particular type of non-literal use of linguistic expressions, semantic approximation, and investigate its effects on lexical alignment, ambiguity, polysemy, and communicative success. Critically, in our approach agents keep track not only of a lexicon reflecting conventions at the level of the community, but also of a discourse lexicon that stores information agreed upon by the participants in a specific dialogue. We find that semantic approximation creates opportunities for discourse-level lexicalization, which boosts the expected utility of the discourse lexicon, and that it can have a profound effect on the evolution of community-level lexical resources.
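
A minimal sketch of the two-lexicon bookkeeping described above is given below; the data structures and update rule are illustrative assumptions, not the authors' model.

# Illustrative agent that keeps both a community-level lexicon and a discourse lexicon.
# Word-meaning pairs agreed upon within a dialogue are stored in the discourse lexicon
# first, and only gradually feed back into the community-level lexicon.
from collections import defaultdict

class Agent:
    def __init__(self):
        self.community_lexicon = defaultdict(float)   # (word, meaning) -> association strength
        self.discourse_lexicon = {}                   # temporary pairs for the current dialogue

    def start_dialogue(self):
        self.discourse_lexicon = {}

    def record_success(self, word, meaning, rate=0.1):
        # A successful (possibly approximate) use is lexicalized for this discourse...
        self.discourse_lexicon[word] = meaning
        # ...and nudges the long-term, community-level convention (assumed update rule).
        self.community_lexicon[(word, meaning)] += rate

a = Agent()
a.start_dialogue()
a.record_success("blicket", "ROUND_OBJECT")
print(a.discourse_lexicon, dict(a.community_lexicon))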

Citation:

Noble B. and Fernández R. (2016). Semantic Approximation And Its Effect On The Development Of Lexical Conventions. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/35.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Noble, Fernández 2016

Domestication And Evolution Of Signal Complexity In Finches

Kazuo Okanoya
Graduate School of Arts and Sciences, The University of Tokyo

Keywords: domestication, sexual selection, finite-state syntax, finches, brains, neural crest

Abstract:

Among the vocalizations birds make, a class of sounds consisting of more than two types of sound patterns arranged in a certain temporal sequence is called a ‘birdsong’, not only because of the organization of the sound patterns, but also because our musical aesthetics intuitively allow such an analogy. Scientific investigations of birdsong to date suggest that certain properties of birdsong extend beyond the musical to developmental analogies.

Bengalese finches are domesticated strains of wild white-rumped munias imported from China to Japan 250 years ago. Bengalese finch songs are composed of multiple chunks, and each chunk is a combination of 2-4 song notes. Furthermore, chunks are arranged in a finite-state probabilistic automaton. We studied how and why Bengalese finches sing such complex songs and found the following. 1) The ancestral strain sings simpler songs. 2) There is high learning specificity in white-rumped munias but not in Bengalese finches. 3) Bengalese finches have larger song control nuclei and higher levels of glutamate receptor gene expression than white-rumped munias. 4) Both Bengalese finch and white-rumped munia females prefer complex songs, as measured by the nest string assay, and males with complex songs are physically fitter than males with simpler songs. These results support a sexual selection scenario for song complexity in Bengalese finches (Okanoya, 2004).
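
The finite-state organization of chunks can be illustrated with a toy sketch; the chunk inventory and transition probabilities below are invented for exposition and do not describe any actual Bengalese finch song.

# Toy probabilistic finite-state automaton over song chunks (all values invented).
# States are chunks; each chunk is itself a short sequence of notes.
import random

random.seed(0)
CHUNKS = {"A": ["a1", "a2"], "B": ["b1", "b2", "b3"], "C": ["c1", "c2"]}
TRANSITIONS = {                      # chunk -> list of (next chunk or END, probability)
    "A": [("B", 0.7), ("C", 0.3)],
    "B": [("A", 0.2), ("C", 0.5), ("END", 0.3)],
    "C": [("A", 0.4), ("END", 0.6)],
}

def sing(start="A"):
    chunk, song = start, []
    while chunk != "END":
        song.extend(CHUNKS[chunk])
        options, weights = zip(*TRANSITIONS[chunk])
        chunk = random.choices(options, weights=weights)[0]
    return song

print(sing())   # e.g. ['a1', 'a2', 'b1', 'b2', 'b3', 'c1', 'c2']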

We further examined factors related to domestication. We examined songs of white-rumped munias in subpopulations in Taiwan (Kagawa et al., 2012). Where a species sympatric with white-rumped munias was present, songs were simpler. This leads to the hypothesis that in the wild songs needed to be simple to secure species identification, but that under domestication this constraint was relaxed. Moreover, analyses of isolated songs and cross-fostering results suggest that there are different degrees of learnability between white-rumped munias and Bengalese finches (Takahasi et al., 2010; Kagawa et al., 2014).

Furthermore, the recently proposed neural crest hypothesis, which might account for the “domestication syndrome”, fits well with the properties of Bengalese finches (Wilkins et al., 2014). For example, Bengalese finches recover sooner from the tonic immobility test than white-rumped munias (Suzuki et al., 2013). Fecal corticosterone levels are lower in Bengalese finches (Suzuki et al., 2014). Biting force is stronger in white-rumped munias than in Bengalese finches, and the time required to return to the food cup after a foreign object was placed was shorter in Bengalese finches. All of these results suggest that the differences between Bengalese finches and white-rumped munias in socio-emotional factors are related to limited diffusion of neural crest cells, since these properties are controlled by cells derived from the neural crest.

Thus, the evolution of song complexity involves not only the strengthening of sexual selection and the relaxation of species identification, but also socio-emotional factors due to domestication. These results on Bengalese finches should be useful in discussing the possible biological origins of human speech in terms of proximate and ultimate factors.

Citation:

Okanoya K. (2016). Domestication And Evolution Of Signal Complexity In Finches. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/138.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Okanoya 2016

Parrot " Phonological Regression": Expanding Our Understanding Of The Evolution Of Vocal Learning

Irene M. Pepperberg , Katia Zilber-Izhar and Scott Smith
Harvard University

Keywords: vocal learning, phonological regression, parrot speech acquisition, evolution of vocal learning

Short description: A Grey parrot demonstrates phonological regression very similar to that of some young children during early label acquisition.

Abstract:

Development of vocal learning was a critical aspect in the evolution of spoken language, mainly because such learning allows for cultural transmission of the complex communication patterns that are the hallmark of human language. Notably, few nonhumans engage in such behavior. Oscine songbirds are the most commonly studied of these exceptional nonhumans, and research demonstrates striking avian-human parallels with respect to the ontogeny and neurological bases of vocal communication (e.g., Jarvis et al. 2005). Parrots also engage in vocal learning, but, unlike most songbirds, are adept at vocal mimicry (Chakraborty et al., 2015)—the capacity to reproduce, exactly, sounds such as those of human speech. Given that Grey parrots (Psittacus erithacus), at least, have shown some ability for referential use of such speech and advanced cognitive capacities (e.g., Pepperberg, 1999; Pepperberg & Carey, 2012), they could provide a particularly good model for studying how vocal communication may have evolved (e.g., Pepperberg, 2013).

One aspect of human language, with likely evolutionary importance, that of pre-speech babbling, has been studied extensively in both children and songbirds (e.g., Doupe & Kuhl, 1999) and somewhat in parrots. In parrots, however, researchers either investigated development of conspecific vocalizations in the wild (Berg et al., 2011) or the vocalizations of an adult bird, already fluent in human speech, that was learning novel labels (Pepperberg, Brese, & Harris, 1991). The development of human speech in a juvenile parrot, however, had not been tracked.

A recent study of such tracking found an interesting aspect of vocal learning, that of “phonological regression”, also seen in children, even if rarely reported (e.g., Bleile & Tomblin, 1991): Here, young children sometimes nearly perfectly produce words at a very early stage, but these correct first productions are then followed by less faithful renditions, only to return later to relative accuracy. Fledgling songbirds may similarly occasionally countersing with adults using a fully adult rendition, then return to subsong before fully developing their vocalizations (e.g., Baptista, 1983). The present study examined the trajectory of vocal development of a young Grey parrot (Athena) as she learned referential English. By tracking Athena’s acquisition of vowel-like sounds over the course of fifteen months, using audio recordings and acoustic software programs, her vocal development was analyzed over time, from her first squeaks to her more distinct pronunciations, and her progress compared with that of human children and other parrots in the lab. Not one, but multiple U-shaped curves characterized her acquisition of isolated labels, from what initially seemed to be almost exact renditions of an English label, to much less clear versions, and on to more faithful copies. The results indicate that, like human children, parrots can experience the phenomenon of phonological regression, a finding which provides additional evidence for avian-human vocal learning parallels.

References

Baptista, L.F. (1983). Song learning. In A.H. Brush & G.A. Clark Jr (Eds), Perspectives in ornithology (pp 500-506). Cambridge: Cambridge University Press.

Berg, K.S., Delgado, S., Cortopassi, K.A., Bessinger, S.R., & Bradbury, J.W. (2011). Vertical transmission of learned signature calls in a wild parrot. Proceedings of the Royal Society B: doi:10.1098/rspb.2011.0932.

Bleile, K., & Tomblin, K. (1991). Regressions in the phonological development of two children. Journal of Psychological Research, 20(6), 483-99.

Chakraborty, M., Walløe, S., Nedergaard, S., Fridel, E.E., Dabelsteen, T., et. al., (2015). Core and shell song systems unique to the parrot brain. PLoS ONE, DOI:10.1371/journal.pone.0118496

Jarvis, J.D., Güntürkün, O., Bruce, L., Csillag, A., Karten, H., Kuenzel, W. et al. (2005). Avian brains and a new understanding of vertebrate evolution. Nature Reviews Neuroscience, 6, 151-159.

Doupe, A.J., & Kuhl, P.K. (1999). Birdsong and human speech: common themes and mechanisms. Annual Review of Neuroscience, 22, 567–631.

Pepperberg, I.M. (1999). The Alex studies. Cambridge, MA: Harvard University Press.

Pepperberg, I.M. (2013). Evolution of vocal communication: An avian model. In J. Bolhuis & M. Everaert (Eds).,Birdsong, speech and language (Ch. 26) Cambridge, MA: MIT Press

Pepperberg, I.M., Brese, K.J., & Harris, B.J. (1991). Solitary sound play during acquisition of English vocalizations by an African Grey parrot (Psittacus erithacus): Possible parallels with children's monologue speech. Applied Psycholinguistics, 12, 151-177.

Pepperberg, I.M. & Carey, S. (2012). Grey Parrot number acquisition: the inference of cardinal value from ordinal position on the numeral list, Cognition 125:219-232.

Citation:

Pepperberg I. M., Zilber-Izhar K. and Smith S. (2016). Parrot "Phonological Regression": Expanding Our Understanding Of The Evolution Of Vocal Learning. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/5.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Pepperberg, Zilber-Izhar, Smith 2016

Early Learned Words Are More Iconic

Lynn Perry1 , Marcus Perlman2 , Gary Lupyan2 , Bodo Winter3 and Dominic Massaro4
1 University of Miami
2 University of Wisconsin-Madison
3 University of California Merced
4 University of California Santa Cruz

Keywords: iconicity, sound symbolism, English vocabulary

Short description: English-speaking children learn highly iconic words earlier and produce them more frequently than less iconic words

Abstract:

NA

Citation:

Perry L., Perlman M., Lupyan G., Winter B. and Massaro D. (2016). Early Learned Words Are More Iconic. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/34.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Perry, Perlman, Lupyan, Winter, Massaro 2016

Cooperative Communication: What Do Primates And Corvids Have To Tell?

Simone Pika
Max Planck Institute for Ornithology

Keywords: human language, cooperation, cooperative communication, comparative approach

Abstract:

Human language is thought to be a fundamentally cooperative enterprise, involving fast-paced and extended social interactions (Grice, 1957; Sperber & Wilson, 1986). Although it is still highly debated how human language originated, it has been suggested that it evolved as part of a larger adaptation of humans’ species-unique forms of cooperation (Levinson, 1995; Tomasello, 2008). The earliest cooperative interactions can be observed around the age of 12 months, when human infants start to engage in interactive routines with their caretakers involving distinct gestures such as showing, offering, giving (for example, food or objects), and pointing to coordinate attention towards a social partner and an object of mutual interest.

Although cooperative abilities have clearly been shown under experimental (Crawford, 1937; Hare, Melis, Woods, Hastings, & Wrangham, 2007; Melis, Hare, & Tomasello, 2006) and/or wild conditions (Boesch & Boesch-Achermann, 2000; Mitani, 2009) in our closest living relatives, the bonobos (Pan paniscus) and chimpanzees (Pan troglodytes), studies of cooperative communication skills are relatively rare. For instance, drawing on a conversation analysis framework, Rossano (2013) showed that two mother-infant dyads of captive bonobos used gesture sequences that strongly resemble the structure of sequences of social action in human conversation. They utilized cooperative adjacency-pair structures and communicated at tempi similar to the timing of ordinary human conversation (Stivers et al., 2009).

Here, I aim to revisit the claim that communicative interactions of nonhuman primates and other social living animals lack the cooperative nature of human communication. To do so, I will (a) provide an overview of the state of the art, (b) present newest data on collaborative communication in primates and other animals, and (c) develop a framework, which could be used to predict patterns of collaborative communication in other primates and non-primate species and to facilitate more systematic investigation.

References

Boesch, C., & Boesch-Achermann, H. (2000). The Chimpanzees of the Taï Forest: Behavioural Ecology and Evolution. Oxford: Oxford University Press.

Crawford, M. P. (1937). The cooperative solving of problems by young chimpanzees. Johns Hopkins Press.

Grice, H. P. (1957). Meaning. Philosophical Review, 66(3), 377-388.

Hare, B., Melis, A. P., Woods, V., Hastings, S., & Wrangham, R. (2007). Tolerance allows bonobos to outperform chimpanzees on a cooperative task. Current Biology, 17(7), 619-623. doi: 10.1016/j.cub.2007.02.040

Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social Intelligence and Interaction (pp. 221-260). Cambridge: Cambridge University Press

Melis, A. P., Hare, B., & Tomasello, M. (2006). Chimpanzees recruit the best collaborators. Science, 311, 1297-1301.

Mitani, J. C. (2009). Cooperation and competition in chimpanzees: Current understanding and future challenges. Evolutionary Anthropology, 18(5), 215-227.

Rossano, F. (2013). Sequence organization and timing of bonobo mother-infant interactions. Interaction Studies, 14(2), 160-189. doi: 10.1075/is.14.2.02ros

Sperber, D., & Wilson, D. (1986). Relevance: Communication and Cognition. Cambridge, Massachusetts: Harvard University Press.

Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., & Heinemann, T. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 106(26), 10587-10592.

Tomasello, M. (2008). Origins of Human Communication. Cambridge, Massachusetts: MIT Press.

Citation:

Pika S. (2016). Cooperative Communication: What Do Primates And Corvids Have To Tell?. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/56.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Pika 2016

Construction Grammar For Apes

Michael Pleyer1 and Stefan Hartmann2
1 Universität Heidelberg
2 Johannes Gutenberg-Universität Mainz

Keywords: Construction Grammar, Great Ape Gesture Systems, Schematicity, Prototypicality

Short description: Can a Construction Grammar approach prove helpful in comparing human language and great ape gesture systems?

Abstract:

Constructionist approaches describe language as a structured network of form-meaning pairings of varying degrees of schematicity and prototypicality. Language acquisition is seen as based on general social and cognitive skills. Starting out from concrete, item-based constructions, children use these skills to extract and gradually abstract constructions from instances of actual language use (Tomasello 2006). Constructionist approaches have increasingly been applied to language evolution research (e.g. Hurford 2012). In line with this growing research movement, in this paper we propose that constructionist approaches can prove useful in elucidating similarities and differences between human language and non-human primate communication systems. Specifically, we will discuss the question of whether the nature of great ape gesture systems can be captured in terms of an inventory of (proto-)constructions and whether such a network is based on cognitive capacities homologous to the cognitive infrastructure underlying the acquisition, usage, and processing of constructions in humans.

Regarding the gesture systems of chimpanzees, Roberts et al. (2012: 586-587) note that they “have a multifaceted and complex repertoire of manual gestures, organised around prototypes, within which there is considerable variation.” Schematization and prototypicality can therefore be seen as important foundational features of both great ape gesture systems and the human constructicon. In a usage-based, constructionist approach, linguistic knowledge also relies heavily on these capacities, as it is seen to consist in abstractions and schematizations from exemplar representations of experience in context that form radial prototype networks. Importantly, Roberts et al. note that there are gestures that are “intermediate between the prototypical forms” (587) and are not structurally discrete but instead graded. Similarly, usage-based accounts of language acquisition assume that knowledge of linguistic constructions in young children is characterized by fuzzy boundaries and graded representations (e.g. Abbot-Smith, Lieven & Tomasello 2008).

Another important point of comparison concerns the role of pragmatics in human and non-human primate communication. In studies of the gesture systems of great apes it was found that they flexibly use multiple different gestures in the same context for the same goal. They also use single gestures in different contexts with different goals (Liebal et al. 2014: 155). As Genty & Zuberbühler (2015) note, “several gestures appear to have several outcomes, suggesting that meaning resides more in the pragmatic context than in the morphological form of the signal,” although there are also some iconic and deictic gestures. Human linguistic constructions, in contrast, possess a more specific conceptual content. Still, the meaning side of human linguistic constructions is prototypical and schematic and is only properly instantiated in actual language use in particular contexts. It is thus also heavily dependent on pragmatics. Overall, then, human constructions and great ape gesture systems exhibit striking similarities but also marked differences.

References

Abbot-Smith, K., Lieven, E., & Tomasello, M. (2008). Graded representations in the acquisition of English and German transitive constructions. Cognitive Development, 23, 48-66.

Hurford, J.R. (2012). The Origins of Grammar: Language in the Light of Evolution II. Oxford: Oxford University Press.

Genty, E. & Zuberbühler, K. (2015). Iconic gesturing in bonobos. Communicative & Integrative Biology, 8(1), e992742

Liebal, K., Waller, B.M., Burrows, A.M. & Slocombe, K.E. (2014): Primate Communication: A Multimodal Approach. Cambridge: CUP.

Roberts, A.I., Vick, S., Roberts, S.G.B., Buchanan-Smith, H.M. & Zuberbühler, K. (2012). A structure-based repertoire of manual gestures in wild chimpanzees: statistical analyses of a graded communication system. Evolution and Human Behaviour, 33, 578-589.

Tomasello, M. (2006). Construction grammar for kids. Constructions, SV1-11/2006.

Citation:

Pleyer M. and Hartmann S. (2016). Construction Grammar For Apes. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/185.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Pleyer, Hartmann 2016

The Evolution Of Im/politeness

Monika Pleyer and Michael Pleyer
Universität Heidelberg

Keywords: Politeness, Impoliteness, Pragmatics, Evolution of Im/politeness, Social Cognition

Short description: This paper investigates the cognitive, social, and evolutionary foundations and functions of im/politeness, and presents an evolutionary model of its emergence.

Abstract:

Im/politeness is a fundamental feature of human language and communication. However, there is hardly any research on the evolution of im/politeness and the cognitive and social factors underlying its emergence. In this paper we argue that the evolution of politic, polite, as well as impolite behaviour is an important research question for language evolution research. We investigate the evolutionary foundations of im/politeness, present an evolutionary model of the emergence of im/politeness and discuss the evolutionary functions of im/politeness. In this way, we illustrate that investigating im/politeness from an evolutionary perspective can make significant contributions to our understanding of the evolution of pragmatic competencies, language, and also im/politeness research in general.

Citation:

Pleyer M. and Pleyer M. (2016). The Evolution Of Im/politeness. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/176.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Pleyer, Pleyer 2016

What Kind Of Grammar Did Early Humans (and Neanderthals) Command? A Linguistic Reconstruction

Ljiljana Progovac
Wayne State University

Keywords: test, test2, test3

Abstract:

Here I pursue a linguistic reconstruction of the earliest stages of grammar, following a precise syntactic theory. This reconstruction arrives at the initial stages of grammar which are in concert with cross-linguistic variation attested in the expression of various syntactic phenomena, including transitivity and tense marking. Interestingly, in making an argument for the antiquity of language, Dediu & Levinson (2013, p. 11) express their hope “that some combinations of structural features will prove so conservative that they will allow deep reconstruction.” I propose that the earliest stages of syntax as reconstructed here provide just such a conservative platform from which all the subsequent variation could arise, and which could also have been commanded by our cousins and the common ancestor. The reconstruction is at the right level of granularity to exclude some hypotheses regarding the hominin timeline, and to support others. It leads to specific and testable hypotheses which can be explored in e.g. anthropology, neuroscience, and genetics.

Citation:

Progovac L. (2016). What Kind Of Grammar Did Early Humans (and Neanderthals) Command? A Linguistic Reconstruction. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/p1.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Progovac 2016

The Cultural Evolution Of Structure In Music And Language

Andrea Ravignani1 , Tania Delgado2 and Simon Kirby2
1 Vrije Universiteit Brussel
2 University of Edinburgh

Keywords: cultural transmission, rhythm perception, diffusion chains, iterated learning, evolution of music, evolution of rhythm, structure, learning

Abstract:

-no abstract-

Citation:

Ravignani A., Delgado T. and Kirby S. (2016). The Cultural Evolution Of Structure In Music And Language. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/14.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ravignani, Delgado, Kirby 2016

Languages Support Efficient Communication About The Environment: Words For Snow Revisited

Terry Regier1 , Alexandra Carstensen1 and Charles Kemp2
1 UC Berkeley
2 Carnegie Mellon University

Keywords: semantic variation, efficient communication, words for snow, functionalism

Abstract:

Citation:

Regier T., Carstensen A. and Kemp C. (2016). Languages Support Efficient Communication About The Environment: Words For Snow Revisited. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/54.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Regier, Carstensen, Kemp 2016

Strategies In Gesture And Sign For Demoting An Agent: Effects Of Language Community And Input

Lilia Rissman1 , Laura Horton1 , Molly Flaherty1 , Marie Coppola2 , Annie Senghas3 , Diane Brentari1 and Susan Goldin-Meadow1
1 University of Chicago
2 University of Connecticut
3 Columbia University

Keywords: Nicaraguan Sign Language, Agency, Verbal morphology, Language emergence, Gesture

Short description: Across cohorts of Nicaraguan Sign Language, changes in how signers describe agentive events.

Abstract:

Languages use a variety of devices to indicate that an agent is present in an event, but is not particularly salient, e.g., passive voice (see Siewierska, 2013). Here we investigate agent-demotion devices in an emerging sign language in Nicaragua. Nicaraguan Sign Language (NSL) began when Homesigners (deaf individuals who use homemade gestures to communicate with hearing individuals) were brought together for the first time in the late 1970s (Cohort 1 signers). Cohort 2 signers entered the community after 1984, and Cohort 3 signers joined after 1994; these later cohorts learned their sign language from the previous generations. We asked Homesigners, Cohort 1 signers, and Cohort 2-3 signers to describe vignettes that varied in how salient the agent was. All groups used verbal morphology to distinguish agentive vs. non-agentive scenes. However, only signers who learned their language from older peers (i.e., Cohorts 2-3) used verbal morphology to distinguish between strong and weak agents, suggesting that it is possible to create linguistic devices to distinguish agents from non-agents without a language model, but using these devices to demote agents occurs only when the language is transmitted to the next generation of learners.

Citation:

Rissman L., Horton L., Flaherty M., Coppola M., Senghas A., Brentari D. and Goldin-Meadow S. (2016). Strategies In Gesture And Sign For Demoting An Agent: Effects Of Language Community And Input. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/158.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Rissman, Horton, Flaherty, Coppola, Senghas, Brentari, Goldin-Meadow 2016

Social Biases Versus Efficient Communication: An Iterated Learning Study

Gareth Roberts and Mariya Fedzechkina
University of Pennsylvania

Keywords: social bias, communicative efficiency, iterated learning, problem of linkage, experimental

Short description: An experiment on how opposing biases interact in the cultural evolution of language.

Abstract:

A crucial question in the evolution of language concerns the “problem of linkage” (Kirby, 1999, 20): How do the constraints acting on individual language users give rise to observed patterns of linguistic diversity? For instance, recent work has suggested that some properties of language benefit efficient information transmission (e.g., Piantadosi et al., 2011), and Fedzechkina et al. (2012) showed experimentally that efficiency can increase through restructuring by language learners. A number of other experiments investigating the problem of linkage have employed the iterated-learning model (ILM), which simulates cultural transmission by using the output of one learner as the training data for the next (Kirby et al., 2014). A typical simplification is that learners are exposed to data undifferentiated by source, very often from only one individual. Real-world transmission, by contrast, involves multiple models distinguished by such variables as contact frequency and social status, which are known to influence the spread of linguistic variants (Labov, 2001). An interesting case concerns situations where social pressures apparently run counter to efficiency. For instance, modern plural second-person pronouns in English (yous, y’all, yinz) reduce ambiguity, but are often avoided on social grounds (leaving ambiguity, or requiring less efficient workarounds). Social factors may also explain the retention of distinctions that do little communicative work, such as who/whom.



In this study, we focus on two questions left open by prior work: a) Is the increase in efficiency observed by Fedzechkina et al. amplified by iterated learning? b) How is this process influenced by the presence of social pressures that run counter to communicative efficiency? To investigate this, we conducted an iterated learning experiment on Amazon Mechanical Turk, in which participants were exposed equally to two dialects of an “alien language”. Both dialects exhibited strict SOV word order, but one dialect redundantly marked case, while the other did not. Participants in the Bias condition were encouraged to view the aliens speaking the redundant dialect as potential trading partners. In the No bias condition all aliens were potential trading partners. There were five chains in each condition, with two participants in each generation, whose output in the test phase was presented as the redundant dialect to the next generation. Case marking behaved significantly differently in the two conditions (Figure 1). All first-generation participants were exposed to 50% case-marked sentences. In the No bias condition, this proportion declined fast and disappeared completely within four generations for all chains. In the Bias condition, it also declined, but more slowly, disappearing eventually in only three chains. Our results suggest that the effect of learners’ biases towards efficiency is amplified by transmission and can thus account for (some) observed cross-linguistic typological patterns. However, while languages in both conditions became more efficient over generations, the biases towards efficiency were modulated by social factors—redundant case-marking persisted longer when associated with a preferred social group.
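
For illustration only, the toy chain below mimics the transmission dynamic at issue: each generation's output proportion of case marking becomes the next generation's input, with an assumed bias against redundant marking and an assumed 'social bonus' standing in for the Bias condition. The actual experiment used human learners, not this rule.

# Toy iterated-learning chain tracking the proportion of case-marked sentences.
# Each generation is trained on the previous generation's output; a small bias
# against redundant marking (assumed value) erodes case marking over generations,
# and a 'social bonus' slows that erosion in the Bias condition.
import random

def next_generation(p_case, bias_against=0.15, social_bonus=0.0):
    p = max(0.0, min(1.0, p_case - bias_against + social_bonus))
    # Each learner produces 60 sentences; production is probabilistic.
    produced = sum(random.random() < p for _ in range(60))
    return produced / 60

def run_chain(generations=6, social_bonus=0.0):
    p = 0.5                               # first generation sees 50% case marking
    history = [p]
    for _ in range(generations):
        p = next_generation(p, social_bonus=social_bonus)
        history.append(round(p, 2))
    return history

random.seed(3)
print("No bias:", run_chain())                      # case marking erodes quickly
print("Bias:   ", run_chain(social_bonus=0.10))     # the decline is slower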

Citation:

Roberts G. and Fedzechkina M. (2016). Social Biases Versus Efficient Communication: An Iterated Learning Study. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/127.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Roberts, Fedzechkina 2016

Vocal Learning And Homo Loquens

Joana Rosselló
Universitat de Barcelona

Keywords: Vocal learning, speech as default, externalization, speech and thought

Short description: A bottom-up approach to language evolution with a vocal learning system as starting point seems to be more parsimonious and explanatory than top-down alternatives: externalization is not secondary.

Abstract:

A bottom-up approach to language evolution with a vocal learning system as its starting point seems to be parsimonious and explanatory: many spurious dilemmas dissolve, a sound explanation for the default nature of speech (vs. sign) falls into place, and a vision in which speech has cognitive import obtains.

Citation:

Rosselló J. (2016). Vocal Learning And Homo Loquens. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/196.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Rosselló 2016

The Cultural Evolution Of Complexity In Linguistic Structure

Carmen Saldana , Simon Kirby and Kenny Smith
The University of Edinburgh

Keywords: Cultural evolution, iterated learning, language structure, constituency, complexity, expressivity, compressibility

Short description: Complex constituent structure evolves through cultural evolution from the trade-off between compressibility and expressivity.

Abstract:

Languages are culturally transmitted through a repeated cycle of learning and communicative interaction, a process known as iterated learning. Previous work has shown how different features of linguistic structure evolve from the trade-off between different competing pressures acting on language learning and communication, such as compressibility and expressivity (Kirby, Cornish, & Smith, 2008; Perfors, Tenenbaum, & Regier, 2011; Lupyan & Dale, 2015; Regier, Kemp, & Kay, 2015; Kirby, Tamariz, Cornish, & Smith, 2015). In Kirby et al. (2015), compositional miniature artificial languages evolve as a result of their transmission across “generations”. Where both compressibility and expressivity pressures are in play, signals in later generations are composed of atomic units, each mapping to a specific dimension of the meaning to be conveyed. However, the complexity of the languages which evolve in these experiments is necessarily limited by the objects (meanings) people were learning labels for. In particular, the sets of objects to be labelled do not require a language which exhibits hierarchical constituency and syntactic categories. In this paper, we increase the complexity of the meanings to be conveyed by including motion events that comprise shape, number, motion and aspect. Each event is composed of a focal object, which performs the action, and optionally an anchor object, which remains static. By increasing the complexity of the meaning space, we expect the same mechanisms involved in the evolution of simple compositionality to lead to richer syntactic structure more closely resembling that found in real languages.

We ran an Iterated Artificial Language Learning study and manipulated the expressivity pressure. We designed a monadic condition (N=32) with an artificial pressure for expressivity, and a dyadic condition (N=80) with communication as a natural pressure for expressivity. Following Kirby et al. (2015), we use the transmission chain paradigm. Participants were trained on a set of meaning-signal mappings, and then tested on their ability to recall that language. The first participants in a chain were trained on a non-compositional randomly generated language. Subsequent participants were trained on the language produced by the previous participants. The test phase of the monadic condition involved typing descriptions for motion event scenes using the language learned previously; participants were not allowed to reuse the same description for different meanings, introducing an artificial pressure for expressivity. The test phase in the dyadic condition required participants to communicate with their partner in the language that they previously learned; members of a dyad alternated between describing meanings for their partner, and interpreting descriptions provided by their partner.

In accordance with previous results, we found a significant increase in learning success and structure in both conditions as compositional structure evolved. Moreover, constituency was hinted at by the emergence of morphologically complex N-like and V-like syntactic lexical categories. These categories were used to form hierarchically compositional sentential structures with meaningful word order.
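
A common way to quantify structure in this paradigm (following Kirby et al., 2008, cited above) is the correlation between pairwise meaning distances and pairwise signal distances; whether this exact measure was used here is an assumption, and the toy language below is invented for illustration.

# Illustrative structure measure: correlation between pairwise meaning distances
# (Hamming) and pairwise signal distances (normalized Levenshtein). A compositional
# language yields a high correlation; a holistic one does not.
from itertools import combinations

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Toy compositional language: meanings are (shape, motion) pairs, signals concatenate
# one morpheme per meaning dimension.
language = {("circle", "bounce"): "pofiki", ("circle", "spin"): "pomula",
            ("square", "bounce"): "tafiki", ("square", "spin"): "tamula"}
md, sd = [], []
for (m1, s1), (m2, s2) in combinations(language.items(), 2):
    md.append(sum(a != b for a, b in zip(m1, m2)))          # Hamming distance over features
    sd.append(levenshtein(s1, s2) / max(len(s1), len(s2)))  # normalized edit distance
print(pearson(md, sd))   # high for a compositional toy language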

Despite the qualitative similarity of the results in the two conditions, we found that condition significantly affected the evolution of structure: languages in the dyadic condition became structured more rapidly and their level of structure was consistently higher. The levels of complexity in the emergent compositional systems were significantly different between conditions: the systems in the monadic condition showed higher system complexity on average and less transparent morphosyntactic structures (i.e. they exhibited functional elements such as category markers and non-adjacent dependencies, which were not found in the dyadic condition).

Compositionality operating at the levels of morphology and syntax evolved through the trade-off between compressibility and expressivity. Nevertheless, the difference in complexity found between the two conditions points to the need for further investigation into the nature of the pressure for expressivity in these experiments. In the dyadic condition, the need to maintain communication may lead to a conservative approach. If participants find a solution that works, they stick with it. In the monadic condition, the pressure for expressivity is quite different. The need to avoid reuse of the same description for different meanings leads to an anticonservative approach, with participants actively generating novel signals. Future work should investigate whether an analog of this tendency to innovate is at play in real languages, and consequently whether a pressure for novelty needs to take its place alongside compressibility and expressivity in the evolution of complex linguistic structure.

References

Kirby, S., Cornish, H., & Smith, K. (2008). Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. PNAS, 105(31), 10681–10686.

Kirby, S., Tamariz, M., Cornish, H., & Smith, K. (2015). Compression and communication in the cultural evolution of linguistic structure. Cognition, 141, 87–102.

Lupyan, G., & Dale, R. (2015). The role of adaptation in understanding linguistic diversity. In R. D. Busser & R. J. LaPolla (Eds.), Language structure and environment: Social, cultural, and natural factors. Cognitive Linguistic Studies in Cultural Contexts, 6. John Benjamins.

Perfors, A., Tenenbaum, J. B., & Regier, T. (2011). The learnability of abstract syntactic principles. Cognition, 118(3), 306–338.

Regier, T., Kemp, C., & Kay, P. (2015). Word meanings across languages support efficient communication. In B. MacWhinney & W. O’Grady (Eds.), The handbook of language emergence. John Wiley & Sons.

Citation:

Saldana C., Kirby S. and Smith K. (2016). The Cultural Evolution Of Complexity In Linguistic Structure. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/49.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Saldana, Kirby, Smith 2016

Skepticism Towards Skepticism Towards Computer Simulation In Evolutionary Linguistics

Carlos Santana
University of Pennsylvania

Keywords: Computer simulation, Sources of evidence, Theory confirmation

Short description: How computer simulations are useful even if they don't provide evidence

Abstract:

Many scientists doubt that computer simulation can play a meaningful role in research on the evolution of language. Others argue that agent-based models are especially well-suited to address the paucity of empirical evidence in evolutionary linguistics. I argue that while simulations have no evidential or confirmatory power in themselves, they are capable of extending the inferential reach of our experimental and observational evidence, and thus have an important role to play in the science of language evolution.

Citation:

Santana C. (2016). Skepticism Towards Skepticism Towards Computer Simulation In Evolutionary Linguistics. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/51.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Santana 2016

From Natural Order To Convention In Silent Gesture

Marieke Schouwstra , Kenny Smith and Simon Kirby
University of Edinburgh

Keywords: gesture, silent gesture, experiment, semantics, interaction

Short description: Basic word order in silent gesture: initially conditioned semantically, but becomes regular through interaction and iteration

Abstract:

All languages have ways to describe who did what to whom, and many have a fixed order for Subject, Object and Verb. Silent gesture, an experimental paradigm in which adult hearing participants are asked to describe events using only their hands, has proven to be a valuable tool to investigate the origins of word order. It was found that regardless of the dominant order of their native language, people prefer to use SOV order for describing extensional transitive events (e.g., boy-ball-throw) (Goldin-Meadow et al., 2008). They deviate from this order, however, when an event has certain semantic characteristics. When events are reversible (the roles of Agent and Patient can be switched; e.g. boy-lift-girl), this leads to various word orders, one of which is SVO (Gibson et al., 2013; Hall et al., 2013). When intensional (the Object is non-existent or non-specific; e.g., boy-search-ball), this leads to a preference for SVO order (Schouwstra & de Swart, 2014).

The two orders mentioned above, SOV for extensional and SVO for intensional events, arise independently of the dominant order of the participants’ native language, and we will claim that they represent naturalness: they are cognitively the most intuitive way to impose linear structure on information, reflecting a preference to put Agents first (see Jackendoff, 2002) and more abstract or relational information last. Existing languages do not generally reflect this naturalness, as they tend not to have word order conditioned on event type. Given the improvisation situation that favours semantically conditioned word order, and the fully conventionalised situation that favours regularity in word order, we investigated what happens to silent gesture over time when it is used for communication. Will it become more regular, like conventional language?

In experiment 1, 24 adult native speakers of English with no knowledge of any sign language were assigned to dyads, and each dyad was asked to communicate about intensional and extensional events. The set of stimuli consisted of 64 line drawings: 32 intensional and 32 extensional events. Participants alternated between the role of actor and interpreter and engaged in six rounds of 32 trials each (switching roles each trial). As actor, they described an image (presented on an iPad) using only their hands, and as interpreter, they chose (from an array of 8 images on the iPad) the image they thought was intended by the actor. Each actor described equal numbers of intensional and extensional events. They received immediate feedback after each trial, and were encouraged to increase their speed over rounds and be as quick and accurate as possible overall.

Speed and accuracy increased over the course of the experiment. Moreover, the word orders showed signs of conventionalisation: over the rounds, word order became less conditioned on meaning, and 7 of 12 dyads even converged on a single word order. However, all 7 pairs converged on SVO, the natural order for intensional events but also the same order as that of their native language.

To see if the frequency of event types could influence the word order of the emerging sign system, we conducted a second experiment, in which extensional events were more frequent than intensional events. Experiment 2 was set up the same as experiment 1, except for the proportions of the two kinds of events: each actor described 24 intensional events (25%) and 72 extensional events (75%). The larger proportion of extensional events had an influence on the proportion of SOV orders used throughout the experiment, compared to experiment 1, and on the way in which conventional word order was introduced: although 3 dyads converged on SVO word order, 3 other dyads converged on SOV.

Our experiments show that in silent gesture communication, semantically conditioned word order tends to disappear in favour of more regular word order. The frequency of extensional and intensional events influences the way in which regularisation progresses. This suggests that where pressures for naturalness and regularity are in conflict, languages may start natural, but that naturalness will give way to regularity as signalling becomes conventionalised through repeated usage.

Citation:

Schouwstra M., Smith K. and Kirby S. (2016). From Natural Order To Convention In Silent Gesture. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/67.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Schouwstra, Smith, Kirby 2016

Active Control Of Complexity Growth In Naming Games: Hearer's Choice

William Schueller1 and Pierre-Yves Oudeyer2
1 INRIA Bordeaux Sud-Ouest
2 INRIA Bordeaux Sud-Ouest, ENSTA ParisTech

Keywords: active learning, emergence of language, self-organization

Short description: Active choice of topics in Naming Games improves lexicon alignment dynamics at the population level, especially when the hearer is choosing.

Abstract:

How do linguistic conventions emerge among a population of individuals? A shared lexicon can self-organize at this level through local interactions between individuals, as has been modelled in the Naming Games computational framework. However, the dynamics of the convergence process towards this shared convention can differ a lot depending on the interaction scenario. Infants, who acquire social conventions very fast, actively control the complexity of what they learn, often following a developmental pathway. Adults also adapt the complexity of their linguistic input when speaking to language beginners. We show here that such an active learning mechanism can considerably improve the speed of language formation in Naming Game models. We compare two scenarios for the interactions: either the speaker exerts active control, or the hearer does. The second scenario shows faster dynamics, with more robustness.
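
A minimal Naming Game sketch contrasting the two topic-choice scenarios is given below; the update rule, population size, and the 'pick a topic you lack a word for' heuristic are illustrative assumptions, not the authors' model.

# Toy Naming Game: agents negotiate names for a set of topics. The chooser of each
# interaction's topic can be the speaker or the hearer; here the chooser actively picks
# a topic it has no word for yet (a crude stand-in for active control of complexity).
import random

TOPICS = list(range(8))

class Agent:
    def __init__(self):
        self.lexicon = {}                                  # topic -> preferred word

    def word_for(self, topic):
        if topic not in self.lexicon:                      # invent a word if needed
            self.lexicon[topic] = "w%04d" % random.randrange(10000)
        return self.lexicon[topic]

def alignment(agents):
    # Average, over topics, of the share of agents using the most common word for it.
    total = 0.0
    for t in TOPICS:
        words = [a.lexicon[t] for a in agents if t in a.lexicon]
        if words:
            total += max(words.count(w) for w in set(words)) / len(agents)
    return total / len(TOPICS)

def run(hearers_choice, n_agents=10, n_interactions=400):
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(n_interactions):
        speaker, hearer = random.sample(agents, 2)
        chooser = hearer if hearers_choice else speaker
        unknown = [t for t in TOPICS if t not in chooser.lexicon]
        topic = random.choice(unknown) if unknown else random.choice(TOPICS)
        hearer.lexicon[topic] = speaker.word_for(topic)    # hearer aligns (simplified)
    return alignment(agents)

random.seed(4)
print("speaker's choice:", run(hearers_choice=False))
print("hearer's choice: ", run(hearers_choice=True))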

Citation:

Schueller W. and Oudeyer P. (2016). Active Control Of Complexity Growth In Naming Games: Hearer's Choice. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/105.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Schueller, Oudeyer 2016

Mind The Gap: Inductive Biases In Phonological Feature Learning

Klaas Seinhorst
University of Amsterdam

Keywords: inductive biases, phonological features, complexity, iterated learning

Abstract:

Although extensive research has been done into the acquisition of non-linguistic feature combinations, empirical evidence about phonological feature learning is scarce. I present results from learning experiments in which participants learnt a data set with the internal structure of a plosive segment inventory. The outcomes suggest that learning biases may indeed play a role in phonological typology, and that learners reduce the cumulative complexity in the data set considerably. These results support the hypothesis that the reduction of complexity is a driving force in the evolution of language.

Citation:

Seinhorst K. (2016). Mind The Gap: Inductive Biases In Phonological Feature Learning. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/155.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Seinhorst 2016

Children's Production Of Determiners As A Test Case For Innate Syntactic Categories

Catriona Silvey1 and Christos Christodoulopoulos2
1 University of Chicago
2 University of Illinois

Keywords: determiner acquisition, child language, corpus analysis, constructivist versus generativist accounts, discourse

Short description: Children's determiner productions in the light of discourse: questioning support for innate syntactic categories

Abstract:

A central debate in language evolution is whether humans have a specific innate capacity for language, or whether domain-general learning abilities can explain the acquisition of linguistic structures. One testing ground for these hypotheses has been children’s early use of English determiners, specifically the definite and indefinite articles ‘the’ and ‘a’. The argument goes as follows: if children have an innate syntactic determiner category, they should interchangeably use ‘the’ and ‘a’ with all nouns as soon as they begin producing them with a determiner. However, if children initially learn determiner-noun combinations as islands and only gradually abstract a syntactic category, they should initially use particular nouns with only one determiner (Valian, Solt, & Stewart, 2009). These two possibilities can be quantified as ‘overlap’: the number of nouns children produce with both ‘a’ and ‘the’, divided by the number of nouns children produce with either. If overlap is 0, children use each noun only with one of the two determiners, suggesting island-based learning. If overlap is 1, children use each noun with both determiners, suggesting a productive syntactic category. Results from this paradigm have been mixed. Some researchers find that children’s overlap is low, suggesting that an abstract category of determiner is gradually constructed rather than being present from the start (Pine, Freudenthal, Krajewski, & Gobet, 2013). Others counter that children’s overlap is not significantly different from their parents’, suggesting an innate syntactic category (Valian et al., 2009).

Yang (2013) addresses an important problem with using overlap as a measure of productivity. As Valian et al. (2009) observe, the fewer times a noun appears, the more likely it is to appear with only one determiner. Therefore, low overlap may simply be a consequence of many nouns appearing only a few times. Yang therefore uses the frequencies of noun types and determiners to predict the expected overlap if determiners and nouns freely combine within these frequency constraints. His model accurately predicts empirical overlap values in early child language. Yang interprets this result as showing that, from the start, children have an abstract determiner category. This finding has since been cited as evidence for innate syntactic categories (Bolhuis, Tattersall, Chomsky, & Berwick, 2014).

We replicate Yang’s model on the six children from the CHILDES corpus analysed in Yang (2013). We show that while the model holds on average across nouns, it poorly predicts the behaviour of individual nouns. As a result, it systematically underestimates the overlap that would occur if nouns and determiners freely combined within Zipfian constraints. Keeping constant the overall frequencies of nouns and determiners, we shuffle each child’s productions so that determiners and nouns combine at random. For these shuffled data, overlap measures exceed those predicted by Yang’s model. The model, then, predicts the children’s data not because they resemble the product of a freely combinatorial grammar, but because determiners and nouns do not freely combine: many mid- to high-frequency nouns appear with only one determiner. While Yang acknowledges these ‘use asymmetries’, he characterises them as ‘unlikely to be linguistic’. We argue, however, that a) these asymmetries significantly constrain both children’s and adults’ data, and b) they are linguistic, specifically the product of lexical semantics interacting with the discourse functions of ‘a’ and ‘the’. Since the target of acquisition is therefore not a freely combinatorial system, but one conditioned on semantics and discourse factors, children’s productions are more accurately represented as a gradual acquisition of these factors, rather than as either islands or grammatical combinations isolated from discourse. More broadly, studies using naturalistic corpora to test hypotheses about language acquisition and evolution should be wary of either taking constrained usage patterns as evidence of lack of grammar, or abstracting away from them in aid of revealing underlying rules, since these constraints are a non-arbitrary part of the function of the language.
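
For readers who want to see the logic of the overlap measure and the permutation baseline, here is a minimal sketch (ours, not the authors' code); the toy corpus is invented, and the baseline simply shuffles determiners against nouns while preserving both marginal frequencies.

import random
from collections import defaultdict

# Minimal sketch (not the authors' code) of the overlap measure and a
# permutation baseline that preserves the marginal frequencies of nouns
# and determiners. The toy corpus below is invented for illustration.

def overlap(pairs):
    """pairs: (determiner, noun) tokens, determiner in {'a', 'the'}."""
    dets_per_noun = defaultdict(set)
    for det, noun in pairs:
        dets_per_noun[noun].add(det)
    both = sum(1 for dets in dets_per_noun.values() if dets == {"a", "the"})
    return both / len(dets_per_noun)

def shuffled_overlap(pairs, n_shuffles=1000):
    dets = [d for d, _ in pairs]
    nouns = [n for _, n in pairs]
    scores = []
    for _ in range(n_shuffles):
        random.shuffle(dets)              # break determiner-noun associations only
        scores.append(overlap(list(zip(dets, nouns))))
    return sum(scores) / n_shuffles

corpus = [("the", "dog"), ("a", "dog"), ("the", "sun"), ("the", "sun"),
          ("a", "ball"), ("a", "ball"), ("the", "car"), ("a", "car")]
print("empirical overlap:", overlap(corpus))
print("shuffled baseline:", shuffled_overlap(corpus))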

Citation:

Silvey C. and Christodoulopoulos C. (2016). Children's Production Of Determiners As A Test Case For Innate Syntactic Categories. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/23.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Silvey, Christodoulopoulos 2016

Vocal Learning In Functionally Referential Chimpanzee Food Calls

Katie Slocombe1 , Stuart Watson1 , Anne Schel1 , Claudia Wilke1 , Emma Wallace1 , Leveda Cheng1 , Victoria West1 and Simon Townsend2
1 University of York
2 University of Zurich

Keywords: Primate communication, vocal learning, vocal Production, flexibility, chimpanzee, food calls, functionally referential calls

Short description: We show socially mediated changes in the structure of chimpanzee food calls. This is the first example of vocal learning in referential vocalizations of any non-human species, and it dispels the myth that the structure of such calls is solely determined by arousal.

Abstract:

One standout feature of human language is our ability to reference external objects and events with socially learnt symbols, or words. Exploring the phylogenetic origins of this capacity is therefore crucial to a comprehensive understanding of the evolution of language. While non-human primates can produce vocalizations that function as if they refer to external objects in the environment, the psychological mechanisms underlying call production, in terms of a caller’s motivation and a caller’s ability to alter the structure of these calls, likely differ from those of humans (Wheeler and Fischer, 2012). Indeed, it is generally argued that the acoustic structure of context-specific calls elicited by salient external stimuli (e.g. food, predators) is directly determined by arousal states induced by the external stimuli (Wheeler and Fischer, 2012). This apparent lack of flexible control over the structure of functionally referential vocalizations represents a key discontinuity with language. We tested the degree of flexibility in the acoustic structure of functionally referential chimpanzee food calls (Slocombe and Zuberbühler, 2005) and whether the structure of these calls could be influenced by vocal learning processes.

We examined the food preferences and the acoustic structure of food calls of two groups of adult chimpanzees, prior to and for 3 years after their integration into a single group at Edinburgh Zoo, UK. Prior to social integration in 2010, the resident Edinburgh (ED) chimpanzees (N = 6) and the immigrant Beekse Bergen (BB) chimpanzees (N = 7) had significantly different preferences for apples and produced acoustically distinct calls whose structure, in line with previous research, matched their preferences for this food (Slocombe and Zuberbühler, 2006). Apples were regularly fed to both groups for at least 3 years before integration, so they were not a novel food for either group. General arousal levels may have been elevated in 2010 as both groups adjusted to a new social environment and the BB chimpanzees habituated to a new enclosure. However, in 2011, one year after integration and habituation to the new social and physical environment, the call structures and preferences for apples of the two groups remained stable, indicating that changes in general arousal were not affecting call structures. Social network analysis (SNA) revealed two distinct subgroups in 2011, with individuals still preferring to associate with members of their original group and perhaps lacking the motivation to converge their calls. In 2013, SNA showed that the subgroups had dissolved and strong inter-group relations had developed. Although the ED calls remained structurally stable from 2010 to 2013, in 2013 the BB calls changed significantly to converge with the lower-frequency ED calls. Importantly, this call convergence occurred independently of preferences for apples, which remained stable over all years for both groups. This shows a decoupling of the affective response induced by the external stimulus (apples) and the structure of the call produced. We argue that these data represent the first evidence of non-human animals actively modifying and socially learning the structure of a meaningful, functionally referential vocalization from conspecifics. Our findings indicate that functionally referential call structure is not solely determined by arousal processes in our closest living relative. Although this modest degree of acoustic change within an existing call type is not analogous to the impressive vocal learning shown by humans, such flexibility may be an important evolutionary precursor to the socially learnt referential words that are so central to human communication.

References

Slocombe, K.E., & Zuberbühler, K. (2006). Food-associated calls in chimpanzees: responses to food types or food preferences? Animal Behaviour, 72, 989–999.

Slocombe, K.E., & Zuberbühler, K. (2005). Functionally referential communication in a chimpanzee. Current Biology, 15, 1779–1784.

Wheeler, B.C., & Fischer, J. (2012). Functionally referential signals: a promising paradigm whose time has passed. Evolutionary Anthropology, 21, 195–205.

Citation:

Slocombe K., Watson S., Schel A., Wilke C., Wallace E., Cheng L., West V. and Townsend S. (2016). Vocal Learning In Functionally Referential Chimpanzee Food Calls. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/118.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Slocombe, Watson, Schel, Wilke, Wallace, Cheng, West, Townsend 2016

Chimpanzees Process Structural Isomorphisms Across Sensory Modalities

Ruth Sonnweber1 and Andrea Ravignani2
1 University of Vienna
2 Vrije Universiteit Brussel

Keywords: primate, artificial grammar learning, dependencies, abstract, statistical learning

Abstract:

Humans and other animals are constantly exposed to environmental stimuli, from which they extract sensory regularities (Fitch, 2014; ten Cate and Okanoya, 2012). Moreover, they often relate and integrate one-dimensional quantities across sensory modalities (Ludwig et al., 2011), for instance relating conspecific faces to voices (Seyfarth and Cheney, 2009). If basic patterns like repetitions and identities are perceived in different sensory modalities (Ravignani et al., 2013; Ravignani et al., 2015; Sonnweber et al., 2014), it could be advantageous to detect cross-modal isomorphisms, i.e. modality-independent representations of structural features, which could be used in visual, tactile, and auditory processing. Humans can transfer structural regularities learnt in one modality, e.g. visual sequences, to another modality, e.g. unfamiliar sound sequences (Altmann et al., 1995). To date, this ability to map structural regularities across domains has not been demonstrated in other animals. Here we show that two chimpanzees trained to choose symmetric sequences of geometric shapes spontaneously detected a visual-auditory isomorphism. Although the chimpanzees were never trained to associate sounds with images, their response latencies in choosing symmetric visual sequences were shorter when presented with (structurally isomorphic) symmetric, rather than foil, sound triplets. Thus, previously unheard sound sequences influenced the choice of visual sequences solely on the basis of structural similarities. This provides the first evidence of structure learning across modalities in a non-human animal. Our findings suggest that human language is not a prerequisite for mapping abstract structures between modalities. Cross-modal abilities might instead have constituted a precursor to human linguistic abilities (Cuskley and Kirby, 2013), involving evolutionarily old neural mechanisms (Ghazanfar and Takahashi, 2014).

Citation:

Sonnweber R. and Ravignani A. (2016). Chimpanzees Process Structural Isomorphisms Across Sensory Modalities. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/76.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Sonnweber, Ravignani 2016

Rule Learning In Birds: Zebra Finches Generalize By Positional Similarities, Budgerigars By The Structural Rules.

Michelle Spierings and Carel ten Cate
Leiden University

Keywords: Rule learning, Positional learning, Songbirds, Parrots

Short description: Budgerigars can learn abstract rules in acoustic strings, zebra finches focus on the positional information.

Abstract:

The ability to abstract a rule that defines the structure of strings of sounds is a core mechanism underlying the language faculty, but it might not be specific to language learning or even to humans. Until now, it has been unclear whether and to what extent non-human animals possess the ability to abstract a rule defining the relationship among arbitrary auditory items in a string and to generalize this rule to strings of acoustically novel items (ten Cate & Okanoya, 2012; ten Cate, 2014). In this study we tested both a songbird (the zebra finch) and a parrot species (the budgerigar) on these rule-learning abilities. Subjects were trained in a go/no-go design to discriminate between two sets of sound strings that followed either an XYX or an XXY structure. After this discrimination was acquired, each subject received a number of test strings (mixed with the training strings) that followed the same structural rules, but consisted of either new combinations of known elements or of novel elements belonging to other categories. Both species learned to discriminate between the two stimulus sets during training. However, their responses to the test strings were strikingly different. Zebra finches categorized test stimuli with known elements by the positions that these elements occupied in the training strings. A subsequent experiment with artificially created sound elements showed that this was independent of whether the strings consisted of conspecific or unknown sounds. In contrast, the budgerigars categorized both novel combinations of familiar elements and strings consisting of novel elements by their underlying structure. They thus abstracted the relationship among items in the XYX and XXY structures, indicating a level of abstraction comparable to analogical reasoning, a cognitive ability long thought to be unique to humans and thus far only known from great apes and crows (Thompson & Oden, 2000; Smirnova et al., 2015). Our study is the first clear indication that abstract rule learning in auditory strings is not specific to language or to humans.
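
The two generalization strategies can be illustrated with a toy classifier (ours, not the authors' analysis): one categorizes a test triplet by its abstract pattern of identities, the other by which familiar elements occupy which serial positions in the training strings.

# Toy illustration (not the authors' analysis) of the two generalization
# strategies contrasted above, applied to element triplets from a training set.

def structural_rule(triplet):
    """Budgerigar-like: classify by the abstract pattern of identities."""
    a, b, c = triplet
    if a == c != b:
        return "XYX"
    if a == b != c:
        return "XXY"
    return "other"

def positional_similarity(triplet, training):
    """Zebra-finch-like: classify by which familiar elements occupy which positions."""
    def score(strings):
        return sum(el == s[i] for s in strings for i, el in enumerate(triplet))
    return max(training, key=lambda label: score(training[label]))

training = {"XYX": [("A", "B", "A"), ("C", "D", "C")],
            "XXY": [("A", "A", "B"), ("C", "C", "D")]}

test = ("B", "A", "B")   # familiar elements in a new combination, XYX structure
print(structural_rule(test))                  # 'XYX': generalizes by structure
print(positional_similarity(test, training))  # 'XXY': follows familiar positions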

References

Smirnova, A., Zorina, Z., Obozova, B., & Wasserman, E. (2015). Crows spontaneously exhibit analogical reasoning. Current Biology, 25, 1-5.

ten Cate, C. (2014). On the phonetic and syntactic processing abilities of birds: from songs to speech and artificial grammars. Curr. Opin. Neurobiol. 28, 157-164.

ten Cate, C., & Okanoya, K. (2012). Revisiting the syntactic abilities of non- human animals: natural vocalizations and artificial grammar learning. Phil.Trans. R. Soc. B., 367(1598), 1984-1994.

Thompson, R.K.R., & Oden, D.L. (2000). Categorical perception and conceptual judgments by nonhuman primates: The paleological monkey and the analogical ape. Cogn. Sci. 24, 363–396.

Citation:

Spierings M. and ten Cate C. (2016). Rule Learning In Birds: Zebra Finches Generalize By Positional Similarities, Budgerigars By The Structural Rules. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/15.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Spierings, ten Cate 2016

Minimal Pressures Leading To Duality Of Patterning

Matthew Spike , Kenny Smith and Simon Kirby
The University of Edinburgh

Keywords: duality of patterning, compositionality, combinatoriality, agent-based modelling

Abstract:

Hockett (1959) identified duality of patterning as a fundamental design feature of human language. Ladd (2014) re-analyses 'duality' as two levels of systematicity, one fully embedded within the other: a meaningless, combinatorial level, providing the building blocks for a meaningful, compositional level. Explanations for the emergence of combinatoriality include ease of production and reception (Roberts, 2015), robustness (Ay, 2007), cultural adaptation to physical constraints (Zuidema, 2009), and learnability (Verhoef, 2014). Likewise, explanations for compositionality range from communicative function (De Beule, 2008) to cultural selection for expressivity and learnability (Kirby, 2015). Tria et al. (2012) are the first to outline an integrated model of the emergence of duality, showing that distinct mechanisms of 'noise/recognition' and 'blending/repair' lead to the emergence of combinatorial and compositional structure respectively.

Our proposal is that both combinatoriality and compositionality are functional responses that maintain expressivity and learnability against noise. This depends on the level of analysis at which noise applies: signal-directed noise leads to combinatoriality, and noise which affects signal/meaning associations drives compositionality. Our approach contrasts with that of Tria et al.: we also investigate the emergence of the two levels of patterning, but we aim to show that they are driven by identical, not distinct, pressures. We employ an exemplar-based computational model of cultural learning subjected to twin pressures of expressivity and learnability. In common with Tria et al., utterances are modelled as strings drawn from a potentially infinite set of characters, subject to noise during transmission/storage. Learnability is modelled in terms of compressibility, a consequence of noisy pressures causing sub-strings across all exemplars to become more similar. Expressivity is an opposing force causing competition between similar sub-strings with a shared meaning. Besides these processes, agents are modelled simply as a set of exemplars associating full strings with complex meanings, with an exemplar memory of fixed size. We show that both combinatorial and compositional structures act to maintain learnable, expressive systems against noise: similarly to Tria et al., we find that combinatoriality is modulated by signal-directed noise, which can be situated in both perception and cognition. Furthermore, compositionality also requires pressures for learnability and expressivity but, as with Kirby (2015), whether systems become compositional or holistic depends on the presence of noise in the shape of an information bottleneck, which can be located in both transmission and memory. Given these results, we propose that combinatoriality emerges when noise puts the signal space under pressure to maintain learnability and expressivity, while compositionality occurs when noise puts the signal/meaning association space under similar pressures. This helps dispel apparent conflicts between physical, perceptual and cognitive accounts of combinatoriality on the one hand, and acquisition- vs. interaction-based accounts of compositionality on the other. However, this does not guarantee that duality of patterning will arise in any socially learnt communication system. Neither does either level of patterning predict the other: we suggest that duality is a response to noise at two levels of analysis.
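
To make the two noise types concrete, here is a toy illustration (not the authors' exemplar model): signal-directed noise perturbs characters inside signals, while a transmission/memory bottleneck drops whole signal-meaning exemplars. The lexicon, noise rate and bottleneck size are invented for illustration only.

import random, string

# Toy illustration (not the authors' model) of the two noise types discussed
# above: signal-directed noise perturbs characters within strings, while an
# information bottleneck drops whole signal-meaning exemplars in transmission.

def signal_noise(exemplars, rate=0.1):
    noisy = {}
    for meaning, signal in exemplars.items():
        noisy[meaning] = "".join(
            random.choice(string.ascii_lowercase) if random.random() < rate else c
            for c in signal)
    return noisy

def bottleneck(exemplars, k):
    kept = random.sample(sorted(exemplars), k)
    return {m: exemplars[m] for m in kept}

lexicon = {("circle", "red"): "wakitu", ("circle", "blue"): "wakilo",
           ("square", "red"): "mopitu", ("square", "blue"): "mopilo"}
print(signal_noise(lexicon))          # pressure on the signal space
print(bottleneck(lexicon, k=2))       # pressure on the signal/meaning space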

Citation:

Spike M., Smith K. and Kirby S. (2016). Minimal Pressures Leading To Duality Of Patterning. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/129.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Spike, Smith, Kirby 2016

Information Dynamics Of Learned Signalling Games

Matthew Spike1 , Simon Kirby1 and Kenny Smith2
1 The University of Edinburgh
2 University of Edinburgh

Keywords: signalling games, agent-based modelling, dynamics, information theory, self-organising lexicons

Abstract:

Signalling games involving agent learners exist in various guises, from the game-theoretic Roth-Erev learners of Skyrms (2010), to the Naming Game (Steels, 1997), and agents employing varieties of observational learning (e.g. Oliphant & Batali, 1996; Smith, 2002). The agent-based nature of this work means that the resulting dynamics have an inherently unpredictable character: individual simulations may or may not be representative of average behaviour, if such a thing exists at all. Typically, the best way of overcoming this problem is to run large numbers of simulations and observe the aggregate behaviour. This contrasts with other frameworks, for example classical or evolutionary game theory. In these cases, there is some macro-level property of the model which drives the overall dynamic of the game. For example, the fitness of individual agents in evolutionary models is evaluated using the global average communicative success. Because of this, it is possible to calculate the mean-field dynamic for any known mixture of strategies in the population, revealing any attractors or stable points. In the case of agent-based models, because overall dynamics are completely determined by individual pairwise interactions at the micro-level (Mühlenbernd, 2013), the likely result of any interaction is not a direct consequence of the global communicative success of a population, which therefore cannot serve to describe the overall dynamics. Hence, identifying attractors and stable points poses a much harder problem. To resolve this problem, we introduce a new information-theoretic measure of optimality which can describe the overall dynamics of signalling populations of learning agents.

Typically, information theory (Shannon, 1948) has proven difficult to apply to problems involving meaningful communication, as it has no way of describing semantic or referential content. Although there have been attempts to address this (e.g. Corominas-Murtra, Fortuny, & Solé, 2014), these still include a problematic macro-level term such as the one described above. However, we are able to avoid this under the assumption that agent signalling production and reception behaviours are derived from a single shared set of signal-meaning associations. In this case, we can use the signal production behaviour of individual agents to describe their individual optimality in terms of the conditional entropy of meanings given signals, H(M|S), where low entropy represents low ambiguity. Employing this measure, we show that the overall entropy of a system has two components, determined by the average individual entropy and the average alignment entropy: individual entropy measures the optimality of a single agent’s own signalling system, while alignment entropy is the extra uncertainty due to the divergence of any agent from the population mean. We draw on results such as Xue (2006), which show that any population of agents which imitate each other with positive probability will inevitably drive the alignment entropy to zero.

This allows us to dissect the overall dynamics of any signalling game involving associative agents, which we do by analysing the pairwise interaction defined by its model of learning. In particular, we can describe any population as a point in an entropy state-space. Certain points within this space represent final stable states of the population in terms of their optimality. As such, we are able to show that the way ‘imitative’ learning by itself causes populations to move around the state-space resembles a type of genetic drift. Moreover, we identify the features which must exist to ensure populations develop optimal signalling: firstly, the imitative property described above; secondly, the learning model must on average reduce conditional entropy in any pairwise interaction. Finally, there must be a way to prevent learning slowdown: i.e. agents must retain plasticity. Using these three factors as a diagnostic, we are able to determine the dynamics of any population model involving associative signalling agents without recourse to numerical simulation, including whether or not it will develop optimal signalling. This applies not just to modelling work, but to any theory of the emergence of novel lexicons.
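
A small numerical sketch of the entropy decomposition may be helpful. The code below reflects our reading of the measure described above rather than the authors' implementation: each agent is reduced to a production matrix p(signal | meaning) over a uniform meaning distribution, and the population-level conditional entropy H(M|S) is split into the mean individual entropy plus an alignment term.

import numpy as np

# Hedged sketch (our reading of the measure described above, not the authors'
# code): agent i has a production matrix P_i[m, s] = p(signal s | meaning m),
# with meanings assumed uniform. We compute H(M|S) for each agent and for the
# population-average behaviour; the gap between the population-level entropy
# and the mean individual entropy is the extra uncertainty due to misalignment.

def cond_entropy_m_given_s(P):
    """P[m, s] = p(s | m); meanings assumed uniform."""
    joint = P / P.shape[0]                       # p(m, s) with uniform p(m)
    p_s = joint.sum(axis=0, keepdims=True)       # p(s)
    with np.errstate(divide="ignore", invalid="ignore"):
        p_m_given_s = np.where(p_s > 0, joint / p_s, 0.0)
        logs = np.where(p_m_given_s > 0, np.log2(p_m_given_s), 0.0)
    return -(joint * logs).sum()

def entropy_decomposition(agents):
    individual = np.mean([cond_entropy_m_given_s(P) for P in agents])
    population = cond_entropy_m_given_s(np.mean(agents, axis=0))
    alignment = population - individual          # non-negative: conditioning on agent identity cannot increase entropy
    return individual, alignment, population

# two perfectly unambiguous but misaligned agents (2 meanings x 2 signals)
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
print(entropy_decomposition([a, b]))   # individual 0.0, alignment 1.0 bit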

Citation:

Spike M., Kirby S. and Smith K. (2016). Information Dynamics Of Learned Signalling Games. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/130.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Spike, Kirby, Smith 2016

Metalinguistic Awareness Of Trends As A Driving Force In Language Change: An Empirical Study

Kevin Stadler , Elyse Jamieson , Kenny Smith and Simon Kirby
The University of Edinburgh

Keywords: language change, metalinguistic awareness, frequency tracking, trends

Short description: Metalinguistic awareness of trends as a driving force in linguistic evolution: is language change intentional, and not just incidental?

Abstract:

Like other culturally replicated traits, human languages - whether spoken or signed - evolve continually. But, unlike many other cultural traits, linguistic conventions are arbitrary - the exact form of a morpheme or of a syntactic word order convention matters much less than the fact that there is a convention that is understood by all. Consequently, once a convention is established there is a pressure to maintain it, and not to replace it with another form - so what selective forces cause languages to keep evolving?

Sociolinguistic research over the past decades has shown that language changes spread across social groups in an orderly fashion, a process which is typically explained by the notion of 'prestige' - a metalinguistic property of linguistic variants that determines whether a speech community will seek to adopt the new variant or not. A positively evaluated variant might be taken up thanks to its overt prestige value, while the spread of negatively evaluated variants (those perceived as 'wrong' or otherwise imbued with negative associations) is said to be due to covert prestige. Crucially, the establishment of social prestige is itself a puzzle to be solved: the choice of which linguistic form becomes 'prestigious' is as arbitrary as the choice of using one form over another. The prestige value of a variant needs to be negotiated and spread across the speech community in the first place, a process which requires just as much explanation as the diffusion of the linguistic form that it is supposed to explain.

To put the notion of metalinguistic prestige on a more solid footing, Labov (2001, ch. 14) suggested that the steady advancement of changes across generations might be driven by adolescents' awareness of the directionality of linguistic changes, combined with a pressure to distinguish themselves from older speakers. While experiments have shown that the latter pressure can indeed drive linguistic divergence of an artificial language when social group membership is marked explicitly (Matthews, Roberts, & Caldwell, 2012), empirical evidence that humans are able to exploit information about the directionality of ongoing changes is still missing.

In this work we report results from a first quantitative investigation of the human capacity for tracking language change in progress. Using a questionnaire methodology, we collected data on speakers' implicit and explicit awareness of three ongoing syntactic changes to verb positioning in the local variety of Scots spoken in Shetland, an island group to the north of Great Britain. 77 participants were asked to report the usage levels they perceived for different age and speaker groups, for the three changing variables as well as for a stable, non-changing control. Our results show that individuals can reliably identify which of the competing linguistic variants are older and which are newer. The data also indicate that individual perceptions of apparent-time differences (when younger speakers are leading a change, with the usage levels of older speakers 'lagging behind') can be used reliably to determine the directionality of the changes in progress.

The efficacy of 'trend-amplifying' selection mechanisms such as the one suggested by Labov has already been demonstrated theoretically by means of computational modelling (Stadler, Blythe, Smith, & Kirby, 2016). In particular, Mitchener (2011) showed that a model of language change based on perceived usage differences between age groups can successfully produce directional selection of arbitrary variants. Our quantitative results indicate that the information required by such mechanisms is indeed readily available to humans, and might consequently be used to coordinate changes across a community. Our results support the idea that language evolution, rather than just being the result of drift and the incidental accumulation of errors in transmission and acquisition, is actively maintained and driven by individuals. Such evidence suggests that the origins of human language as we know it rest not only on an increased linguistic capacity, but also on metalinguistic capacities that are sensitive to variation and able to exploit usage patterns to actively guide linguistic divergence and change.
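
A toy simulation may make the trend-amplifying mechanism concrete. The sketch below is illustrative only; it is not Mitchener's (2011) model or the simulations in Stadler et al. (2016). It simply assumes that speakers boost their adoption of an incoming variant in proportion to the perceived usage difference between younger and older speakers (the apparent-time signal discussed above), with arbitrary starting frequencies and amplification strength.

import random

# Toy trend-amplifier sketch (illustrative assumptions throughout): speakers
# estimate the incoming variant's frequency in a younger and an older cohort,
# and adopt it with a probability boosted by the perceived apparent-time
# difference (younger minus older usage). With amplification = 0 the variant
# merely drifts; with amplification > 0 the change advances directionally.

def step(young, old, amplification=2.0, cohort=100):
    """young/old: current usage frequencies of the new variant in each cohort."""
    trend = max(0.0, young - old)
    p_adopt = min(1.0, young + amplification * trend)
    new_young = sum(random.random() < p_adopt for _ in range(cohort)) / cohort
    return new_young, young        # this generation's young become the next old

young, old = 0.15, 0.05
for generation in range(10):
    young, old = step(young, old)
    print(f"gen {generation}: young={young:.2f}, old={old:.2f}")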

Citation:

Stadler K., Jamieson E., Smith K. and Kirby S. (2016). Metalinguistic Awareness Of Trends As A Driving Force In Language Change: An Empirical Study. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/145.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Stadler, Jamieson, Smith, Kirby 2016

The Grammar Of The Body And The Emergence Of Complexity In Sign Languages

Rose Stamp and Wendy Sandler
University of Haifa

Keywords: language emergence, sign languages, language evolution, Grammar of the Body, compositionality, Israeli Sign Language

Abstract:

In all human languages, spoken and signed, complex expressions are compositional: their meanings are determined by the meanings of their constituents and the rules for combining them (e.g., Krifka 2001; Jackendoff 2011; Pfau et al 2012; Smith & Kirby 2012). In sign languages only, however, while the hands convey words, individual actions of the face, head, and torso can manifest different linguistic functions, often simultaneously, creating complex, compositional visual configurations. Squinted eyes in Israeli Sign Language (ISL) signal the interlocutor to retrieve shared information (Dachkovsky & Sandler 2009); brow raise signals yes/no questions in ISL as in ASL (Liddell 1980); and the combination of squinted eyes and brow raise in ISL signals a yes/no question about shared information (Nespor & Sandler 1999). Similarly, different head and body postures can represent different participants (e.g., Lillo-Martin 1995; Metzger 1995), concepts (van der Kooij & Crasborn 2006), or places in a discourse. Figure 1 schematizes how movements of different articulators contribute to the overall meaning of an utterance in contemporary ISL, to create a Grammar of the Body (Sandler, to appear).

However, corporeal and linguistic complexity do not emerge all at once. An earlier, preliminary study of a newly emerging sign language in a Bedouin community with a high incidence of deafness, Al-Sayyid Bedouin Sign Language, shows instead that the different articulators are recruited gradually across generations to convey increasingly complex linguistic functions (Sandler 2012). This suggests that the recruitment of the body in sign languages provides a visual map of the emergence of complexity in a new language. In our current project, we adopt this initial finding as a strategy to systematically trace the diachronic development of linguistic complexity across three generations (including the first generation) in another sign language that originated only 80 years ago: Israeli Sign Language (ISL). We coded and analyzed two-minute narratives from 15 signers, five in each of three age groups, focusing on the form and function of head and torso actions. In this way we are able to identify increasing systematicity and complexity of linguistic structure as the language gets older, with the body as our guide.

We find that the signers use the head and torso differently and with increasing complexity across generations. Specifically, (1) while older signers use their articulators more than younger signers, it is younger signers who exploit a more variegated head and torso movement pattern by activating the side-to-side axis in addition to the forward and back movement favored by older signers. (2) An analysis of the language functions conveyed reveals that younger signers exploit the additional axis exclusively for marking specific linguistic functions, including parentheticals, questions and coordination. (3) Older signers tend to move their head and torso together as a unit whereas younger signers are able to activate their head and torso independently more than older signers, assigning separate functions to each articulator simultaneously. Finally, (4) younger signers sign much faster than older signers, signalling an increase in efficiency in their language.

The expanded use of the spatial axes in young signers is compatible with the finding that younger signers locate different referents and concepts using the additional side-to-side axis (Padden et al. 2010; Meir 2012). Our study also provides a whole-body context for the finding that the use of eye and head signals on relative clauses becomes significantly more systematic and linguistic in younger ISL signers (Dachkovsky 2014). The study confirms that articulator use and linguistic complexity increase in tandem across generations, and that the bodily organization of articulators in sign languages is a key to the organization of emergent linguistic structure.

Citation:

Stamp R. and Sandler W. (2016). The Grammar Of The Body And The Emergence Of Complexity In Sign Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/81.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Stamp, Sandler 2016

Failures Of Perspective Taking In An Open-ended Signaling Task

Justin Sulik and Gary Lupyan
University of Wisconsin-Madison

Keywords: salience, relevance, convention, signaling, pragmatics, audience design, inference

Abstract:

Imagine needing to communicate some meaning in the absence of a shared, conventional signal. A modern human might do this in a number of ways. Lacking the word 'snake', for example, one might vocally imitate its hiss, or gesturally imitate its slithering, or its biting strike. One could draw a simple stick-figure, or, if speaker and listener share conventional signals other than 'snake', one could say something like 'legless reptile'. Each of these choices foregrounds — makes salient — a different feature of the snake: its hiss, its movement, its bite, its anatomy, its taxonomy.

Some of these choices about salience will be better than others in conveying the meaning to the interpreter. If the slithering gesture were to make most people guess 'fish', the signal would be a poor choice for communicating about snakes. How does a signaler select a signal in the absence of convention? In particular, what information drives the inference about which out of several potentially salient features is most likely to lead to successful communication? Understanding how signaling occurs in the absence of convention is crucial for understanding the origins of convention (Cubitt & Sugden, 2003), and ultimately for understanding the evolution of language with its reliance on conventional signals.

According to several influential theories (Lewis, 1969; Sperber & Wilson, 1995), people are able to take their interlocutor’s point of view into account when deciding how to signal, inferring either what would be salient from an interlocutor’s perspective, or what information would be relevant to them. We experimentally tested the assumption that people are able to use information about salience or relevance from another’s point of view using a word-guessing game.

In the game, a signaler is given an item, such as 'bank'. He/she has to think of a one-word signal to help a guesser guess the item. A very good signal in this case is 'teller', because most people guess 'bank' given 'teller' (Nelson, McEvoy, & Schreiber, 1998). On the other hand, 'money' is a poor choice, because very few people guess 'bank' given 'money' (Nelson et al., 1998). The challenge of choosing a good signal is twofold. First, communication of this sort is inherently asymmetric in the sense that the signaler is given 'bank', while the guesser must infer 'bank'. In addition, salience is often asymmetric, in the sense that 'money' is likely to occur to a signaler given the item 'bank', but 'bank' isn’t likely to occur to the guesser given the signal 'money' (Nelson et al., 1998). The question, then, is whether the signaler is able to override the comparatively high salience of 'money' from his/her point of view to choose a signal that is more informative from the guesser’s point of view, such as 'teller'.
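
The asymmetry can be made concrete with a small sketch. The association strengths below are invented for illustration and are not values from the Nelson et al. (1998) norms; the point is only the contrast between choosing the strongest forward associate of the item and choosing the signal most likely to cue the item for the guesser.

# Illustrative sketch of the salience asymmetry (toy numbers, not actual
# Nelson et al. (1998) norm values): an egocentric signaler picks the word
# most strongly cued BY the item, while a perspective-taking signaler picks
# the word most likely to cue the item in the guesser's mind.

forward = {"bank": {"money": 0.45, "teller": 0.10, "river": 0.08}}   # p(assoc | item)
backward = {"money": {"bank": 0.05}, "teller": {"bank": 0.60},       # p(item | signal)
            "river": {"bank": 0.15}}

def egocentric_signal(item):
    return max(forward[item], key=forward[item].get)

def perspective_taking_signal(item):
    return max(forward[item], key=lambda s: backward.get(s, {}).get(item, 0.0))

print(egocentric_signal("bank"))           # 'money'  (salient to the signaler)
print(perspective_taking_signal("bank"))   # 'teller' (informative to the guesser)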

Our results show that signalers are more likely to use information about salience from their own perspective than from the guesser’s perspective in an unconstrained task like the one just described. This leads to low communicative success. For example, 'money' was the most common choice to signal 'bank', selected by 40% of signalers, while 0% chose 'teller' (we use this example as an illustration; the experiments used many other items). In an unconstrained task like this, signalers are quite poor at using information from the guesser’s perspective, and guessers are even worse at inferring salience from the signaler’s perspective.

In a second study, we show that communicators do have access to information about salience from the opposite perspective, but they do not access this information outside of tightly constrained contexts. For example, when given a list of 5 potential signals including 'money' and 'teller', and asked to pick which would be most likely to help someone guess 'bank', 42% now chose 'teller' and just 25% 'money'. In a third study, we show that contextual information can promote perspective taking if the context is clearly shared between signaler and guesser. Participants were given a list of items, all of which made 'money' salient. They were then asked to help the guesser pick 'bank'. Their choosing 'money' as a signal for 'bank' was partially inhibited when they were told that the guesser also had the list.

In sum, the results show that in a novel signaling task, people are sometimes able to take another’s perspective on salience, but this is difficult and achieved only under specific conditions. We give examples of such conditions and conclude that these impose severe limitations on any theory that relies on inference in perspective taking to explain the emergence of successful linguistic conventions.

Citation:

Sulik J. and Lupyan G. (2016). Failures Of Perspective Taking In An Open-ended Signaling Task. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/103.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Sulik, Lupyan 2016

Against The Emergent View Of Language Evolution

Maggie Tallerman
Newcastle University

Keywords: protolanguage, Integration Hypothesis, properties of animal communication, emergent vs. gradualist view of language evolution, functional/grammatical elements in language

Short description: Under a gradualist rather than an emergent view of language evolution, we can see how functional elements and linguistic morphology evolved.

Abstract:

The emergent view of language evolution argues that the language faculty appeared quite recently in human evolution and had no earlier pre-syntactic stages. Under this view, language is said to be an amalgamation of two pre-existing systems that also occur in animal communication: an ‘E’ system, for ‘expressive’, which is likened to systems of learned birdsong, and an ‘L’ system, for ‘lexical’, which is likened to monkey alarm calls. What occurred solely in the case of humans was the advent of a Merge operation, which integrated the two systems. Here we argue against this ‘integration hypothesis’. None of the proposed analogues in animal communication have the critical properties occurring in human language. The syntax of birdsong is unlike the syntax of language in all relevant respects. There are no analogues to the functional elements of the E-system in animal communication. However, under a gradualist rather than an emergent view of language evolution, we can see how functional elements and linguistic morphology evolved.

Citation:

Tallerman M. (2016). Against The Emergent View Of Language Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/6.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Tallerman 2016

Evidence Of Descent With Modification And Selection In Iterated Learning Experiments

Monica Tamariz1 and Joleana Shurley2
1 The University of Edinburgh
2 University of California Santa Barbara

Keywords: iterated learning experiment, descent with modification, selection, adaptation, phylogenetic tree, spectral analysis

Short description: Your intuition that Iterated Learning Experiments model cultural evolution by Descent with Modification and Selection will be confirmed quantitatively

Abstract:

Iterated learning experiments are claimed to model the evolution of languages. We report a study that tests to what extent the languages in these experiments actually follow evolutionary dynamics. Specifically, we look for the signatures of (1) descent with modification, leading to diversity, and (2) selection, or adaptation to environmental factors, as languages change over generations.

Citation:

Tamariz M. and Shurley J. (2016). Evidence Of Descent With Modification And Selection In Iterated Learning Experiments. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/10.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Tamariz, Shurley 2016

What Is Unique About The Evolution Of Language Compared To Other Cultural Domains? An Experimental Study Of Language, Technology And Art

Monica Tamariz and Jon W. Carr
The University of Edinburgh

Keywords: iterated learning experiment, cultural evolution of language, cultural evolution of technology, cultural evolution of art, cross-domain cultural evolution, Lego, fidelity of transmission, mutation, selection

Short description: Comparing cultural evolution in diffusion chains of Lego constructions: linguistic signals, artworks and technological tools

Abstract:

A comparative approach examining the differences and similarities between the evolution of language and cultural evolution in other human cultural domains (e.g. technology, art, social and political institutions) would help us understand how our cognitive biases interact with different human needs and affordances to produce the astonishing cultural diversity observed in our species. While between-species comparative studies of cultural evolution have deservedly received much attention (e.g. Horner, Whiten, Flynn & de Waal 2006), between-domain comparative studies within humans hardly exist. Instead, studies of cultural evolution in various human cultural domains have proceeded in parallel. Here we present an experimental approach to comparative cross-domain cultural evolution. Using a new paradigm, we find that evolutionary variables such as fidelity of transmission and selection are differentially affected by the functions inherent to three cultural domains: language, technology and art.

Citation:

Tamariz M. and Carr J. W. (2016). What Is Unique About The Evolution Of Language Compared To Other Cultural Domains? An Experimental Study Of Language, Technology And Art. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/72.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Tamariz, Carr 2016

Learning To Learn From Similar Others: Approximate Bayesian Computation Through Babbling

Bill Thompson and Heikki Rasilo
Vrije Universiteit Brussel

Keywords: Bayesian Inference, Approximate Bayesian Computation, Speech Acquisition, Babbling, Cultural Transmission

Short description: Bayesian inference through babbling: a computational perspective on how production aids acquisition thanks to culture

Abstract:

To emerge and persist over cultural transmission, complex linguistic structures must be learnable from noisy, incomplete linguistic data. Rational statistical inference provides a compelling solution to inductive problems at many levels of linguistic structure. However, from an evolutionary perspective, a core question concerns how language learners could be equipped to approximate rational inferences under limited cognitive resources. We present a computational account of how self-simulation of linguistic data can aid otherwise challenging probabilistic inferences during language acquisition – a shortcut made possible because humans acquire language from linguistic data produced by other humans. Through an analogy with a class of computational techniques known as Approximate Bayesian Computation, we show how the capacity to produce language data can leverage computationally cheap inductive leaps that approximate rational inference when learning language. We derive an approximate inference model for an idealised problem in the acquisition of speech sounds through babbling, simulate the dynamics of cultural transmission under this model, and discuss implications for the evolution of speech and language.
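
As a rough illustration of the analogy, the sketch below implements generic ABC rejection sampling for a hypothetical one-dimensional articulatory parameter; it is not the authors' acquisition model. The learner hears adult tokens, proposes candidate targets from a prior, 'babbles' simulated tokens from each candidate, and keeps candidates whose output is close enough to the data.

import random

# Generic Approximate Bayesian Computation (rejection) sketch, to make the
# analogy above concrete; not the authors' model. All distributions, the
# summary statistic and the tolerance are illustrative assumptions.

def babble(target, n=5, noise=0.05):
    """Forward model: produce n noisy acoustic tokens from an articulatory target."""
    return [random.gauss(target, noise) for _ in range(n)]

def summary(tokens):
    return sum(tokens) / len(tokens)     # crude summary statistic: the mean

def abc_rejection(heard, n_samples=20000, tolerance=0.02):
    observed = summary(heard)
    accepted = []
    for _ in range(n_samples):
        candidate = random.uniform(0.0, 1.0)          # prior over targets
        if abs(summary(babble(candidate)) - observed) < tolerance:
            accepted.append(candidate)                # approximate posterior sample
    return accepted

adult_tokens = babble(target=0.62, n=10)              # data produced by another human
posterior = abc_rejection(adult_tokens)
print(len(posterior), sum(posterior) / len(posterior))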

Citation:

Thompson B. and Rasilo H. (2016). Learning To Learn From Similar Others: Approximate Bayesian Computation Through Babbling. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/136.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Thompson, Rasilo 2016

Interpreting Silent Gesture

Bill Thompson1 , Marieke Schouwstra2 and Henriëtte de Swart3
1 Vrije Universiteit Brussel
2 University of Edinburgh
3 Universiteit Utrecht

Keywords: Silent Gesture, Bayesian Inference, Semantics, Word Order

Short description: Word order variability influences interpretation of verbs in silent gesture. Bayesian Inference explains how.

Abstract:

Silent gesture, a lab methodology in which adult hearing participants describe simple events using only their hands, has been shown to be valuable for investigating the origins of word order (the ordering of Subject, Object and Verb) in language. In particular, recent experiments using this methodology have uncovered word order biases in silent gesture production: people prefer SOV order when they describe extensional transitive events (boy-pail-swing; Goldin-Meadow et al., 2008), but prefer SVO order for events with different semantic properties, such as intensional events (man-think of-ball, witch-build-house; Schouwstra & de Swart, 2014). However, a core underexplored question is whether these biases also feature in silent gesture interpretation. In our presentation we describe experimental and computational analyses of silent gesture interpretation: first, we examine the influence of word order variation on silent gesture interpretation; second, we develop a Bayesian computational account of the results that allows us to directly compare production and interpretation behaviour.

In our experiment, we recorded silent gestures of ambiguous actions (e.g., a gesture that could mean build as well as climb). For each ambiguous action, we composed ambiguous gesture strings describing transitive events: one in SVO and one in SOV order (e.g., witch-climb/build-house and witch-house-climb/build). We predicted that participants would be more likely to interpret SVO-ordered videos as intensional events than as extensional events, and vice versa. Forty-one Dutch and forty Turkish participants watched twelve videos (6 in SVO and 6 in SOV order; we used two different versions to make sure that all participants saw each video in only one order), and were asked, after each video, to choose an interpretation from two line drawings of the two target events (one intensional and one extensional). We found that, independently of native language, an extensional interpretation was given significantly more often for SOV sequences (M=.711, SE=.019) than for SVO sequences (M=.569, SE=.020).

When people interpret silent gesture, they use word order as a key to the semantic distinction between intensional and extensional events: their preferences are semantically conditioned, as in silent gesture production. However, a key aspect of our findings is that this effect is much weaker in interpretation than in production. We propose a computational account of this finding, based on the idea that silent gesture interpretation can be understood as inductive inference under uncertainty. A Bayesian model allows us to specify the influence of word order biases – or prior beliefs – on judgements about the unseen intentions of another gesturer – or posterior beliefs, which are updated after observing data. We specify two versions of the model: one in which participants are assumed to rely solely on their own biases during interpretation, as in production (model M1); and another which assumes participants account rationally for the uncertainty that surrounds the speaker’s usage of word order – consistent with the idea that the learner entertains emerging linguistic rules in this context (model M2). Our analysis is in two parts: first, we estimate an appropriate prior from production experiments and contrast the predictions of the two interpretation models (and their fit to the empirical data) under this prior (see figure 1); second, we fit both interpretation models to the entire dataset of productions and interpretations. Both analyses support model M2 over M1, suggesting that silent gesture interpretation is underpinned by domain-independent computational principles that balance word order biases with communicative uncertainty.
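
A simplified sketch of the Bayesian inversion may help (our illustrative reading with invented numbers, not the authors' exact M1/M2 models): production biases p(order | meaning) are inverted to interpretation probabilities, and mixing the likelihood with a uniform distribution stands in for uncertainty about the speaker's usage, which weakens the word order effect in the direction of the interpretation results.

# Illustrative Bayesian inversion (our simplified reading, not the authors'
# exact models). The production probabilities below are hypothetical.
production = {                       # assumed prior: p(order | meaning)
    "extensional": {"SOV": 0.8, "SVO": 0.2},
    "intensional": {"SOV": 0.3, "SVO": 0.7},
}

def interpret(order, uncertainty=0.0):
    """Posterior over meanings given an observed order; uniform prior on meanings."""
    scores = {}
    for meaning, dist in production.items():
        # mixing with a uniform likelihood stands in for uncertainty about
        # the speaker's word order usage
        likelihood = (1 - uncertainty) * dist[order] + uncertainty * 0.5
        scores[meaning] = likelihood             # times a constant uniform prior
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

print("M1-like:", interpret("SOV"))                   # strong order effect
print("M2-like:", interpret("SOV", uncertainty=0.6))  # weaker order effect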

Citation:

Thompson B., Schouwstra M. and de Swart H. (2016). Interpreting Silent Gesture. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/94.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Thompson, Schouwstra, de Swart 2016

Arbitrariness Of Iconicity: The Sources (and Forces) Of (dis)similarities In Iconic Representations

Oksana Tkachman1 and Carla L. Hudson Kam2
1 The University of British Columbia
2 University of British Columbia

Keywords: iconicity, gesture, referential features

Abstract:

Compared to spoken languages, sign languages exhibit a significant number of iconic signs (Perniss, Thompson, & Vigliocco, 2010). Interestingly, unrelated sign languages also have more overlap in their forms and structures than unrelated spoken languages, and this overlap has often been attributed to properties of the visual-manual modality that enable or even encourage iconic forms (see Perniss et al., 2010, for discussion). Clearly, iconicity plays an important role in the development and evolution of signed languages. However, iconicity is a much more complex phenomenon than seems to be generally assumed. In particular, there is no single ‘iconicity’; there are many (Tolar, Lederberg, Gokhale, & Tomasello, 2008). Signs can be based on culturally specific (i.e., learned) relationships: for instance, EAT utilizes a grasping gesture in many Western sign languages and a V-handshape for chopsticks in many East Asian sign languages. Signs can also differ in which features of a referent are iconically represented. For example, a cat is referred to by whiskers in American Sign Language, by licking paws in Al-Sayyid Sign Language and by petting in Swedish Sign Language. However, note that even in the differences there are similarities: the signs for eat represent the action involved in prototypical eating events in the culture, including the tool(s) used, whereas the signs for cat more frequently represent some feature of the animal itself.

Our study investigates factors that might lead to favoring some features of referents over others in iconic representations. We investigate this by having hearing, sign-naïve adult participants invent gestured names for easily recognizable objects. The items participants were asked to create signs for differed along a number of dimensions that we hypothesize might influence the nature of the iconic representation, as shown in Figure 1. For instance, some of the items were man-made while others were part of the natural world, as it has been claimed that man-made objects are represented with handling (grasping) handshapes (Padden et al., 2013). We also investigated the effect of movement and size, for both the man-made and natural categories. We anticipated that these categories would have an impact on the choice of representational features; for example, that the size and shape of natural objects would be encoded in the gestures, and that man-made objects would be represented by the prototypical interaction of humans with those objects.

Fifty native speakers of English with no knowledge of sign languages, aged 18–72, participated in the study. They saw 110 pictures of familiar objects and were asked to ‘name’ them with their hands. Responses were videotaped. Each response is currently being coded for the type of iconic information encoded: specifically, whether the invented sign encodes referent shape, characteristic movement, or human handling of the object.

This study helps us better understand the roots of iconic representations and the forces that might shape the specific information encoded in iconic signs.

References

Padden, C. A., Meir, I., Hwang, S. O., Lepic, R., Seegers, S., & Sampson, T. (2013). Patterned iconicity in sign language lexicons. Gesture, 13, 287-308.

Perniss, P., Özyürek, A., & Morgan, G. (2015). The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture. Topics in Cognitive Science, 7 (1), 2-11.

Perniss, P., Thompson, R., & Vigliocco, G. (2010). Iconicity as a general property of language: evidence from spoken and signed languages. Frontiers in Psychology, 1, 227.

Tolar, T. D., Lederberg, A. R., Gokhale, S., & Tomasello, M. (2008). The development of the ability to recognize the meaning of iconic signs. Journal of Deaf Studies and Deaf Education, 13, 225–240.

Citation:

Tkachman O. and Hudson Kam C. L. (2016). Arbitrariness Of Iconicity: The Sources (and Forces) Of (dis)similarities In Iconic Representations. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/164.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Tkachman, Hudson Kam 2016

Experimental Evidence For Phonemic-like Contrasts In A Nonhuman Vocal System

Simon Townsend1 , Andrew Russell2 and Sabrina Engesser3
1 University of Warwick
2 University of Exeter
3 University of Zurich

Keywords: Phonology, Evolution, Phoneme-like contrasts, Birds, Duality of patterning

Abstract:

The capacity to generate new meaning through rearranging combinations of meaningless sounds, so-called phoneme structuring, is a fundamental component of language and is central to its productive nature (Hurford 2011). Despite its importance, surprisingly little is known about how unique this capacity is to humans or indeed the evolutionary steps that characterised its emergence. Animal vocalizations have often been shown to comprise combinations of meaningless acoustic elements: humpback whales (Megaptera novaeangliae), gibbons (Hylobates spp.), and a number of passerine birds, for example, construct elaborate and meaningful “song” vocalisations from a variety of meaningless call elements. A hierarchical structure is ultimately achieved by assembling these single units in a potentially rule-governed way, in some cases reaching a considerable level of complexity (Payne and McVay 1971, Clarke et al. 2006, Berwick et al. 2012). However, evidence that rearranging such combinations generates functionally distinct meaning is lacking (Berwick et al. 2011). Here we provide evidence for this basic ability in the calls of the chestnut-crowned babbler (Pomatostomus ruficeps), a highly social bird of the Australian arid zone. Using acoustic analyses, natural observations and a series of controlled playback experiments, we demonstrate that this species uses the same acoustically distinct elements (A and B) in different arrangements (AB or BAB) to create two functionally distinct vocalizations: the flight call (used during movement, AB) and the prompt call (used when provisioning nestlings, BAB). Specifically, the addition or omission of a contextually meaningless acoustic element at a single position generates a phoneme-like contrast that is sufficient to distinguish the meaning between the two calls. Our results indicate that the capacity to rearrange meaningless sounds in order to create new signals occurs outside of humans. We discuss the implications of our data for understanding the evolutionary progression of phoneme structuring and suggest that basic phonemic contrasts represent a rudimentary form of phoneme structure, and a potential early step towards the generative phonemic system of human language.

References

Berwick, R. C., Okanoya, K., Beckers, G. J. L., & Bolhuis, J. J. (2011). Songs to Syntax: The Linguistics of Birdsong. Trends in Cognitive Sciences, 15(3), 113–121.

Berwick, R. C., Beckers, G. J. L., Okanoya, K., & Bolhuis, J. J. (2012). A Bird’s Eye View of Human Language Evolution. Frontiers in Evolutionary Neuroscience, 4, 5.

Clarke, E., Reichard, U. H., & Zuberbühler, K. (2006). The Syntax and Meaning of Wild Gibbon Songs. PLoS ONE, 1(1), e73.

Hurford, J. R. (2011). The Origins of Grammar: Language in the Light of Evolution. Oxford University Press.

Payne, R. S., & McVay, S. (1971). Songs of Humpback Whales. Science, 173(3997), 585–597.

Citation:

Townsend S., Russell A. and Engesser S. (2016). Experimental Evidence For Phonemic-like Contrasts In A Nonhuman Vocal System. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/111.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Townsend, Russell, Engesser 2016

Modeling The Emergence Of Creole Languages

Francesca Tria1 , Vittorio Loreto2 , Vito Servedio3 and Salikoko S. Mufwene4
1 ISI Foundation
2 Sapienza Università di Roma
3 Institute for Complex Systems (ISC-CNR)
4 University of Chicago, Dept. of Linguistics

Keywords: contact languages, modeling, language games, creoles

Short description: A simple language game, enriched by a suitable contact ecology, predicts the emergence of creole languages in agreement with real data.

Abstract:

Creole languages offer an invaluable opportunity to study the processes leading to the emergence and evolution of Language, thanks to the short--typically a few generations--and reasonably well-defined time-scales involved in their emergence. In many well-documented cases, creoles emerged in large segregated sugarcane or rice plantations on which the slave laborers were the overwhelming majority. Lacking a common substrate language, slaves were naturally brought to shift to the economically and politically dominant European language (often referred to as the lexifier) to bootstrap an effective communication system among themselves.

Here, we focus on the emergence of creole languages that originated in the contacts of European colonists and slaves during the 17th and 18th centuries in exogenous plantation colonies, especially of the Atlantic and Indian Oceans, where detailed census data are available. Without entering into the details of creole formation at a fine-grained linguistic level, we aim at uncovering some of the general mechanisms that determine the emergence of contact languages, and that successfully apply to the case of creole formation. We demonstrate a dynamical process leading to the emergence and stabilization of creole languages, suggesting ways in which modeling can be used as a research tool to clarify accounts of where creoles emerged and what specific ecological factors explain why they did not emerge elsewhere.

We judged the language games framework as particularly suitable for this task since it simulates how a population of individuals can bootstrap linguistic consensus--on a cultural timescale--out of the local interactions of pairs of individuals. Inspired by the Naming Game (NG), our modeling scheme incorporates demographic information about the colonial population in the framework of a non-trivial interaction network including three populations: Europeans, Mulattos/Creoles, and Bozal slaves. We show how this information alone makes it possible to discriminate territories that produced modern creoles from those that did not, with surprising accuracy (see figure).

We submit that these tools could be relevant to addressing problems related to contact phenomena in many cultural domains: e.g., the emergence of dialects, language competition and hybridization, and globalization phenomena.
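For readers unfamiliar with the framework, the following is a minimal single-population Naming Game sketch. It is intended only as a point of reference: the actual model additionally embeds agents in a three-population contact network (Europeans, Mulattos/Creoles, Bozal slaves) weighted by census data, none of which is reproduced here.

```python
import random

class Agent:
    def __init__(self):
        self.inventory = set()  # candidate names for a single concept

def interact(speaker, hearer, invent):
    """One pairwise Naming Game interaction."""
    if not speaker.inventory:
        speaker.inventory.add(invent())            # speaker invents a name if needed
    word = random.choice(tuple(speaker.inventory)) # speaker picks a name
    if word in hearer.inventory:
        speaker.inventory = {word}                 # success: both collapse onto the name
        hearer.inventory = {word}
    else:
        hearer.inventory.add(word)                 # failure: hearer memorises the name

def run(n_agents=50, n_rounds=20000, seed=0):
    random.seed(seed)
    counter = iter(range(10**9))
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, hearer = random.sample(agents, 2)
        interact(speaker, hearer, lambda: next(counter))
    return {w for a in agents for w in a.inventory}  # names surviving in the population

if __name__ == "__main__":
    print("surviving names:", run())  # typically converges on a single shared name
```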

Citation:

Tria F., Loreto V., Servedio V. and Salikoko S. M. (2016). Modeling The Emergence Of Creole Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/89.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Tria, Loreto, Servedio, Salikoko 2016

Dendrophobia In Bonobo Comprehension Of Spoken English

Robert Truswell
University of Edinburgh

Keywords: Constituency, Hierarchical structure, Dendrophilia hypothesis, Bonobo

Abstract:

Fitch (2014) proposed the Dendrophilia hypothesis as a description of the ubiquity of hierarchical structures in human cognition:



‘Humans have a multi-domain capacity and proclivity to infer tree structures from strings, to a degree that is difficult or impossible for most non-human animal species.’ (Fitch, 2014, 352)



Part of Fitch’s supporting evidence concerns Fitch and Hauser’s (2004) demonstration that humans learn to recognize sequences of the forms (ab)^n and a^nb^n, while cotton-top tamarins can only learn the former. Fitch takes this to support the Dendrophilia hypothesis because (ab)^n, but not a^nb^n, can be generated in the limit by constituency-free finite-state grammars. However, this result has been disputed on empirical and theoretical grounds, e.g. Perruchet and Rey (2005), Jäger and Rogers (2012).



This paper gives a complementary source of evidence for Fitch’s hypothesis. We examine a corpus of 660 utterances directed in parallel to a bonobo, Kanzi, and a human infant, Alia, together with descriptions of their behavior in response to those utterances (Savage-Rumbaugh et al., 1993). Unlike grammar induction experiments such as Fitch and Hauser (2004), these strings are paired with interpretations. We can then infer aspects of a subject’s interpretation of an utterance from their behavior, and aspects of the grammatical representation of the utterance from that interpretation. I argue that Kanzi fails to respond to requests precisely where correct interpretation requires hierarchical constituency.



Kanzi’s overall performance across the corpus (71.5% ‘correct’ responses according to Savage-Rumbaugh et al.’s criteria) is comparable to Alia’s (66.6% ‘correct’). Usually, though, a correct response could be achieved through common-sense combination of the concepts expressed by individual words, without using syntactic information (Anderson’s 2004 semantic soup strategy). One such example, carried out correctly by Kanzi, is "Put the backpack in the car": few other actions involving backpacks and cars suggest themselves.



In some cases (e.g. "Put the tomato in the oil" / "Put some oil in the tomato"), correct interpretation requires sensitivity to linear order, but not constituency. Kanzi’s accuracy on 43 such sentences in the corpus (21 pairs, with 1 example repeated) is 76.7%, in line with his 71.5% overall accuracy. This suggests that Kanzi can make use of linear order information in his understanding of spoken English.



However, Kanzi responded correctly to only 4/18 sentences containing coordinated NP objects (22.2%). When asked to "Show me the milk and the doggie", he shows only the dog; when asked to "Give the lighter and the shoe to Rose", he gives Rose only the lighter. Kanzi ignores the first conjunct on 9/18 trials, and ignores the second conjunct on 5/18 trials.



Despite the small number of critical sentences, this represents a highly significant drop relative to both Kanzi’s baseline accuracy (p < 10^−4) and Alia’s 68.4% accuracy on sentences containing the same construction (p < 0.01). This, then, is a species-specific, construction-specific drop in performance.
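As a rough sanity check on the reported comparisons, the sketch below recomputes them under explicit assumptions: a one-sided binomial test against the 71.5% baseline and Fisher’s exact test against Alia, with Alia’s trials taken to be 13/19 (≈68.4%). The paper does not state which tests were used, so this is an illustration of the arithmetic, not a reproduction of the analysis.

```python
from scipy.stats import binomtest, fisher_exact

# Kanzi: 4/18 correct on coordinated-NP objects vs. a 71.5% baseline accuracy
baseline = binomtest(4, n=18, p=0.715, alternative="less")
print(f"vs. baseline: p = {baseline.pvalue:.1e}")   # comes out well below 10^-4

# Kanzi (4/18) vs. Alia (assumed 13/19) on the same construction
odds, p = fisher_exact([[4, 14], [13, 6]], alternative="less")
print(f"vs. Alia:     p = {p:.3f}")                 # comes out below 0.01
```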



I propose that Kanzi’s performance dips precisely here (and not on many other constructions of comparable length) because correct interpretation of such sentences requires reference to hierarchical constituent structure. Specifically, unlike the previous examples, Kanzi must recognize that the object of show or give is the whole coordinate phrase (e.g. the milk and the doggie), and not just, for example, the next noun. Likewise, the patient of the requested action should be the group of objects denoted by that complex phrase, not just the denotation of a single noun. Kanzi’s generally impressive performance therefore only drops where reference to constituency is required, while Alia has no similar problem. In Fitch’s terms, Kanzi is more dendrophobic than Alia.



References



Anderson, S. (2004). Doctor Doolittle’s delusion. New Haven, CT: Yale University Press.

Fitch, W. T. (2014). Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition. Physics of Life Reviews, 11, 329-364.

Fitch, W. T., & Hauser, M. (2004). Computational constraints on syntactic processing in a nonhuman primate. Science, 303, 377-380.

Jäger, G., & Rogers, J. (2012). Formal language theory: Refining the Chomsky hierarchy. Philosophical Transactions of the Royal Society B, 367, 1956–1970.

Perruchet, P., & Rey, A. (2005). Does the mastery of center-embedded linguistic structures distinguish humans from nonhuman primates? Psychonomic Bulletin & Review, 12, 307-313.

Savage-Rumbaugh, E. S., Murphy, J., Sevcik, R., Brakke, K., Williams, S., Rumbaugh, D., & Bates, E. (1993). Language comprehension in ape and child. Monographs of the Society for Research in Child Development, 58, 1-252.

Citation:

Truswell R. (2016). Dendrophobia In Bonobo Comprehension Of Spoken English. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/87.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Truswell 2016

A Constant Rate Effect Without Stable Functions

Robert Truswell and Nikolas Gisborne
University of Edinburgh

Keywords: Constant Rate Effect, Language change, Grammar competition, Middle English, Relative clause

Abstract:

Many grammatical changes progress towards completion at a uniform rate across linguistic contexts, but with different temporal offsets in different contexts. This is the Constant Rate Effect, or CRE (Kroch, 1989). According to Kroch, the CRE reflects competition between functionally equivalent forms. If a novel form occurs with probability p, an S-shaped trajectory of change (Bailey, 1973; Blythe & Croft, 2012) arises when the ratio p/(1 − p) grows exponentially with time.



p/(1 − p) = e^(k+st) ⟺ p = e^(k+st) / (1 + e^(k+st))     (1)



A CRE in these terms arises when s is held constant across linguistic contexts, but k is allowed to vary. This interpretation implies a direct link from CREs to grammar competition, in which competing forms realize the same function, and situates competition most naturally in adult communicative strategies.
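As a concrete illustration of (1), the toy computation below generates two logistic trajectories that share the slope s but differ in the intercept k, which is exactly the signature of a CRE. The numerical values are invented for illustration and are not estimates from the corpora discussed below.

```python
import numpy as np

def p_novel(t, k, s):
    """Probability of the novel form at time t: p = e^(k+st) / (1 + e^(k+st))."""
    return 1.0 / (1.0 + np.exp(-(k + s * t)))

t = np.linspace(1100, 1700, 7)   # years (illustrative window)
s = 0.012                        # shared rate of change across contexts (invented value)
for k, context in [(-16.5, "context A"), (-19.5, "context B")]:
    print(context, np.round(p_novel(t, k, s), 3))  # same slope, different temporal offset
```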



We present a CRE that is better analysed in complementary terms. This involves headed wh-relatives in English, constructions like "the person [[to whom]i I spoke __i]", where a clause, introduced by a wh-phrase, modifies a noun phrase.



Headed wh-relatives emerged gradually in Middle and Early Modern English (c. 1100–1700). The earliest examples had oblique and adverbial wh-phrases; argumental relatives with which followed by c.1350, with whom- and then who-relatives emerging over the next couple of centuries. Nevertheless, the rate of change in all of these linguistic contexts is near-identical. Fig. 1 demonstrates this for relatives with wh-PPs and with NP which, using data from the Penn Parsed Corpora of Historical English (Kroch & Taylor, 2000; Kroch, Santorini, & Delfs, 2004).[Footnote omitted]



(Fig 1 omitted)



Relativizer which has been in competition with that and ∅ as strategies for relativizing on argument positions over the last c.650 years. However, there was no competing strategy for relativizing PPs when wh-PPs emerged (earlier relatives with demonstrative PP relativizers disappeared in Old English). Accordingly, the constant rate of change across wh-PP and which-relatives cannot reflect a similar competition process across the two construction types.



Instead, this change reflects competing functional specifications of wh-forms. Such competition, unlike Kroch’s, is naturally located in acquisition, because a learner identifies a form before inducing a feature specification for it (Shipley, Smith, & Gleitman, 1969). This differs from competition among communicative strategies, but still maintains Kroch’s logic of competition and selection.



References

Bailey, C.-J. (1973). Variation and linguistic theory. Arlington, VA: Center for Applied Linguistics.

Blythe, R., & Croft, W. (2012). S-curves and the mechanisms of propagation in language change. Language, 88, 269-304.

Kroch, A. (1989). Reflexes of grammar in patterns of language change. Language Variation and Change, 1, 199-244.

Kroch, A., Santorini, B., & Delfs, L. (2004). Penn-Helsinki parsed corpus of Early Modern English.

Kroch, A., & Taylor, A. (2000). Penn-Helsinki parsed corpus of Middle English (2nd edition).

Shipley, E., Smith, C., & Gleitman, L. (1969). A study in the acquisition of language: Free responses to commands. Language, 45, 322-342.

Citation:

Truswell R. and Gisborne N. (2016). A Constant Rate Effect Without Stable Functions. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/88.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Truswell, Gisborne 2016

Norms For Constructing Language In Humans And Animals

Robert Ullrich
Free University Berlin

Keywords: norm, construction, animal, communication

Short description: Values in Science: How normative background assumptions influence the construction of 'language'.

Abstract:

"there is no need to include the "abstract" section in 2-page abstracts."

Citation:

Ullrich R. (2016). Norms For Constructing Language In Humans And Animals. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/9.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Ullrich 2016

Addressees Use Zipf's Law As A Cue For Semantics

Freek Van de Velde and Dirk Pijpops
University of Leuven

Keywords: Zipf, semantics, phonetic size

Short description: Zipf's size-meaning law benefits not only speakers but addressees as well.

Abstract:

See attached document

Citation:

Van de Velde F. and Pijpops D. (2016). Addressees Use Zipf's Law As A Cue For Semantics. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/117.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Van de Velde, Pijpops 2016

A Continuum Of Human Cognitive-linguistic Evolution

Olga Vasileva
Simon Fraser University

Keywords: evolution, cognition, language, phenotype, language acquisition, schizophrenia, autism

Abstract:

The current research explores a novel evolutionary hypothesis linking abnormal phenotypes of human cognition to ontogenetic variation in language acquisition styles.

A number of theories have suggested that abnormal cognitive development in humans might have evolutionary underpinnings, with abnormal conditions being byproducts of specific human cognitive adaptations; such conditions can therefore be seen as windows into the evolutionary development of our species in general, and into language evolution in particular (e.g. Benítez-Burraco & Boeckx, 2014).

Schizophrenia and autism are two conditions that are frequently linked to human cognitive evolution in general and to language evolution in particular. A growing body of evidence suggests that these conditions are not only related, but may represent opposite ends of a single spectrum (Crespi & Badcock, 2008), with some symptoms manifested in opposite ways in each condition.

Ontogenetic communicative development in humans presents another interesting continuum – the so-called language acquisition styles, with children exhibiting these styles described as “referential” or “expressive”. Cross-cultural research in North America and Russia (e.g. Bates et al., 1991; Dobrova, 2009) indicates, first, that these styles can be clearly defined only in the segment of the population where the two styles represent the extreme points of a continuum, and, second, that the main features of the styles are manifested in opposite ways. For example, referential children are characterized by a fast rate of vocabulary growth and clear speech, while expressive children, on the contrary, show slower vocabulary growth and less clear speech. While language acquisition styles were first distinguished mainly on the basis of linguistic characteristics, further research showed that expressive and referential children also differ in social factors (socioeconomic status, parental education, family composition), biological factors (brain hemisphere dominance, gender) and certain cognitive parameters (imitation, ability for generalization). These same factors also play a significant role in the development of abnormal cognitive phenotypes.

The current submission proposes a link between abnormal phenotypes and language acquisition styles, as closer analysis suggests pairwise similarities between them (schizophrenia – expressive style; autism – referential style). The hypothesis is that there are not two independent spectra, one of abnormal linguistic phenotypes and one of language acquisition styles, but a single continuum of human cognitive and linguistic development, with abnormal phenotypes at its extreme points and normal variation in language acquisition across the central majority of the population. Existing evidence supports such pairings: for example, autistic children exhibit decreased self-reference, and referential children tend to refer to themselves in the third person for longer than typical children (delayed first-person reference).

This continuum might be present in both ontogeny and phylogeny. In ontogeny, the existing variation can be explained by the interplay between the initial linguistic and cognitive biases common to humans and the specific choices an organism makes on the basis of environmental cues; in phylogeny, it can be explained by specific biases in human cognition and language that developed over the course of the species’ evolution.

My presentation will review the main properties of autism and schizophrenia in relation to each other and to language evolution; analyze the various linguistic, cognitive and social parameters common to the two language acquisition styles; and show the connections between abnormal phenotypes and their normal counterparts, the language acquisition styles, with precise examples at each step of the analysis. Implications for the evolution of human cognition and language will be discussed.



Acknowledgements

The author is grateful for the continuing guidance of senior supervisor Timothy Racine; and for helpful feedback and unwavering support from colleagues in the Human Evolutionary Studies Program.



References

Bates, E., Bretherton, I., & Snyder, L. (1991). From first words to grammar: Individual differences and dissociable mechanisms. New York: Cambridge University Press.

Benítez-Burraco, A., & Boeckx, C. (2014). Language disorders and language evolution: Constraints on hypotheses. Biological Theory, 1–6.

Crespi, B. & Badcock, C. (2008). Psychosis and autism as diametrical disorders of the social brain. Behavioral and Brain Sciences, 31, 241-261.

Dobrova, G. R. (2009). On variability of language ontogenesis: Referential and expressive strategies of language acquisition. Questions in Psycholinguistics, 2009 (9).

Citation:

Vasileva O. (2016). A Continuum Of Human Cognitive-linguistic Evolution. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/68.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Vasileva 2016

Language Evolution In Ontogeny And Phylogeny

Olga Vasileva
Simon Fraser University

Keywords: ontogeny, phylogeny, genes, development, system

Abstract:

Research in language evolution frequently turns to studies of language development for generating hypotheses about the possible evolutionary trajectory of communicative development in humans. One of the problems faced by such hypothetical models is the necessity to explain both universal aspects of and individual variation in language development, which is essential for determining the core characteristics of the language capacity shared by all humans and for estimating the time of their emergence in evolution.

The current submission introduces a potentially fruitful tool, the Theory of Functional Systems (TFS), developed by P. Anokhin and supported by existing research (e.g., Toomela, 2010; Sudakov, 1997). This theory can contribute to our understanding of language development by examining variation in human communicative and linguistic abilities and its relation to the evolution of our species.

The TFS explains the development of complex behaviors both in evolution and in individuals by treating them and their supporting cognitive substrates as complex systems, and by postulating three levels of systemogenesis. In primary systemogenesis, the core abilities forming the basis of behavior are created; in accompanying secondary systemogenesis, these abilities are modified through the individual’s interaction with its environment and preserved through brain plasticity in ontogenesis. If the environment creates stable selective pressure, certain behavioral systems formed during individual development may prove advantageous, and their maturation in the next generation’s ontogenetic development may take place earlier. This third process is referred to as evolutionary systemogenesis.

Evolutionary systemogenesis relies, among other things, on specific mechanisms of gene expression in the nervous system. Growing evidence suggests that genes expressed in an individual’s brain at the time of early Central Nervous System (CNS) development are often expressed again in the adult brain during various learning experiences. Moreover, the same genes can be involved in different aspects of behavioral development and learning (e.g. Tokarev et al., 2011). TFS has the potential to explain (i) this genetic–behavioral interconnection, (ii) the process by which individually learned adult behavior is transformed over multiple generations into pre-adult species-specific behavior, (iii) the links between phylogeny and ontogeny at the behavioral level, and (iv) the time sequence of the emergence of certain abilities during development.

Application of TFS to the problem of language evolution requires examining genes associated with language development. For example, research in rodents has demonstrated that one of the most prominent language-associated genes, FoxP2, is apparently involved not only in vocal development but also in certain aspects of learning in the adult brain (Enard et al., 2009).

In ontogenetic language development, the core systems that have experienced strong evolutionary pressure will mature earlier and build the foundation for the organism’s tuning to its specific environment through plasticity and learning, allowing further development of linguistic capacity. Thus, those aspects of language that develop early in ontogeny are most likely to have experienced strong selective pressure during evolution. Aspects of language development that are variable might help us identify the specific environmental cues that are essential for an infant’s growing linguistic system in secondary systemogenesis, and their developmental timing may unravel the evolutionary history of language. A TFS-based model can explain why human infants are capable of acoustic differentiation of speech sounds well before reaching motor independence and learning other aspects of language. Candidate abilities for primary and secondary systemogenesis are sound discrimination, word-object recognition, gestural communication and language acquisition styles. The ontogeny and phylogeny of language development have been difficult to elucidate; TFS may greatly advance our understanding of human language.



Acknowledgements

The author is grateful for the continuing guidance of senior supervisor Timothy Racine; and for helpful feedback and unwavering support from colleagues in the Human Evolutionary Studies Program.

References

Enard, W. et al. (2009). A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell, 137, 961.

Sudakov, K. V. (1997). The theory of functional systems: General postulates and principles of dynamic organization. Integrative Physiological and Behavioral Science, 32(4), 392-414.

Tokarev, K., Tiunova, A., Scharff, C., & Anokhin, K. (2011). Food for song: Expression of C-Fos and ZENK in the zebra finch song nuclei. PLoS ONE, 6, e21157.

Toomela, A. (2010). Biological roots of foresight and mental time travel. Integrative Psychological and Behavioral Science, 44(2), 97-125.

Citation:

Vasileva O. (2016). Language Evolution In Ontogeny And Phylogeny. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/69.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Vasileva 2016

Constituent Order In Pictorial Representations Of Events Is Influenced By Language

Anu Vastenius , Jordan Zlatev and Joost Van de Weijer
Lund University

Keywords: constituent order, word-order, non-verbal representations, motion events, thinking-for-speaking

Abstract:

The origin of word order in human language has been addressed in recent years in empirical research, and in some studies SOV has been found to be the most basic or default order. Goldin-Meadow et al. (2008) conducted a study to test how speakers of languages with different word orders represent events with pictures and gestures. The results showed that the participants predominantly used the order Actor-Patient-Act (ArPA) in their nonverbal representations, irrespective of their native language. Based on this, Goldin-Meadow et al. (2008: 9167) concluded: “there appears to be a natural order that humans, regardless of the language they speak, use when asked to represent events non-verbally”. Since then, other studies have thrown doubt on the universality of such a “natural order” (e.g. Schouwstra & de Swart, 2014).

To investigate this issue, we replicated the experiment by Goldin-Meadow et al. using a slightly modified design. In the replication, no gestures were used, as they are intrinsically more related to language than pictures (Kendon, 2004), and therefore possibly more easily influenced by the native-language word order. Furthermore, contrary to the original study, the pictures were printed on separate, non-transparent cards, which needed to be placed in a particular order so as to produce a representation of the event. In the original study, the pictures were printed on transparencies, which always resulted in the same final product regardless of the order in which they were placed. Consequently, no consistent strategy of ordering was required. In our study, participants performed the task on a transversal plane with a sagittal directionality (from furthest to closest to them). More specifically, the participants had to place the picture cards below one another on a 13 x 52 cm board, with the narrow side facing them. The intention was that, in this way, they would be minimally influenced by the direction of motion shown in the pictures.

Twenty-six native speakers of Kurdish (SOV) in the Kurdish region of Iraq and twenty-seven speakers of Swedish (SVO) were presented with 36 video clips showing the events. Half of each language group were asked to describe the event prior to ordering the pictures, and the other half only to order the pictures after each video. The results showed that, unlike in the original study, the constituent order of the native language did have an impact on the order of the pictorial representations when using this experimental design. The speakers of Swedish were less consistent in using the ArPA order than the speakers of Kurdish, and this tendency was stronger for the participants who described the events verbally before representing them pictorially. This can be interpreted as a moderate version of linguistic relativity, such as Slobin’s (1996) thinking-for-speaking, which holds that language modulates the cognitive representations that are recruited during the process of language use. It is likely that the explicit linear order in which the pictures had to be placed was more analogous to word order, and hence was more easily influenced by it, than in previous designs.

Citation:

Vastenius A., Zlatev J. and Van de Weijer J. (2016). Constituent Order In Pictorial Representations Of Events Is Influenced By Language. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/116.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Vastenius, Zlatev, Van de Weijer 2016

Iconicity, Naturalness And Systematicity In The Emergence Of Sign Language Structure

Tessa Verhoef1 , Carol Padden1 and Simon Kirby2
1 University of California, San Diego
2 University of Edinburgh

Keywords: Sign Language, Cognitive Biases, Gesture, Systematicity, Iconicity

Short description: Conventions in sign languages emerge in the context of a subtle interplay between iconicity, systematicity and naturalness.

Abstract:

Systematic preferences have been found for the use of different iconic strategies for naming man-made hand-held tools (Padden et al., 2014) in both sign and gesture: HANDLING (showing how you hold it) and INSTRUMENT (showing what it looks like) forms are most frequently used for tools. Within those two, sign languages vary in their use of one strategy over the other (Padden et al., 2013). Nevertheless, despite having overall preferences, what variation exists tends to be conditioned by meaning. In ASL signers, handling forms are more likely to be used for actions and instrument forms for objects (Padden et al., 2014). These lexical preferences across different sign languages provide an ideal test case for understanding the emergence of conventions in language in which multiple types of bias are at play. Specifically, we argue that there may be distinct biases operating during production and interpretation of signs on the one hand, and learning a conventional system of signs on the other. It is crucial we understand how these distinct biases interact if we are to explain the emergence of systematicity in a linguistic system with iconic underpinnings.

We present three experiments that together help to form a picture of the interplay between naturalness, iconicity and systematicity in the origin of linguistic signals. The first experiment (N=720 participants, all non-signers) maps out the initial natural biases people have for pairing ACTION and OBJECT concepts related to tools (e.g. ‘using a toothbrush’ and ‘a toothbrush’) with HANDLING and INSTRUMENT forms in three different tasks. This complements data on biases found previously with spontaneous gesture productions (Padden et al., 2014). In our tasks, each participant only responds to one item, allowing us to rule out any influence of task learning or item order. In line with earlier findings, we show that non-signers have a strong preference for HANDLING forms in a production task. We also find a strong bias for ACTION concepts in interpretation and a strong bias for mapping HANDLING to ACTION and INSTRUMENT to OBJECT in a mapping task, demonstrating differences in the naturalness of particular iconic strategies for signalling.

The second experiment (N=42 non-signers) investigates the effects of these biases on the learnability of artificial languages. In addition to reflecting naturalness on an item by item basis, languages can also vary in systematicity across sets of items (i.e. the extent to which all ACTIONS pattern the same way, and all OBJECTS pattern the same way). Three different languages were designed: (1) congruent with natural bias and systematic, (2) incongruent with bias and systematic, (3) random. As expected, we found languages in category (3) to be harder to learn than those in category (1). Surprisingly, languages in category (2) seem just as learnable as languages in category (1), even though the mapping runs completely counter to the strong naturalness bias we found in experiment 1. A closer look at the performance over time for participants in the different conditions reveals that participants who are exposed to (2) seem to need only a few examples before they detect and accept the unexpected pattern. The results show that even non-signers quickly detect a pattern for which they need to categorize abstract iconic gesture strategies; the handling-instrument distinction cannot be understood by simply relying on differences in form.

The third experiment looks in more detail at the flexibility of participants’ biases when they are exposed to data, and at whether even minimal exposure can nevertheless result in responses that are the reverse of the ones we saw in the first experiment. We exposed non-signers (N=864) to two example tools for which the form-meaning mapping was either (1) congruent with the bias for both, (2) incongruent with the bias for both, or (3) congruent for one and incongruent for the other. After this they were asked to respond to one of the three tasks taken from the first experiment for a third tool. Our findings show that, even after exposure to just two examples, the pattern of responses changes strongly, demonstrating that the bias for systematicity operating across sets of items can completely overturn the bias for naturalness operating on individual items.

Our experiments help to understand the subtle interplay between learning biases and mapping biases and how these may shape the emergence of language.

Citation:

Verhoef T., Padden C. and Kirby S. (2016). Iconicity, Naturalness And Systematicity In The Emergence Of Sign Language Structure. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/47.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Verhoef, Padden, Kirby 2016

Language Evolution And Language Origins In Teaching Linguistics At The University Level

Slawomir Wacewicz1 , Przemyslaw Zywiczynski1 and Arkadiusz Jasinski2
1 Center for Language Evolution Studies (CLES), Department of English, Nicolaus Copernicus University, Torun, Poland
2 Center for Language Evolution Studies (CLES), Torun, Poland

Keywords: teaching linguistics, language evolution, evolution of language, classroom, textbook

Short description: Survey of 14 popular introductions to linguistics. Shows that teaching of language evolution to students of linguistics needs improvement

Abstract:

Despite the vigorous development of language evolution research over the last three decades, very little of this progress has trickled into the teaching of linguistics: so far, this important area of the academy has failed to accommodate the bulk of the empirical and theoretical advances. In this paper we report the results of a survey of fourteen popular introductory-level linguistics textbooks, which shows that – with rare exceptions – the teaching of language evolution to students of general linguistics rests on outdated and largely inadequate conceptual frameworks and fails to communicate major theoretical breakthroughs and empirical results. Based on feedback from the community of language origins researchers, we formulate an inventory of key messages that should be incorporated into textbooks and curricula.

We have evaluated fourteen introductory academic textbooks on general linguistics published in the last decade. We have established that:

— a majority of textbooks (e.g. Akmajian et al., 2010) either fail to mention language origins/evolution at all, mention it only perfunctorily, or anchor the discussion in inadequate theoretical contexts, such as the classifications by Max Müller or Charles Hockett (see Wacewicz & Żywiczyński, 2015, for why this latter influential framework should be abandoned);

— some of the recently published textbooks, including the most popular ones (e.g. Denham & Lobeck, 2013), show improvement in their coverage of language origins/evolution relative to their previous editions; this, however, mostly applies to their presentation of empirical findings rather than the theoretical backbone.

Overall, despite visible progress, the subject of the evolutionary emergence of language tends to receive inadequate treatment in linguistics textbooks. This leads to the propagation of misconceptions such as the continuity of language with monkey alarm calls, and to a failure to understand the most fundamental prerequisites for the evolutionary emergence of language, most notably those related to cooperativeness.

We call for a greater and more systematic representation of interdisciplinary language evolution research in basic-level linguistics instruction. In particular, the following central messages should be included in teaching materials and curricula:*

• the newly constituted status of language evolution research, with its inherently interdisciplinary nature, methodological pluralism, and a growing reliance on empirical research (see e.g. Hurford, 2012);

• the cooperative underpinnings of language (e.g. Tomasello, 2008);

• the cognitive and socio-cognitive pre-adaptations (Dor et al., 2014);

• the role of cultural evolution (Smith & Kirby, 2008) and modelling approaches for simulating the emergence of linguistic structure (e.g. Steels, 2011);

• the very nature of language, as seen from the “origins” perspective.

Finally, linguistic textbooks would also benefit from showcasing some of the ways in which tools developed by linguists are applied in other related disciplines; for example, to analysing the compositional structure or turn-taking structure of monkey vocal signalling.

*This list will be supplemented with the results of a survey conducted among the participants of the Protolang language origins conference, 24-26 September 2015, Rome.



References

Akmajian, A., Demers, R., Farmer, A., Harnish, R. (2010). Linguistics: An Introduction to Language and Communication, 6th edition. Cambridge, MA: MIT Press.

Dor, D., Knight, C. & Lewis, J., ed. (2014). The social origins of language: Early society, communication and polymodality. Oxford: Oxford University Press.

Hurford, J. R. (2012). Linguistics from an evolutionary point of view. In R. Kempson, T. Fernando and N. Asher (Eds.), Philosophy of Linguistics (pp. 473–498). Amsterdam: North Holland/Elsevier.

Denham, K. & Lobeck, A. (2013). Linguistics for Everyone. An Introduction. Boston, MA: Wadsworth Cengage Learning.

Smith, K. & Kirby, S. (2008). Cultural evolution: implications for understanding the human language faculty and its evolution. Philosophical Transactions of the Royal Society of London B, 363(1509), 3591–3603.

Steels, L. (2011). Modeling the cultural evolution of language. Physics of Life Reviews, 8(4), 339–356.

Tomasello, M. (2008). Origins of Human Communication. Cambridge, MA: MIT Press.

Wacewicz, S. & Żywiczyński, P. (2015). Language Evolution: Why Hockett's Design Features are a Non-Starter. Biosemiotics, 8(1), 29–46.

Citation:

Wacewicz S., Zywiczynski P. and Jasinski A. (2016). Language Evolution And Language Origins In Teaching Linguistics At The University Level. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/62.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Wacewicz, Zywiczynski, Jasinski 2016

Languages Prefer Robust Phonemes

Andrew Wedel1 and Bodo Winter2
1 University of Arizona
2 University of California, Merced

Keywords: Robustness, Phoneme inventory, Signal evolution, Perceptibility

Abstract:

The signaling systems of all human languages are characterized by combinatorial structure: individually meaningless elements are combined to form meaningful units, words. A phoneme’s overall contribution to conveying meaning is constrained by the degree to which it is perceptually distinct from other phonemes. If language evolution is influenced by pressures to transmit semantic information efficiently (e.g., Graff, 2012; Piantadosi et al., 2012), spoken languages should evolve to favor more perceptually distinctive phoneme types. Existing theoretical work suggests that those phoneme categories that are more perceptually distinct are transmitted more robustly across linguistic populations (Blevins, 2004; Winter, 2014).

In this paper, we present a test of this prediction: do phoneme inventories evolve to favor more robustly transmitted phonemes? To operationalize robustness, we use Graff’s (2012) symmetric phoneme confusability index, which is based on an English phoneme perception study from Miller and Nicely (1955). For each consonant phoneme within the English phoneme inventory, we ask whether its distinctiveness predicts how likely the phoneme is to be found in other languages. And indeed, using the 1,672 languages represented in the PHOIBLE database (Moran et al., 2014), we show that the confusability of a phoneme is inversely correlated with its cross-linguistic frequency.
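The following shows the shape of such a test on invented data: a handful of phonemes, a made-up Graff-style confusability index, and made-up PHOIBLE-style language counts. It is a sketch of the method only, not the actual dataset or analysis.

```python
from scipy.stats import spearmanr

phonemes      = ["p", "t", "k", "s", "m", "th"]
confusability = [0.10, 0.08, 0.12, 0.05, 0.04, 0.30]   # hypothetical confusability scores
n_languages   = [1400, 1500, 1350, 1100, 1600, 250]    # hypothetical cross-linguistic counts

rho, p = spearmanr(confusability, n_languages)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")        # a negative rho mirrors the reported pattern
```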

The evolution of language sound-systems as a whole has been proposed to be driven by changes in individual words, which then spread by diffusion through the lexicon (e.g., Wang, 1969; Wedel, 2012). If phoneme inventories evolve via transmission of phonemes through words, more robust phonemes should on average be found in more words (cf. Graff, 2012). We provide evidence in favor of this prediction by showing that scores on the Graff confusability index also significantly correlate with a set of within-language phoneme lexical frequencies, computed for Turkish, Mutsun, Mawukakan/Mahou, English, Dutch, German and Nama (7 languages from 5 different families): more robust phonemes occur in more words. Finally, the frequency with which consonant phonemes appear across the languages in PHOIBLE is also significantly correlated with within-language lexical frequencies. Specifically, the more languages in PHOIBLE that a phoneme appears in, the more words it tends to appear in within a given language. These results survive controls for language area and language family.

Taken together, our findings support the prediction that phoneme inventories evolve indirectly via a pathway involving transmission of individual words, and that the transmission of phonemes within words is constrained by their perceptual robustness. Our results firmly fit within a view that sees phoneme systems as evolving to meet the functional demands of language use (Wedel et al., 2013), and a view that sees language as more broadly evolved to support robust communication (Winter, 2014).

Citation:

Wedel A. and Winter B. (2016). Languages Prefer Robust Phonemes. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/28.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Wedel, Winter 2016

The Structure Of Iconicity In The English Lexicon

Bodo Winter1 , Lynn Perry2 , Marcus Perlman3 and Gary Lupyan3
1 University of California, Merced, Cognitive and Information Sciencs
2 Department of Psychology, University of Miami
3 Department of Psychology, University of Wisconsin-Madison

Keywords: iconicity, ideophones, sound symbolism, arbitrariness, lexicon

Abstract:

Many theories of language evolution argue for a trajectory from iconic communication systems, where signals resemble their meanings, towards systems with more arbitrary relationships between words and meanings (e.g., Armstrong & Wilcox, 2007; Imai & Kita, 2014). In spoken languages, the end result of this process is often assumed to be a lexicon that is dominated by arbitrariness, with Indo-European languages, especially English, often cited as examples of arbitrariness par excellence (e.g., Vigliocco, Perniss, & Vinson, 2014). But, to what extent are the lexicons of present-day spoken languages truly arbitrary? Are there particular kinds of meanings that are more prone to being expressed iconically? Finally, are words either iconic or not, or is iconicity a graded quality of words?

Using the methods presented in Perry et al. (2015), we collected iconicity ratings for 1,952 English words from 705 participants who rated each word on a scale from -5 (“words that sound like the opposite of what they mean”) to +5 (“words that sound like what they mean”). Overall, the distribution of iconicity ratings skewed towards the iconic end of the scale (+0.9, t(1951)=35.5, p<0.0001). Hartigan’s dip test shows no evidence for bimodality in the iconicity ratings (p>0.1), suggesting that iconicity is indeed a continuous rather than categorical notion. In line with Perry et al. (2015), iconicity varied between lexical categories (F(4,1881)=35.41, p<0.0001), with verbs and adjectives receiving higher iconicity ratings. These results mirror the patterning of ideophones across the meaning space of languages (Dingemanse, 2012), i.e., they preferentially express manner of movement and sensory perceptions.

We further tested whether meanings related to specific sensory modalities are more or less prone to iconicity by using perceptual strength ratings from Lynott and Connell (2009), who asked participants to judge how much an object property such as “yellow” or “loud” is perceived through each of the five senses (seeing, hearing, touch, taste, smell). Overall perceptual strength was positively associated with iconicity (F(1,415)=6.88, p<0.01). Predominantly auditory words (“rustling”, “buzzing”, “muffled”) received the highest iconicity ratings (+2.3), followed by haptic (+1.8, “sticky”, “soft”), visual (+1.21, “shiny”, “yellow”), olfactory (+1.04, “fishy”, “aromatic”) and gustatory words (+0.8, “acidic”, “tasty”).
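The sketch below illustrates the form of these two analyses, a one-sample t-test on the ratings and a correlation between perceptual strength and iconicity, using simulated numbers rather than the actual norming data.

```python
import numpy as np
from scipy.stats import ttest_1samp, pearsonr

rng = np.random.default_rng(0)

# Hypothetical iconicity ratings on the -5..+5 scale, skewed toward the iconic end
iconicity = rng.normal(loc=0.9, scale=1.1, size=1952)
print(ttest_1samp(iconicity, popmean=0))               # tests the shift away from 0

# Hypothetical perceptual strength scores paired with iconicity for a word subset
perceptual_strength = rng.uniform(0, 5, size=400)
iconicity_subset = 0.3 * perceptual_strength + rng.normal(0, 1, size=400)
print(pearsonr(perceptual_strength, iconicity_subset)) # positive association expected
```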

These results reveal that the English lexicon harbors a considerable amount of iconicity in its sound structure, something that native speakers can pick up on. Moreover, words rated as the highest in iconicity correspond to meanings that are commonly encoded as ideophones in the world’s languages. Vocal iconicity is particularly concentrated in sensory meanings, especially those relating to the auditory, haptic and visual senses, but less to the chemical senses. This suggests that in early communication systems, vocal iconicity may have been more useful for expressing some meaning categories compared to others.

Citation:

Winter B., Perry L., Perlman M. and Lupyan G. (2016). The Structure Of Iconicity In The English Lexicon. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/21.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Winter, Perry, Perlman, Lupyan 2016

Rethinking Zipf’s Frequency-meaning Relationship: Implications For The Evolution Of Word Meaning

Bodo Winter and David Ardell
UC Merced

Keywords: lexicon, meaning, Zipf's law, semantic change, word frequency

Abstract:

In 1945, George Zipf discovered that more frequent words have more meanings listed in dictionaries than less frequent words. For example, a look at Google Ngram (Michel et al., 2011) reveals that the verb “follow” has a relative frequency of ~0.01%, 100 times more frequent than the verb “adorn” (0.0001%). On macmillandictionary.com, “follow” has 14 senses, “adorn” only 1.

Numerous proposals within cognitive and functional linguistics argue that words acquire new meanings through contextual re-interpretation (e.g., Evans & Wilkins, 2000; Traugott & Dasher, 2001). If a word occurs often enough in a novel context, meaning imbued by the context may become conventionalized. We argue that Zipf’s frequency-meaning relationship is in fact reflective of this fundamental mechanism by which semantic systems evolve over time.

Word frequency is positively correlated with contextual diversity (Adelman et al., 2006), a measure of the number of different contexts that a word occurs in. In this paper, we perform a series of statistical analyses (negative binomial regressions) of the frequency-meaning relationship, showing that if contextual diversity is controlled for, word frequency is, in fact, not positively correlated with the number of dictionary senses. Zipf’s frequency-meaning relationship is driven by contextual diversity, consistent with cognitive/functional accounts of the evolution of word meaning.
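A minimal sketch of this kind of regression is shown below on simulated data in which dictionary senses are driven by contextual diversity alone while frequency merely tracks diversity; under these assumptions the frequency coefficient shrinks toward zero once diversity enters the model, mirroring the reported pattern. Variable names and data are invented; this is not the authors’ analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
log_diversity = rng.normal(0, 1, n)
log_frequency = 0.8 * log_diversity + rng.normal(0, 0.5, n)   # frequency tracks diversity
senses = rng.poisson(np.exp(0.5 + 0.6 * log_diversity))       # senses depend on diversity only

df = pd.DataFrame({"senses": senses,
                   "log_frequency": log_frequency,
                   "log_diversity": log_diversity})
model = smf.glm("senses ~ log_frequency + log_diversity", data=df,
                family=sm.families.NegativeBinomial()).fit()
print(model.summary().tables[1])   # frequency coefficient should sit near zero
```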

This general result holds for a number of different ways of operationalizing contextual diversity (No. of different Google volumes a word occurs in, No. of different movies, No. of different Ngrams), as well as for sense data from several dictionaries (MacMillan, WordNet, Webster). It is furthermore known that verbs tend to have more dictionary senses than nouns (Fellbaum, 1990), which our data suggests is due to verbs being more contextually diverse. We also show that the more morphemes a word has (adding semantic specificity), the weaker the relationship between contextual diversity and senses: Words with many morphemes (e.g., “antidisestablishmentarianism”) are less prone to acquiring new senses in novel contexts. Finally, we show that contextual diversity from 200 years ago predicts present dictionary senses.

Our results suggest a considerable re-interpretation of Zipf’s frequency- meaning relationship, and they suggest avenues for novel computational models of evolutionary semantics. We outline implications for models of the evolution of vocabularies (Smith, 2004), as well as for models of Zipfian distributions.

References

Adelman, J. S., Brown, G. D., & Quesada, J. F. (2006). Contextual diversity, not word frequency, determines word-naming and lexical decision times. Psychological Science, 17(9), 814-823.

Evans, N., & Wilkins, D. (2000). In the mind's ear: The semantic extensions of perception verbs in Australian languages. Language, 76, 546-592.

Fellbaum, C. (1990). English verbs as a semantic net. International Journal of Lexicography, 3(4), 278-301.

Michel, J. B., Shen, Y. K., Aiden, A. P., Veres, A., Gray, M. K., Pickett, J. P., Hoiberg, D., Clancy, D., Norvig, P., Orwant, J., Pinker, S., Nowak, M. A., & Aiden, E. L. (2011). Quantitative analysis of culture using millions of digitized books. Science, 331, 176-182.

Smith, K. (2004). The evolution of vocabulary. Journal of Theoretical Biology, 228(1), 127-142.

Traugott, E.C., & Dasher, R. B. (2001). Regularity in semantic change. Cambridge: Cambridge University Press.

Zipf, G. K. (1945). The meaning-frequency relationship of words. The Journal of General Psychology, 33, 251-256.

Citation:

Winter B. and Ardell D. (2016). Rethinking Zipf’s Frequency-meaning Relationship: Implications For The Evolution Of Word Meaning. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/58.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Winter, Ardell 2016

Signal Autonomy Is Shaped By Contextual Predictability

James Winters, Simon Kirby and Kenny Smith
University of Edinburgh

Keywords: Pragmatics, Communication Games, Context, Shared Knowledge, Language Structure, Autonomy

Abstract:

At the heart of any communication system is the goal of reducing uncertainty about the intended meaning of the speaker. In achieving this aim, speakers rely not only on the conventional meaning of linguistic forms, but also on how these forms interact with the contextual information at hand. In short, when context is known and informative, it helps in reducing uncertainty about the intended meaning (Piantadosi, Tily & Gibson, 2012). This relationship between context, meaning and uncertainty has important consequences for how cultural evolutionary processes shape the structure of linguistic systems. A recurrent observation is that languages vary in their signal autonomy: the degree to which a signal can be interpreted in isolation, without recourse to contextual information (Wray & Grace, 2007). One hypothesis is that signal autonomy is causally related to contextual predictability: the extent to which a speaker can estimate, and therefore exploit, the contextual information that a hearer is likely to use in interpreting an utterance.



To investigate these claims, we experimentally simulate the relative pressures from speakers and hearers in a communication game, with the main manipulation targeting the referential context: the relationship between a target object and a set of distractor objects, and how these affect the task of discrimination (Winters, Kirby & Smith, 2015). In the training phase, participants were trained on an artificial language consisting of randomly generated sets of 2-3 syllable signals. These signals were assigned to four images that differed from each other on the dimensions of shape and colour (e.g., blue blob, grey oval, red square, yellow star). The trained language was therefore ambiguous with respect to whether the signals referred to colour, shape, or both colour and shape. For the communication phase, participants were assigned fixed roles of either speaker or hearer. In each trial, speakers typed a signal for a target image, and hearers used this signal to discriminate the target from a set of three distractors (the context). In total, there were 16 target images that a speaker needed to convey over three blocks of 32 trials.



To test for the effect of referential context on signal autonomy we manipulated two variables: (i) context-type (across-trial predictability) and (ii) access to context (within-trial predictability). Context-type is the extent to which a particular dimension (e.g., shape) is relevant for discrimination across successive trials. In the Shape-Different referential contexts, the context-type remains consistent across trials, as targets and distractors differ in shape but share the same colour. Mixed context-types vary across trials: half of the trials consist of contexts in which the target and distractors differ in shape (but share the same colour) and half in which they differ in colour (but share the same shape). We also manipulated whether the speaker had knowledge about the relevant distinctions needed to communicate with the hearer. In the Shared conditions, speakers had access to the context (i.e., the target and distractors that the hearer was confronted with), whereas in the Unshared conditions speakers only saw the target in isolation (although the hearer's task remained the same: to distinguish a target from a set of three distractors). This gives us four conditions: Shape-Different Shared, Shape-Different Unshared, Mixed Shared, Mixed Unshared. We predict that decreasing contextual predictability within and across trials will lead speakers to create more autonomous signals (and vice versa).
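To make the 2x2 design concrete, the following minimal sketch (with simplified stand-in shapes, colours and trial construction, not the actual experimental software) enumerates the four conditions and builds one example trial for each.

# Minimal sketch of the 2x2 design (context-type x access to context).
# Shapes, colours and trial construction are simplified stand-ins for the
# actual stimuli and software.
import random

SHAPES = ["blob", "oval", "square", "star"]
COLOURS = ["blue", "grey", "red", "yellow"]

def make_trial(differ_on, rng):
    """One trial: a target plus three distractors that differ from the
    target only on the given dimension ('shape' or 'colour')."""
    target = (rng.choice(SHAPES), rng.choice(COLOURS))
    distractors = []
    for _ in range(3):
        if differ_on == "shape":
            shape = rng.choice([s for s in SHAPES if s != target[0]])
            distractors.append((shape, target[1]))    # same colour as target
        else:
            colour = rng.choice([c for c in COLOURS if c != target[1]])
            distractors.append((target[0], colour))   # same shape as target
    return target, distractors

CONDITIONS = [
    ("shape-different", True),    # Shape-Different Shared
    ("shape-different", False),   # Shape-Different Unshared
    ("mixed", True),              # Mixed Shared
    ("mixed", False),             # Mixed Unshared
]

rng = random.Random(0)
for context_type, shared in CONDITIONS:
    differ_on = "shape" if context_type == "shape-different" else rng.choice(["shape", "colour"])
    target, distractors = make_trial(differ_on, rng)
    # In Unshared conditions the speaker sees only the target; the hearer's
    # task (pick the target out of the distractors) is the same throughout.
    speaker_sees = (target, distractors) if shared else (target, None)
    print(context_type, "shared" if shared else "unshared", speaker_sees)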



Our results show that context does shape the degree of signal autonomy: when the context is predictable, languages are organised to be less autonomous (more context-dependent) through combining linguistic signals with context to reduce uncertainty. When the context decreases in predictability, speakers favour strategies that promote autonomous signals, allowing linguistic systems to reduce their context dependency. For the Shape-Different Shared condition, which was maximally predictable in terms of context-type and access to context, speakers only conveyed the shape dimension in their linguistic systems, leaving out the colour dimension as this was irrelevant to communicative success (resulting in low autonomy). Conversely, in the Mixed Unshared condition, which had the lowest contextual predictability, speakers consistently opted for strategies that promoted compositional structure: this allowed for autonomous systems that specified both colour and shape within the linguistic system. For the conditions in-between these two extremes of contextual predictability -- Shape-Different Unshared and Mixed Shared -- speakers were more heterogeneous in their strategy choice, with the resulting systems varying in their degree of autonomy. Taken together, these results show how pragmatic factors can play a salient role in the cultural evolution of language, with manipulations to contextual predictability shaping the types of systems that emerge over repeated interactions between speakers and hearers.

Citation:

Winters J., Kirby S. and Smith K. (2016). Signal Autonomy Is Shaped By Contextual Predictability. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/92.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Winters, Kirby, Smith 2016

The Cultural Co-evolution Of Language And Mindreading

Marieke Woensdregt, Kenny Smith, Chris Cummins and Simon Kirby
University of Edinburgh

Keywords: Agent-Based Modelling, Cultural Evolution, Co-Evolution, Mindreading, Bayesian Inference

Short description: Have language and mindreading co-evolved (through culture)? This agent-based model explores their interaction in development and evolution.

Abstract:

One defining feature of human language is its heavy reliance on entertaining (meta-)representations of each other’s mental states (e.g. Scott-Phillips, 2014). The development of such mindreading skills was a crucial step in hominin evolution because it allowed for the expression and recognition of communicative intentions, thereby paving the way for the cooperative information sharing that we find in humans today. The ability to recognize and infer communicative intentions also plays an important role in language development on an ontogenetic timescale, as evidenced by studies correlating mindreading skills and word learning (e.g. Parish-Morris, Hennon, Hirsh-Pasek, Golinkoff, & Tager-Flusberg, 2007).

This relationship between language and mindreading may be reciprocal; the acquisition of language has been shown to unlock further levels of mindreading development in the individual (e.g. Lohmann & Tomasello, 2003; Pyers & Senghas, 2009). Furthermore, Heyes and Frith (2014) have argued that mindreading is a skill that has developed (partly) through cumulative cultural evolution - just like language (Kirby, Tamariz, Cornish, & Smith, 2015).

In this paper we present a computational model that investigates the implications of such a bidirectional interaction between the capacities of language and mindreading. This new agent-based model builds on previous models of cross-situational word learning (Siskind, 1996), but extends this framework by allowing agents to learn a way of inferring an interlocutor’s communicative intentions simultaneously with learning the lexicon.

In the model, agents communicate (probabilistically) about objects that are proximate to them. The learner’s task is to establish the relations between the words and the objects (i.e. the lexicon), and the perspective of the speaker, which may be the same as the learner’s own or may differ. The communicative intentions of the speaker are a product of the context that the agents find themselves in and the speaker’s perspective (which renders some objects more salient than others). Utterances in turn are a product of a speaker’s communicative intention and lexicon.

The data that a learner has access to in order to learn a speaker’s lexicon and perspective consists of the speaker’s word use in context. From these two observable variables the learner has to infer two unobservable variables simultaneously: the speaker’s lexicon and their perspective. This learning happens through Bayesian inference, where accumulating evidence allows the learner to weigh different combinations of lexicon and perspective hypotheses against each other - based on the likelihood of the incoming data given a specific hypothesis.
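A minimal sketch of this joint inference over lexicon and perspective hypotheses is given below; the toy hypothesis space, the salience rule and the noise parameter are simplifying assumptions of ours, not the actual model's specification.

# Minimal sketch of joint Bayesian inference over lexicon and perspective
# hypotheses from observed word-in-context data. The toy hypothesis space,
# salience rule and noise parameter are simplifying assumptions, not the
# actual model specification.
import itertools
import numpy as np

WORDS, OBJECTS = ["ka", "lu"], [0, 1]

LEXICONS = list(itertools.product(OBJECTS, repeat=len(WORDS)))  # word i -> object
PERSPECTIVES = OBJECTS                                          # most salient object

def likelihood(word, context, lexicon, perspective, noise=0.1):
    """P(word | context, lexicon, perspective): the speaker names the salient
    object if it is present in the context, otherwise the first object."""
    intended = perspective if perspective in context else context[0]
    matching = [w for w, obj in zip(WORDS, lexicon) if obj == intended]
    if not matching or len(matching) == len(WORDS):
        return 1.0 / len(WORDS)                     # lexicon uninformative here
    if word in matching:
        return (1 - noise) / len(matching)
    return noise / (len(WORDS) - len(matching))

# Observed data: (context, word) pairs produced by a speaker whose lexicon is
# ka->0, lu->1 and whose perspective makes object 1 most salient.
data = [([0, 1], "lu"), ([0, 1], "lu"), ([0], "ka"), ([0, 1], "lu")]

hyps = list(itertools.product(LEXICONS, PERSPECTIVES))
log_post = np.array([
    sum(np.log(likelihood(w, ctx, lex, persp)) for ctx, w in data)
    for lex, persp in hyps
])                                                  # flat prior: posterior is proportional to likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()
for (lex, persp), p in zip(hyps, post):
    print(f"lexicon={lex} perspective={persp} p={p:.3f}")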

Although a speaker’s utterances are as much a product of their perspective as they are of their lexicon, simulations with this model show that given consistent input, a learner is able to infer both the correct lexicon and the correct perspective from scratch, by reciprocally bootstrapping the learning of the one variable with partial knowledge of the other. The learning trajectory that is revealed in these simulations is one where acquiring a bit of the lexicon helps the learner infer the speaker’s perspective, which in turn allows the learner to acquire the rest of the lexicon.

We will discuss the dynamics of this model on two different levels: exploring the emergence of language and mindreading capacities both within individual agents and across generations of a population. This model thus gives insight into the effects of an individual-level interaction of cognitive capacities on population-wide dynamics, such as establishing and maintaining a stable signalling system, thereby connecting proximate and ultimate causes of language evolution.

Citation:

Woensdregt M., Smith K., Cummins C. and Kirby S. (2016). The Cultural Co-evolution Of Language And Mindreading. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/84.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Woensdregt, Smith, Cummins, Kirby 2016

Genetic Drift Explains Sapir's "drift" In Semantic Change

Igor Yanovich
Universität Tübingen / Carnegie Mellon University

Keywords: language change, semantic change, evolutionary modeling, genetic drift, Sapir's drift

Short description: Why would separate genetically related languages show parallel changes (Sapir's drift)? Genetic drift from population genetics explains it!

Abstract:

The linguistic notion of "drift" going back to Sapir refers to the phenomenon whereby genetically related languages, long after their separation, undergo the same linguistic change. Such "drift" may seem almost magical: given that language change is generally a random process, why would separate linguistic varieties exhibit the same change? Possible explanations demystifying "drift" exist, including that of Joseph (2012), who argues that if the sister languages all possess the same variation in a given (sociolinguistic) variable, that variation can serve as the basis for parallel changes long after the languages separate. Here, I propose a different explanation for Sapir's "drift", based on evolutionary modeling in the finite-population setting (complementing rather than replacing Joseph's). All finite populations show the effect called genetic drift (unrelated to Sapir's "drift"), which delays the effects of mutation and selection forces. For grammatical reanalysis, and for semantic change in particular, this means that even when reanalysis of individual utterances could already occur in the proto-language, under the right conditions it will lead to a takeover by the innovation only many centuries later, in the proto-language's descendants. Given the introduced model, it would be surprising if Sapir's "drift" did not arise.
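To illustrate the logic (not the author's actual model), a minimal Wright-Fisher-style simulation with hypothetical parameter values shows how drift can delay the takeover of a weakly favoured innovation, so that sister lineages inheriting the same variation from the proto-language may each fix it independently, long after the split.

# Minimal Wright-Fisher-style sketch with hypothetical parameters (not the
# author's actual model): a weakly favoured innovative variant starts at low
# frequency; drift delays its takeover, so two daughter lineages that inherit
# the same variation can each fix it independently, long after the split.
import random

def generations_until_fixation(pop_size=200, p0=0.05, s=0.02, rng=None):
    """Simulate one lineage until the innovation is fixed or lost; return the
    number of generations to fixation, or None if the innovation is lost."""
    rng = rng or random.Random()
    p, gen = p0, 0
    while 0.0 < p < 1.0:
        # Weak selection nudges the expected frequency; binomial resampling
        # of a finite population adds drift.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        k = sum(rng.random() < p_sel for _ in range(pop_size))
        p, gen = k / pop_size, gen + 1
    return gen if p == 1.0 else None

rng = random.Random(1)
for lineage in ("daughter language A", "daughter language B"):
    outcome = generations_until_fixation(rng=rng)
    if outcome is None:
        print(f"{lineage}: innovation lost to drift")
    else:
        print(f"{lineage}: innovation fixed after {outcome} generations")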

Citation:

Yanovich I. (2016). Genetic Drift Explains Sapir's "drift" In Semantic Change. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/24.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Yanovich 2016

A Game Theoretic Account Of Semantic Subjectification In The Cultural Evolution Of Languages

Eva Zehentner, Andreas Baumann, Nikolaus Ritt and Christina Prömer
University of Vienna

Keywords: semantic change, cultural evolution, subjectification, evolutionary game theory, diachrony, animal communication

Short description: The paper accounts for subjectification in semantic change in terms of evolutionary game theory, and argues that it is driven by mind-reading listeners, not by speakers trying to express themselves.

Abstract:

Linguistic research from the past decades has revealed a pathway in semantic change by which cultural transmission causes word meanings to become increasingly subjective, i.e. increasingly based in speakers’ beliefs and attitudes (see Davidse, Vandelanotte & Cuyckens 2010 and the references therein).

A prototypical – but far from the only – case of subjectification is the rise of ‘epistemic’ meanings (1b) in ‘deontic’ modals (1a) (Traugott 1989).

(1) a. John must work hard to survive. (objective necessity)

b. John looks tired. He must be working hard. (speaker’s subjective certainty)

While subjectification is often taken to reflect the need of speakers to express their inner selves, we consider this hypothesis shallow and uninformative. Instead, we propose an account in terms of evolutionary game theory and take subjectification to emerge through sender-receiver interactions in which senders may attempt to manipulate receivers (e.g. by altering their construal of reality), while receivers may exploit signals for reading speakers’ minds (i.e. beliefs, goals and intentions) (cf. Krebs & Dawkins 1984).

In our model, interlocutors may intend or interpret a message as either objective (about external reality) or subjective (about beliefs etc.). They may be cooperative or uncooperative (at a proportion that we fix a priori at q ∈ (0,1)). Cooperative speakers are honest, uncooperative ones lie. Cooperative listeners are credulous, uncooperative ones disregard the encoded message, but try to infer hidden speaker beliefs.

The evolutionary dynamics of the populations of subjective and objective interlocutors are modeled as an asymmetric role game (Hofbauer & Sigmund 1998: 122ff.) with two positions (speaker and listener) and two strategies (subjective and objective), yielding four different behavior types (subjective speaking & subjective listening; objective speaking & subjective listening, etc.). This yields a 4-by-4 game with 16 different encounter types.

Payoffs resulting from pairwise speaker-hearer interactions are divided into four ordinal categories (no benefit/loss, small benefit/loss, medium benefit/loss, and large benefit/loss), which are assigned numerical values from 0 to ±3. Information about external reality is taken to be more valuable when true (and more harmful when false) than information about speakers’ intentional states.

For each combination of cooperative or uncooperative individuals choosing one of the available strategies in one of the two positions, the payoff is determined heuristically and weighted according to the assumed proportions of cooperative and uncooperative players.

An analysis of the resulting dynamics reveals two qualitatively different evolutionary outcomes: if the proportion of cooperative players does not exceed a certain threshold (about 0.7), the behaviour type ‘objective speaking & subjective listening’ represents the only evolutionarily stable strategy combination. Otherwise, i.e. if the proportion of cooperative players is extraordinarily large, the replicator dynamics exhibit cyclic behavior in which speakers switch periodically from one strategy to the other, followed by subsequent periodic listener-strategy adaptations.
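For readers less familiar with the machinery, the sketch below runs discrete-time replicator dynamics over the four behaviour types; the 4-by-4 payoff matrix is a hypothetical placeholder chosen only to illustrate the kind of outcome reported. It is not the paper's actual ordinal payoff scheme, and it omits the role asymmetry and the cooperation parameter q.

# Discrete-time replicator dynamics over the four behaviour types. The 4x4
# payoff matrix is a hypothetical placeholder, NOT the paper's ordinal payoff
# scheme; it ignores role asymmetry and the cooperation proportion q, and only
# illustrates how a single stable behaviour type can take over.
import numpy as np

TYPES = ["objective speak & objective listen",
         "objective speak & subjective listen",
         "subjective speak & objective listen",
         "subjective speak & subjective listen"]

# payoff[i, j]: average payoff to a type-i individual meeting a type-j one.
payoff = np.array([
    [2.0, 2.0, 1.0, 1.0],
    [2.5, 2.5, 1.5, 1.5],
    [1.0, 1.5, 1.0, 1.5],
    [1.0, 1.5, 1.5, 2.0],
])

x = np.full(4, 0.25)                    # start from a uniform population
for _ in range(500):
    fitness = payoff @ x                # expected payoff of each type
    x = x * fitness / (x @ fitness)     # replicator update (positive payoffs)

for name, share in zip(TYPES, x):
    print(f"{name}: {share:.3f}")

With this placeholder matrix the population converges on ‘objective speaking & subjective listening’, mirroring the stable outcome reported above for moderate proportions of cooperative players.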

We take this to suggest that subjectification is driven by listeners’ interest in the (potentially hidden) beliefs and intentions of speakers rather than by speakers’ desire to express their inner selves. At the same time, our account shows that concepts developed in the study of animal communication can be productively applied in the study of language diachrony as well.



References

Davidse, K., Vandelanotte, L., & Cuyckens, H. (Eds.) (2010). Subjectification, intersubjectification and grammaticalization. Berlin: De Gruyter Mouton.

Hofbauer, J., & Sigmund, K. (1998). Evolutionary games and population dynamics. Cambridge: Cambridge University Press.

Krebs, J. R., & Dawkins, R. (1984). Animal signals: mind reading and manipulation. In J. R. Krebs & N. B. Davies (Eds.), Behavioural ecology: An evolutionary approach (pp. 380–402). Sunderland, MA: Sinauer Associates.

Traugott, E. C. (1989). On the rise of epistemic meanings in English: An example of subjectification in semantic change. Language 65(1). 31–55.

Citation:

Zehentner E., Baumann A., Ritt N. and Prömer C. (2016). A Game Theoretic Account Of Semantic Subjectification In The Cultural Evolution Of Languages. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/110.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© Zehentner, Baumann, Ritt, Prömer 2016

Deep Learning Models Of Language Processing And The Evolution Of Syntax

Willem Zuidema
University of Amsterdam

Keywords: Deep learning, syntax, evolution of grammar

Abstract:

Researchers who assume discrete, symbolic models of syntax tend to reject gradualist accounts in favor of a 'saltationist' view, in which the crucial machinery for learning and processing syntactic structure emerges in a single step (as a side-effect of known or unknown brain principles, or as a macro-mutation). A key argument for the latter position is that the vast body of work describing the syntax of different natural languages makes use of symbolic formalisms such as minimalist grammar, HPSG, categorial grammar, etc., and that it is difficult to imagine a process of any sort that moves gradually from a state without their key components to a state with them. The most often repeated version of this argument concerns 'recursion' or 'merge', which is said to be an all-or-nothing phenomenon: you cannot have just a little bit of it.

In my talk I will review this argument in light of recent developments in computational linguistics, where a class of models that we can call 'vector grammars' has become popular; it includes Long Short-Term Memory networks, Gated Recurrent Networks and Recursive Neural Networks (e.g., Socher et al., 2010). Over the last two years, these models have swept to prominence in many areas of NLP and have shown state-of-the-art results on many benchmark tasks, including constituency parsing, dependency parsing, sentiment analysis and question classification.
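As a concrete illustration of what a 'vector grammar' computes, the sketch below shows the core composition step of a Recursive Neural Network in the spirit of Socher et al. (2010); dimensions, vocabulary and weights are toy values and there is no training loop.

# Toy sketch of the composition step at the heart of a Recursive Neural
# Network ("vector grammar", in the spirit of Socher et al., 2010): every
# constituent is a vector, and a parent vector is computed from its two
# children with one learned weight matrix. Toy dimensions, random weights,
# no training loop.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                                        # toy embedding dimensionality

vocab = {w: rng.normal(size=DIM) for w in ["the", "dog", "barks"]}
W = rng.normal(size=(DIM, 2 * DIM))            # composition weights
b = np.zeros(DIM)

def compose(left, right):
    """Parent = tanh(W [left; right] + b). The same function is reused at
    every node, so arbitrarily deep trees need no extra machinery."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Build the tree ((the dog) barks) bottom-up.
np_phrase = compose(vocab["the"], vocab["dog"])
sentence = compose(np_phrase, vocab["barks"])
print(sentence)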

Citation:

Zuidema W. (2016). Deep Learning Models Of Language Processing And The Evolution Of Syntax. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/151.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Zuidema 2016

Language-biology Coevolution Fixation Times

Bart de Boer
Vrije Universiteit Brussel

Keywords: Language-Biology Co-evolution, Cultural evolution, Mathematical analysis, Moran process

Short description: A mathematical model of the co-evolution of language and biology shows that biological adaptations to arbitrary culture can evolve

Abstract:

In order to understand language evolution, we need to understand the interaction between biological and cultural evolution (e.g. Deacon, 1997). This paper presents a modification of a standard approach from theoretical biology (the Moran process, Moran, 1958, explained below) for calculating how quickly biological specializations to culturally transmitted traits can evolve: the fixation time. This addresses two issues that thwart the analysis of language-biology coevolution. The first issue is the relative speed of biological and cultural evolution. Although it is often assumed that languages change much faster than biology evolves, (cultural) language change may be very slow in some cases, while biological evolution can operate rapidly in small populations. The second issue is that evolution operates in finite populations, so randomness plays a role. This means that one cannot just look for fitness advantages, but must calculate the probabilities of the spread of a trait.
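For background, the sketch below gives the textbook Moran process without any cultural component: the closed-form fixation probability of a single mutant with relative fitness r in a population of size N, plus a small simulation of the fixation time conditional on fixation. Parameter values are hypothetical, and the paper's modified process is not reproduced here.

# Background sketch of the standard Moran process (no cultural trait added):
# closed-form fixation probability of a single mutant with relative fitness r
# in a population of size N, plus a small simulation of the conditional
# fixation time. Parameter values are hypothetical.
import random

def moran_fixation_probability(r, N):
    if r == 1.0:
        return 1.0 / N                              # neutral mutant
    return (1 - 1 / r) / (1 - 1 / r ** N)

def moran_fixation_time(r, N, rng, max_runs=10_000):
    """Simulate birth-death steps until the mutant lineage fixes; restart runs
    in which it goes extinct, so the estimate is conditional on fixation."""
    for _ in range(max_runs):
        i, steps = 1, 0                             # one mutant among N individuals
        while 0 < i < N:
            p_birth_mut = (r * i) / (r * i + (N - i))   # fitness-weighted birth
            p_death_mut = i / N                          # uniform death
            i += int(rng.random() < p_birth_mut) - int(rng.random() < p_death_mut)
            steps += 1
        if i == N:
            return steps
    return None

rng = random.Random(0)
N, r = 50, 1.05
print("fixation probability:", round(moran_fixation_probability(r, N), 4))
print("example conditional fixation time:", moran_fixation_time(r, N, rng), "steps")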

It turns out that for a range of fitness advantages and a range of population sizes, fixation times in the presence of culture are not much longer than when there is no culture. This indicates that it is possible for biological adaptations for arbitrary culture to evolve.

Citation:

de Boer B. (2016). Language-biology Coevolution Fixation Times. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/80.html


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© de Boer 2016

Category Learning In Audition, Touch, And Vision

Sabine van der Ham, Bill Thompson and Bart de Boer
Vrije Universiteit Brussel

Keywords: specializations for speech learning, distributional learning, category learning, continuous signals, comparing modalities

Short description: Learning bimodal distributions of auditory, visual, and tactile signals: is category learning specialized for speech?

Abstract:

The evolution of linguistic cognition is a notoriously difficult problem: as all learning mechanisms are intertwined with human development and behavior, it is hard to tease apart which aspects of language are the result of cultural processes and which are evolved cognitive traits (De Boer, 2015). We explore whether humans have specializations related to language, and specifically, whether statistical learning of categories is specialized for language-like signals. While it is well-known that statistical learning is not restricted to language learning, the way the mechanism operates in different domains may not be the same, and may be affected by perceptual and cognitive constraints (Conway & Christiansen, 2005).

We present an experiment in which participants learn, categorize, and reproduce signals in the acoustic, visual, and tactile modalities. Participants are trained on bimodal distributions of tones, images and buzzes with some variation in duration, resulting in a 'long' and a 'short' category (Maye et al., 2002). After training, participants rate individual signals on a 6-point scale from 'definitely short' to 'definitely long'. The production task consists of creating three signals for each category by pressing the mouse button. The signal is presented as long as the button is pressed. If there is indeed a specialization for learning language-like signals, then we expect participants to show: a) more certainty in the categorization task in the auditory and visual conditions, and b) better estimation of the peaks in the distributions, resulting in more reliable reproductions of the categories in the auditory and visual conditions. Alternatively, in normal-hearing adults there may be a linguistic training effect, for instance from distinctions between long and short vowels in their native language, in which case better performance in the auditory condition compared to the other conditions is expected, following earlier perception studies (Jones et al., 2009).

A within-subjects design provided us with categorization and production data from 29 participants in all modalities. Results from a logistic regression reveal three interesting trends. First, statistical learning of categories and (reliably) reproducing them is possible in all domains, including the tactile modality. Second, and most importantly, the categorization and production processes are remarkably similar in the tactile and auditory domains (odds ratios 0.99 and 1.10; 95% confidence intervals 0.84-1.15 and 0.94-1.28, respectively), but not in the visual domain (OR = 1.30; CI = 1.12-1.53), suggesting that there is no cognitive specialization for learning language-like signals, nor an effect of previous language experience. Finally, comparing the categorization and production data reveals an interesting tension: on the one hand, we demonstrate that across all three modalities participants were able to induce representations of distinct categories that respect the statistics of the training distributions. On the other hand, we consistently find more variation in duration among participants' productions than was present in the training distributions. This pattern of variation differs between modalities, with more variation in the visual modality. We fitted a Bayesian model of inference for Gaussian distributions to the data in order to investigate whether this variation reflects a meaningful component of the learning process, and to provide a quantitative characterization of the pattern of differences in how participants formed categories in this experiment. The model correctly predicts the overall pattern of categorization behaviour that we find in the empirical data.
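The sketch below illustrates the general kind of model referred to: conjugate Bayesian updating of a Gaussian duration category from training exemplars, with known variance for simplicity. The training distributions, priors and noise values are hypothetical, and this is not the authors' actual model.

# Illustrative sketch: conjugate Bayesian updating of a Gaussian duration
# category from training exemplars (known variance, for simplicity). The
# training distributions, priors and noise values are hypothetical; this is
# not the authors' actual model.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bimodal training distribution: 'short' vs 'long' durations (ms).
short_train = rng.normal(loc=300, scale=40, size=32)
long_train = rng.normal(loc=600, scale=40, size=32)

def posterior_mean(samples, prior_mean=450.0, prior_var=200.0**2, noise_var=40.0**2):
    """Posterior mean of the category's mean duration under a Normal prior and
    Normal likelihood with known variance (standard conjugate update)."""
    n = len(samples)
    precision = 1 / prior_var + n / noise_var
    return (prior_mean / prior_var + samples.sum() / noise_var) / precision

mu_short = posterior_mean(short_train)
mu_long = posterior_mean(long_train)
print(f"inferred 'short' mean: {mu_short:.0f} ms, 'long' mean: {mu_long:.0f} ms")

# A simple reproduction rule: sample productions around the inferred means with
# added motor noise, which (as in the data) inflates variance relative to training.
productions = rng.normal(loc=[mu_short, mu_long], scale=80.0, size=(3, 2))
print(np.round(productions))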

Our study further explores the extent to which statistical learning is domain-general, as it is becoming increasingly clear that we cannot approach it as a unitary mechanism (e.g. Frost et al., 2015). Teasing apart differences between modalities will help us find out how the perceptual biases of each modality potentially affect these learning mechanisms, which is necessary for understanding whether certain forms of learning are specialized for language.

Citation:

van der Ham S., Thompson B. and de Boer B. (2016). Category Learning In Audition, Touch, And Vision. In S.G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Fehér & T. Verhoef (eds.) The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Available online: http://evolang.org/neworleans/papers/38.html


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
© van der Ham, Thompson, de Boer 2016

Triadic Ontogenetic Ritualization: An Overlooked Possibility

Ekaterina Abramova


Brain Mechanisms Of Human Acoustic Communication: A Phylogenetic Approach And Its Ontogenetic Implications

Hermann Ackermann, Wolfram Ziegler


Towards A Rigorous Motivation For Zipf's Law

Phillip M. Alday


Pre And Post Partum Whistle Production Of A Bottlenose Dolphin (Tursiops Truncatus) Mother-calf Dyad

Audra Ames, Sara Wielandt, Dianne Cameron, Stan Kuczaj


Noise In Phonology Affects Encoding Strategies In Morphology

David Ardell, Noelle Anderson, Bodo Winter


Evolution Of Language From The Aphasia Perspective

Alfredo Ardila


Towards An Action-oriented Approach To The Evolution Of Language And Music

Rie Asano


On A Music-ready Brain: Neural Basis, Mechanisms, And Their Contribution To The Language Evolution

Rie Asano, Edward Ruoyang Shi


Adult Language Learning And The Evolution Of Linguistic Complexity

Mark Atkinson, Kenny Smith, Simon Kirby


Evolution Towards An Optimal Management Of Linguistic Information

Lluís Barceló-Coblijn


A Lotka-volterra Model Of The Evolutionary Dynamics Of Compositionality Markers

Andreas Baumann, Christina Prömer, Kamil Kazmierski, Nikolaus Ritt


The Antiquity Of Musicality And Its Role In Prehistoric Culture

Ted Bayne


Evolution Of What?

Christina Behme


The Low-complexity-belt: Evidence For Large-scale Language Contact In Human Prehistory?

Christian Bentz


Redundant Features Are Less Likely To Survive: Empirical Evidence From The Slavic Languages

Aleksandrs Berdicevskis, Hanne Eckhoff


A Scientometric Analysis Of Evolang: Intersections And Authorships

Till Bergmann, Rick Dale


Spontaneous Dialect Formation In A Population Of Locally Aligning Agents

Richard A. Blythe, Alistair H. Jones, Jessica Renton


How The Brain Got Grammaticalized: Globularization And (self-)domestication

Cedric Boeckx, Constantina Theofanopoulou, Antonio Benítez-Burraco


Signature Whistles In An Introduction Context

Megan Broadway, Jamie Klaus, Billie Serafin, Heidi Lyn


How Do Laughter And Language Interact?

Greg Bryant


Cultural Evolution And Communication Yield Structured Languages In An Open-ended World

Jon W. Carr, Kenny Smith, Hannah Cornish, Simon Kirby


Lasting Impacts Of The Code Model On Primate Communication Research

Erica Cartmill


Are Emotional Displays An Evolutionary Precursor To Compositionality In Language?

Federica Cavicchio, Livnat Leemor, Simone Shamay-Tsoory, Wendy Sandler


Functionally Flexible Vocalizations In Wild Bonobos (Pan Paniscus)

Zanna Clay, Jahmaira Archbold, Klaus Zuberbuhler


Relationship Between Nonverbal Social Skills And Language Development

Hélène Cochet, Richard Byrne


Dwarf Mongooses Combine Meaningful Alarm Calls

Katie Collier, Andrew N. Radford, Balthasar Bickel, Marta B. Manser, Simon W. Townsend


Word Order Universals Reflect Cognitive Biases: Evidence From Silent Gesture

Jennifer Culbertson, Simon Kirby, Marieke Schouwstra


The Emergence Of Rules And Exceptions In A Population Of Interacting Agents

Christine Cuskley, Vittorio Loreto


The Evolution Of Collaborative Stories

Christine Cuskley, Bernardo Monechi, Pietro Gravino, Vittorio Loreto


Empirically Assessing Linguistic Ability With Stone Tools

Cory Cuthbertson


Anatomical Biasing Of Click Learning And Production: An MRI And 3d Palate Imaging Study

Dan Dediu, Scott Moisik


Using Causal Inference To Detect Directional Tendencies In Semantic Evolution

Johannes Dellert


The Fidelity Of Iterated Vocal Imitation

Pierce Edmiston, Marcus Perlman, Gary Lupyan


Meaningful Call Combinations And Compositional Processing In A Social Bird

Sabrina Engesser, Amanda R. Ridley, Simon W. Townsend


The Emergence Of The Progressive To Imperfective Diachronic Cycle In Reinforcement-learning Agents

Dankmar Enke, Roland Mühlenbernd, Igor Yanovich


Using HMMs To Attribute Structure To Artificial Languages

Kerem Eryilmaz, Hannah Little, Bart de Boer


Stick Or Switch: A Simple Selection Heuristic May Drive Adaptive Language Evolution

Nicolas Fay, Shane Rogers


Processing Preferences Shape Language Change

Maryia Fedzechkina, Becky Chu, T. Florian Jaeger, John Trueswell


Communicative Interaction Leads To The Elimination Of Unpredictable Variation

Olga Feher, Kenny Smith, Elizabeth Wonnacott, Nikolaus Ritt


Word Learners Regularize Synonyms And Homonyms Similarly

Vanessa Ferdinand, Matt Spike


Kauffman's Adjacent Possible In Word Order Evolution

Ramon Ferrer-I-Cancho


Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody

Piera Filippi, Sebastian Ocklenburg, Daniel Liu Bowling, Larissa Heege, Albert Newen, Onur Güntürkün, Bart de Boer


Humans Recognize Vocal Expressions Of Emotional States Universally Across Species

Piera Filippi, Jenna V. Congdon, John Hoang, Daniel Liu Bowling, Stephan Reber, Andrius Pašukonis, Marisa Hoeschele, Sebastian Ocklenburg, Bart de Boer, Christopher B. Sturdy, Albert Newen, Onur Güntürkün


Do Lab Attested Biases Predict The Structure Of A New Natural Language?

Molly Flaherty, Katelyn Stangl, Susan Goldin-Meadow


Phoneme Inventory Size Distributions And The Origins Of The Duality Of Patterning

Luke Fleming


Cooperative Communication And Communication Styles In Bonobos And Chimpanzees In The Wild: Same Same But Different?

Marlen Fröhlich, Paul H Kuchenbuch, Gudrun Müller, Barbara Fruth, Takeshi Furuichi, Roman M Wittig, Simone Pika


Integration Or Disintegration?

Koji Fujita, Haruka Fujita


Migration As A Window Into The Coevolution Between Language And Behavior

Victor Gay, Daniel Hicks, Estefania Santacreu-Vasut


Effects Of Task-specific Variables On Auditory Artificial Grammar Learning And Generalization

Andreea Geambasu, Michelle J. Spierings, Carel ten Cate, Clara C. Levelt


Intentional Meaning Of Bonobo Gestures

Kirsty Graham, Catherine Hobaiter, Richard Byrne


The Impact Of Communicative Network Structure On The Conventionalization Of Referring Expressions In Gesture

Matt Hall, Russell Richie, Marie Coppola


Plain Simple Complex Structures: The Emergence Of Overspecification In An Iterated Learning Setup

Stefan Hartmann, Peeter Tinits, Jonas Nölle, Thomas Hartmann, Michael Pleyer


Language Origins In Light Of Neuro-atypical Cognition And Speech Profiles

Wolfram Hinzen, Joana Rosselló


Deictic Tools Can Limit The Emergence Of Referential Symbol Systems

Elizabeth Irvine, Sean Roberts


Inferring The World Tree Of Languages From Word Lists

Gerhard Jaeger, Soeren Wichmann


Effort Vs. Robust Information Transfer In Language Evolution

T. Florian Jaeger, Maryia Fedzechkina


Nonlinear Biases In Articulation Constrain The Design Space Of Language

Rick Janssen, Bodo Winter, Dan Dediu, Scott Moisik, Sean Roberts


Simple Agents Are Able To Replicate Speech Sounds Using 3d Vocal Tract Model

Rick Janssen, Dan Dediu, Scott Moisik


Protolanguage Possibilities In A Construction Grammar Framework

Sverker Johansson


Modeling Language Change Triggered By Language Shift

Anna Jon-And, Elliot Aguilar


The Evolution Of Zipf’s Law Of Abbreviation

Jasmeen Kanwal, Kenny Smith, Jennifer Culbertson, Simon Kirby


The Spontaneous Emergence Of Linguistic Diversity In An Artificial Language

Deborah Kerr, Kenny Smith


Evolution Of The Language-ready Brain: Warfare Or ‘mother Tongues’?

Chris Knight, Camilla Power


A General Auditory Bias For Handling Speaker Variability In Speech? Evidence In Humans And Songbirds.

Buddhamas Kriengwatana, Paola Escudero, Anne Kerkhoven, Carel ten Cate


Cumulative Vocal Cultures In Orangutans And Their Ontogenetic Origin

Adriano Lameira, Jeremy Kendal, Marco Gamba


The Emergence Of Argument Marking

Sander Lestrade


Learnability Pressures Influence The Encoding Of Information Density In The Lexicon

Molly Lewis, Michael C. Frank


A Developmental Perspective On Language Origin: Children Are Old Hands At Gesture

Casey Lister, Tiarn Burtenshaw, Nicolas Fay, Bradley Walker, Jeneva Ohan


Emergence Of Signal Structure: Effects Of Duration Constraints

Hannah Little, Kerem Eryılmaz, Bart de Boer


Differing Signal-meaning Dimensionalities Facilitates The Emergence Of Structure

Hannah Little, Kerem Eryılmaz, Bart de Boer


Correlated Evolution Or Not? Phylogenetic Linguistics With Syntactic, Cognacy, And Phonetic Data

Giuseppe Longobardi, Armin Buch, Andrea Ceolin, Aaron Ecay, Cristina Guardiano, Monica Irimia, Dimitris Michelioudakis, Nina Radkevich, Gerhard Jaeger


The Evolution Of Redundancy In A Global Language

Gary Lupyan, Justin Sulik


Nonhuman Animals’ Use Of Ostensive Cues In An Object Choice Task

Heidi Lyn, Stephanie Jett, Megan Broadway, Mystera Samuelson


Language Adapts To Signal Disruption In Interaction

Vinicius Macuch Silva, Sean Roberts


Biological Systems Of Interest To Researchers Of Cultural Evolution

Luke McCrohon


Preliminary Results From A Computational Multi Agent Modelling Approach To Study Humpback Whale Song Cultural Transmission

Michael McLoughlin, Luca Lamoni, Ellen Garland, Simon Ingram, Alexis Kirke, Michael Noad, Luke Rendell, Eduardo Miranda


Human-like Brain Specialization In Baboons: An In Vivo Anatomical MRI Study Of Language Areas Homologs In 96 Subjects

Adrien Meguerditchian, Damien Marie, Konstantina Margiotoudi, Scott A. Love, Alice Bertello, Romain Lacoste, Muriel Roth, Bruno Nazarian, Jean-Luc Anton, Olivier Coulon


Linking The Processes Of Language Evolution And Language Change: A Five-level Hierarchy

Jérôme Michaud


Interaction For Facilitating Conventionalization: Negotiating The Silent Gesture Communication Of Noun-verb Pairs

Ashley Micklos


The Evolution Of Repair: Evidence From Online Conversations

Gregory Mills


Arbitrary Hierarchy: A Precedent For Language?

Dominic Mitchell


How Selection For Language Could Distort The Dynamics Of Human Evolution

William Mitchener


Make New With Old: Human Language In Phylogenetically Ancient Brain Regions

Marie Montant, Johannes Ziegler, Benny Briesemeister, Tila Brink, Bruno Wicker, Aurélie Ponz, Mireille Bonnard, Arthur Jacobs, Mario Braun


Frequency-dependent Regularization In Iterated Learning

Emily Morgan, Roger Levy


The Effect Of Modality On Signal Space In Natural Languages

Hope Morgan


Linguistic Structure Emerges In The Cultural Evolution Of Artificial Sign Languages

Yasamin Motamedi, Marieke Schouwstra, Kenny Smith, Simon Kirby


Self-organization In Sound Systems: A Model Of Sound Strings Processing Agents

Roland Mühlenbernd, Johannes Wahle


A Social Dimension Of Language Evolution

Albert Naccache


Edward Sapir And The Origin Of Language

Albert Naccache


Shared Basis For Language And Mathematics Revealed By Cross-domain Syntactic Priming

Tomoya Nakai, Kazuo Okanoya


Measuring Conventionalization In The Manual Modality

Savithry Namboodiripad, Daniel Lenzen, Ryan Lepic, Tessa Verhoef


Quantifying The Semantic Value Of Words

Dillon Niederhut


The Arbitrariness Of The Sign Revisited: The Role Of Phonological Similarity

Alan Nielsen, Dieuwke Hupkes, Simon Kirby, Kenny Smith


Semantic Approximation And Its Effect On The Development Of Lexical Conventions

Bill Noble, Raquel Fernández


Domestication And Evolution Of Signal Complexity In Finches

Kazuo Okanoya


Parrot " Phonological Regression": Expanding Our Understanding Of The Evolution Of Vocal Learning

Irene M. Pepperberg, Katia Zilber-Izhar, Scott Smith


Early Learned Words Are More Iconic

Lynn Perry, Marcus Perlman, Gary Lupyan, Bodo Winter, Dominic Massaro


Cooperative Communication: What Do Primates And Corvids Have To Tell?

Simone Pika


Construction Grammar For Apes

Michael Pleyer, Stefan Hartmann


The Evolution Of Im/politeness

Monika Pleyer, Michael Pleyer


What Kind Of Grammar Did Early Humans (and Neanderthals) Command? A Linguistic Reconstruction

Ljiljana Progovac


The Cultural Evolution Of Structure In Music And Language

Andrea Ravignani, Tania Delgado, Simon Kirby


Languages Support Efficient Communication About The Environment: Words For Snow Revisited

Terry Regier, Alexandra Carstensen, Charles Kemp


Strategies In Gesture And Sign For Demoting An Agent: Effects Of Language Community And Input

Lilia Rissman, Laura Horton, Molly Flaherty, Marie Coppola, Annie Senghas, Diane Brentari, Susan Goldin-Meadow


Social Biases Versus Efficient Communication: An Iterated Learning Study

Gareth Roberts, Mariya Fedzechkina


Vocal Learning And Homo Loquens

Joana Rosselló


The Cultural Evolution Of Complexity In Linguistic Structure

Carmen Saldana, Simon Kirby, Kenny Smith


Skepticism Towards Skepticism Towards Computer Simulation In Evolutionary Linguistics

Carlos Santana


From Natural Order To Convention In Silent Gesture

Marieke Schouwstra, Kenny Smith, Simon Kirby


Active Control Of Complexity Growth In Naming Games: Hearer's Choice

William Schueller, Pierre-Yves Oudeyer


Mind The Gap: Inductive Biases In Phonological Feature Learning

Klaas Seinhorst


Children's Production Of Determiners As A Test Case For Innate Syntactic Categories

Catriona Silvey, Christos Christodoulopoulos


Vocal Learning In Functionally Referential Chimpanzee Food Calls

Katie Slocombe, Stuart Watson, Anne Schel, Claudia Wilke, Emma Wallace, Leveda Cheng, Victoria West, Simon Townsend


Chimpanzees Process Structural Isomorphisms Across Sensory Modalities

Ruth Sonnweber, Andrea Ravignani


Rule Learning In Birds: Zebra Finches Generalize By Positional Similarities, Budgerigars By The Structural Rules.

Michelle Spierings, Carel ten Cate


Minimal Pressures Leading To Duality Of Patterning

Matthew Spike, Kenny Smith, Simon Kirby


Information Dynamics Of Learned Signalling Games

Matthew Spike, Simon Kirby, Kenny Smith


Metalinguistic Awareness Of Trends As A Driving Force In Language Change: An Empirical Study

Kevin Stadler, Elyse Jamieson, Kenny Smith, Simon Kirby


The Grammar Of The Body And The Emergence Of Complexity In Sign Languages

Rose Stamp, Wendy Sandler


Failures Of Perspective Taking In An Open-ended Signaling Task

Justin Sulik, Gary Lupyan


Against The Emergent View Of Language Evolution

Maggie Tallerman


Evidence Of Descent With Modification And Selection In Iterated Learning Experiments

Monica Tamariz, Joleana Shurley


What Is Unique About The Evolution Of Language Compared To Other Cultural Domains? An Experimental Study Of Language, Technology And Art

Monica Tamariz, Jon W. Carr


Learning To Learn From Similar Others: Approximate Bayesian Computation Through Babbling

Bill Thompson, Heikki Rasilo


Interpreting Silent Gesture

Bill Thompson, Marieke Schouwstra, Henriëtte de Swart


Arbitrariness Of Iconicity: The Sources (and Forces) Of (dis)similarities In Iconic Representations

Oksana Tkachman, Carla L. Hudson Kam


Experimental Evidence For Phonemic-like Contrasts In A Nonhuman Vocal System

Simon Townsend, Andrew Russell, Sabrina Engesser


Modeling The Emergence Of Creole Languages

Francesca Tria, Vittorio Loreto, Vito Servedio, S. Mufwene Salikoko


Dendrophobia In Bonobo Comprehension Of Spoken English

Robert Truswell


A Constant Rate Effect Without Stable Functions

Robert Truswell, Nikolas Gisborne


Norms For Constructing Language In Humans And Animals

Robert Ullrich


Addressees Use Zipf's Law As A Cue For Semantics

Freek Van de Velde, Dirk Pijpops


A Continuum Of Human Cognitive-linguistic Evolution

Olga Vasileva


Language Evolution In Ontogeny And Phylogeny

Olga Vasileva


Constituent Order In Pictorial Representations Of Events Is Influenced By Language

Anu Vastenius, Jordan Zlatev, Joost Van de Weijer


Iconicity, Naturalness And Systematicity In The Emergence Of Sign Language Structure

Tessa Verhoef, Carol Padden, Simon Kirby


Language Evolution And Language Origins In Teaching Linguistics At The University Level

Slawomir Wacewicz, Przemyslaw Zywiczynski, Arkadiusz Jasinski


Languages Prefer Robust Phonemes

Andrew Wedel, Bodo Winter


The Structure Of Iconicity In The English Lexicon

Bodo Winter, Lynn Perry, Marcus Perlman, Gary Lupyan


Rethinking Zipf’s Frequency-meaning Relationship: Implications For The Evolution Of Word Meaning

Bodo Winter, David Ardell


Signal Autonomy Is Shaped By Contextual Predictability

James Winters, Simon Kirby, Kenny Smith


The Cultural Co-evolution Of Language And Mindreading

Marieke Woensdregt, Kenny Smith, Chris Cummins, Simon Kirby


Genetic Drift Explains Sapir's "drift" In Semantic Change

Igor Yanovich


A Game Theoretic Account Of Semantic Subjectification In The Cultural Evolution Of Languages

Eva Zehentner, Andreas Baumann, Nikolaus Ritt, Christina Prömer


Deep Learning Models Of Language Processing And The Evolution Of Syntax

Willem Zuidema


Language-biology Coevolution Fixation Times

Bart de Boer


Category Learning In Audition, Touch, And Vision

Sabine van der Ham, Bill Thompson, Bart de Boer