Simple kinship systems are more learnable

Abstract

Natural languages partition meanings into labelled categories in different ways, but this variation is constrained: languages appear to achieve a near-optimal trade-off between simplicity and informativeness. Across three artificial language learning experiments, we verify that objectively simpler kinship systems are easier for human participants to learn, and also show that the errors which occur during learning tend to increase simplicity while reducing informativeness. This latter result suggests that pressures for simplicity and informativeness operate through different mechanisms: learning favours simplicity, but the pressure for informativeness must be enforced elsewhere, e.g. during language use in communicative interaction.
