Automated Software Language and Software Engineering

Industrial Software Engineering

Software Testing Laboratory

Operationalising usage-based learning: A minimal cognitive architecture approach

The main purpose of this project is to test how far a domain-general approach to language learning can go. Most available models of artificial language learning are not cognitively plausible, and our aim is to demonstrate that sequence memory, chunking, and generalization are sufficient to learn language.

Start

2023-01-01

Planned completion

2026-12-31

Collaboration partners

Project manager at MDU

What cognitive mechanisms enable people to learn complex languages? Based on a holistic view of human cognition, inspired by models in artificial intelligence, our hypothesis is that a minimal cognitive architecture based on sequence memory, chunking, and generalization can learn grammar. We assume that these three components are general mental abilities of humans that are not specifically adapted for language. Sequence memory is central because recent research shows that non-human animals cannot represent sequences faithfully, whereas sequential order carries meaning in language.

Chunking, the ability to mentally combine several units into one, has been identified as important for human information processing, language understanding, and language learning. Generalization is a well-known mental ability of both humans and other animals. Our idea is that chunking and generalization will allow decisions to be based on symbolic categories rather than on individual words or word sequences, thereby reducing the combinatorial explosion problem of language learning.
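To make the chunking idea concrete, the sketch below merges the most frequent adjacent pair of units in a token stream into a single chunk, in the style of byte-pair encoding. This is only an illustration of the general mechanism; the project's actual chunking procedure is not specified in this description, and the function name and data are hypothetical.

```python
from collections import Counter

def chunk_once(tokens):
    """Merge the most frequent adjacent pair of units into one chunk.

    A toy illustration of chunking (BPE-style merging), not the
    project's actual mechanism.
    """
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            merged.append(a + " " + b)  # the frequent pair becomes one unit
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

stream = "the dog ran the dog sat the cat ran".split()
print(chunk_once(stream))
# → ['the dog', 'ran', 'the dog', 'sat', 'the', 'cat', 'ran']
```

Applying the merge step repeatedly would build progressively larger chunks, and treating recurring chunks as single symbols is what reduces the number of combinations the learner must consider.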

Central to the project is that symbolic categories and how they can be combined, i.e. grammar, arise during the learning process. Thus, our model operationalizes usage-based learning, as well as the construction of grammar through learning. The learning model's task is to identify sentences in a stream of language where clues such as punctuation and capital letters have been removed. The model is expected to explore and discover the categories and combinations that facilitate the task and, in this way, build a functioning grammatical system. The architecture will be evaluated on both artificial and natural languages.
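The learning task can be illustrated by the preprocessing it implies: punctuation and capitalisation are stripped from text, leaving a bare word stream in which the model must rediscover the sentence boundaries. The sketch below is an assumed data-preparation step, not the project's actual pipeline; the gold boundaries are kept only for evaluation.

```python
import re

def make_stream(text):
    """Strip punctuation and capitalisation from text, returning the
    bare token stream plus the gold sentence-boundary positions that
    the learner must recover (illustrative preprocessing only)."""
    tokens, boundaries = [], []
    for sentence in re.split(r"[.!?]\s*", text):
        words = re.findall(r"[a-z']+", sentence.lower())
        tokens.extend(words)
        if words:
            boundaries.append(len(tokens))  # boundary falls after this sentence
    return tokens, boundaries

tokens, gold = make_stream("The dog barked. A cat slept. Birds sing!")
print(tokens)  # ['the', 'dog', 'barked', 'a', 'cat', 'slept', 'birds', 'sing']
print(gold)    # [3, 6, 8]
```

A learner that segments the token stream can then be scored by comparing its predicted boundary positions against the gold list.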

The results can generate groundbreaking progress in our understanding of the human language ability and have implications in the fields of language learning, cognition, cultural evolution, biological evolution, artificial intelligence, and natural language processing.

The main goal of this project is to design and implement a minimal cognitive architecture for learning complex grammar based on sequence memory, chunking, and schematizing, driven by reinforcement learning. This architecture will be evaluated on both artificial and natural languages to determine how fast and accurately it can identify sentences, its ability to assign productive grammatical categories to unknown elements, and its ability to extract parsimonious grammatical categories for all elements in a natural language.

We will design and implement a minimal domain-general cognitive architecture for learning grammar. We will construct a computational system that can learn to extract categories and their relations from a stream of linguistic information, relying only on sequence memory, chunking, and schematizing. Learning will be based on simple principles of reinforcement of associative strengths, where the reinforcement is driven by the correct identification of sentences. We will also analyse and evaluate the performance of the model on both artificial and natural linguistic input.
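The reinforcement principle described above can be sketched as a simple update to associative strengths: the chunks used in a parse are strengthened when the parse identifies a sentence correctly and weakened otherwise. The update rule, learning rate, and reward values below are illustrative assumptions, not the project's published method.

```python
def reinforce(weights, chunks, correct, lr=0.1):
    """Adjust the associative strength of each chunk used in a parse.

    Strengths rise after a correct sentence identification and fall
    after an incorrect one -- a minimal sketch of reward-driven
    learning of associative strengths (assumed update rule).
    """
    reward = 1.0 if correct else -1.0
    for c in chunks:
        weights[c] = weights.get(c, 0.0) + lr * reward
    return weights

w = {}
w = reinforce(w, ["the dog", "ran"], correct=True)   # both chunks strengthened
w = reinforce(w, ["the dog", "sat"], correct=False)  # both chunks weakened
print(w)  # {'the dog': 0.0, 'ran': 0.1, 'sat': -0.1}
```

Over many trials, chunks and categories that consistently contribute to correct sentence identification accumulate strength, which is how usage drives the emergence of the grammatical system.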
