Interest in large language models (LLMs) has exploded lately, driving speculation about their implications as models of human language processing. Language models have proven useful in isolating language-processing functions in the brain; however, debate continues over whether these functions are best characterized in terms of hierarchical syntax. This study investigates that question by comparing two language models built on the same underlying Transformer-XL architecture: one informed by hierarchical syntax (Transformer Grammar) and one not (Transformer-XL). Coupling these language models with previously collected human fMRI data yields results that reaffirm the role of hierarchical structure in linguistic processing, implicating Broca's Area, the right Middle Temporal Gyrus, the left Temporal Pole, and the right Pre-Frontal Cortex in these aspects of processing.
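The abstract does not spell out how the models are linked to the fMRI data, but a common approach in this literature is to extract a word-by-word complexity metric (e.g., surprisal) from each model, convolve it with a haemodynamic response function, and regress it against the BOLD signal. The sketch below illustrates that general pipeline under those assumptions; the HRF parameters, onset times, and surprisal values are all illustrative toy data, not the paper's actual materials.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma haemodynamic response function (SPM-style)."""
    peak = gamma.pdf(t, 6)         # positive peak around 5-6 s
    undershoot = gamma.pdf(t, 16)  # later, smaller undershoot
    return peak - 0.35 * undershoot

def surprisal_regressor(onsets, surprisals, tr, n_scans):
    """Convolve word-level surprisal impulses with the HRF and
    resample the result at the scanner's repetition time (TR)."""
    dt = 0.1  # fine-grained time grid in seconds
    t = np.arange(0, n_scans * tr, dt)
    impulses = np.zeros_like(t)
    for onset, s in zip(onsets, surprisals):
        impulses[int(onset / dt)] += s
    kernel = hrf(np.arange(0, 32, dt))
    convolved = np.convolve(impulses, kernel)[: len(t)]
    return convolved[(np.arange(n_scans) * tr / dt).astype(int)]

# Toy data: word onsets plus per-word surprisal from the two models.
rng = np.random.default_rng(0)
tr, n_scans = 2.0, 150
onsets = np.cumsum(rng.uniform(0.3, 0.8, size=400))   # word onset times (s)
surp_txl = rng.exponential(5.0, size=400)             # Transformer-XL surprisal
surp_tg = surp_txl + rng.normal(0, 1.0, size=400)     # Transformer Grammar surprisal

# GLM: does a voxel's BOLD signal track each model's surprisal?
X = np.column_stack([
    np.ones(n_scans),                                  # intercept
    surprisal_regressor(onsets, surp_txl, tr, n_scans),
    surprisal_regressor(onsets, surp_tg, tr, n_scans),
])
bold = X @ np.array([100.0, 0.0, 0.5]) + rng.normal(0, 1, n_scans)  # simulated voxel

beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print("fitted betas:", beta)
```

In a comparison of this kind, a reliably larger (or uniquely significant) coefficient for the syntax-informed model's regressor in a given region is taken as evidence that hierarchical structure contributes to the processing load measured there.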