Brain Score Reveals Shared Properties Across Languages in Neural Models
Researchers used Brain Score to evaluate language models trained on diverse languages and structured sequences, finding shared processing properties. The study suggests neural models capture universal linguistic features beyond specific language structures.

Researchers have used the Brain Score framework to assess how well language models (LMs) align with human brain activity recorded during reading. The study, published on arXiv, found that models trained on different natural languages, and even on structured non-linguistic sequences such as DNA or formal languages, exhibited similar processing properties when evaluated with Brain Score.
The findings imply that neural language models capture universal linguistic features that transcend the structure of any single language. This challenges the assumption that a model trained on one language learns only that language, suggesting instead that such models acquire fundamental aspects of language processing that apply across languages. The study also highlights Brain Score's potential as a tool for probing the similarities between human and machine language processing.
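To make the evaluation concrete: Brain Score-style benchmarks typically fit a regularized linear mapping from a model's internal representations to recorded brain responses, then score the model by how well its held-out predictions correlate with the actual responses. The sketch below is a minimal, illustrative version of that linear-predictivity idea using synthetic data; the function name, data shapes, and single train/test split are assumptions for illustration, and the real benchmark additionally uses cross-validation and noise-ceiling normalization.

```python
import numpy as np

def brain_score_sketch(model_feats, brain_resp, alpha=1.0, train_frac=0.8, seed=0):
    """Toy Brain-Score-style linear predictivity (illustrative only):
    ridge-regress brain responses from model features on a train split,
    then average the per-voxel Pearson correlation between predicted
    and actual responses on the held-out split."""
    rng = np.random.default_rng(seed)
    n = model_feats.shape[0]
    idx = rng.permutation(n)
    split = int(train_frac * n)
    tr, te = idx[:split], idx[split:]
    X, Y = model_feats[tr], brain_resp[tr]
    # Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    pred = model_feats[te] @ W
    actual = brain_resp[te]
    # Pearson r per voxel, then averaged into one score
    pred_c = pred - pred.mean(axis=0)
    act_c = actual - actual.mean(axis=0)
    r = (pred_c * act_c).sum(axis=0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(act_c, axis=0) + 1e-12)
    return float(r.mean())

# Synthetic demo: voxel responses are noisy linear mixtures of model features,
# so a linear readout should recover a high score.
rng = np.random.default_rng(42)
feats = rng.normal(size=(200, 16))   # 200 stimuli x 16 model dimensions (hypothetical)
mix = rng.normal(size=(16, 30))      # hidden feature-to-voxel mapping
resp = feats @ mix + 0.5 * rng.normal(size=(200, 30))
score = brain_score_sketch(feats, resp)
print(f"mean held-out correlation: {score:.2f}")
```

Under this setup, models trained on different languages can be compared on the same brain data simply by swapping in their respective feature matrices, which is what makes the cross-language comparison in the study possible.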
The research raises questions about the extent to which language models can generalize across languages and structured sequences. Future work could explore how these shared properties influence model performance on multilingual tasks and whether they can be leveraged to improve cross-lingual transfer learning. The study also opens avenues for further investigation into the cognitive mechanisms underlying language processing in both humans and machines.