Biological neural networks are shaped both by evolution across generations and by individual learning within an organism's lifetime, whereas standard artificial neural networks undergo a single, large training procedure without inherited constraints. In this preliminary work, we propose a framework that incorporates this crucial generational dimension—an "outer loop" of evolution that shapes the "inner loop" of learning—so that artificial networks better mirror the effects of evolution and individual learning in biological organisms.
We use the Tuckute2024 neural benchmark, which can be downloaded from the following public repository or via Brain-Score Language. The cross-validated neural predictivity score can be computed with NeuralAlignment/fit_mapping.py and looped across layers and models using NeuralAlignment/loop_fit_mapping.py.
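As a rough illustration of the looping step, the sketch below invokes fit_mapping.py once per (model, layer) pair via subprocess. The flag names (--model, --layer, --output-dir) and model identifiers are hypothetical and assume the script exposes a CLI of this form; adapt them to the actual arguments of fit_mapping.py.

```python
# Illustrative loop over models and layers, calling fit_mapping.py per pair.
# All CLI flags and model names below are assumptions, not the script's
# documented interface.
import subprocess
from pathlib import Path

MODELS = {"gpt2": 12, "gpt2-medium": 24}  # placeholder: model -> layer count
OUT_DIR = Path("results")
OUT_DIR.mkdir(exist_ok=True)

for model, n_layers in MODELS.items():
    for layer in range(n_layers):
        subprocess.run(
            [
                "python", "NeuralAlignment/fit_mapping.py",
                "--model", model,            # hypothetical flag
                "--layer", str(layer),       # hypothetical flag
                "--output-dir", str(OUT_DIR / model),  # hypothetical flag
            ],
            check=True,  # stop early if any single fit fails
        )
```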
In some of the analyses, we first localize the LLM language units, following the approach established in AlKhamissi et al., 2025 (ACL), from the following repository. We adapted this code to output a binary mask that marks the LLM language units as 1. The NeuralAlignment/apply_langloc_mask.py script takes the numpy binary mask for a given model and saves the masked embedding values as a csv file, which can then serve as the input to NeuralAlignment/fit_mapping.py.
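A minimal sketch of this masking step is shown below, under the assumption that the mask is a 1-D numpy array over units and that the embeddings are stored as a (stimuli × units) matrix; the file names are illustrative, and the actual apply_langloc_mask.py may differ in how it stores and applies the mask.

```python
# Sketch: select language-selective units (mask == 1) from a model's
# embedding matrix and write them to a CSV for fit_mapping.py.
# File names and array layouts are assumptions for illustration only.
import numpy as np
import pandas as pd

mask = np.load("langloc_mask_gpt2.npy").astype(bool)  # shape: (n_units,)
embeddings = np.load("embeddings_gpt2.npy")           # shape: (n_stimuli, n_units)

masked = embeddings[:, mask]                          # keep only language units
pd.DataFrame(masked).to_csv("masked_embeddings_gpt2.csv", index=False)
```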
The binary langloc masks, the model embeddings, and the regression outputs can be downloaded below.