Biology

“ChatGPT for birdsong” may shed light on how language is wired in the human brain

Credit: Images generated by AI

Just as ChatGPT and other generative language models learn to produce grammatically correct sentences by training on human text, a new modeling method developed by researchers at Pennsylvania State University generates realistic birdsong after being trained on recordings of real birds. The results could improve understanding of the structure of birdsong and its underlying neurobiology, which in turn could provide insight into the neural mechanisms of human language, the team said. A paper describing the study was recently published in The Journal of Neuroscience.

Just as humans arrange words in a particular order to form grammatically correct sentences, birds tend to sing sets of notes, called syllables, in a limited number of combinations.

“Although it is much simpler, the syllable sequences of birdsong are organized in ways similar to human language, so songbirds provide a good model for exploring the neurobiology of language,” said Jin, lead author of the paper.

For both humans and birds, how a sentence or song continues often depends on what has already been said. For example, the phrase “flies like” can continue as “Time flies like an arrow” or “Fruit flies like a banana.” Swap what comes after “flies like,” however, and you get “Time flies like a banana” or “Fruit flies like an arrow,” which make little sense. In this example, what follows “flies like” is what researchers call context dependent.

“Previous work has shown that Bengalese finch songs are also context dependent,” Jin said. “In this study, we developed a new statistical method that better quantifies the context dependence in individual birds’ songs and begins to reveal how it is wired in the brain.”

The researchers analyzed previously recorded songs from six Bengalese finches, which sing about seven to 15 syllables in each sequence. The new method allowed the researchers to build the simplest models that accurately reflect the sequences each individual bird actually sings.

The models resemble large language models in that they describe the probability that a particular word, or in this case syllable, will follow another, based on previously analyzed text or song sequences. They are a type of Markov model, a way of modeling a series of events. They can be drawn as a kind of flow chart that starts with one syllable and branches to the various syllables that could follow, with arrows between syllables indicating the probability of each transition.
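To make the idea concrete, here is a minimal sketch of such a first-order Markov model of syllable sequences in Python. The syllable labels and transition probabilities are hypothetical illustrations, not values from the study.

```python
import random

# A toy first-order Markov model of song syllables: each entry maps a
# syllable to the probabilities of the syllables that can follow it.
# The labels ("a", "b", "c") and probabilities are made up for illustration.
transitions = {
    "start": {"a": 1.0},
    "a": {"b": 0.7, "c": 0.3},
    "b": {"c": 0.6, "end": 0.4},
    "c": {"a": 0.5, "end": 0.5},
}

def sample_song(transitions, max_len=20):
    """Walk the chain from 'start', drawing each next syllable according
    to the outgoing transition probabilities, until 'end' is reached."""
    song, state = [], "start"
    while state != "end" and len(song) < max_len:
        options = transitions[state]
        state = random.choices(list(options), weights=list(options.values()))[0]
        if state != "end":
            song.append(state)
    return song

print(sample_song(transitions))  # e.g. ['a', 'b', 'c', 'a', 'c']
```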

“The basic Markov model is very simple, but it tends to overgeneralize, meaning it can produce sequences that the birds don’t actually sing,” Jin said. “So we used a specific type of model, called a partially observable Markov model, that can incorporate context dependence by adding states that capture which syllables typically follow in a given context.”
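A plain Markov model over syllables cannot distinguish contexts: once it reaches a given syllable, it always has the same outgoing probabilities. A partially observable Markov model gets around this by letting several hidden states emit the same syllable. The sketch below illustrates that idea; the states, syllables, and probabilities are hypothetical and are not taken from the paper.

```python
import random

# Sketch of a partially observable Markov model (POMM): the hidden states
# "b1" and "b2" both emit the same syllable "b" but transition to different
# syllables, so what follows "b" depends on what preceded it.
# All states, syllables, and probabilities are made up for illustration.
pomm = {
    "start": {"emit": None, "next": {"a": 0.5, "x": 0.5}},
    "a":     {"emit": "a",  "next": {"b1": 1.0}},  # "a" is always followed by "b"...
    "x":     {"emit": "x",  "next": {"b2": 1.0}},  # ...and so is "x",
    "b1":    {"emit": "b",  "next": {"c": 1.0}},   # but "a b" continues with "c"
    "b2":    {"emit": "b",  "next": {"d": 1.0}},   # while "x b" continues with "d".
    "c":     {"emit": "c",  "next": {"end": 1.0}},
    "d":     {"emit": "d",  "next": {"end": 1.0}},
}

def sample_pomm(model):
    """Sample one syllable sequence; only the emitted syllables are
    observed, not the hidden states themselves."""
    song, state = [], "start"
    while state != "end":
        if model[state]["emit"] is not None:
            song.append(model[state]["emit"])
        nxt = model[state]["next"]
        state = random.choices(list(nxt), weights=list(nxt.values()))[0]
    return song

# This model produces "a b c" and "x b d" but never "a b d" -- a
# context-dependent transition a plain syllable-level chain cannot capture.
print(sample_pomm(pomm))
```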

The researchers’ new method builds a set of potential models that could explain an individual bird’s songs based on the recorded sequences. Starting with the simplest model, they use statistical tests to check whether a candidate model is accurate or whether it generates too many sequences that do not actually occur. They work through increasingly complex models until they find the simplest one that accurately captures what the bird sings. From this final model, the researchers can see which syllables have context-dependent transitions.
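That selection procedure could be sketched roughly as follows. The acceptance check below is only a crude stand-in for the statistical tests used in the paper, and ordering candidates by number of states is an assumption made for illustration.

```python
import random

def sample_song(model):
    """Sample one syllable sequence from a model in the same
    {state: {"emit": ..., "next": {...}}} format as the sketch above."""
    song, state = [], "start"
    while state != "end":
        if model[state]["emit"] is not None:
            song.append(model[state]["emit"])
        nxt = model[state]["next"]
        state = random.choices(list(nxt), weights=list(nxt.values()))[0]
    return tuple(song)

def passes_check(model, recorded, n_samples=2000, tolerance=0.05):
    """Crude stand-in for the paper's statistical tests: accept the model
    if it reproduces all recorded sequences (a set of syllable tuples)
    while rarely generating sequences that were never observed."""
    generated = {sample_song(model) for _ in range(n_samples)}
    unseen_fraction = len(generated - recorded) / len(generated)
    return recorded <= generated and unseen_fraction < tolerance

def simplest_adequate_model(candidates, recorded):
    """Try candidate models from simplest (fewest states) to most complex
    and return the first one that passes the acceptance check."""
    for model in sorted(candidates, key=len):
        if passes_check(model, recorded):
            return model
    return None
```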

“All six birds we studied had context-dependent syllable transitions, suggesting that this is an important aspect of birdsong,” Jin said. “However, the number of syllables with context dependence varied among individual birds. This could be due to several factors, such as differences in the birds’ brains or in the tutor songs they learned from.”

To begin to understand the neurobiology behind context-dependent syllable transitions, the researchers also analyzed the songs of birds that had been deafened.

“We see a dramatic decline in context dependence in these birds, suggesting that auditory feedback plays a major role in creating context dependence in the brain,” Jin said. “Birds listen to themselves and adjust their songs based on what they hear, and the related machinery in the brain may play a role in context dependence. In the future, we would like to map neural states to specific syllables; our results suggest that different sets of neurons may be active even when a bird is singing the same syllable.”

The researchers said their new method provides a more automated and robust way to analyze not only birdsong but also the vocalizations and behavioral sequences of other animals.

“We actually applied this method to English text and were able to generate almost grammatical sentences,” Jin said. “Of course, we are not trying to create a new generative language model, but it is interesting that the same kind of model can handle both birdsong and human language. Perhaps the underlying neural mechanisms are similar.

“Many philosophers have described human language, especially grammar, as exceptional. But if this kind of model can create language-like sentences, and if the neural mechanisms behind birdsong and human language really are similar, you have to wonder whether our language is truly so unique.”

More information: Jiali Lu et al, Partially observable Markov models inferred using statistical tests reveal context-dependent syllable transitions in Bengalese finch songs, The Journal of Neuroscience (2025). DOI: 10.1523/jneurosci.0522-24.2024

Provided by Pennsylvania State University

Citation: “ChatGPT for birdsong” may shed light on how language is wired in the human brain (2025, February 12), retrieved February 12, 2025 from https://phys.org/news/2025-02-chatgpt-birdsong-language-wired-human.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
