Scientists uncover cross-cultural regularities in songs


Language and music are universal aspects of human culture, yet they manifest in highly diverse forms across different societies. A recent study published in Science Advances set out to identify what speech and song share, and how they differ, across cultures. The findings revealed significant cross-cultural regularities in the acoustic features of songs.

The motivation behind this study stemmed from a longstanding curiosity about the evolutionary functions of language and music. Both forms of vocal communication use rhythm and pitch, leading researchers to speculate on their possible coevolution. Despite these speculations, there has been a lack of empirical data to determine what similarities and differences exist between music and language on a global scale.

Previous studies have explored neural mechanisms and identified some universal features within music and language, but comparative analyses of their acoustic attributes, especially across diverse cultures, have been limited. The new study aimed to fill that gap by examining the acoustic features of speech and song from various cultural contexts to identify potential universal patterns and unique distinctions.

To achieve this, a diverse team of 75 researchers, representing speakers of 55 languages from across Asia, Africa, the Americas, Europe, and the Pacific, was assembled. These researchers included experts in ethnomusicology, music psychology, linguistics, and evolutionary biology. Each participant recorded themselves performing four types of vocalizations: singing a traditional song, reciting the song’s lyrics, describing the song verbally, and performing the song instrumentally.

One of the most striking results was the consistent use of higher pitches in songs compared to speech. This pattern was observed across all cultures studied, suggesting that higher pitch is a universal characteristic of musical vocalization.

Additionally, songs were found to have a slower temporal rate than speech. This slower pace may facilitate synchronization and social bonding, which are essential functions of music in many cultural contexts.

Another significant finding was the greater pitch stability observed in songs compared to speech. Stable pitches are a hallmark of music, and this consistency likely aids in harmonization and the creation of melodious sequences.

Interestingly, speech and song displayed similar timbral brightness, indicating shared vocal mechanisms, and also showed no significant difference in pitch interval size. These similarities suggest that both forms of vocalization use pitch in comparable ways, despite their different communicative purposes.

However, the study’s initial hypothesis that pitch declination would differ significantly between speech and song was not supported. This feature, the gradual downward drift of pitch over the course of an utterance, did not vary significantly across the vocalizations, indicating that both forms might use pitch declination in similar ways, contrary to what was previously thought.
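To make these acoustic features concrete, here is a minimal sketch of how pitch height, pitch stability, and declination could be summarized from a pitch contour. The measure definitions (median pitch in semitones for height, mean absolute frame-to-frame change for stability, a linear slope for declination) are illustrative simplifications chosen for this sketch, not the paper's exact formulas, and the contours are synthetic.

```python
import numpy as np

def hz_to_semitones(f0_hz, ref_hz=55.0):
    """Convert a pitch contour in Hz to semitones relative to ref_hz."""
    return 12.0 * np.log2(np.asarray(f0_hz) / ref_hz)

def describe_contour(f0_hz, frame_rate_hz=100.0):
    """Summarize a voiced pitch contour with simplified stand-ins for
    three features compared in the study: pitch height, pitch
    (in)stability, and an overall declination slope."""
    st = hz_to_semitones(f0_hz)
    t = np.arange(len(st)) / frame_rate_hz            # time in seconds
    height = np.median(st)                            # pitch height (semitones)
    # Instability: average frame-to-frame pitch movement; smaller = more stable.
    instability = np.mean(np.abs(np.diff(st))) * frame_rate_hz  # st/s
    # Declination: slope of a straight-line fit through the contour.
    slope = np.polyfit(t, st, 1)[0]                   # semitones per second
    return {"height_st": height,
            "instability_st_per_s": instability,
            "declination_st_per_s": slope}

# Synthetic examples: a "song-like" contour holding two steady notes,
# and a "speech-like" contour that wanders and drifts downward.
song = np.concatenate([np.full(100, 220.0), np.full(100, 247.0)])
rng = np.random.default_rng(0)
speech = 180.0 * 2 ** ((-2.0 * np.arange(200) / 200
                        + rng.normal(0.0, 0.3, 200)) / 12)

song_stats = describe_contour(song)
speech_stats = describe_contour(speech)
print(song_stats)
print(speech_stats)
```

On these synthetic contours, the song-like signal comes out higher-pitched and far more stable than the speech-like one, mirroring the direction of the study's findings; the declination slope, by contrast, depends entirely on how each contour was constructed.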

The study provides “strong evidence for cross-cultural regularities,” according to senior author Patrick Savage, the Director of the CompMusic Lab at the University of Auckland.

The similarities in timbral brightness and pitch interval size suggest underlying constraints on vocalization that apply broadly to both speech and music. On the other hand, the differences in pitch height, temporal rate, and pitch stability highlight the unique characteristics of musical vocalization, which may have evolved to fulfill specific social and communicative functions distinct from those of speech.

Savage suggested that songs are more predictably regular than speech because they serve to facilitate social bonding. This regularity in rhythm and pitch likely helps individuals harmonize and connect with one another. “Slow, regular, predictable melodies make it easier for us to sing together in large groups,” he explained. “We’re trying to shed light on the cultural and biological evolution of two systems that make us human: music and language.”

In addition to their original recordings, the researchers analyzed an alternative dataset consisting of 418 previously published recordings of adult-directed songs and speech. These recordings were collected from 209 individuals who spoke 16 different languages. This additional dataset provided a valuable opportunity to validate the study’s findings and explore whether the observed patterns held true across an even broader range of languages and cultural contexts.

The analysis of the alternative dataset confirmed many of the key findings from the original recordings. Similar to the primary dataset, songs in this collection generally used higher pitches, were slower, and exhibited more stable pitches than speech. These consistent results across two independent datasets reinforce the conclusion that these acoustic features are robust indicators of the differences between song and speech globally.

The study, “Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report,” was authored by Yuto Ozaki, Adam Tierney, Peter Q. Pfordresher, John M. McBride, Emmanouil Benetos, Polina Proutskova, Gakuto Chiba, Fang Liu, Nori Jacoby, Suzanne C. Purdy, Patricia Opondo, W. Tecumseh Fitch, Shantala Hegde, Martín Rocamora, Rob Thorne, Florence Nweke, Dhwani P. Sadaphal, Parimal M. Sadaphal, Shafagh Hadavi, Shinya Fujii, Sangbuem Choo, Marin Naruse, Utae Ehara, Latyr Sy, Mark Lenini Parselelo, Manuel Anglada-Tort, Niels Chr. Hansen, Felix Haiduk, Ulvhild Færøvik, Violeta Magalhães, Wojciech Krzyżanowski, Olena Shcherbakova, Diana Hereld, Brenda Suyanne Barbosa, Marco Antonio Correa Varella, Mark van Tongeren, Polina Dessiatnitchenko, Su Zar Zar, Iyadh El Kahla, Olcay Muslu, Jakelin Troy, Teona Lomsadze, Dilyana Kurdova, Cristiano Tsope, Daniel Fredriksson, Aleksandar Arabadjiev, Jehoshaphat Philip Sarbah, Adwoa Arhine, Tadhg Ó Meachair, Javier Silva-Zurita, Ignacio Soto-Silva, Neddiel Elcie Muñoz Millalonco, Rytis Ambrazevičius, Psyche Loui, Andrea Ravignani, Yannick Jadoul, Pauline Larrouy-Maestri, Camila Bruder, Tutushamum Puri Teyxokawa, Urise Kuikuro, Rogerdison Natsitsabui, Nerea Bello Sagarzazu, Limor Raviv, Minyu Zeng, Shahaboddin Dabaghi Varnosfaderani, Juan Sebastián Gómez-Cañón, Kayla Kolff, Christina Vanden Bosch der Nederlanden, Meyha Chhatwal, Ryan Mark David, I. Putu Gede Setiawan, Great Lekakul, Vanessa Nina Borsan, Nozuko Nguqu, and Patrick E. Savage.