The Arts and Sciences used to share much of the same intellectual space. Only recently have they diverged to the degree that they seem diametrically opposed. The Exchange is our attempt to rekindle some of the dialogue that occurred between the two fields.
Dr. Juanita Marchand Knight is a researcher at McGill University’s Schulich School of Music. Her interests include Voice Science; Auditory Perception; Multimedia Composition and Sound Design; Gender Theory, Perception, and Semiotics; and Critical Disability Theory. She is also an accomplished soprano.
Bee Bee Sea is a punk band from Italy whose hyper-kinetic songs launch at full speed and only slow when they smash into the final notes of their instruments. NPR Music referred to the three-piece in glowing terms, declaring “Bee Bee Sea makes a kind of revved-up garage-punk leaked from the mutant brains of Thee Oh Sees and Black Lips — bands with which the Northern Italian trio has toured — with a little bit of The Cramps’ psycho-surf and Can’s hypno-funk.”
High praise indeed.
Giacomo from the Bee Bee Sea: The question might be very simple but, as a band, we’ve always been very interested in learning from the great songwriters of the past (and present) and trying to walk our own path in terms of composition. We are a punk band, but we’re very interested in pop music. Minor chords are always related to sadness and melancholia, while happy songs employ major chords (at least in Western music). We don’t have many happy songs that use minor chords, and vice versa.
Do you think this has more to do with chemistry and synapses in the brains of listeners and composers, or might it be related to culture and tradition?
Dr. Juanita Marchand Knight: Verily, this question is far from simple, and there is some disagreement within the very broad field of music research. ‘Music and emotion’ is the subject of countless papers, studies, and ruminations. My initial reaction is that the major-happy/minor-sad association has more to do with culture, exposure, and training than with anything biological or innate. We are taught this association in our elementary music classes and the idea stays with us. As we advance further, we might even learn that diminished chords sound scary.
In real songs, though, it’s just not so cut and dried! To begin with, in major keys, we only have three major chords to work with (I, IV, and V, which in the key of C-major would be C, F, and G). Harmony can become pretty ambiguous when we start doing things like playing with voicings by inverting chords, leaving some notes out, or doubling notes we don’t normally emphasize. Many pop songs also love to visit the vi chord, using the progression I-vi-IV-V-I (vi being an A-minor chord in the key of C-major), like “Every Breath You Take” by the Police, “Stand By Me” by Ben E. King, and the classic “Heart and Soul” which every kid loves to learn on the piano.
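For the curious, the harmony facts above can be checked numerically. This is just an illustrative sketch (the helper names are ours, not from the article): it stacks thirds within a major scale to build the seven diatonic triads and labels each one’s quality from the semitone spans root→third and root→fifth.

```python
# Sketch: build the seven diatonic triads of a major key and label their
# quality (4+7 semitones = major, 3+7 = minor, 3+6 = diminished).
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def diatonic_triads(tonic=0):
    triads = []
    for degree in range(7):
        # Stack thirds within the scale: root, third, fifth.
        offsets = []
        for step in (0, 2, 4):
            idx = degree + step
            offsets.append(MAJOR_SCALE[idx % 7] + 12 * (idx // 7))
        root, third, fifth = offsets
        if (third - root, fifth - root) == (4, 7):
            quality = "major"
        elif (third - root, fifth - root) == (3, 7):
            quality = "minor"
        else:
            quality = "diminished"
        triads.append((NOTE_NAMES[(tonic + root) % 12], quality))
    return triads

# In C major, only I, IV, and V (C, F, G) come out major; vi is A minor.
print(diatonic_triads(0))
```

Running it confirms the point: of the seven chords native to C major, exactly three are major, three (including the much-visited vi, A minor) are minor, and one is diminished.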
As I see it, we have two distinct problems to disentangle in this question.
1 – Do we truly associate minor keys/chords with sadness and major keys/chords with happiness in Western musical traditions?
2 – If so, is this a cultural/learned/semiotic phenomenon or is it a biological/chemical/intuitive thing?
To tackle the first issue, let’s start by looking at some Western, English-language pop songs. Several sad songs like “Someone Like You” by Adele, “Everybody Hurts” by REM, “In Between Days” by The Cure, and “The First Cut is The Deepest” by Cat Stevens are in major keys. “Four Out of Five” by Arctic Monkeys, “Party Rock Anthem” by LMFAO, “Just Dance” by Lady Gaga, and “Happy” by Pharrell are examples of happy songs in minor keys. Let’s also consider Western music that falls outside of this major/minor debate: early music composed before Jean-Philippe Rameau and company began codifying harmony in the 18th century shouldn’t necessarily be thought of as major or minor, and some newer classical and electronic music might not be in any key at all, but this doesn’t mean it is entirely without affect.
It turns out that our perception of emotion in music is not based solely on harmonic content. Texture, timbre, meter, tempo, melodic contour, lyrics, unique features of a given performance, and even sight and touch/haptics, all contribute to our perception of emotion in music. While researchers attempt to isolate elements of music and get to the root of this question, I don’t think it’s really possible to say that most listeners listening to most pop songs are able to decouple this stuff in the moment (tempo from key from lyrics, and so on). When we hear “Everybody Hurts” by REM, we know it’s sad, even though it’s in a major key, because it’s slow, the melody weeps in big leaps followed by descending steps, the text is fragmented, the accompaniment is arpeggiated, and it’s in a relaxed 6/8 meter which is often used for sad songs.

Just to make this even more of a grey area, there are plenty of examples in Western music of sad and melancholy songs in minor keys (or nearly minor modes), such as “Nights in White Satin” by The Moody Blues and “House of the Rising Sun” by The Animals. Some theorize that the instability of the minor third, and the similarity it seems to bear to the flatter speech patterns of sad people, contribute to our perception of minor keys/chords as sad. So, we aren’t wrong to associate minor chords and keys with sadness; it’s just that this isn’t always the case in Western pop. Basically, lots of different things can add up to make a song sound sad.
Now, on to the second issue: Is this a cultural thing?
Kind of! Other cultures don’t share the same scales, keys, or chords as Western music. This is similar to the way different language groups use different alphabets or characters and have different syntax. We can approximate Russian or Mandarin words using the same alphabet we use for English but some of the sounds are impossible for us to accurately spell due to the limitations of our letters and sound library. In fact, because of what we have grown up hearing and saying, we (as English speakers) may not even be able to physically reproduce certain sounds from other languages. It’s the same with music. Many Western musicians cannot sing in quarter tones, which are commonly used in Eastern musics, because we either cannot hear them at all, or we actually perceive them as flat or sharp semitones. We can’t really analyze the music of other cultures by Western standards. Those standards simply do not apply (and vice versa).
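The quarter-tone point can be made concrete with a little arithmetic. In standard Western equal temperament, each of the 12 semitones in an octave multiplies frequency by 2^(1/12); a quarter tone splits that step again, multiplying by 2^(1/24), which is why it lands “between the cracks” of a piano. This is a sketch with a hypothetical helper, assuming A4 = 440 Hz:

```python
# Sketch: frequencies in n-divisions-per-octave equal temperament.
def equal_tempered(base_hz, steps, divisions=12):
    """Frequency `steps` steps above base_hz, with `divisions` steps per octave."""
    return base_hz * 2 ** (steps / divisions)

a4 = 440.0
semitone_up = equal_tempered(a4, 1)        # A#4 in 12-tone equal temperament
quarter_up = equal_tempered(a4, 1, 24)     # a quarter tone above A4
print(round(semitone_up, 2), round(quarter_up, 2))
```

The quarter tone above A4 comes out around 452.9 Hz, roughly halfway (in pitch terms) between A4 at 440 Hz and A#4 at about 466.2 Hz: a note that has no key of its own on Western instruments, which is part of why Western-trained ears tend to hear it as a flat or sharp semitone.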
This being said, cross-cultural experiments often find, even when the tuning system is totally different, that participants can identify the emotional affect in the musics of other cultures, though not as well as they can identify the emotional affects in the music of their own culture. The belief is that even when the harmonic and melodic content is pretty foreign, the other factors mentioned above (tempo, texture, performance, etc.) can still signal joy, sadness, or fear. Whether the music actually moves cross-cultural listeners is another story – but they can at least identify the intent of the composer or, more accurately, the performer(s). In this sense, different musics are more universal than spoken languages from different language groups but we probably lose bits of the meaning when we move from place to place or style to style. This would be like a native Polish speaker understanding a lot of what a Russian speaker is saying or like what happens when two English speakers from different regions have a conversation that mostly makes sense – until it doesn’t (think of the word ‘boot’ which in Canada typically just means footwear, but in England might mean the trunk of the car). We miss some of the nuance, but we get the gist.
Dr. Juanita Marchand Knight: Throughout Western music history, songwriters and composers have taken different approaches to infusing music with emotion. In earlier periods, they thought about what would change the mood of the listeners. By the Romantic period, composers were said to be writing to express their own thoughts and points of view, rather than trying to act on the audience. I’ve heard that today, anything goes! I’m curious – when you write songs, do you consciously think about what will make a song sound happy, sad, or angry to others? That is, are you concerned with how listeners might perceive your work and how it will make them feel? Or do you write what you feel, what comes out in the moment, what is in your heart, without consciously trying to change the listener’s mood or perspective?
Giacomo from the Bee Bee Sea: It’s probably a mix of the two things.
Firstly, we write what instinctively comes out according to our mood or the emotions we feel in that particular moment, without caring much about the result, whether it’s a song or a melody or whatever else we’ve created. Once we’ve written the general structure of the song and the lyrics, then it’s time to think about the arrangements that best fit the song in order to consciously shape the track’s mood. There are many components that can influence the perception of music. In our case, perhaps the most important are instruments, tempo, and effects. The possible combinations of these elements are almost infinite.
The perception of these elements is very personal. Listeners’ musical backgrounds, their state of mind at that precise moment, and many other variables can make us feel different sensations all the time. I guess that the more a song can deliver the same feeling to all its listeners, the better produced it is.