WHY EVEN ARE CHORDS – PART 1
Chords are everywhere – even where they shouldn’t be, historically speaking.
The concept of a “chord” grew out of European polyphony. But chords are now found in the contemporary musics of the Arab sphere, China, India, Japan – places once defined by deeply sophisticated monophonic music. Chords were also integrated into the historical polyphonic traditions of Polynesia, the Andes, and a large group of styles throughout sub-Saharan Africa.
You can hear, below, some examples of chords’ current broad use:
(Contemporary, but traditionalistic, koto and shakuhachi composition. Among coloristic sonorities, many triads are also found. The role of the koto as an accompanying instrument is ancient, but its role as a harmony instrument is fairly new.)
(An Egyptian pop song, although interestingly, it also features an Indian guest artist singing in Hindi. Triads are absolutely integral to the texture of this piece, although the melody could also be interpreted as being in maqam Nahawand.)
(An entertaining performance! Start at 5:08 for a clear example of how a very old native Hawaiian melodic technique was adapted into a European tonal framework. I quote: “A similar break between registers called ha'iha'i is used as an ornament in some traditional chanting styles.” - Na Leo Hawai’i, Smithsonian Folkways Recordings)
So, although polyphony itself was emphatically not a Western invention – singing on multiple pitches or melodic lines simultaneously is a phenomenon that has arisen independently literally dozens of times across the world – chords more or less were. Why were chords so easy to adopt, such that they are now ubiquitous?
WHAT IS A CHORD
To answer that, we need to first figure out what a chord is. “If polyphony arose dozens of times,” you may ask, “doesn’t that mean that chords also did?” And you’re right to question me: by “chord,” I actually mean something more specific than “a moment in time when multiple pitches are heard.” The latter did arise, tautologically, every time polyphony occurred.
By “chord,” what I really mean is this: A unit of harmony, consisting of more than one simultaneous pitch, that has an identity independent of the melodic lines that may give rise to it.
(This is what we all were taught when we were tiny music theory babies, but it’s a powerful concept.)
WHEN DID CHORDS ARISE
Zarlino, in 1558, published a treatise which included a brief discussion of triads. He is generally recognized as the first theorist to do so. Chords were sufficiently ingrained in common practice by 1596 that Joan Carles Amat published a catalogue of “natural” (i.e. root position) chord fingerings for guitar in that year, giving them shorthand names rather than spelling all the pitches out on a staff.
An entertaining aside, related to Amat's early chord dictionary: Praetorius complained in 1614 about "the charlatans and saltimbanques" who use the guitar for strumming, "to which they sing... foolish folk songs." Or take Sebastian de Covarrubias, who said "The guitar is no more than a cowbell: so easy to play, especially strummed, that there is not a stable lad who is not a musician on the guitar." (Quotes from Graham Wade, A Concise History of the Classic Guitar.)
As a guitarist, I take some pride in these Wonderwall-players of yesteryear.
Anyway, theorists didn’t really articulate a good concept of “chord,” including inversions as we know them now, until around 1608 with Otto Siegfried Harnisch. That’s when the idea first gained currency that a chord is a stack of thirds, and that these could appear in various vertical orders and still be the same thing. (Although that last bit wasn’t applied in all cases until Rameau in 1722, with his system of harmonically defined root relationships. That’s a long time to figure this concept out!)
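(For the programmers in the audience: here’s a little Python sketch of that idea – chord identity surviving vertical reordering – expressed in modern pitch-class terms, which are my anachronistic gloss, not Harnisch’s or Rameau’s own formalism. The `triad_root` helper is purely my own illustration.)

```python
def pitch_classes(midi_notes):
    """Reduce MIDI note numbers to a sorted set of pitch classes
    (octave-equivalent, so inversions collapse to the same set)."""
    return sorted({n % 12 for n in midi_notes})

def triad_root(midi_notes):
    """Return the pitch class of the root if the notes form a major or
    minor triad in any inversion; otherwise return None."""
    pcs = pitch_classes(midi_notes)
    if len(pcs) != 3:
        return None
    for root in pcs:
        intervals = sorted((pc - root) % 12 for pc in pcs)
        if intervals in ([0, 4, 7], [0, 3, 7]):  # major / minor stacked thirds
            return root
    return None

# C major in root position, first inversion, second inversion –
# the root comes out the same (pitch class 0 = C) every time:
print(triad_root([60, 64, 67]))  # C-E-G  → 0
print(triad_root([64, 67, 72]))  # E-G-C  → 0
print(triad_root([67, 72, 76]))  # G-C-E  → 0
```

The point of the sketch is exactly Rameau’s: once you reduce to pitch classes, the vertical ordering stops mattering, and all three voicings name the same harmonic object.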
HOW DID CHORDS ARISE
Before the late Renaissance, moments of harmony during counterpoint were taken on a case-by-case basis, with each different possible interval stack getting individual attention and treatment by theorists. For a lovely introduction to medieval European polyphonic practice through this lens, see Margo Schulter, “Were there triads in medieval music?”
To paraphrase Schulter, it’s important to note that the tuning systems in use during the Middle Ages produced thirds that were significantly more “dissonant” than the ones we’re used to. This is one reason that thirds weren’t used as stable harmonies, in deference to the fifth and octave – and the fourth, in the earliest European polyphony.
(About which, more later, in a different blog post. I love fourths, and I’m fascinated by the way they developed from a consonance into a “dissonance” in tonal writing.)
As the prevalent tuning system (in churches and courts – we don’t know a lot about folk tunings in this era) gradually went from Pythagorean to meantone, thirds gained prominence and stability, and eventually ubiquity in European music. This makes sense: Pythagorean temperament is based on a chain of perfect fifths, which led to the aforementioned rough-sounding thirds; whereas meantone is a compromise, which led to sweeter thirds at the cost of a notoriously sour “wolf” fifth that was simply avoided.
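To put rough numbers on “rougher” versus “sweeter,” here’s a quick back-of-the-envelope calculation in Python. The acoustics are standard; the code itself is just my illustration. In quarter-comma meantone, the fifths are narrowed precisely so that four of them stack up to a pure 5/4 third.

```python
from math import log2

def cents(ratio):
    """Size of an interval ratio in cents (1200 cents per octave)."""
    return 1200 * log2(ratio)

# Pythagorean major third: four pure 3/2 fifths up, two octaves down = 81/64.
pythagorean_third = cents((3 / 2) ** 4 / 4)

# Just major third: the pure 5/4 that quarter-comma meantone recovers.
just_third = cents(5 / 4)

print(round(pythagorean_third, 1))  # ≈ 407.8 cents
print(round(just_third, 1))         # ≈ 386.3 cents
```

That ~22-cent gap (a syntonic comma) is audible as beating, which is the “dissonance” Schulter is talking about.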
After the Renaissance, though, a critical shift occurred. In my mind, it is the critical shift in European musical thinking. That is: the rise of the galant style, and the culmination of the idea inherent in basso continuo.
THE CRITICAL SHIFT: GALANT STYLE
In galant style, pieces were characterized by a simpler, cantabile style of melody; a slower harmonic rhythm (i.e. less frequent changes of background harmony); and a clearer distinction between melody and accompaniment. Complex polyphony was no longer desired.
(Listen to this lovely interpretation, and observe how strictly divided the roles are, compared to earlier music!)
The ideal of listening to multiple simultaneous and independent melodies – a real feat, honestly, for most listeners – gave way to this simpler kind of music. Bach disdained it, and wrote in an “old-fashioned” polyphonic style, but galant is the style that gave rise to normative practice in the West, up to and including modern pop music.
In short: you have your melody; you have your bass; and in between, you have your chords. It is the natural conclusion of basso continuo, which – incidentally – first developed within a few decades of 1600, roughly contemporaneous with the emerging concept of chords.
Basso continuo originated as a supporting framework for complex polyphonic works, so that the singers all had something to hang on to. A composer would analyze/condense the prevailing polyphony into a numerical shorthand, written above a fixed bass melody. A keyboardist or lutenist would then improvise a realization, which was not expected to exactly duplicate every voice in the polyphony, but rather to support the harmony at any given moment – keeping the singers aligned and in tune, especially in multi-choir situations.
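(If you like, you can see the core logic of that numerical shorthand in a toy Python sketch – heavily simplified, since real continuo realization also involves accidentals, voice leading, register, and taste. The `realize` function and the fixed C-major scale are my own illustrative assumptions, not a historical method.)

```python
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C major scale

def realize(bass_midi, figures=(5, 3)):
    """Stack the figured (diatonic) intervals above a bass note.
    An unfigured bass implies 5/3, i.e. a root-position triad.
    Assumes the bass note belongs to C major."""
    degree = C_MAJOR.index(bass_midi % 12)      # scale position of the bass
    notes = [bass_midi]
    for fig in figures:
        steps = fig - 1                         # a "3rd" spans two scale steps
        pc = C_MAJOR[(degree + steps) % 7]
        octave_shift = (degree + steps) // 7
        notes.append(bass_midi - (bass_midi % 12) + pc + 12 * octave_shift)
    return sorted(notes)

print(realize(48))          # C bass, implied 5/3: C-E-G  → [48, 52, 55]
print(realize(52, (6, 3)))  # E bass, figure 6: E-G-C     → [52, 55, 60]
```

Notice what the shorthand does and doesn’t specify: it names intervals above the bass, but it never names the inner-voice melodies those intervals came from. That omission is exactly where a chord’s independent identity gets its foothold.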
As a sole accompaniment, outside of any properly polyphonic texture, basso continuo appeared throughout the 1600s – especially in lightly-accompanied recitativo contexts. But by the time galant came into vogue (1720-1760), these shorthand harmonies had grown their own independent identities. The transformation was complete.
They no longer needed specific inner-voice melody lines to generate them, even in courtly genres. A chord had finally become only a chord.
A “TIMBRAL UNIT”
After Rameau (1722), chord theory developed in various ways according to the perceived “function” of major and minor triads built on various degrees of a scale. But in this blog post, I’m not concerned with functional harmony – except to mention that the prevailing system of tonal analysis that’s taught in the United States, i.e. figured Roman numerals with an emphasis on circle-of-fifths root progressions, was conceived by none other than Arnold Schoenberg!
My primary interest right now is in the independent identity of chords, the classic “more than the sum of their parts” situation. There are tantalizing passages in the literature, starting with Carl Stumpf around 1890 (“fused intervals”) and continuing through Imre Lahdelma in 2017 (perceived emotional qualities of single chords), that hint at the experience of chords as integral perceptual units – as complex "tones" with a signature timbre, rather than complexes of tones. In this conception, naive listeners must learn to separate chords into their component pitches, and until this learning happens, a chord is an indivisible aural event.
I haven’t done my entire literature review on this little facet yet, but I can offer two anecdotes in support of this hypothesis.
First, who doesn’t remember first-year ear training? All of us either experienced firsthand – or saw our young colleagues experience – the struggle of discerning the alto and tenor lines in an SATB dictation exercise. Even though we were all sophisticated enough musicians to be in a college-level music theory course, it took intensive additional training to hear all the component tones of a chord accurately enough to notate them.
Second, I clearly remember my first exposure to the idea of major and minor chords. I was in sixth grade choir, and the director played a massive major chord across the whole piano with the pedal down; then did the same thing with the parallel minor chord. I was absolutely floored – I had no idea, until then, that music could so intentionally and elementally map to “happy” or “sad” (because of course that’s how the director presented it). But more importantly, I was floored by the subtlety of the difference between major and minor. I knew there was an essential change, but at that time, I didn’t know enough to understand what had changed. Therefore, my impression was of the entire chord – even though it had been rolled up!
To me, this strongly suggests the likely experience of other unsophisticated listeners in a Western-tonal musical culture.
(Of course, this all starts to break down when we get into arpeggiated accompaniment figures with short decay times. But then, maybe simultaneous chords and dry arpeggiated chords are not perceptually equivalent, even though they are functionally equivalent. Maybe they are distinct, to the naive listener, in ways that are less relevant to the trained musician.
In a suggestive parallel, Lahdelma’s work shows that inverted chords are not equivalent to root position chords in their affective qualities – and this holds true for both musicians and nonmusicians! So perhaps equivalence in a theoretical system of harmony is not a great predictor of perceptual equivalence.)
LANGUAGE PARALLEL: TRANSFER OF CONTRAST
Let’s assume that a chord really is a timbral unit. If so, that would be an example of a change in musical "grammar," which parallels a certain kind of change that occurs in language: the transfer of contrast.
For an illustration, let’s take the development of tone in Chinese languages. In modern Mandarin Chinese, the word 四 (“four”) is pronounced sì, with falling pitch. The word 死 (“death”) is pronounced sǐ, with falling and then rising pitch.
In Old Chinese, 四 has been reconstructed with the pronunciation *hljids, and 死 as *hlji’ (ending in a glottal stop). Old Chinese words, like modern English words, had no lexically defined pitch.
In every language, different ending consonants exert a subtle acoustic effect on the preceding vowel – including, for our purposes right now, an effect on its pitch contour. This is an unconscious phenomenon, and it usually goes unnoticed in English because it is not relevant to distinguishing one word from another.
However, as time progressed, consonant clusters in Old Chinese began to simplify (hence *hl→s) – and eventually, most ending consonants were deleted entirely. But their pitch effect remained. Thus, differing vowel pitch – previously only incidental – replaced final consonants as the salient feature for distinguishing pairs like 四 and 死!
I believe chords can be interpreted the same way in music. They began as incidental moments of harmony between multiple simultaneous voices – but they are now perceptual entities in their own right, with distinctive emotional qualities!
BUT WHY WERE CHORDS ADOPTED WORLDWIDE, ALREADY?
I hate to end on a cliffhanger.
However, this blog post grew TOO long, so tune in next week for the thrilling conclusion! Happy New Year.