Stress and Parody

December 17th, 2011

I’m teaching a songwriting class next semester, and I’ve been putting together the course packet.  Here’s what I have to say about one of my favorite musicians, “Weird Al” Yankovic:

When it comes time to set text to music, you should try to have the stressed words or syllables of your lyrics line up with the rhythmic and metrical stresses of your music.  In technical terms, find the stressed syllable in a word or the most important word in a group of short words, and put it on the beat.  More simply, when your song sounds awkward, or words sound like they don’t fit with the music, try moving them earlier or later, or try putting more or fewer in before you change the chord.

A poem that is a great example of this is Clement Moore’s A Visit From St. Nicholas.  Moore’s poem begins:

‘Twas the night before Christmas,
And all through the house,
Not a creature was stirring,
Not even a mouse.

This is light poetry of the early 19th century, published just as Christmas was beginning to become the central holiday in the United States.  It could quite conceivably be set to music, much more believably so than the e.e. cummings poem cited earlier.  In fact, several successful popular-song versions of the poem have been created over the years, attesting to its suitability, but even in simply speaking the lines above, a clear poetic meter is established, and the poem displays a relatively strict rhythm:

‘Twas the night before Christmas
And all through the house,
Not a creature was stirring,
Not even a mouse.

Or, to really highlight the “two-three-ONE-two-three-ONE-two-three” pattern of stress in these words:

           ‘Twas the
night before
Christmas and
all through the
house, not a
creature was
stirring, not
even a
mouse.

Almost every Christmas, though, some coworker, family member or friend sends a parody version of the poem, with the words altered to include the names of acquaintances, humorous events from the past year or the like.  As well-meaning and fun as these parodies are, they almost invariably create disruptions in the pattern of vocal stress that is part of what makes the original so successful, and to a musician, they seem particularly forced and ungraceful—a really great parody would maintain the stress patterns perfectly, and the fact that most parodies aren’t successful in this regard points out just how difficult this is.

One artist who succeeds consistently in this respect is “Weird Al” Yankovic, who for thirty years has been creating parodies of popular songs that resonate extraordinarily well with the originals, often preserving not only stress patterns but complex rhyme schemes while also succeeding as humor of varying degrees of sophistication.  As an example, take Yankovic’s early “Another One Rides the Bus,” a parody of John Deacon and Queen’s “Another One Bites the Dust.”

First, Deacon’s original lyrics for the first two verses:

Steve walks warily down the street
With the brim pulled way down low
Ain’t no sound but the sound of his feet
Machine guns ready to go

Are you ready, hey, are you ready for this
Are you hangin’ on the edge of your seat
Out of the doorway the bullets rip
To the sound of the beat – yeah

Now Yankovic’s parody of the same material:

Riding in the bus down the boulevard
And the place was pretty packed
Couldn’t find a seat so I had to stand
With the perverts in the back

It was smelling like a locker room
There was junk all over the floor
We’re already packed in like sardines
But we’re stopping to pick up more, look out

One of Yankovic’s first songs, this parody didn’t benefit from the full arrangements and studio production values of his later work.  It survived only on the success of Queen’s original and Yankovic’s ability to create a version that played on Queen’s bombast, ambiguous meaning and the strange mock-seriousness of a British rock band singing about some sort of urban warfare.  Yankovic maintained the stress patterns of the original, as well as its rhyme scheme (although not the rhymes themselves, as he would in some later work), but his choice of individual words is often parodistic, too—the gritty street becomes the more urbane and more relatable boulevard, where an American would likely find a city bus.  The deadly machine guns become perverts, and the bullets become sardines—things which are unpleasant, but not as surely lethal as the images in Queen’s original.  In the last line of the example, Yankovic adheres more closely to the stress patterns than the original does when he adds the syllables But we’re stopping, which have analogues in other verses but don’t appear in the second verse of Queen’s song.  To truly understand the role of stress in songwriting, you should undertake a survey of Yankovic’s output in comparison with the originals on which it is based.

Software Worries and Creative Comfort

November 1st, 2011

Like many composers, I rely (rather heavily) on a computer notation program to do the heavy lifting required when revising, editing and polishing my music, and also to create individual parts from scores.  The program I have used for the last decade, Sibelius, recently came out with a new version, the first since the company founded by the original designers of the software was bought out by a larger firm, Avid.  A perhaps-ill-conceived post on Facebook (I try not to be negative on Facebook) has led me to an exchange of concerns about the upgrade with Jesse Ayers, a fellow composer on the faculty of Malone College in Canton, Ohio.  Jesse and I had met previously at conferences but hadn’t really gotten to know each other, yet somehow I found myself sending this rather personal email, and I’d like to make it an open letter:  It starts out being about Sibelius and ends up being about my art and my understanding of myself.

Dear Jesse,

Linked divisi parts are a problem, and I have never liked the methods for inputting piano pedaling… I’ve suggested a solution for that, but it hasn’t been adopted yet.  Of course, I’ve learned to deal with both, and countless other quirks (so much so that I’m always surprised how many things I don’t even think about when I have to help my composition and orchestration students make their scores look presentable).  I dread the thought of changing to another program, but at some point, I’m sure that Sibelius will have run its course and we’ll all be switching over to the next thing.

I’m at a funny age–people a few years older than me have a devil of a time with anything to do with computers, but people a few years younger than me never knew anything different–my first year of college was the same year the World Wide Web debuted; I didn’t know what email was my first term, but by Christmas, I couldn’t live without it.  In composition, it’s the same: Sibelius has become a second language to me, and I wouldn’t dream of trying to compose a major piece without it, but folks just a few years older than me completed their master’s theses in manuscript.  My first experience with notation software was with Encore on Macintosh in the early 90s, and I took away the notion that it was more trouble than it was worth and spent several years learning to write manuscript, which I think, in the end, was good experience, but after I graduated from college and got my first computer, it wasn’t long before I wanted a notation program.  I fiddled around with NoteWorthy Composer for a while, and was able to make some readable but pretty cruddy-looking scores.  In late 1998, though, Sibelius came out, and I was one of the first thousand people in the US to buy it.  I read the manual cover-to-cover (a much more reasonable proposition then!) and dove in.  I was teaching middle school band at the time, and having a terrible time of it… so bad that I was looking at law schools, but having an outlet in my arranging and composition probably saved me for music (for better or for worse!).

Sibelius is probably the reason that I’m a composer, although I’m loath to admit that to anyone.  Just as I wouldn’t have even attempted to write the book I just finished without a word processor, I couldn’t possibly have become serious about composing without help from the computer.  I don’t think I lean on it too much–I do more sitting at the piano than I used to, especially for vocal music–but even if the first draft of a piece is manuscript, the second draft is in Sibelius.  If it goes away or changes into some unrecognizable form, I’m at the point now where I will do what needs to be done, but I will miss it terribly.  As psychopathic as it sounds, its interface has been the most constant thing in my life over the last ten years as I went through divorce, job changes, graduate school, a second marriage, and too many out-of-town moves.  I would miss it like I would a friend–more than some people I have called “friend,” even.  Don’t think I’m strange about this–perhaps you understand what I’m saying–Shakespeare would miss The Globe, Bill Clinton misses the White House, a blinded astronomer misses her observatory.  Sibelius is where I work, and where what I think of as my most meaningful work of the last decade was accomplished (I hope that my students find and found my teaching meaningful, but it isn’t meaningful to me in the same way that my art is meaningful).  I was already worried by the buyout, and yesterday my worries proved correct: I’m accustomed to working with people who view Sibelius the same way I do–as a friend, as a key component of their work.  I’m sure there is some of that at Avid, but Sibelius is not their creation, not in spirit.  I worry that it will become like a superficial film adaptation of a great novel.

Sometimes I worry about stupid things, I guess.  But this is the problem that we all face as artists in the 21st century: the means and methods by which we create our art are continually shifting around us.  For all his “agony and ecstasy,” Michelangelo knew that marble was marble and would respond to his chisel in reasonably predictable ways.  Changing Sibelius too drastically would be like substituting a new, better, synthetic marble and still expecting David to appear.  Perhaps this is what his “agony and ecstasy” were about–the Sistine Ceiling is a masterpiece, but the powers that be forced Michelangelo to work in a way that was more or less foreign to him.  The result was stunning, of course, but a wrenching experience for the artist.

You caught me after band rehearsal, so I apologize for waxing philosophical… someone gets this email just about every week lately!  I’m going to head home to my wife now.  I believe this is going to become a blog post.

Best,

Matt

 

What is an instrument? and Electric and Electronic Instruments

October 25th, 2011

An excerpt from chapter 8 of my book, Music: Notation and Practice in Past and Present, available for course adoption in Spring 2012 from National Social Science Press.

The acoustic phenomena discussed in Chapter 7 would exist with or without human (or other) intelligence and culture.  Music, however, would not.  In fact, music may itself be an indicator of intelligence, and has been interpreted as such by some of those who study whale-song or bird calls.  Tellingly, when American space scientists wished to include a message to other intelligent beings on probes leaving the solar system, they went beyond the engraved “license plate” depicting humans and our location in space that was attached to the earlier Pioneer probes: on the Voyager space probes, they used precious space and weight to include recorded music from around the world.

How, then, do acoustical laws meet the mysteries of consciousness and intelligence to allow musical creativity to be made manifest?  The answer is through a musical instrument, which in its broadest definition is an interface between a mind with its musical ideas and the broader physical world.  Almost any object can be used in such a way, of course.  One need only think of an eight-month-old baby slapping the tray of his high chair in a steady pulse, whether to communicate his desire to be fed or for the sheer joy of the resonant thump that results.  At the same time, he may be vocalizing, and within a year will likely be clanging together pots and pans in the kitchen.  Humans are innately musical beings, and we will always find ways to express our fascination with controlled sound.

Many musical instruments are found sounds; that is, an object—a body part, a part of the natural world, or a human-created artifact—is pressed into service as a music-making device.  An example would be the use of the washboard as a percussion instrument in American styles such as zydeco, the folk music of the bayou country of Louisiana.  On the other hand, just as in every other human culture, there is a tendency to create artifacts with the intention of using them in music making.  Frequently, these artifacts are copied and refined over periods of decades and centuries, sometimes achieving a relatively fixed, specialized form.  What follows is a survey of the musical artifacts of Western culture, which includes representatives of every major type of instrument.

Readers born before the year 2000 have the distinction of having lived in a unique century in the musical history of our species, namely, the era in which an entirely new means of making music was invented, developed and brought to mass consciousness.  This new category of instrument is termed the electrophone: an instrument in which the primary source of the sound is the conversion of electrical energy to acoustic energy.

Some ethnomusicologists include instruments that merely rely on amplification of a traditional sound source in the category of electrophones.  For example, an electric guitar is a version of a fretted string instrument (a chordophone) in which the motion of the metal strings agitates a magnetic field, creating an electric current that is transmitted to an amplifier.  The strings themselves do produce a certain amount of sound, although the characteristic sound of the instrument is dependent on the amplification and processing of the electric signal, which is then converted to sound.  Similar electric instruments include the electric piano and electric violin.  By this definition, the electrophones might even include a vocalist singing into a microphone.  Other ethnomusicologists recognize a narrower definition for electrophones, namely, instruments in which the sound is wholly generated by electrical energy rather than by electronic modification of an acoustical source.

The first electrophones fit this narrower definition, and were conceived and created at the beginning of the electric era.  The first electronic instrument is generally understood to be Thaddeus Cahill’s Telharmonium of 1897, an early form of electronic organ in which spinning tone wheels created an oscillating electrical signal that was then transformed to sound using telephone receivers.  Later versions of the Telharmonium were capable of basic additive synthesis, in which signals from two or more oscillators were combined to create more complex sounds.  Because of its massive size—later versions weighed up to 200 tons—the Telharmonium was impractical for all but demonstration purposes.
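The additive principle behind the Telharmonium is easy to sketch in code.  In this Python fragment (an illustration only; the function name and the choice of partial amplitudes are mine, not anything from the book), sine oscillators at whole-number multiples of a fundamental are summed, much as the instrument combined the signals of its spinning tone wheels:

```python
import numpy as np

def additive_tone(fundamental, partial_amps, sr=44100, dur=1.0):
    """Sum sine oscillators at integer multiples of the fundamental --
    a bare-bones form of additive synthesis."""
    t = np.arange(int(sr * dur)) / sr
    return sum(amp * np.sin(2 * np.pi * fundamental * (k + 1) * t)
               for k, amp in enumerate(partial_amps))

# A 220 Hz tone with three partials of decreasing strength
tone = additive_tone(220.0, [1.0, 0.5, 0.25])
```

Adding or rebalancing entries in the amplitude list changes the timbre, which is exactly the kind of control the Telharmonium’s later versions offered.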

Many of the basic concepts of the Telharmonium—its organ-like keyboard interface, its use of additive synthesis and oscillators—were incorporated into later electronic instruments, and the most ubiquitous electronic instruments have been similar organ-like devices, often referred to as synthesizers.  Because of the rapid changes in electronic and computer technology through the 20th century, no single synthesizer has become a standard instrument in the same way that, say, the guitar has become a standard fretted string instrument in Western music.  Some models of synthesizer, however, have become iconic for their appearance in widely-known music.  In the concert hall, French composers of the 1940s and 1950s made effective use of the ondes martenot, a keyboard instrument whose portamento effects can be heard in Olivier Messiaen’s Turangalila-Symphonie, among other works.  The most important and iconic sounds, however, were those produced by the Moog synthesizer, and its smaller successor the Minimoog.  Produced by Dr. Robert Moog and the company that bears his name, these instruments were among the first commercially available synthesizers that could be operated by a musician rather than a technician.  Beginning with the iconic 1968 classical album Switched-On Bach by Wendy (née Walter) Carlos and continuing with the use of the Minimoog by rock musicians of the 1970s such as Kraftwerk, Yes and Tangerine Dream, the sounds of Moog instruments became for most listeners the sound of electronic music.

One of the most interesting early electronic instruments was the theremin, named after its inventor Leon Theremin, who patented the instrument in 1928.  Unlike almost every other instrument, the theremin requires no physical contact from the performer.  Instead, the capacitance of the performer’s hands disturbs the electromagnetic fields generated by two antennas that protrude from the box containing the electronics of the instrument.  One hand affects pitch and the other affects dynamics.  The sound of the theremin is most familiar to modern ears from the Beach Boys’ 1966 song Good Vibrations, although the instrument actually used was a similar device known as the tannerin.  Film composers of the 1950s and 1960s often used the eerie, wobbly sound of the theremin to great effect in science-fiction and suspense-themed scores, including well-known uses of the instrument in Bernard Herrmann’s music for The Day the Earth Stood Still and Miklós Rózsa’s score for Spellbound.

The Moog instruments and similar instruments up to the 1970s can be termed analog synthesizers because they generate tone directly from electrical oscillators.  With the development of microprocessor technology in the 1970s, a shift to digital synthesizers, led by the Yamaha Corporation, began to place electronic music into the hands of the mass market to an even greater extent by the 1980s.  Rather than incorporating a bank of oscillators, a digital synthesizer stores the waveforms of sounds in computer memory in the form of tables of numbers known as wave tables.  A command to the processor to play a note—perhaps by the pressing of a key on a piano-style keyboard—results in the wave table for that note being played back, either in a continuous loop, or with appropriate attack and decay envelopes.  The major development in the 1980s was the sampler, which permitted a user to load their own wave tables with sounds from the outside world—instrumental or vocal sounds, excerpts from recordings, or natural and manmade noises.  Sampling techniques, including the drum machine, a device for creating short loops of samples, usually taken from percussion instruments, had a revolutionary effect on popular music in the last two decades of the 20th century.
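Wave-table playback can likewise be sketched in a few lines.  This Python fragment is a simplified illustration (the table size and names are my own, and a real synthesizer would interpolate between table entries and apply attack and decay envelopes rather than truncating the index):

```python
import numpy as np

TABLE_SIZE = 256
# one cycle of a waveform, stored as a table of numbers
wavetable = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def play_note(freq, sr=44100, dur=0.5):
    """Loop through the wave table at a rate that yields the requested pitch."""
    n = int(sr * dur)
    step = freq * TABLE_SIZE / sr          # table positions advanced per sample
    idx = (np.arange(n) * step).astype(int) % TABLE_SIZE
    return wavetable[idx]

note = play_note(440.0)                    # half a second of A440
```

Replacing the sine table with any recorded waveform is, in essence, what a sampler does.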

Finally, the availability of ever-cheaper, ever-faster computing power led to developments that allow not just the production of digital sound but its recording in real time from multiple sources on home computers.  The first years of the 21st century saw the development of software synthesizers, in which the characteristics of an analog or digital synthesizer are mimicked by a software application, perhaps as a plug-in in a larger sequencing program.  Most importantly, recording and sound synthesis technology that, in the 1960s, would have required the resources of a large corporation or other institution was effectively placed into the hands of millions of amateur and professional musicians.  The resulting revolution in the production and distribution of music is still being felt, not least in the dismantling of the recording industry as it existed as late as the year 2000.

Music Theory, Musical Ideals and Morals

August 20th, 2011

As I began teaching a new crop of Music Theory students this week, I tried, as I generally do, to give some indication of what music theory actually is. On Thursday, the first day of Theory I, my usual description expanded to include some philosophical ideas about music. One goal of the study of music theory should be for students to expand and solidify their personal philosophy of music. I gave as examples the following two ideas that have been kicking around this summer–one highly abstract and metaphysical, and one somewhat more moral in nature.

First, metaphysics. What is a musical composition? Where does the essence of a piece of music reside? For a listener in the 21st century, it may well seem that the actual music is contained in a recording, either in a physical medium or in the data that that medium contains. Most musicians, however, would disagree. For some, the written score would be the ultimate embodiment of a composition, but experience soon tells us that the score is no more a piece of music than a recipe is a meal. For other musicians, then, each and every performance is a separate and distinct musical item. This fails to explain how many separate renditions can be identified as the same piece of music. My experience as a composer is often akin to that of an author who feels the characters in her novel assuming their own destinies and “writing” the ending differently than the author initially imagined. It seems to me that my compositions, once begun with some initial inspiration, unfold in ways that surprise me. It makes little difference whether I begin with a detailed plan or not. Similarly, Michelangelo claimed that his sculptures were already in the stone and he only had to chip away the excess.

To me, this suggests that before I even begin, the composition exists in the form of a Platonic ideal, independent of any work that I will do on it. It exists in an ideal form, and my training allows me to somehow reveal aspects of the ideal, although, because I am a finitely-abled human, I can’t hope to conform to the ideal. My free will as a composer is still there–I can make decisions that impact the way that I will write down the composition, and I can even choose to stop in the middle and leave my work incomplete. The piece exists whether I compose it or not, just as it exists if I write it down but no one performs it.

A more practical problem plagues the full-time musician. Our work may be spiritually uplifting to ourselves and others, but there are problems that music will not solve. Music will not stop global warming or end drug abuse, nor will it cure AIDS or keep children safe from abusive adults. Is there not a moral imperative for intelligent, talented humans to attempt to make the world a better place, to try to solve problems of injustice? Of course, there is. About ten years ago, I attended the Ohio Music Education Association’s annual conference in Columbus, Ohio, and on the last day, my father, who worked downtown, gave me a ride to and from the convention center. As I was waiting for him on High Street, I noticed that the banners for the next event at the convention center had already been put up. The group following the music teachers consisted of reading-recovery specialists–people helping kids gain the skills they will need to survive rather than skills that will merely bring them a little pleasure and win the school a trophy or two. I had a blast of perspective that, I will admit, hurt a little bit. Knowing that there is such suffering in the world–and much worse–is it right that I have devoted my time and energy to music?

The result of these thoughts, for me, is yet another reason to be the best musician that I can. If I am to spend my life doing something other than solving problems that impact our entire species, then whatever I do–whatever we do–the least we can do is do it to the best of our abilities, treating it as if it were as important as the big problems. We may be wrong, but we must not be mediocre.

Aspen Composers Conference

August 4th, 2011

After what seems like years of sweltering heat here in the Oklahoma Panhandle, it was nice to take a few days and visit Aspen, Colorado so that I could present quintuplous meter at the Aspen Composers Conference, where I also performed Twenty Views of the Trombone, my work-in-progress that attempts to explore what it is like to play the trombone.  It seemed like all my college friends headed to Aspen every summer, and now, fifteen years later, I’ve made it there myself.

The drive from Goodwell to Aspen takes about nine hours, and gives one the pleasure of sampling an enormous variety of flora and fauna.  Goodwell, of course, is squarely in the Southern High Plains, and those plains keep getting higher through Cimarron County and into northeastern New Mexico.  The further west you drive, the more old volcanoes like Capulin start to rise from the range, and by the time you are in Raton, there are bona fide mountains.  Then, heading north on I-25, I passed the daily westbound Amtrak train–the Southwest Chief–as I went through Raton Pass and into Colorado.  North of Trinidad, Google instructed me to get off the interstate at Walsenburg, and I headed across more range, but now with the Sangre de Cristo Mountains ahead of me.  Lunch was at the Wildflower Cafe in Gardner, a tiny place with a fantastic burger, and I was on the road again.  I picked up US 50 in Cotopaxi, and followed the Arkansas River and eventually US 24 through Salida and Buena Vista.  US 50 also winds through Cincinnati, Ohio, where I spent my college years, and I feel a special twinge every time I drive on a road that connects me to somewhere I used to know.  It’s sort of like when Matt Specter and I worked at schools that were on opposite ends of Ohio Route 41–Northwestern High School in Springfield and Peebles High School in Peebles–I felt somehow connected even though they were 125 miles or so apart!  The final turn before Aspen was on to Colorado Route 82, a road that closes down in the winter.  I knew that I would at some point go over some mountains, but I wasn’t quite prepared for the switchbacks that my company car and I had to take.  A light drizzle didn’t stop us, though, and we emerged at gorgeous Independence Pass, 12,000 feet above sea level.  It was fascinating to watch scrub give way to glades of aspen trees, which then turned into pine forest, and finally, the pines gave way to tundra, and even a little snow.
After enjoying the breathtaking view of the Pass, which is located on the Continental Divide, I wound my way down into Aspen to find my hotel.  Dinner and some composing in the hotel room, and I was ready for some sleep.

A conference quickly develops its own rhythm as participants stake out their space and figure out how everything works.  The Aspen Composers Conference is organized annually by Natalie Synhaivsky, and allows composers to meet to share their work, opinions and ideas.  In addition to my presentation on quintuplous meter, topics ranged from analyses of works that continue to inspire various composers, to working techniques and philosophical concerns.  Keane Southard’s presentation of Frederic Rzewski’s The People United Will Never Be Defeated! was fantastic and gave me insight into a piece that I first encountered when I was teaching middle-school general music.  The textbook for eighth grade included numerous excerpts from the work, and I’ve decided that it now needs to be on my list of pieces to investigate more fully.  The spectre of Beethoven haunted the room, as not one but two composers chose to address his late music.  Anne Goldberg, a composer and choreographer working in New York City, discussed her approach to collaboration, in which collaborators are given enormous latitude, resulting in a somewhat improvisatory process.  The day ended with a brief recital, and I represented the trombone with six pieces from Twenty Views, including two world premieres, “What it’s not Quite Like,” which explores quintuplous meter, and “What it Will (Not) Be Like,” a twelve-tone piece using a nifty little tone-row that I came up with last month.  I don’t know when Twenty Views will be finished.  I keep adding to it as I can, and as I have need to–it can turn any occasion I have to play into a world premiere at this point.  I’d love to hear any suggestions for titles for new movements.

The drive home was uneventful, but for being held up by a painting crew before I could go back over Independence Pass.  It gave me about an hour to pull out the laptop and work on my current project, a band arrangement of the Prelude to Carmen that we will be playing on our first concert.  Surrounded by aspen trees with the windows down on a mild mountain morning isn’t a bad way to compose.

Tuning and Intonation

July 7th, 2011

An excerpt from Chapter 7 of my book Music: Notation and Practice in Past and Present, now in press with National Social Science Press.

The issue of how to accurately tune a musical instrument, especially one that depends on a number of independent strings, has long been a source of concern for musicians and mathematicians.  One of the earliest documents of Western music is a stone tablet written in Babylonian cuneiform that has been interpreted as a set of directions for the proper tuning of a seven-string harp or lyre.

The question, then, of what tuning is and what it means to be “in tune” is central to the understanding of music and to the training of musicians.  Simply put, two pitches are considered to be in tune if they have the same fundamental frequency, or if their fundamental frequencies are related to each other by a low whole-number ratio.  Because human beings are by nature somewhat imprecise, both in perception and in manipulation of pitch, the practical meaning of being “in tune” is somewhat more complex.  Because various cultures have accepted differing musical intervals as sounding “correct,” intonation for human ears is highly dependent on training and cultural norms, at least beyond the tuning of perfect unisons or octaves.  The practice of accurately producing a desired pitch is known as intonation, and the ability to play with good intonation is a skill that takes many musicians years to master.

A useful approach may be to examine the experience of various musicians with regard to tuning in order to understand what role intonation and tuning play in the practice of music.  To begin, consider a vocalist who may perform with or without accompaniment.  An accomplished singer will have practiced many years to develop his “ear,” that is, his inner sense of intonation, which is usually based on an understanding of scales, keys and the relationships between notes in those keys.  Most musicians do not possess the set of abilities collectively referred to as absolute pitch, also known as perfect pitch.  Absolute pitch is the ability to identify a pitch by note name when it is heard or to accurately produce a requested pitch without reference to a standard.  A great deal of research has been done in this area to determine just how absolute pitch is developed and expressed, and what its limitations are.  It is somewhat ironic that many—if not most—musicians are unable to identify the basic materials of their art without reference to some known standard, as if a painter were able to relate colors to each other within the context of a painting, but unable to identify red as red with certainty without reference to a color wheel.  A singer’s tuning and intonation, then, rely on constant comparison of notes to each other and to outside reference pitches, such as might be found in an instrumental accompaniment.  It is quite possible for a person to possess an exquisite vocal instrument but to utterly lack the ability to control intonation, whether through a lack of training or a simple lack of musical aptitude.[1]  Conversely, a person may possess an inner ear, or ability to audiate musical sound, that is without peer, but be unable to control the vocal apparatus sufficiently to communicate music to others, whether through physiological structure, lack of kinesthetic ability or auditory dysfunction.

A performer on a brass instrument faces a set of dilemmas similar to the vocalist’s, but with the additional challenge of making music using a device outside the body.[2]  The brass player relies on her training and internal ear to audiate pitches prior to playing them, but also on her knowledge of the tendencies of her instrument.  Professional brass players must spend many years becoming comfortable with their personal instrument, as every trumpet, for example, is subtly different from every other trumpet, even those by the same manufacturer.  The intonation tendencies of various pitches and groups of pitches are basic knowledge for a brass player.  When playing in an ensemble setting, in a brass section of an orchestra, perhaps, the brass player will also be aware of the pitches played by other players and adjust her own pitch accordingly.  To play a pitch in tune requires not only that a player anticipate the fine adjustments that will be required, but also that she be able to detect several acoustical phenomena.

For unison and octave pitches, most musicians, including brass players, utilize a technique known as beatless tuning.  When two pitches are very close to one another (or close to being a perfect octave apart), their waveforms drift slowly in and out of alignment, alternately reinforcing and canceling each other and producing periodic fluctuations in loudness known as beats.  The closer two pitches are to each other, the slower these beats sound, and when a player detects beats, she knows to make adjustments in order to eliminate them.

Other intervals require a different approach to intonation.[3]  Beats are one aspect of an acoustical phenomenon known as difference tones.  The frequency in Hertz of the beats between two pitches is the arithmetic difference between the frequencies of the two pitches.  Thus, when a tone at 440 Hz is played simultaneously with a tone at 444 Hz, the beats, or difference tones, will sound at a rate of 4 Hz.  When pitches are in small whole-number ratios to each other, difference tones can become quite prominent, especially to listeners in close proximity to the sound sources.  For example, when two brass players play pitches a major third apart, say, A4 and C#5, and the pitches are in tune with each other, they will produce a tone reflecting the difference between their frequencies, 440 Hz and 550 Hz, respectively: a low additional pitch of 110 Hz, or A2.  Most musicians perceive this as a “buzz” indicating that the chord has “locked in.”  A similar phenomenon that is exploited by musicians is the summation tone, which is heard as a tone with a frequency that is the sum of the two component frequencies.  In the example given above, the summation tone for 440 Hz and 550 Hz would be 990 Hz, or B5.  As intervals are combined into chords, the achievement of summation and difference tones becomes crucial to good intonation.
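The arithmetic in the two paragraphs above is simple enough to state directly.  A minimal sketch, reproducing the text’s own numbers:

```python
def difference_tone(f1_hz, f2_hz):
    """Difference tone (and beat rate) in Hz: the absolute difference of the frequencies."""
    return abs(f1_hz - f2_hz)

def summation_tone(f1_hz, f2_hz):
    """Summation tone in Hz: the sum of the two component frequencies."""
    return f1_hz + f2_hz

# The examples from the text:
print(difference_tone(440, 444))  # 4 Hz, heard as four beats per second
print(difference_tone(440, 550))  # 110 Hz, i.e. A2: the "buzz" under a just major third
print(summation_tone(440, 550))   # 990 Hz, i.e. B5
```

Note that the same subtraction yields both the beat rate between nearly-identical pitches and the difference tone between widely separated ones; the two effects are, as the text says, aspects of one phenomenon.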

At times, passagework on a brass instrument becomes too rapid for the tuning of each individual chord or interval, and a player must fall back on her experience and technique on the instrument to ensure good intonation.  This comes about through years of practice and preparation, especially work on scales, arpeggios, or chords played one note at a time, and etudes, or music that incorporates common patterns or technical difficulties and is intended mainly for the practice room.  Effective practice for many musicians does not mean solely the preparation of music that is to later be played in public, but rather the development of a daily routine that emphasizes the technical fundamentals for the performer’s instrument in order to be prepared for whatever music may appear in the future.

In contrast to the emphasis that vocalists or brass players must place on intonation, some instruments offer the player very little control over intonation, or none at all.  The piano, for example, while certainly tunable by any musician with the proper equipment, is generally tuned prior to a performance, or more likely only periodically, leaving the pianist with a fixed set of pitches at his disposal.  The tuning of pianos, then, is frequently left to specialized piano tuners, who may or may not be accomplished pianists, while pianists of any skill level are fairly unlikely to be skilled piano tuners.

In the era before the commercial availability of first analog and then digital tuners, piano tuning was a skill acquired only through great effort and refined through extensive experience.  At one time, visually-impaired persons were frequently trained as piano tuners because it was assumed, often correctly, that the other senses acquired greater acuity in those lacking the visual sense.  With perfect vision not required, and the piano at one time enjoying the place in middle-class homes now given over to the television, piano tuning was a logical career for persons in this situation.[4]  The difficulty in tuning a piano lies in that instrument’s use of equal temperament, in which only the octaves are tuned to the precise frequencies generated by the harmonic series.  In addition, the lower strings of a piano are often tuned deliberately low, while the higher strings are tuned deliberately high, meaning that the instrument as a whole has a certain inharmonicity that has nonetheless become the preferred norm among listeners.  A final challenge is that most of the pitches on the modern piano involve not just one string but two or three, meaning that every note must be in tune with itself before it can be in tune with the rest of the instrument.  Thus, while the performer at a piano may think little of intonation—except perhaps to notice that the instrument is in need of a tuning, or to notice intonation deficiencies in the performers with whom he collaborates—the process of tuning a piano is very complex and can be very time-consuming.[5]
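The temperament problem can be put in concrete numbers.  In equal temperament each semitone multiplies frequency by the twelfth root of two, so a tempered fifth is slightly narrower than the pure 3:2 ratio; the nearly-coinciding harmonics of the two notes therefore beat at a slow, countable rate, which is what a tuner working by ear listens for.  A minimal Python sketch (the choice of A3 = 220 Hz is illustrative):

```python
# The arithmetic behind tuning an equal-tempered fifth by counting beats.
# An equal-tempered fifth is 2**(7/12), roughly 1.4983, slightly narrower than
# the pure 3:2 ratio, so the nearly-coinciding harmonics beat at a slow rate.

SEMITONE = 2 ** (1 / 12)

def tempered_fifth_beat_rate(lower_hz):
    """Beats per second between the 3rd harmonic of the lower note and the
    2nd harmonic of the note an equal-tempered fifth above it."""
    upper_hz = lower_hz * SEMITONE ** 7      # equal-tempered fifth above
    return abs(3 * lower_hz - 2 * upper_hz)  # nearly-coinciding harmonics

# For the fifth A3-E4 starting from A3 = 220 Hz, the beats come a little
# slower than once per second.
print(round(tempered_fifth_beat_rate(220), 2))
```

A tuner sets each fifth not beatless but beating at the rate the temperament prescribes, which is part of why the task demands such refined listening.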

Other musicians are responsible for calibrating their own instruments.  A common example in this regard is the guitar, in either its acoustic or electric forms.  Before a guitarist can play, she must tune the strings of the instrument, which on six-string models are tuned to six different pitches at varying intervals from each other.  In addition, basic guitar technique often involves various alternate tunings to allow the guitarist to play certain music more easily.  A guitarist will begin by tuning one string to a reference pitch, and then tune the other strings, one at a time, to the first string.  The frets on the neck of the guitar make it fairly easy to tune one string to another, and many guitarists also make use of harmonics to check strings even more closely.  Once the guitar is in tune, the guitarist can begin to play with relatively little attention to intonation, but she must continually listen to her instrument to ensure that it hasn’t gone out of tune, and make adjustments as needed.

Both the piano and the guitar have had turns as mass-produced, popular instruments, with the piano’s heyday occurring in the 19th century, and the guitar taking the piano’s place in the late 20th century.  These instruments have the advantage that they can supply complete musical textures, but can also be played while singing.  A very basic technique on either instrument can supply needed accompaniment to vocal music, and both were brought into the economic reach of the European and American middle class by mass production and the rise in living standards that followed the Industrial Revolution.  The piano and guitar, however, were not the only instruments to benefit from the advantages listed above.  What they seem to have in common that other instruments lack is a relatively simple approach to intonation.  A piano that is once tuned by a specialist and then kept within reasonable limits of temperature and humidity will remain relatively in tune for quite some time, perhaps six months or more.  The guitar is easily tuned at the beginning of a session, and with high-quality strings and only minor adjustments, will remain in tune for a performance without the guitarist having to constantly worry about intonation in the manner of a brass player or violinist.  This approach to intonation has certainly contributed to the intense popularity of both of these instruments.

A very few instruments are impossible to tune once manufactured and have a fixed pitch.  An example of this would be so-called keyboard percussion instruments such as the xylophone, which consists of a set of hardwood or synthetic bars in graduated sizes which are struck with mallets.  While it would conceivably be possible to shorten these bars, and thus raise their pitch, this is rarely done, and it would be patently impossible to lengthen the bars to lower their pitch.  Thus, the percussionist can exercise no control whatsoever over intonation and is completely dependent on the skill of the manufacturer and the quality of the materials and design, which hopefully will be only minimally susceptible to changes in temperature and humidity.  It is likely that other musicians will find it necessary to adjust their intonation to these instruments.

All serious musicians, and most amateurs, are able to calibrate their own instruments, with the exception of those who play piano, organ or other instruments that rely on specialists for tuning or which cannot be adjusted.  When a musician performs as a soloist, with no accompaniment, all tuning and intonation can be done internally, with only the musician’s internal ear to guide the performance.  A more likely situation, however, is that of the ensemble performance, in which instruments must not only be played with good intonation, but must be calibrated to each other.  Since the 19th century, the international standard for ensemble performance in Western music has been to tune A4 to 440Hz, sometimes referred to as A440.[6]  Tuning forks, pitch pipes and analog and digital tuners have all come to be manufactured to this standard, as have fixed-pitch instruments, such as the keyboard percussion mentioned above.[7]  A musical ensemble, then, must develop a procedure—a ritual, even—for this calibration.  The ritual calibration for an American orchestra is almost a cliché—the principal oboist, usually with the aid of a digital tuner, sounds A4, providing a standard to which the other musicians of the orchestra will then adjust their own instruments.[8]  By contrast, in Europe, orchestras do not tune onstage, with the result that if the temperature onstage is different from that offstage, the first part of the first piece to be performed may be grossly out of tune until the musicians can make the necessary tuning adjustments.
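Digital tuners typically report a pitch’s deviation from the A440 standard in cents, hundredths of an equal-tempered semitone.  A brief sketch of that computation (the formula is the standard logarithmic definition of the cent; the example frequencies are illustrative):

```python
import math

def cents_from_reference(freq_hz, reference_hz=440.0):
    """Deviation of a frequency from a reference pitch, in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(freq_hz / reference_hz)

print(round(cents_from_reference(442), 1))  # a slightly sharp A4: about +7.9 cents
print(round(cents_from_reference(415), 1))  # about -101.3 cents, nearly a full
                                            # semitone below A440
```

The second example shows why ensembles tuning to the historical A4 = 415Hz sit almost exactly a semitone below modern pitch.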


[1] Psychologist Carl Seashore developed a very common and very simple test for musical aptitude, one measure of which is the ability to correctly determine which of two played pitches is higher or lower than the other.

[2] This may be an advantage in some senses; while the instrumentalist can inspect her instrument in great detail, a vocalist’s instrument is largely hidden from view.  While the vocal folds are easily viewed using a laryngoscope, the specific skeletal, muscular and tissue structures that contribute to a fine vocal instrument are often a complete mystery, even to expert performers.

[3] Some musicians will also use the technique of beatless tuning for perfect fifths.

[4] Traditionally, piano tuning was accomplished using the phenomenon of beats—a correct equal-tempered tuning will produce a certain number of beats per second when certain intervals, usually “perfect” fifths, are played together.  Since the development of cheap and accurate digital tuners, piano tuning in the United States has shifted from an occupation for the visually-impaired to a side-job or post-retirement business, although skilled piano technicians, who can not only tune but repair and maintain pianos, are often employed full-time by college and university music departments and urban performing arts centers.

[5] The modern piano is a very complex and very precise piece of technology.  As a piece of engineering, it stands beside that other great achievement of the early 19th century, the steam locomotive, in its ingenuity, and greatly surpasses the locomotive in terms of reliability, safety and longevity, as many pianos from the 1880s are still in use.

[6] The standard tuning of earlier times was frequently somewhat lower, and this is reflected today by the tendency of ensembles specializing in authentic 18th-century practice to tune A4 to 415Hz, or very nearly Ab4 in modern calibrations.

[7] This sort of standardization was made possible first by the knowledge generated by the Enlightenment and the Scientific Revolution, and then by the Industrial Revolution’s mass production of musical instruments and equipment to high tolerances and standards.  Standardization of pitch would not have been practical prior to the 19th century, when local standards of pitch prevailed.

[8] In truth, for professional orchestras, this ritual is just that—a ritual performed for the benefit of the audience.  Most musicians will have already calibrated their instruments backstage.

What makes standard notation standard?

June 14th, 2011

An excerpt from Chapter 6 of my forthcoming book, Music: Notation and Practice in Past and Present, now in press with National Social Science Press.

Like any technological standard, Western standard musical notation has gone through a process of innovation, widespread adoption and now what technologist Jaron Lanier refers to as lock-in—it is now more or less impossible to radically alter or reform the standard notation because of its widespread use.  Musicians who read standard notation have a vested interest in maintaining its place in the culture—any written language is relatively difficult to learn, and those who have already made that investment are unlikely to welcome starting over.  Similarly, it seems highly unlikely that the published music of the last four centuries or so—all written in notation that is more-or-less readable to modern eyes—might be transcribed into some new system (although in at least one case, efforts have been made in this direction).

Standard musical notation is largely a product of the Renaissance, and by 1600, composers and publishers were producing scores that generally adhere to the descriptions given in Part 2 of this book.  Most of the notational concepts for music were invented much earlier, and many alternatives continued to be tried until a period of consolidation—aided by the development of the printing press and the subsequent dissemination of printed music after 1500—settled the conventions that remain familiar today.  Even to the present day, reforms and changes to the notational system have been suggested, sometimes by composers who wish to incorporate a new musical effect, sometimes by pedagogues who hope to simplify the reading and teaching of music, sometimes by hopeful amateurs and entrepreneurs who desire to give back to society or enlarge their bank accounts.  Even this author has suggested a notational innovation, made use of it in compositions and presented it in a scholarly forum.

What then allowed our system of notation to become an international standard, and what keeps it in its position as the primary means of communicating musical intentions in written form?  A few considerations:

  • Completeness:  As demonstrated in the previous chapters, the standard notational system allows for the adequate (although not always ideal) description of all seven musical elements, plus lyrics in vocal music.  Each element has a separate means of description and set of symbols, but at the same time, the use of one symbol to denote the pitch and duration of each musical event allows for relative ease of reading, once the system has been learned.  The crucial information about melody, harmony and rhythm is encoded in a central way, so that the mind focuses on these things, while the other, less fluid, elements are notated in more ancillary ways.
  • Readability:  For a notation that must present seven elements simultaneously, the standard notation is surprisingly clean and clear in most circumstances.  The decision, for example, to limit the staff to five lines plus ledger lines, means that no note is ever very far from the landmarks provided by the two outer lines, allowing musicians to easily recognize what pitch they are looking at.  Similarly, the tendency to favor the quarter-note and dotted quarter-note as beat length notes means that a great deal of squinting at multiple flags or beams is eliminated, but that all notes shorter than a beat are beamed to reflect where they fall within a beat.
  • Writeability:  Standard musical notation has a limited set of relatively simple symbols, meaning that anyone with sufficient motor control can quickly and easily learn to write and copy music by hand.  In the 21st century, this may seem less than important, but prior to music notation software, all notated music began life as handwritten marks on physical paper.  As late as the 1990s, college music programs in the United States included coursework in music manuscript in their baccalaureate programs, so that their graduates would have the skills required to produce legible notation.  Prior to the widespread availability of photocopied music, members of school ensembles often had to make handwritten copies of music arranged for them.[1]  More importantly, it was often by copying music that composers of earlier generations learned their craft.  Johann Sebastian Bach, scion of a long-established family of musicians, learned the family trade in part through this means.
  • Printability:  Without a doubt, the technological innovation that allowed musical notation to truly become an internationally standardized language was the printing press.  While early printing methods for music involved experiments with woodcuts and movable type, the means which remains the standard for music publishing even to this day is engraving, first by hand-etching and later by photographic means.  The symbols of Western musical notation are relatively simple—straight lines and a few curves and circles—allowing them to be engraved relatively simply.  Mozart is known to have owned a set of engraving tools for the preparation of copper plates for printed music, and the relative ease of engraving music allowed Bach—a middle-class wage earner in the best of times—to self-finance the engraving and publication of some of his pieces.  The design of musical symbols and music fonts continues to allow printing on the same equipment used for the printing of text and illustrations, so a complete redesign of printing technology was unnecessary.  Alternatives that require the use of colored inks, perhaps, or textured paper in the manner of Braille (see below) have universally failed to enjoy the same success as the monochromatic, two-dimensional standard system.
  • Unambiguity:  The standard system of musical notation employs unique symbolism for every element of music.  Two different musical events look different on the page, while two identical events look the same.  While any system would hopefully have solved these problems, Western notation has done so in a way that not only communicates necessary information, but also ingeniously incorporates the human ability for chunking, especially in the elements of melody, harmony and rhythm.  Furthermore, despite being unambiguous to a high degree, the system leaves a great deal of freedom for performers to interpret the same notation in different ways.

 

The above considerations, then, have all contributed to the success of Western musical notation.  Since around 1600, the standard system of notation has remained remarkably stable, with changes and adaptations more the result of innovations in instrumental capabilities and stylistic preferences than any wholesale revision of notational practice.  However, the system is by no means perfect.  While it is doubtful that any system to fully notate all seven musical elements could be truly termed “simple,” there are some aspects of notation, and of the nomenclature of Western music in general, that remain troublesome.  For consideration, some of these are listed below:

  • Ambiguity of pitch.  An odd property of musical nomenclature, and thus of standard musical notation, is that pitches can have any number of different names.  Eb, for example, can also be named D#, or Fbb.  Although there is, at least conceptually, a difference between these enharmonic pitches in many musical styles, the fact remains that in the system of equal temperament now widely adopted in Western music, there is no difference in the frequency of enharmonic pitches for keyboard or electronic instruments.  If these notes are practically, if not conceptually, identical, perpetuating a system of notation that attempts to differentiate between them may be unnecessary.
  • Limited rhythmic potential.  The current Western system of notation deals best with what are termed simple and compound meters, i.e., meters in which the beat divides equally into twos and threes.  When other prime divisions are required, the somewhat clumsy tuplet notation must be brought into play.  Five-to-a-beat and seven-to-a-beat music, although rare in the history of Western composition, may represent a direction that musicians will wish to pursue.  Until adequate notation for these meters is found, their full potential may not be explored.[2]
  • Minimal emphasis on certain elements.  The notation for dynamics, timbre and form in standard notation is extremely rudimentary, and frequently falls back upon linguistic cues such as “crescendo,” “pizzicato” and “D.S. al Coda.”  Since Western music has traditionally emphasized the elements of melody, harmony and rhythm, the other elements have been relegated to subsidiary roles in traditional notation.  It is possible, though, to imagine music that is centered upon other elements, as was attempted first by Arnold Schoenberg in his experiments with Klangfarbenmelodie (tone-color melody) in the first decade of the 20th century.  Indications for timbre and dynamics, in particular, are very crude in standard notation, and leave a great deal to the performer (although this may also be desirable in some styles).
  • Adaptability.  For some instruments and styles, standard notation is simply not effective.  Western notation gives a picture of a composer’s musical intent in absolute musical terms, and does not give a precise description of the physical means of producing the correct sounds on any instrument.  Notation developed as a mnemonic shorthand for vocal music, particularly plainchant, but even in this form, it only tells what to sing, not how to sing.  Many instrumentalists find it a relatively simple thing to relate notes on a staff to fingerings, keys or embouchure adjustments, but for others, the entire system seems entirely counterintuitive.  For example, guitar players, with their six strings arrayed with the lowest at the top of the instrument, find the standard staff, with low pitches depicted at the bottom, to be much more difficult than other notational solutions, including guitar tablature, which will be discussed below.
  • Numeracy.  From tuplet numbers to fingerings and string numbers to the indication of first and second endings, standard musical notation relies on Arabic and Roman numerals to a great extent.  This can cause some confusion, but the problem is ameliorated by the tendency of different uses of numbers to appear only in mutually exclusive contexts—a piano part carries fingerings but never violin string numbers.
  • Accessibility.  All humans are capable of musical expression and of enjoying music.  It has been this author’s privilege to work with, at various times, persons who are visually-impaired, hearing-impaired and mentally challenged who not only enjoy music but can engage in it with creativity and passion.  Musical notation, however, does not always assist in this regard.  Efforts have been made in this direction, but standard notation more-or-less assumes so-called “normal” capabilities in the person using it.  As with so many other areas, improved access to assistive technology has allowed great strides, along with continuing social reforms, legislative and otherwise, that have allowed individuals with disabilities to take a freer and more meaningful role in society.
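The enharmonic equivalence noted in the first point above can be verified numerically.  A minimal sketch, using the standard MIDI note-number convention (A4 = 69, tuned to 440 Hz) as an assumed frame of reference:

```python
# Under equal temperament, one key on the keyboard serves every spelling of
# a pitch.  In the MIDI convention (A4 = note number 69, tuned to 440 Hz),
# Eb4 and D#4 are both note number 63 and therefore share one frequency.

def equal_tempered_freq(midi_note):
    """Frequency in Hz of a MIDI note number in 12-tone equal temperament."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

E_FLAT_4 = D_SHARP_4 = 63  # two spellings, one key
print(round(equal_tempered_freq(E_FLAT_4), 2))  # about 311.13 Hz, either way
```

Whatever the conceptual distinction between Eb and D# in a given style, the keyboard or electronic instrument produces exactly one frequency for both spellings.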

What attributes, then, does good notation have?  Reflecting on Western notation, five qualities seem to be required of effective symbolism for music, and perhaps of notations of any type.  The first attribute is simplicity.  For maximum effectiveness and success, symbols should be relatively simple, although complex enough to give all needed information.  While some musical symbols in traditional notation are somewhat complex, most are relatively easily learned, easily remembered and easily duplicated.  While some symbols may seem difficult to modern eyes, consideration of their origins as marks to be made with a quill pen brings the realization that most are actually quite simple.  An example of this is the quarter-rest, which seems relatively complex, but when drawn with a quill, requires only two movements.  Other more complex symbols, such as the coda sign, are only rarely drawn.

The second requirement for successful notation is clarity.  Symbols for one element of music must not physically interfere with each other, and must be immediately recognizable.  Engravers and copyists spend as much time ensuring that symbols do not collide with each other in published music as composers do writing the music in the first place.  In standard notation, a few conflicts occur—the shape of notes, usually reserved for rhythmic information, is frequently altered to indicate timbre, for example—but for the most part, standards of engraving have developed to make allowances for these conflicts to be resolved.  In addition, the spacing and layout of symbols on the page is a science unto itself, to a greater extent than one might think.  In well-engraved music, not only do the shapes of notes describe their rhythm, but their spacing within a measure also is a cue to their location.  The eyes are drawn across the page from left to right by the oval-shaped noteheads, and sixteenth-notes are always closer together than quarter-notes.  The style sheet of a reputable music publisher is very detailed, and its development over the centuries accounts for the difference between music that is easy to read and that which is more of a struggle.  Modern digital notesetting software, the musical equivalent of word processing, incorporates these rules without a thought from the user, thus representing a major advance in the clarity of musical notation.

A third necessary attribute of successful notation is uniqueness: under no circumstances can the same symbol be used to mean different things.  While some symbols, such as Arabic numerals, are used in multiple roles (meter signatures, tuplet notation, fingerings, etc.), they are also used in such a way as to remain distinct from each other in most cases, although not without the potential for confusion (a numeral placed above the beam for a group of eighth-notes could be either a fingering or a tuplet numeral).  In addition, these are relatively subsidiary markings that do not lie at the core function of notation—the indication of pitch and duration.  Another symbol that might be perceived as being overused is the dot, which serves to augment the duration of a note when placed to the right of a notehead, but actually shortens that duration when placed above or below to indicate a staccato articulation.  Just as the experienced reader of English has learned to differentiate between the words though, tough and through, the experienced reader of music automatically finds the difference between the staccato mark and the augmentation dot.

This ability of the human mind raises the question, then, of just how fine a distinction can be made in notation, and how adaptable is the musical mind.  This is a question, certainly, for neuroscientists, but years of teaching the standard system of musical notation to students from kindergarten through college have given this author some insight in this area.  The human mind is quite capable of making rapid and accurate distinctions based on quite minute details—no doubt a trait selected for in the course of human evolution as our ancestors depended on fast and accurate perception of the natural world for their survival.  As with any language, those who, in the end, read music most fluently seem to be those who are exposed to it at an early age, both as musical performance and notation.  Many (although not most) children are reasonably fluent readers of their native language by the age of five, and there is no reason that children can’t be trained in reading music at that early age as well.  As with much other knowledge, it is the combination of opportunity and desire that leads some to pursue and succeed in music at a young age while others remain musically illiterate for their entire lifespans.


[1] The author’s father, who played in his high school band in the 1960s, describes having a personal copy book into which arrangements of his school’s songs were inscribed.  As a young composer in the 1990s, the author had the instrumental parts of his earliest compositions copied out by hand and then photocopied as needed.

[2] This author has proposed a solution to this particular notational problem, summarized in his poster session “On Rhythmic Notation and Nomenclature of Five-to-a-Beat Music” at the 2010 national conference of the College Music Society.  Other solutions have been proposed, but would require a complete revision of the musical system.

Notating Timbre

May 18th, 2011

An excerpt from Chapter 5 of my book Music: Notation and Practice in Past and Present, currently in press.

The conception of the staff as a form of the xy-coordinate plane is adequate for an understanding of the notation of pitch events and their temporal relationships.  However, music is far more subtle than the four elements of melody, harmony, tempo and rhythm that can be described in this way.  In fact, staff notation as we have studied it ignores the musical element that frequently gives the first and most basic aural cues to listeners:  timbre.  Many listeners who fail to be excited by the use of unique scales, obscure keys, unorthodox meters or complex chords are instantly enthralled by the sound of familiar or favorite instruments or the specific timbral qualities of the voice of a beloved singer.  The system of musical notation simply cannot be considered complete without a means for indicating timbre.

For centuries, though, there were no such indications in Western musical notation.  While the earliest forms of what would become modern notation date back to the 10th century A.D., no composer indicated what instruments were to play the specific parts in a score until Giovanni Gabrieli (c. 1554/1557-1612) published his Symphoniae Sacrae (1597, 1601), which indicated timbre by naming the desired instrument to the left of each staff in a system, or group of staves.  For music prior to that time, performers and musicologists must often rely on other clues left by the compositional process.  The inclusion of poetic text, for example, indicates that a composition was likely for voices, although it is always possible that instrumentalists would double the vocal parts, as continues to be the case in much music written to this day.  This practice is prevalent in Protestant hymnody, where it was traditional for the music to be written in four vocal parts (soprano, alto, tenor and bass), but doubled on a keyboard instrument such as piano or organ.  Specific melodic or harmonic figurations may indicate that a composition was intended for a specific instrument, such as a keyboard instrument or a fretted string instrument.  In general, however, during much of the history of music, compositions, whether notated or not, were performed by whatever forces were available at a given moment.  It is remarkable that the first composer to call for specific instruments worked in Venice, one of the most prosperous cities of the time, where he supervised the music for St. Mark’s Cathedral, the central church of the city.  The musical forces available to Gabrieli were likely some of the most reliable, professional and well-paid in Europe, and it has typically been in environments such as this that musical experimentation thrives and new standards are set that eventually become adopted throughout a culture.

It is likely that an experienced musician, presented with an unlabeled instrumental part for a standard instrument, could identify the instrument being called for by investigating certain factors.  Instrumental range is one possible clue: every instrument has an absolute lowest pitch, and while its range may theoretically extend upward indefinitely, there is usually a generally accepted upper limit to the pitches a standard instrument can play.  For example, the modern flute can play no lower than B3, and only to C4 on some student models[1].  Its upper range extends to approximately C7 for professional players.  In addition, as with all woodwind instruments, it is common to write trills, in which a player rapidly alternates between two notes a step apart, for the flute.  However, some notes are more easily trilled than others, and an astute composer will avoid the difficult trills while exploiting the more convenient ones.  If the hypothetical mystery part, then, had no notes below C4, but a range extending up to B♭6, all the while treading delicately around the trickier trills (such as that between C4 and D♭4), it would be reasonable to conclude that it was music for the flute.  Such music would be said to be written idiomatically for the flute.
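The range-checking part of this detective work can be sketched in code.  The following is a minimal illustration, not a complete model of flute idiom: the range boundaries (B3 to C7) come from the paragraph above, while the function names and the note-name-to-MIDI conversion are conveniences invented for this example.

```python
# Sketch of the "mystery part" reasoning: convert note names (e.g. "Bb6",
# "C#4") to MIDI note numbers, then test them against a known range.
# Names and range constants here are illustrative assumptions.

NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def midi(note: str) -> int:
    """Convert a name like 'Bb6' or 'C#4' to a MIDI note number (C4 = 60)."""
    letter, rest = note[0], note[1:]
    accidental = 0
    while rest and rest[0] in "#b":
        accidental += 1 if rest[0] == "#" else -1
        rest = rest[1:]
    octave = int(rest)
    return 12 * (octave + 1) + NOTE_OFFSETS[letter] + accidental

def fits_flute(notes: list[str]) -> bool:
    """True if every note lies within a conventional flute range (B3 to C7)."""
    lo, hi = midi("B3"), midi("C7")
    return all(lo <= midi(n) <= hi for n in notes)

print(fits_flute(["C4", "G5", "Bb6"]))  # True: a plausible flute part
print(fits_flute(["G2", "C4"]))         # False: G2 lies below the flute's range
```

A fuller version would also check for the awkward trills mentioned above, but even this crude range test captures the basic logic of ruling instruments in or out.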

Idiomatic writing for an instrument or voice is crucial to successful composing and arranging.  While more basic music is generally playable on any instrument if transposed into the correct range, more complex and difficult music can quickly overwhelm the abilities of even a professional performer if utmost consideration is not given to the unique properties of an instrument or voice.  For vocal music, it is even reasonable to compose with a specific performer in mind, as human voices are as wondrously varied as human faces.  For example, British composer Benjamin Britten (1913-1976) had a lifelong collaboration—both personal and professional—with tenor Peter Pears (1910-1986), resulting in several sets of songs, operatic roles and the incomparable Serenade for Tenor, Horn and Strings (1943) being composed with Pears’ somewhat unremarkable voice in mind.  These works reflect the limitations and advantages of Pears’ instrument and, like others composed for specific performers, they must be approached with this in mind.  The Britten-Pears collaboration has been the subject of much musicological study, and is a prime example of the ways in which a composer’s life can impact his work and vice versa.

Composers also write for specific instrumentalists.  For example, Aaron Copland (1900-1990) composed his Clarinet Concerto (1947-9) on a commission from clarinetist Benny Goodman (1909-1986).  Goodman, better known for his jazz improvisations than his classical performances, felt a responsibility to add to the repertoire for his instrument, despite his own feelings of inadequacy about his ability as a classical performer and admitted discomfort with notated music.  Copland’s concerto, then, is scrupulously notated and, while not in a jazz idiom, works very closely with what is natural to the clarinet while at the same time providing a piece worthy of Goodman’s substantial technical prowess.  Goodman has not been the only performer to feel compelled to contribute to the repertoire for his instrument.  In the 19th century, virtuoso violinist Niccolo Paganini (1782-1840) not only composed music for himself but commissioned others, with the result that Paganini’s revolutionary technical skills, many of which were innovative developments for the instrument, became standard violin technique within a few generations.  In the late-20th and early-21st century, a generation of gifted trombone players, such as Swedish virtuoso Christian Lindberg (b. 1958) and the American Joseph Alessi (b. 1959), has been creating a repertoire for their instrument by commissioning and performing new music.  These new pieces will impact the study of the trombone for generations to come as students begin to view these new works as the standard aims of serious study on the instrument.

Similarly, a performer may possess unique or innovative abilities that a composer wishes to exploit in a musical composition, or that the performer may employ in his or her own music.  Vocalist Bobby McFerrin (b. 1950) possesses a miraculous instrument capable of an enormous variety of vocalizations, which he has employed in jazz, popular and classical-style compositions.  Likewise, soprano and composer Cathy Berberian (1925-1983) possessed a vocal instrument whose range, expressiveness and accuracy inspired not only her own compositions, but also those of her husband Luciano Berio (1925-2003) and many other composers.  The ongoing collaboration between composer and performer can be as vital and important as that between two actors, or between an actor and a director, with onscreen chemistry (Tom Hanks and Meg Ryan, for example, or Jack Lemmon and Walter Matthau, or Jimmy Stewart and Alfred Hitchcock), and it can define careers and musical legacies, as is currently being seen in the collaborations of composer Osvaldo Golijov (b. 1960) and soprano Dawn Upshaw (b. 1960), or in composer Magnus Lindberg’s (b. 1958) ongoing work as Composer-in-Residence with the New York Philharmonic.  For this author, who is trained as a composer, collaboration with a performer who gives significant practice time to a new piece is a joy.  When a performer becomes as emotionally involved as the composer with the new piece, however, the experience is as thrilling—and absorbing—as a new love affair.[2]

It is likely that most composers in the era before 1600 simply knew their performers personally, never intended to distribute their music widely and wrote for the individual at hand.  In our technological, text-saturated world, duplication of the written word (or composed note) is cheap and easy.  A composer can write a piece today that will be played, sung or heard by thousands tomorrow, making it impossible for the composer to know every performer—or potential performer—personally.  Every musician’s abilities are different, and every musician’s musical language is slightly different.  This is the challenge that the composer of music for a specific timbre faces, then, and the answers to this challenge are highly varied, and often quite subtle.


[1] The flute of Mozart’s day had a lowest note of D4.  The additional notes are operated by keys in what is known as the foot joint of the flute, an extension of the main body of the instrument.  Student-model flutes typically have only two keys in the foot, allowing C#4 and C4, while professional-model instruments add a third key to bring B3 into action.  A few flutes have a fourth key to allow the performer to play B♭3.

[2] In an interesting case of emotional transference, German composer Richard Wagner (1813-1883), who enjoyed a coterie of musicians who were passionately devoted to his controversial operas, had a tendency to seduce the wives and daughters of the men who conducted his music.  Such was Wagner’s spell that at least some of these men continued to support Wagner’s musical efforts after being cuckolded.

Articulations

May 10th, 2011

An excerpt from Chapter 4 of my forthcoming book, Music: Notation and Practice in Past and Present:

There is an unfortunate tendency among music readers to view rhythmic notation as defining points of sound that occur at discrete moments.  On certain instruments, this is effectively true.  For example, once some percussion instruments are struck, there is very little the performer can do to alter the resulting sound, which, at any rate, has a relatively short duration.  This is certainly the case for some drums, such as the snare drum, and for some of the idiophones, such as the claves, woodblock or anvil.  For nearly every other instrument, though, including the piano, the guitar and the human voice, a note indicates not only an attack, or the start of the sound, but also a release, or an endpoint.  In addition, performers may have some control over the sustain, or continuous middle portion, of a tone, or over the decay of a sound, the resonance that continues after the release point.  While some aspects of these envelopes are inherent in the timbre of an instrument, others can be effected through developed instrumental or vocal technique.  For example, the timbre of the piano has very little sustaining quality—as soon as a note is struck, the sound begins to decay.  But through the use of the damper pedal (found on the right on modern pianos), the decay can be allowed to persist instead of being cut off when the key is allowed to return to its original position.  This ability has largely defined the sound of piano music for the last two centuries, and a piano without a functioning damper pedal is no more a functioning instrument than a car without brakes is a functioning vehicle.
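The envelope idea described above—a note as a shaped span of loudness rather than a point in time—can be made concrete with a toy model.  This sketch is a simplification invented for illustration (real instrumental envelopes are far more complex, and the function name and units are assumptions): it builds a piecewise-linear loudness curve with a rise, a held middle, and a dying-away, measured in arbitrary sample counts.

```python
# A toy amplitude envelope for a single note, following the
# attack/sustain/decay terminology used in the text above.
# Durations are arbitrary sample counts; a real synthesizer
# would work at audio sample rates.

def envelope(attack: int, sustain: int, decay: int) -> list[float]:
    """Piecewise-linear loudness curve: rise to full, hold, then die away."""
    rise = [i / attack for i in range(1, attack + 1)]      # attack: 0 -> 1
    hold = [1.0] * sustain                                 # sustain: held at 1
    fall = [1.0 - i / decay for i in range(1, decay + 1)]  # decay: 1 -> 0
    return rise + hold + fall

env = envelope(attack=2, sustain=3, decay=4)
print(env)  # [0.5, 1.0, 1.0, 1.0, 1.0, 0.75, 0.5, 0.25, 0.0]
```

In these terms, a struck piano note is nearly all decay (a very short attack, no sustain), while the damper pedal lengthens the decay portion; a bowed or blown note can hold its sustain portion almost indefinitely.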

The act of shaping the attack, sustain, decay and release of notes is known as articulation, and the art of articulation varies to a greater or lesser degree on every instrument.  It is crucial for an aspiring musician to study and understand the technique of articulation on her chosen instrument, and within various styles.  Much of what constitutes a style is often a tacit agreement on how to apply articulations to written or improvised music.

Despite the differences in articulation technique between instruments, a reasonably standard set of symbols has developed in written music to allow the composer some control over certain aspects of the shape of notes.  These symbols actually alter one or more of several different musical elements—most often rhythm or dynamics, sometimes melody, and occasionally timbre.  Their meanings vary slightly between instruments and styles, but some generalizations about each symbol can be made.

The slur is a symbol with several different meanings, depending on the instrument or voice for which music is written.  Its first use was with notes in vocal music that employed melisma, the technique in which a single syllable is sung over two or more notes.  In this case, as in Figure 29, the slur indicates exactly which notes are involved in the melisma, and does so more precisely than the lyrics below the notes can do.

 

Figure 29: Indication of melisma in a vocal part using a slur.  The notes will be sung with the words “A melisma,” with the melisma occurring on the syllable mel.

Woodwind and brass players, whose approach to phrasing and articulation is similar to singing in many respects, also use slurs to group notes into unbroken (or slightly-broken) streams of sound.  In Figure 30, a passage for clarinet, the first note under each slur is given a relatively pointed attack with the tongue.  For the subsequent notes, the air continues to move through the instrument without interference, while the fingers work keys and cover or uncover holes to change the pitch.  At the end of the last note, either the tongue or the breathing apparatus may be used to stop the air.



 

Figure 30: A passage for a wind instrument employing slurs.  Only the first and seventh notes would have tongued attacks.

For bowed stringed instruments, slurs indicate the use of the bow rather than the tongue or airstream.  All notes under a slur will be played without changing bow direction.  In Figure 31, for violin, the player would play the first four notes without changing direction, then reverse course for the next five notes.  String players often use the Π and V symbols for downbow and upbow, respectively, in which the right (bow) hand moves in the stated direction, adding further control over attack and sustain envelopes.  In addition, other articulation symbols may be used in combination with the slur in writing for strings, often indicating the degree to which the bow should stay on or bounce off of the string at the end of each note.[1]

 

Figure 31:  A slurred passage for a bowed string instrument.  The bow would change direction on the first, fifth, tenth, twelfth, fourteenth, sixteenth, seventeenth and eighteenth notes.

On some instruments, such as the piano, the guitar and many percussion instruments, the player has little control over the sustaining power of notes.  The result is that slurs for these instruments are the furthest removed (although not completely separate) from their meaning for the voice.  For guitar, a slur indicates that the performer is to pluck the string for the first note with the right hand, and then either “hammer on” (articulate a higher second note by pressing a left-hand finger onto the same string) or “pull off” (change to a lower second note by lifting a finger to lengthen the vibrating portion of the string).  The ability to slur on guitar, then, is much more limited than on most instruments.

In music for the piano, which also has limited sustaining power, a slur generally indicates a legato approach to the notes under the slur, meaning that the release of a note is delayed until the finger playing the next note can be put down, usually with a sense of heaviness in the wrist that allows one note to effectively blend into the next.

Slurs for most percussion instruments have only the most basic meaning, but one that is really at the core of the idea of the slur.  In its most basic form, a slur indicates musical phrasing, that is, it shows a player that a group of notes is meant to be a single musical idea.  This type of slur is frequently combined with the other types for all instruments, and it is typical of many styles to modify the tempo at the beginnings and ends of slurs to allow performers to breathe.[2]

Figure 32 shows the notational difference between the slur and the tie.  Several visual cues can be used to identify which symbol is being used, notably that the tie always connects two adjacent notes that are on the same line or space on the staff, while the slur may connect any number of notes.  There are also slight engraving differences, namely that the ends of a tie are closer to the heads of the affected notes, and the overall depth of the curve is shallower.  Slurs generally point to the heads of the first and last notes, and are found on the opposite side of the note from the stem.  Two important exceptions, however, should be noted:  When a slurred passage begins with a stem-up note and ends with a stem-down note (or vice versa), the slur is drawn above the notes.  And when two voices appear on a single staff, any slurs will connect stems instead of noteheads.

 

Figure 32: Slurs vs. ties.  In each measure, the first two notes are slurred while the second two notes are tied.
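The visual rule for telling the two symbols apart reduces to a simple decision, which can be sketched as code.  This is an illustration of the rule as stated in the text, with invented function and data names; curves are represented only by the pitches they connect, and adjacency is assumed.

```python
# Sketch of the slur-vs.-tie rule: a curve connecting exactly two
# adjacent notes of the same pitch is a tie; any other curve is a slur.
# (Three tied notes of the same pitch would be notated as a chain of
# two-note ties, so a single curve over three notes reads as a slur.)

def classify_curve(pitches: list[str]) -> str:
    """Classify a curved line by the pitches of the notes it connects."""
    if len(pitches) == 2 and pitches[0] == pitches[1]:
        return "tie"
    return "slur"

print(classify_curve(["G4", "G4"]))        # tie
print(classify_curve(["G4", "A4"]))        # slur
print(classify_curve(["G4", "G4", "G4"]))  # slur, by the two-note rule
```

A notation program performs essentially this check in reverse: the user's intent (tie or slur) determines where the curve's endpoints attach and how deep it is drawn.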

Several other standard articulation symbols appear in contemporary notation, each with a slightly different meaning on every instrument.  A good instrumental teacher will be able to couch their meaning in terms of instrumental technique rather than precise rhythmic practice, but in each case, the effect is relatively the same.  An additional differing factor is how each articulation is treated in any given style, even down to the expectations of an individual composer.

The staccato dot (.), which appears either directly above or below a notehead, opposite the stem, originally instructed players to perform the note at half its value, with a rest on the second half.  While this interpretation holds true for many styles, there is a fair amount of variation in the meaning of this symbol, even among performances of the same piece.  Many performers will tailor their interpretation of staccato in some passages to the acoustics of the room in which they sing or play, as the reverberation times of musical sound can vary greatly in live performance spaces.  Some composers have had the opportunity to let their desires be known, as in the case of Igor Stravinsky, who generally insisted that his staccato markings indicated that the note was to be played “as short as possible.”
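The traditional “half value” reading of the staccato dot is simple arithmetic on durations, and the variation among interpretations amounts to changing one ratio.  The sketch below is illustrative only; the function name and the idea of a tunable “shortness” parameter are assumptions made for this example.

```python
# The classic reading of the staccato dot: the written duration splits
# into a sounding part and a silent part.  shortness=0.5 is the
# traditional half-value interpretation; a Stravinsky-style
# "as short as possible" would push the ratio much lower.

def staccato(duration: float, shortness: float = 0.5) -> tuple[float, float]:
    """Split a written duration (in beats) into (sounding, silent) parts."""
    sounding = duration * shortness
    return sounding, duration - sounding

print(staccato(1.0))        # (0.5, 0.5): a staccato quarter note, by the book
print(staccato(1.0, 0.25))  # (0.25, 0.75): a much drier interpretation
```

In practice, as the text notes, performers adjust this ratio by ear, to the room's acoustics, rather than by calculation.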

The tenuto symbol (–) is an indication to play a note for its full value, but no more; in other words, to let a note take up its entire allotted space, but also to keep it separate from the next note.  Like the staccato dot, the tenuto symbol is written either above or below the notehead, opposite the stem.  While this symbol may seem redundant, it is highly useful as a clarifying symbol in a complex passage or in unfamiliar styles.  The author, in his role as a composer, advises contemporary composers to use articulation symbols liberally in their music.  While most classically-trained performers could supply a reasonably acceptable performance of a piece by Mozart without any written articulations, and indeed frequently do so with the music of Bach, which has few, if any, of these symbols, contemporary composers cannot rely on performers’ being able to make these assumptions, and thus should be as specific as possible.

The accent mark (>) always appears above a note, whether the stem is up or down, in order to be somewhat more visible.  This mark indicates that a more forceful attack, often in contrast to the tendency of the meter, should be made on that note.  The mechanics of this attack vary from instrument to instrument, and among styles, composers and individual performers, but the effect generally results in a more forceful attack at a slightly stronger dynamic level than the surrounding notes.

A cousin of the accent mark is the martellato accent (Λ), which is frequently understood as a combination of the staccato and accent.  It is not appropriate in some styles, but is ubiquitous in jazz, where it is often placed above beat-length notes.  Experienced jazz brass and woodwind players emphasize not only the attack but the release of notes with this symbol, and often describe the result with the word “daht” in the vernacular (and often highly personal) rhythmic solfege syllables that are used to communicate information about “swung” rhythms.

The notation of rhythm, meter and tempo, whether in the form of individual rhythmic patterns, metric notation, or articulations defining attack and release, is the key to the higher-level organization of notated music.  Ironically, it is the aspect of Western notation that came later (although not last), as the music originally notated (Medieval plainchant) was rhythmically formulaic to the point where only pitch and text required notation.  Precision in notating rhythm, and accuracy in reading it, are crucial for any musician, and allow the development of true music-reading skill.  Rhythmic notation can be confusing, and must be read in real time, but a focus on learning specific rhythmic patterns will yield relatively quick results for most students, and regular practice can develop a degree of comfort and familiarity that opens up a great deal of written music.  It is crucial that the goal of an aspiring musician in the Western tradition be to understand musical notation at sight, without assistance from a teacher or director.  This is the first step toward a firsthand experience of the body of work that is the musical inheritance of a civilization.


[1] This practice has often been imported into writing for wind instruments, where it frequently causes confusion and uncertainty among musicians unfamiliar with its meaning to string players.

[2] This is even the case for music in which the instruments involved do not require use of the breath.  This practice allows the music to ebb and flow naturally, and is often accomplished subconsciously by well-trained and tasteful musicians.  This tendency is a major difference between the performance of a human player and a machine.

What is the best way to write down music?

March 29th, 2011

An excerpt from Chapter 3: The Notation of Melody and Harmony of my forthcoming book Music: Notation and Practice in Past and Present:

Modern Western musical notation has its beginnings in the liturgical music of the Roman Catholic Church.  By ca. 800 C.E., a common repertory of plainchant had solidified for use in the vast array of worship services performed throughout Western Europe.  A plainchant is an unaccompanied vocal setting of Latin or Greek prayers or hymns, sung at specific dates and times through the rotating calendar of the Catholic Church.  Plainchant is assumed to have developed from Greek, Byzantine and Jewish antecedents, just as Christian worship evolved from Jewish Temple and synagogue traditions of the early Common Era.  During Late Antiquity and the Early Middle Ages, there were many forms of plainchant, some of which continue in use alongside other forms of worship to this day[1].  With the rise in importance of the Pope (i.e., the Bishop of Rome) in the governance of the Western church, however, worship practices became standardized.  Pope Gregory I (c. 540-604) is often wrongly credited with inventing the system of notation that would become our modern system, but his role was actually to instigate a cataloguing and standardization of plainchant throughout the church year.  Nonetheless, the term Gregorian chant is often used synonymously (and technically incorrectly) to refer to plainchant.

The key difference between Western musical notation and earlier systems is that the earlier methods generally relied on an alphabetic approach to notation in which every note available was assigned a specific symbol, just as an alphabetic writing system assigns a specific symbol to each phoneme in a language.  An example of this type of system can be found in ancient Greece, where by the early Common Era, both a vocal and an instrumental set of symbols had developed on this principle.  The system appears to have been fairly well-known, and used in a variety of settings, from the theatre (scraps of papyri give musical notation for lines from plays by Euripides), to religious settings (the Delphic hymns), to funereal memorials, such as the Epitaph of Seikilos, found carved on a tombstone.  In the Greek vocal system, symbols for individual pitches were simply written above or below the corresponding words.  The Jewish Psalter, the hymnal for worship at the Temple in Jerusalem, used a simpler set of diacritical marks to indicate musical cues within the Hebrew text of the Psalms.

While music is often referred to as a “language,” especially by writers and speakers who gush over its emotional properties and spiritual meaning, the truth is that it is not.  There are similarities between music and spoken language, but there are also great differences.  Music alone cannot convey precise meaning.  It cannot give directions or specific instructions.  It cannot describe historical events in unambiguous detail.  It can tell stories, but only in vague ways that often rely on a written or spoken description, or a preexisting understanding of the story being told.

At the same time, language is not music.  Our ear for language is somewhat forgiving, as evidenced by our ability to understand each other despite individual variations in pitch, rhythm and articulation.  It may be difficult to understand a speaker using our native tongue with a strong accent, but we still comprehend the same words.  If a musical composition were changed by the same amount in the same aspects, we might be forced to understand it as a variation on the original piece, if not a completely different piece.

The alphabetical approach to musical notation, then, while a clear borrowing of a wondrous invention, leaves some things to be desired.  The Greek and Hebrew systems did not have any way of showing duration, for the most part.  Duration, in the form of tempo, rhythm and meter, is crucial to musical expression and to the reproduction of a musical idea, which is the purpose of notation in the first place.  When notation using a musical alphabet is added to written language, the result is that a performer must either use both as simple mnemonic devices or attempt to simultaneously read both lines flawlessly.

For written language, the process of fluent reading depends on chunking—the mental grouping of characters into recognizable patterns.[2]  Fluent readers chunk on the word and phrase level, often making assumptions about the identity of a word based on its shape and first and last letters rather than perceiving every symbol.  In music, this sort of chunking might be useful to a certain degree, especially in styles that abound in melodic cliché, but the ability to simultaneously perform this mental operation in two different alphabets would likely be very difficult to develop.  Whether this is a reason that the Greek system never became as ubiquitous as our modern system of notation is debatable.  More likely, in a largely illiterate society, the need for musical notation was simply not all that great.


[1] Over the last 400 years, the Roman Catholic Church has increasingly left behind plainchant, and music derived from it, as the regular mode of worship.  Unaccompanied singing was appropriate in times and places where sustenance and survival were uncertain, but as Europe became more prosperous, the music used in worship became more complex.  The Second Vatican Council of the 1960s introduced the most sweeping changes in Catholic worship by mandating worship in vernacular languages rather than Latin.  With this shift, musical styles in worship also shifted to reflect folk and popular styles.  Plainchant continues to be used in some religious communities and on certain occasions in worship, but its interest is now primarily historical for most listeners.

[2] Another example of chunking in everyday life is the splitting of phone numbers into groups of two, three or four digits, making them easier to remember.  The American convention, for example, uses a three-digit area code, followed by a three-digit exchange, followed by a four-digit line number.  To the switching computers at the telephone company, these groupings are irrelevant, but they are extremely useful for humans who wish to remember a phone number.  Four digits seems to be an upper limit, and children often have to remember the last four digits of a phone number two digits at a time.
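The phone-number chunking described in this footnote can be shown in a few lines of code; the digits are identical either way, but the grouped form is far easier for a human to hold in memory.  This is a toy formatter written for illustration (the function name is an invention), following the American 3-3-4 grouping.

```python
# Chunking a ten-digit American phone number: the switching equipment
# sees only the digit string, but humans remember the grouped form.

def chunk_us(number: str) -> str:
    """Format a 10-digit string into the familiar 3-3-4 American grouping."""
    digits = "".join(c for c in number if c.isdigit())
    if len(digits) != 10:
        raise ValueError("expected exactly ten digits")
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

print(chunk_us("2125551234"))  # (212) 555-1234
```

Musical chunking works the same way: the notes on the page are unchanged, but a reader who perceives them as familiar patterns, scales, arpeggios, cadences, reads far more fluently than one who decodes symbol by symbol.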