
The Contest Post

Wednesday, March 17th, 2010

Anyone who knows me very well knows that for years–probably since I was 9 or 10 and my piano teacher sent me to “Scale Olympics”– I’ve had reservations about musical competitions.  My attitudes have developed and simmered over the years, but I must admit that I remain somewhat wary, especially of the culture of large-ensemble contests that has, for better or worse, become the focus of secondary-level music education in this country.

In the last few weeks, I’ve watched the Music Department here at OPSU get ready to host the Oklahoma Secondary School Activities Association (OSSAA) vocal contest, then traveled to Alva, Oklahoma to judge regional band and solo & ensemble contest at Northwestern Oklahoma State University (NWOSU), and then hosted our own band and solo & ensemble contest here.  Shortly thereafter, two of the adjunct instructors in our department who have full-time jobs as high school band directors took their bands to Texas’ UIL regional contest (and did well, so congratulations to Kevin and Sandy!).

So what, you may ask, is someone with an avowed skepticism of the contest culture doing hosting and judging them?  Simply put, I feel that my participation helps ensure that at least some students experience the benefits I believe contest can offer, and lets me try to see that at least some of the excesses are avoided.  At this time, I think I can do good from the inside.

I waited a couple of weeks before writing this post, partly because I didn’t have time to sit down and write it, and partly because I wanted to reflect on my experiences with contest season (I’m not judging any other contests this year, so my part is finished).  I’m going to start with what, admittedly, is the less-natural stance for me:  what is good about music contests?

First, in my list of pros, I need to say that solo & ensemble contest is a great invention.  It gets students to discover the joys of small-group music-making, requires them to be independent musicians, gives them projects to be achieved (usually) on their own, makes them interact with adults who aren’t their teachers (such as accompanists or judges), helps them build leadership skills and, in general, makes them think about many of the things that their music teacher thinks about for them in a large-ensemble setting.  As a student, I looked forward to solo & ensemble contest every year, and as a teacher, I have frequently required my students to participate.  As a high school band director, I was able to assign all my students to an ensemble and provide them with time to rehearse, while I floated from group to group.  We had a recital just before the deadline for solo & ensemble contest that was their goal for the purposes of the class, and I then left to each group the decision of whether or not to participate in the contest.  This gave students many of the benefits without some of the drawbacks.

Solo & ensemble contest is an important counterpoint to the large-group contests because if it weren’t there, an entire set of skills might not get taught while teachers were busy preparing their bands, choirs and orchestras.  I am an unabashed fan and promoter of solo & ensemble contest, and if the rest of the contest establishment were to disappear I would argue to save this portion.

So what’s good about large-group contest?  I have always been wary of students getting everything they know about any subject from only one person–people are human, and they forget the things they learned in college or simply focus on one pet peeve to the exclusion of other things.  Preparing a program as well as it can be prepared by the students and teacher in question, and then having three experts make comments is a really great check on what the teacher is doing, and may remind a teacher of things he or she had not emphasized.

In a way, large-group contest is like an annual physical for a music program.  If all the components–teacher, administration, students–are in place, a group will probably do well.  If one of those is dysfunctional in some way, it will show in the contest ratings.

Large-group contest gives teachers and students a goal, and a way to gauge their progress.  It provides a life for the ensemble outside of the school, and just as athletic teams have home and away games, contest allows band, choir and orchestra students to test themselves.

In the states where I have been involved with contest (Ohio, Georgia and Oklahoma), as in many others, the music played at contest is to be chosen from a prescribed list compiled by highly experienced experts in the field.  Having to prepare at least one program’s worth of music deemed worthy by experts is good for students in that it gives teachers a strong incentive not to pander to students and audiences by choosing only light, popular music, and to explore more artful styles.

The sight-reading component of large-group contest is perhaps the strongest litmus test for whether a music teacher is really teaching music.  I don’t know what the point of having band, choir or orchestra in a school is if all students can do at the end is remember the great times they had and (hopefully) how good some of that music was.  As my undergraduate advisor, Gerald Doan, used to say, we don’t give students their music at graduation.  They take with them only the skills, physical and mental, that we’ve taught them.  Sight-reading components check to see whether these skills are being taught in some way.

So, what are the drawbacks to contest, then?  For the most part, I will try to present what I feel these are in answer to each of the above points.

A major problem with solo & ensemble is that teachers frequently are unable to allot sufficient time to help their students prepare.  Of course, in well-off schools, or schools where music is taken seriously, this is less of a problem, because students have access to private instruction.  The result is that many students arrive at solo & ensemble contest unprepared or with little musical understanding beyond notes and rhythms.  In Ohio, where scales and rudiments are required at solo & ensemble contest, every year one could hear students in the warm-up room cramming their required scales at the last moment, which was certainly not the intent of that requirement.

Scheduling of solo & ensemble contest is critical:  to have it the same day as large-group contest is less than desirable, but in areas like Western Oklahoma this is the norm.  Here at OPSU, we are one of two logical places to host such a contest in our area, and on the instrumental side, neither contest is large enough on its own to justify paying for judges.  Together, the two contests are economically efficient, and so we had them both on the same day.  The result is that most schools, wanting to disrupt their school day as little as possible and save on transportation costs, bring their students to both contests on the same day at the same location.  The large-group contest inevitably overshadows solo & ensemble in the experience of many teachers and students.

If it is good for a program and a teacher to get comments from outside sources, are there other, less stressful, more realistic ways to get them?  In the 21st century, there are.  It would be a simple thing to send high-quality audio and video to a judge, who could then watch or listen multiple times.  At many schools, it would even be possible to do this in real time, with immediate feedback through VOIP or videoconferencing.  It would be just as simple for a judge to come to various schools for a residency of a few days (maybe even every other year), not only to hear the ensemble perform but also to work with the group in a rehearsal setting and bring the sight-reading music along.  This would be a far more robust educational experience than being herded onto a stage in a strange building, playing to a mostly empty hall and then being herded off.  If we’re going to solicit comments, it should be done in a meaningful way.

And then there is the rating:  the number (because everyone is most worried about the composite score, not its components) that will determine many a teacher’s self-esteem for the next year, until they have a chance to get a new number.  The number that may determine whether that teacher is asked to continue in his or her position for another year. 

If large-group contest is like a physical for a program, does it make sense to only look in the program’s left ear and take its rectal temperature?  And then average those two results?  The form of rating used in most states for large-group contests was once referred to as the “Olympic” rating, because the highest and lowest scores are dropped to determine the overall score for the concert program.  There is a major difference between most school music contests and the Olympics, though.  Namely, in the Olympics, judges are comparing athletes to one another to determine a ranking.  In school music contests, each performance is allegedly judged on its own merit, and first place, second place, etc. are not awarded (with the exception of some marching band contests, which are not my particular area of expertise).  Why are we rating musical groups using a system that has its origins in ranking athletes, with their much more objectively quantifiable performances?
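For the numerically inclined, here is a minimal sketch in Python of the drop-high-drop-low composite described above.  The three-judge panel and the 1-to-5 scale (1 = superior) are assumptions for illustration, not any particular association’s rules.

```python
def olympic_composite(ratings):
    """Composite as described above: drop the numerically highest and lowest
    judge ratings, then average whatever is left.  With a three-judge panel,
    that leaves exactly one rating -- two of the three opinions never reach
    the final number."""
    if len(ratings) < 3:
        raise ValueError("need at least three judge ratings")
    kept = sorted(ratings)[1:-1]  # discard one extreme at each end
    return sum(kept) / len(kept)

# Hypothetical panel on a 1 (superior) to 5 scale:
print(olympic_composite([1, 2, 2]))  # -> 2.0; one judge's superior rating
                                     #    simply disappears from the composite
```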

If one were to compare professional orchestras, the merits of each could be argued endlessly–how does Chicago’s brass compare to Cleveland’s strings or Los Angeles’ innovative programming?  No two ensembles will ever be alike, and this is even more true in the secondary school world, where every teaching situation and every social situation is a little bit different.  To attempt to listen to a 35-piece middle school band playing John Edmondson and an 80-piece high school band playing Percy Grainger and make the same sorts of musical evaluations in both cases is nearly absurd.

Ratings as I know them in the states where I have taught and judged seem completely unreliable, and worse, not at all helpful to the educational experience.  I would argue for one of two rating systems.  The first would be simply to adopt the system used in Alaska in the mid-1990s:  ensembles could receive a rating of “1” or a rating of “comments only.”  This allowed truly excellent work to be recognized while emphasizing the underlying instructional aim of the experience.  An even better alternative would be to bring the “captioning” system widely used in marching band contests into the concert hall, and to make scores more statistically reliable by making each judged component a mathematical part of the final rating.  In this way, teachers and administrators could better evaluate the success of a program by comparing results from year to year, and identify specific areas for improvement.  An administrator would be able to see, for example, whether problems in an underperforming band are instructional (e.g., students aren’t playing rhythms correctly, a possible teacher shortcoming) or systemic (e.g., tone quality is poor, possibly because sufficient budget hasn’t been allotted to maintain and replace instruments).
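As a rough illustration of what a captioned concert rating could look like, here is a short Python sketch.  The caption names, weights, and scores below are entirely hypothetical; a real system would need captions and weights agreed on by the sponsoring association.

```python
# A hypothetical captioned rating: each judged component contributes a
# weighted share of the final score, so year-to-year comparisons can point
# to specific strengths and weaknesses rather than a single opaque number.

# Illustrative captions and weights (assumptions, not any association's rules);
# weights sum to 1.0, and each caption is scored 0-100 by the panel.
WEIGHTS = {
    "tone_quality": 0.25,
    "intonation": 0.20,
    "rhythm_and_pulse": 0.20,
    "technique": 0.15,
    "balance_and_blend": 0.10,
    "musicianship": 0.10,
}

def captioned_composite(caption_scores):
    """Weighted sum of caption scores (0-100 scale assumed)."""
    return sum(WEIGHTS[c] * s for c, s in caption_scores.items())

def biggest_changes(this_year, last_year, n=2):
    """The n captions that moved most since last year -- the kind of detail
    an administrator could actually act on."""
    deltas = {c: this_year[c] - last_year[c] for c in WEIGHTS}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]

last_year = {"tone_quality": 70, "intonation": 68, "rhythm_and_pulse": 80,
             "technique": 75, "balance_and_blend": 72, "musicianship": 70}
this_year = {"tone_quality": 62, "intonation": 66, "rhythm_and_pulse": 84,
             "technique": 76, "balance_and_blend": 71, "musicianship": 73}

print(round(captioned_composite(this_year), 1))  # single comparable number
print(biggest_changes(this_year, last_year))     # tone quality is down 8 points:
                                                 # perhaps an instrument-budget
                                                 # problem rather than a teaching one
```

The point is not the particular numbers but that the composite is traceable: each caption’s contribution to the final rating is visible, and a year-over-year comparison points to the specific area that moved.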

As a composer, I am generally appalled by the repertoire choices made for contest.  The contest format encourages teachers to choose safe, unimaginative, formulaic repertoire that generally lies at the lower end of their students’ technical and musical abilities.  This type of music does not inspire, does not educate beyond the realm of motor skills, and does not truly represent any recognizable historical or contemporary style beyond “contest music.”

In my experience judging and managing contests, a look at the scores of the “marches” that teachers choose for concert band contest is a case in point.  Historically, most marches (of which those by Sousa, Alford and King are outstanding examples) are written to a very specific form, and carry certain rhythmic, metric and harmonic expectations.  It is possible to argue that part of a student’s education in band should be learning to deal with this style of composition.  The “marches” chosen for contest, though, are often marches in name and tempo only, simply compositions in duple time at a fast walking tempo.  The chosen meter is usually 2/4 or 4/4, despite most historical marches being written in “cut time” or 6/8.  Absent are the characteristic form, the expected key change to the subdominant (or any key change at all) and, more importantly, the rhythmic vitality combined with genuine melodic appeal that make pieces like Sousa’s “The Stars and Stripes Forever” an integral part of our American heritage.  Not every band can play Sousa’s work, truthfully, but the chosen pieces are not even way-stations on the road to that level of performance.  They are all too often soulless, styleless, pointless exercises in quarter notes and eighth notes, which then proceed to be played as such.

While the prescribed “list” of compositions for contest can help to ensure that a basic standard of musical quality is in place (or not), it also encourages composers to continue to write and publishers to publish the sort of formulaic drivel described above.  There is good music for young bands, but precious little of it seems to appear at contest.  It is in this arena that a switch to evaluation by clinician, as suggested above, might have a meaningful impact.  Students must learn to make music, not just to play the contest selections (or the selections for their concert at school, for that matter).  My experience over the last two decades is that students are more likely to remember playing good music as well as they are able than to remember playing bad music perfectly.  And some of the music is so bad that it will never sound good.

This post has gone on far too long, but a few words about sight-reading are in order.  After solo & ensemble, this is the next most important type of contest, because it also serves what should be the underlying goal of music contest, and of music education generally:  to create adults who are able to pursue music on their own terms after graduation, either professionally or as amateurs.  Students who can sight-read and play in small groups will be able to do this.  The fact that a sight-reading contest exists at all is a crucial accomplishment, and I haven’t quite determined how the experience could be improved.  Possibly by having a full panel of judges for sight-reading instead of one, as is usually the case.  Possibly by having the same judges hear both sight-reading and concert performances.

Music contests, then, are to me a two-edged sword, with great possible benefits but also the potential to harm the field of music education.  My advice to music teachers and administrators is to have an open and honest conversation about the goals of their music programs, and then decide whether or not music contests, especially large-group contests, really and truly further those goals.