More About Voltage vs Current and Audio Impedance

Voltage can be transmitted with little to no current because electromotive force and magnetic force are directly related. This is how we can send signals through outer space and how we can use magnets in microphones and guitar pickups to transmit sound electrically. So, voltage is important for these kinds of input transducers.

Because voltage can be created with little to no current, high voltage is preferred for sending electrical power (V * I) over long distances: the same power can travel at high voltage and low current, which wastes far less energy heating the wires. The high voltage in power lines is then “stepped down” to 120 V near its destination, trading voltage for the current that appliances demand.
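To see why, here is a small Python sketch with made-up numbers (10 kW delivered through a line with 0.5 Ω of wire resistance; both figures are illustrative) comparing the power wasted as heat at two transmission voltages:

```python
def line_loss(power_w, voltage_v, wire_resistance_ohm):
    """Power lost heating the wire: P_loss = I^2 * R, where I = P / V."""
    current_a = power_w / voltage_v
    return current_a ** 2 * wire_resistance_ohm

# Same 10 kW of delivered power, two transmission voltages:
low = line_loss(10_000, 120, 0.5)      # ~3472 W wasted at 120 V
high = line_loss(10_000, 12_000, 0.5)  # ~0.35 W wasted at 12,000 V
```

Raising the voltage 100x cuts the current 100x and the resistive loss by 100² = 10,000x, which is why transmission happens at high voltage and the step-down to 120 V waits until the power is near its destination.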

However, because voltage creates an accompanying magnetic field, if we transmit our audio signals at high voltage, they will interact more strongly with nearby magnetic fields along the way (e.g., other high voltage lines or devices). That is, our message won’t make it through in perfect shape.

Resistance decreases current, and when the current drawn by a device is constant, resistance makes voltage increase (like a thumb partially blocking the end of a water hose). Impedance is essentially resistance as it applies to AC circuits like those carrying sound waves. So, high-impedance devices like electric guitar pickups and cheap consumer microphones send their signals with higher voltage and lower current than professional audio equipment. Their higher voltage means less power is needed for amplification, but the higher-voltage signal is more susceptible to external electromagnetic interference. High voltage (high impedance) may be preferred when power is the main concern, but not when the quality of the signal (the message) is important.

So, it is preferred to send audio signals at low impedance, meaning high current and low voltage. Current is manipulated throughout a signal path in order to scale the voltage as needed to preserve the integrity of the signal represented by the voltage.

Device inputs are designed to have an impedance about ten times as high as the output impedance of the source feeding them, sacrificing current in order to deliver as much of the signal’s voltage as possible inside the device. This is like “zooming in” on the signal so the audio equipment can process it with greater precision.
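That guideline can be sketched as a voltage divider; the 150 Ω and 1.5 kΩ figures below are illustrative, not specs for any particular gear:

```python
def voltage_at_input(source_v, output_z_ohm, input_z_ohm):
    """Fraction of the source's voltage that appears across the device input.
    The source's output impedance and the device's input impedance form a
    voltage divider: V_in = V_source * Z_in / (Z_out + Z_in)."""
    return source_v * input_z_ohm / (output_z_ohm + input_z_ohm)

# A 1 V signal from a source with 150-ohm output impedance:
matched = voltage_at_input(1.0, 150, 150)    # equal impedances: only 0.5 V arrives
bridged = voltage_at_input(1.0, 150, 1500)   # ~10x input impedance: ~0.91 V arrives
```

The higher the input impedance relative to the source, the more of the signal voltage survives the connection, which is the “zooming in” described above.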

As for actuators, transducers that convert electricity into sound, light, heat, movement, etc., it makes sense that power is needed: power in watts = voltage * current. So, they won’t work on little-to-zero current even if we can make high voltages as described above. Every device expects to draw a certain amount of current from a circuit. If the current is constant, specified by the device, then voltage scales the resulting power. Voltage is the answer to the question, “How much?” in terms of loudness, brightness, speed, etc.
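A tiny arithmetic sketch of that last point, using a hypothetical actuator that always draws 2 A (both numbers are made up):

```python
def power_w(voltage_v, current_a):
    """Electrical power in watts: P = V * I."""
    return voltage_v * current_a

# With the device's current draw fixed, voltage alone scales the power
# (and with it the loudness, brightness, or speed the actuator produces):
quiet = power_w(5, 2)   # 10 W
loud = power_w(20, 2)   # 40 W: 4x the voltage answers "how much?" with 4x the power
```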

In short, you could say that voltage is the message, and current is the muscle.

Wishart, Tongues of Fire—Notes on Form

Time

  • 0:00 Introduction of motive
    • Motive triggers a series of gestures
    • Silence
  • 1:00 Motive reappears
    • Motive triggers a series of gestures
    • Silence
  • 1:45 “New” material: middle of motive developed
    • Series of gestures on varied material
  • 4:00 Stable moment with new texture & material
  • 4:30 Stability breaks up and the material is developed
  • 7:30 Material from 4:00 reappears briefly
  • 9:00 Stuttered figure enters subtly, ends up in foreground
    • Silence
  • 10:05 First material restated
    • Material triggers a new series of gestures
  • 11:50 Pitched material brought back from the beginning
  • 15:00 Granular texture rises to be accented by another pitched hit
  • 16:10 Sounds like a recap of an earlier moment, but more stuttered; pitched hits go up and down
  • 16:50 Pitched version of voices appears briefly
  • 18:20 Pitched voice motive reappears alongside other gestures
  • 21:00 Slow fade out
  • 22:00 Single sound pulses, slowing down
  • 22:30 Slow down to brief silence
    • Numerous isolated motives from earlier in the piece reappear
  • 23:20 Motives build and accelerate
  • 23:50 Original motive reappears
  • 24:10 Final gesture

Rhythmic Compression and Gating

This tutorial by Blue Rooms shows how to make a pad pulse to the beat using a compressor with a bass drum as its sidechain input. This technique is pervasive in dance music and can be disorienting, hypnotic, or nauseating depending on how you use it. At the end of the video he shows how you can use a gate instead of a compressor to carve rhythms out of a sustained pad.
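As a rough illustration of the idea (a minimal pure-Python sketch, not the exact processing in the video; the threshold, depth, and release values are arbitrary):

```python
import math

def sidechain_duck(pad, kick, sample_rate, threshold=0.1, depth=0.8, release_s=0.1):
    """Duck `pad` while the amplitude envelope of `kick` is above `threshold`.
    Both inputs are equal-length lists of samples in the range [-1, 1]."""
    coef = math.exp(-1.0 / (release_s * sample_rate))  # one-pole release smoothing
    env = 0.0
    out = []
    for p, k in zip(pad, kick):
        rect = abs(k)
        env = rect if rect > env else env * coef  # instant attack, gradual release
        gain = 1.0 - depth if env > threshold else 1.0
        out.append(p * gain)
    return out
```

Each kick hit pulls the pad down by `depth`, and the release time controls how quickly it swells back, producing the pumping effect. Swapping the gain rule for `gain = 1.0 if env > threshold else 0.0` turns the ducking into gating: the pad sounds only while the kick does, carving a rhythm out of the sustained pad as at the end of the video.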

Notes from 11/26/2013

I mentioned some of Frank Zappa’s warped collages of recorded voice and instrumental passages. The piece I specifically mentioned was “Are You Hung Up?” from We’re Only In It for the Money (1968). In the context of the discussion, though, I was really thinking of so many of his recordings that are obscured because they’re awkwardly-miked live recordings, e.g. “The Old Curiosity Shoppe” on Finer Moments (1972/2012). The latter half of “Flower Punk” from We’re Only In It for the Money is an example of the extensive static-but-agitated moments he builds. Lou Reed’s Metal Machine Music (1975) lends itself better to moment listening as I was discussing it.

I also mentioned Bill Frisell for his reputation for placid textures and his recent use of loopers to build them. Here are a couple of videos of him demonstrating some of his techniques:

Example of music and narrative sounds in layers

“Years” (2012) by Alesso and Matthew Koma was brought up as an example of effectively using layers of narrative sounds on top of an already fully-featured musical passage with a strong structure of its own. In your work, you’ll of course need to focus on listenability over danceability (i.e., music for ears and attention, not the dance floor), but this video includes several useful moments to serve as “case studies,” including multiple layers of independent musical passages laid over each other.

Examples of meter, pitch, and quotes used expressively

Around this time of the semester, many first-semester students start getting itchy to make what they can’t help but refer to as “normal”/“real”/“good” music (they usually mean “familiar” or “comfortable”). When we started the class, I explained that we’re temporarily setting aside:
  1. Non-musical things like intelligible words, recognizable sounds, and danceability (things that speak to other parts of our brains/bodies than the purely musical parts and distract from the pure musicality) and
  2. Structures-become-crutches like time signatures and key signatures (things that should be descriptive, used to analyze music, but end up as prescriptive, sterile formulas for making new music)

This is so we can rediscover musicality in its most raw form and return to familiar music with new ears. We’re not here for you to learn how to make better hamburger/driving/doing laundry music—that ability will naturally improve once you’ve focused on the pure musicality of your work.

We end up working with a narrow slice of even electroacoustic art music. The simplest assumption by students is that melodies and beats aren’t allowed at all, but that’s not so. They’re just risky because they easily allow us to rely on non-musical things instead of building our musicality.

Reflect a moment on the pieces we’ve studied so far:
  • The contour of melodic motives and counterpoint built from quasi-pitched voices in Yuasa, Projection Esemplastic For White Noise (1964) are key points in the piece.
  • The repeating and then rising pitch patterns of the “orchestra hit”-like sounds in Wishart, Tongues of Fire (1993) have a significant role in moving the piece forward, as well as the accelerating pulses.
  • Westerkamp, Cricket Voice (1987) uses some almost-recognizable sounds; frankly, I think that distraction makes it harder to analyze its musical elements even though it is usually considered the most accessible of the pieces from the analysis projects.
  • Deck of Cards (2012) builds tension over time by stacking up chords (given pitches by resonant filters).
  • Hit the Deck (2012) uses three pitches and a stable metric rhythm at times to evoke a sense of confident, energetic focus, in contrast with the mayhem depicted when we are knocked out of those moments of focus.
Now is the point where most students are just becoming mature enough musically to start using melodies and beats in expressive, dynamic ways, but I need you to see that it’s not an “all or nothing” situation. You can organize events in line with a recurring pulse and use various forms of accent to suggest a hierarchy among the pulses (e.g., downbeats, backbeats, etc.), but you don’t have to set it and leave it like striped wallpaper (there’s nothing wrong with striped wallpaper, but you’d have a hard time passing it off as your own painting). The clarity of the pulses and hierarchy can come in and out of focus, become more stable or less stable, change and lead to new moments or stay the same and cultivate moments of stasis. That hierarchy can shift, as can the number of beats, the tempo, and the time scale at which we hear the “main beat” (“is it a fast 1 2 3 4 | 1 2 3 4 or a medium 1 – 2 – 3 – 4 – ?”). And when you build those structures (just enough to suggest the pattern) and then play with them, you can use them to build and resolve tension, move the music forward, and make them an active, compelling contribution to the music.

Peter Klingelhofer’s composition “Antarctica” appears to have a steady pulse when it’s noticeable at all, but there are also several free-floating moments. Accent patterns on the pulses suggest hierarchies at times, but they are fleeting; they allow foot tapping at times but elude counting “1-2-3-4” for very long before the pattern gets overturned. At times it sounds like multiple layers of time are overlapping, each with its own pulse and metric hierarchy, strong within itself but different from the others (like the moments with the truncated female voice and the reversed male voice). It keeps our pattern-recognition machines (our brains!) thinking. These techniques use time in musically expressive ways, not just as wallpaper.

Peter Klingelhofer, Antarctica (2013)—DO NOT DISTRIBUTE (Posted here with permission of the composer)

Here is an effective example of an even more tricky issue: quoting tonal music. (Remember that the intellectual property rule of the class basically limits us to public domain works). Check out this past MUSC 316 final project that is based on Chopin’s Nocturne No. 20 but clearly has its own identity, de/reconstructing the nocturne and making something new.

Aaron Loveall, Familiar Through the Haze (2009)—DO NOT DISTRIBUTE (Posted here with permission of the composer)

So,

Yes, you can do that. You know what’s important about our focus in the class now. I’ve been reminding you since the first day. It’s not black and white: welcome to grey. Don’t let old habits lead you astray from growing musically.