Saturday, December 31, 2011

General Hand Soldering Tips

               By Norman Varney

This blog is for the few DIYers out there. Hand soldering seems to be becoming a bit of a lost art as technology progresses. Hopefully you'll learn a tip or two and share them with others. These are soldering tips I grabbed from training I did for MIT line workers who were building cable networks, terminating interconnects and speaker cables, etc. It does not cover all soldering techniques for through-hole mount, surface-mount technology, turret and pierced-tab terminals, desoldering or repairs.

Soldering Environment
  • Clean, oil free hands, dust and clutter free work area
  • Good lighting without casting shadows on work
  • Stand-mounted magnifying glass for small work and inspection
  • "Helping hands" to free both hands and hold work at comfortable position
  • Chair and table heights conducive for allowing arms and/or hands to rest on table for remaining steady while soldering and to keep the soldering iron cord away from the work

Trimming the Wire
Use a mechanical stripper with appropriate gauge die holes. This type of stripper will not cause nicks or scrapes to the conductor. Blades should be replaced as necessary. Be sure that individual conductors are uniform in appearance of twist and grouping with no stray or frayed wires, no "birdcaging", nicks, etc.

Cleaning All Components
  • All leads should be cleaned using an alcohol based, carcinogen free cleaner prior to being soldered.
  • Soldering iron tip should be bright silver and free of flux residue and solder. Any buildup of oxide is removed by wiping the tip on a damp sponge ("shocking") prior to applying it to the soldering area.

Flux
Use RA (activated rosin) flux. Flux prevents oxidation and removes the thin oxide layer and the atmospheric gas layer. It permits solder to flow, or wet, smoothly and evenly. It also improves the flow of heat, resulting in faster heating times.

The soldering iron tip mass should be approximately the same size as the conductor being soldered. Heat should be high enough that the work is completed from start to finish within 1-5 seconds. Overheated joints look lumpy, dull, crystalline and/or grainy. Heat sinks (tweezer type) can be used below the soldering location to prevent overheating damage to electronic components.

Solder Iron Wire Tinning
Hold the wire in a downward position with the solder placed underneath the wire and at the center point of the stripped portion of the wire. The solder iron is then applied at the same point and when the solder melts, contact is made with the wire by the iron tip. Both the solder and the iron are moved upwards towards the insulation, and then down and off the end in a continuous motion. The solder should stop one conductor width away from the insulation. The wire is then cleaned of excess oxide and flux. There should be no solder underneath the insulation, as this would degrade the conductivity of the metal.

Solder Pot Wire Tinning
Time in the bath and/or temperature of the bath will need to be adjusted for different gauge wires, even for a difference as small as 18 ga. to 20 ga. In every case, liquid flux is applied to the stripped wire (allowing the molten solder to strip the insulation contaminates the lead as a conductor). The fluxed wire is inserted into the molten solder to the point where the insulation is about one conductor width away. It is then moved over to one side and removed. The wire should be cleaned to remove any flux so that it doesn't become a problem during termination.

Soldering Procedure
  1. Clean lead wire and pad to be soldered.
  2. Apply liquid flux to lead wire and pad if necessary.
  3. Clean iron tip by wiping off excess residue on a lint free cloth.
  4. "Shock" the iron tip by touching it to a damp sponge.
  5. Place proper amount of solder in contact with the lead wire and the pad to be soldered.
  6. Place the iron on the solder without exerting pressure.
  7. Repeat or move along to the other side covering all exposed copper.
  8. The work must remain motionless during the "solidifying" state of cooling.
  9. Clean off any excess flux or burned carbon.
  10. Visually inspect.
  11. Check electrical continuity.
  12. Leave a thin coating of solder on the iron tip when not in use.

Appearance of Good Connections
  1. Smooth 
  2. Bright
  3. Shiny
  4. Clean
  5. Concave solder fillet with 0-20 degrees of slope
  6. Good wetting
  7. All of the wire is covered
  8. Contours are visible

Unreliable Solder Joints
  1. Overheating (de-wetting): lumps; dull; crystalline; looks like sand has been thrown into it.
  2. Cold (poor wetting): balled up; dull gray.
  3. Fractured (poor wetting): solder has stretch marks between the pad and the lead.
  4. Non-wetting: solder is balled up around the joint.
  5. Excessive solder: lead conductor is not visible and the shape of the solder is convex.
  6. Insufficient solder: hole is not covered; copper is not sealed.
  7. De-wetting (usually excessive heating): solder balls up.
  8. Pinholes or voids: causes can be dust, dirt, flux gas, improper heat, etc.
  9. Lumps and large holes: improper pre-solder cleaning, out-gassing from flux gas.
  10. Damaged wire insulation: excess heat and/or wicking of the solder under the wire insulation.

Gold Pin/Cup Soldering
Gold leaches into the tin portion of the solder (within days rather than years), leaving tiny pores at the connection point and resulting in an unreliable, brittle solder joint. The gold contamination must be removed by pre-tinning the gold cup.

Trimming the Wire
  1. Establish the depth of the cup by placing the wire inside and mark it at the top of the cup.
  2. Remove and trim the insulation with a clearance of at least two wire diameters above the mark.
  3. Insert flux-cored solder wire (slightly smaller in diameter than the cup) into the cup and cut it flush with the top of the cup.

Tinning the Pin/Cup
  1. Apply clean solder tip to the back of the cup and below the top edge. Wait until the flux has bubbled to the surface.
  2. Apply some liquid flux to a solder wick and insert it into the molten solder.
  3. Remove all solder from the cup.
  4. Clean all traces of flux with cleaner.
  5. Insert solder into cup and cut it flush.

Installing the Wire
  1. Apply clean solder tip to the back of the cup and below the top edge. When solder melts, allow all of the flux and flux gas to surface. 
  2. Insert the pre-tinned wire slowly to allow the tinning on the wire and the solder in the cup to mix together. Make sure that the wire goes down to the bottom of the cup. 
  3. Move the wire to the back of the cup, then forward, then back and hold.
  4. Remove the iron and, while holding the wire still, allow the joint to cool.
  5. Inspection: no solder on the back of the cup, nice solder fillet is formed.

                                      The author tinning the wire ends of a speaker cross-over network

Tuesday, October 18, 2011

Coordinating Construction Trades to Optimize Noise Control

(Why small oversights can destroy noise control performance)
By Harry Alter & Norman Varney

 Let's say your dream listening space has been designed. Great care has been taken in the design to ensure that dynamic range and low-level details are not masked by noise, nor cause distraction. Noises from outside traffic, electrical, HVAC, footfalls, motors, plumbing, etc. have all been considered. You've spared no expense to make sure your critical listening environment is designed for the most realistic experience. Ceiling, walls, doors, and floor systems have been designed to achieve a minimum STC (Sound Transmission Class) performance rating of 60 and IIC (Impact Isolation Class) of 55. HVAC ducts were designed to make sure the ambient noise levels within the space are quiet to an NC (Noise Criteria) rating of 20. You've crossed your T's and dotted your I's; there's nothing left to do…right? Wrong. Because what happens in the field will determine whether all your hard design work will pay off.

  Planning to reduce potential field problems before they happen is key to your success. So what more do we plan for? First, plan for flanking noise and second, plan for the Trades in the field to work as they usually work. That means that if we want the various Trades in the field to work together and do what needs to be done, we need to provide clear and concise information and instructions (both visually and verbally) to help them through what will probably be a new and atypical construction process for them. We need everybody to understand the goal and how critical each trade is to achieving that goal.   
Here are a few basic examples of how field conditions and flanking noise can affect a room’s final noise performance level:
  1. Poor caulking around the frame of an acoustical door or along gaps at the floor plate can cut a high STC door or wall system in half.
  2. If a high STC wall assembly is built over an OSB or plywood subfloor, the performance level of the wall will (as a result of flanking noise) typically never achieve a STC rating above 50.
  3. The improper installation of plumbing lines through wall studs and ceiling joists can take the room's ambient noise level from a quiet NC 20 to a distracting NC 40.
  4. The direction of floor joists relative to a partition wall beneath it will affect the amount of airborne and impact noise flanking through the floor assembly to adjacent rooms.
  5. Lined duct runs alone don’t assure a quiet listening environment. Typical HVAC installations can be very noisy even with lined duct runs.
  6. Using heavy floor toppings such as Gypsum-Concrete can dramatically increase the STC (airborne) performance of a floor system, but can dramatically decrease the IIC (impact; footfall) performance when not properly addressed for noise control.
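The "poor caulking can cut a high-STC wall in half" failure in item 1 can be sketched numerically with the standard composite transmission loss formula. This is a minimal illustration with hypothetical areas and ratings; TL in dB is used here as a stand-in for the single-number STC rating (real STC curve fitting is more involved):

```python
import math

def composite_tl(elements):
    """Composite transmission loss (dB) of a partition built from parts.

    elements: list of (area_m2, tl_db) tuples. An unsealed gap or crack
    is modeled as an element with a TL of roughly 0 dB.
    """
    total_area = sum(a for a, _ in elements)
    # Convert each element's TL to a transmission coefficient, area-average,
    # then convert the averaged coefficient back to dB.
    tau = sum(a * 10 ** (-tl / 10) for a, tl in elements) / total_area
    return -10 * math.log10(tau)

# Hypothetical 12 m^2 wall rated 60 dB, with a 10 cm^2 (0.001 m^2)
# unsealed gap left by poor caulking:
sealed = composite_tl([(12.0, 60.0)])
leaky = composite_tl([(11.999, 60.0), (0.001, 0.0)])
print(round(sealed, 1), round(leaky, 1))  # 60.0 40.7
```

Even a 10 cm² unsealed gap drags a nominal 60 dB wall down to roughly 41 dB, which is why careful caulking and sealing matter so much in the field.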

  And the list goes on. Remember, a little attention to flanking noise and the Trades can go a long way toward solving potential risks in the field. A/V RoomService is committed to making that journey for both you and/or your customer an easy one. Contact us at 740-924-9321 or at
A few of the visible construction designs offered by A/V RoomService, Ltd.

Friday, September 30, 2011

The Importance of Speaker/Listener Locations

By Norman Varney

The cornerstone for high fidelity playback is positioning the speakers and listener at the optimal locations in the room. The idea is to avoid as much room boundary interference as possible, while providing an accurate soundstage. In very basic terms, let's find out why this is so important to the end result.

Room walls, floors and ceilings react to sound energy with reflections and resonances that arise from their construction surfaces and cavities. These interferences compete with, and distort, the direct signal sent by the loudspeakers. As speakers are located further away from boundaries, less energy is available to move room surfaces. Likewise, as listeners are distanced further away from boundaries, less energy from room surface resonances and reflections reaches them. This mitigation of non-original signal information means improved low-level resolution, dynamic range, spatial cues and timbre accuracy.

There are three types of boundary interferences:

1. Cavity resonances. Try stomping on a wood floor and pounding on a framed wall and listen for them to sound like a drum. This adiabatic compression of a low frequency note is dictated by the mass-air-mass construction of the partition itself. When a loudspeaker plays the frequency in question, the partition will move sympathetically, resulting in that note being returned to the listener from the room surface, after the original event. 

2. Room resonances. Like any kind of enclosed space or musical instrument, a room has resonances defined by its dimensions, mass, compliance and friction. Each axis (length, width and height) has its own lowest resonant frequency, the one whose half wavelength just fits the dimension. Resonances, or room modes, are "standing waves". They are formed when the distance is a multiple of one-half the wavelength. When this occurs, the resonant frequency (and its harmonics) will sound louder than normal in some locations, and quieter in others. Think of the waveform with its pressure peaks and valleys traversing from one surface to the opposite parallel surface, and then reflecting back into the oncoming waves. As they collide, peaks from one surface run into valleys from the other, resulting in a cancellation of energy. On the other hand, some peaks will run into other peaks, causing an increase in energy level.

3. Reflections. Obviously, if we position ourselves and/or speakers near a large surface, we will hear the effects of sound energy being reflected to our ears later in time than the direct signal. The distance between the loudspeaker, the surface, and our ears will determine how much interference will be perceived. Basically, if the reflection is within about 15 dB SPL of the direct signal, it will be audible. In addition, the construction of the reflecting surface will determine to what extent, and at what frequencies, sound is absorbed and reflected by it.
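The reflection audibility point above can be made concrete by comparing path lengths. A minimal sketch with hypothetical distances; the 343 m/s speed of sound and the inverse-distance spreading loss are standard physics, and the 15 dB audibility figure is the one quoted above:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def reflection_delay_ms(direct_m, reflected_m):
    """Extra arrival time (ms) of a reflection relative to the direct sound."""
    return (reflected_m - direct_m) / SPEED_OF_SOUND * 1000.0

# Hypothetical layout: listener 3.0 m from the speaker, with a sidewall
# bounce path (speaker -> wall -> listener) of 4.2 m.
delay = reflection_delay_ms(3.0, 4.2)
# Spreading loss alone on the longer path, relative to the direct sound:
level_drop_db = 20 * math.log10(4.2 / 3.0)
print(round(delay, 2), round(level_drop_db, 1))  # 3.5 2.9
```

The reflection arrives about 3.5 ms late and, from distance alone, is only about 3 dB down, well within the roughly 15 dB window, so without absorption at the reflection point it will be clearly audible.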

In rooms of rectangular shape (preferred), simple math will predict what frequencies will resonate. It is correct to think that certain dimensions will offer better results than others. For example, rooms with dimensions divisible by each other will tend to exaggerate those resonant frequencies because they are similar in musical relationship. Once you determine the fundamental resonant frequencies, you can figure out where the peaks and valleys are located in the room along each axis. It is important to figure out the second and third order resonant frequencies for each axis as well, because their energy levels are also likely audible. With this information you can avoid placing your speakers and listener in locations that will exacerbate the room's unique modes, and offer the most linear bass response.
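The "simple math" for a rectangular room's axial modes is f_n = n·c/(2L), where c is the speed of sound and L the axis dimension. A minimal sketch with a hypothetical 5 m axis, covering the first through third order frequencies mentioned above:

```python
def axial_modes(length_m, speed=343.0, orders=3):
    """First few axial (one-dimensional) mode frequencies for one room axis.

    f_n = n * c / (2 * L): the fundamental is the frequency whose half
    wavelength fits the dimension; higher orders are its multiples.
    """
    return [n * speed / (2 * length_m) for n in range(1, orders + 1)]

# Hypothetical 5.0 m room length:
print([round(f, 1) for f in axial_modes(5.0)])  # [34.3, 68.6, 102.9]
```

Run this for each of the three axes; dimensions that share modal frequencies (e.g. a 5 m length with a 2.5 m height) stack energy at the same frequencies, which is exactly the divisible-dimension problem described above.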

Positioning for Room Modes
All rooms have room modes. Larger rooms have more of them, which means there is less of a gap between one and the next; a good thing. Fewer modes means each one draws more attention to itself. Because all modes start and end at the boundaries with high pressure peaks, you have lots of bass there: roughly 3 dB more bass at a single surface boundary, 6 dB where two surfaces meet in a corner, and up to 12 dB in a tri-corner. People can use this for passive acoustical bass gain, but at the sacrifice of accurate, linear bass response. The same applies to listener locations.

Ideally, you want to avoid placing a speaker or listener in a mode peak. Doing either will result in certain frequencies being discernibly louder than all the others. Though it's best to place speakers and listener between these primary room modes, you must always compromise. With speakers, avoid the peaks in favor of the valleys. With the listener, avoid both, with one exception. It is very important to place the speaker/listener footprint exactly between the side walls to allow for symmetry in the horizontal plane. Without this established, the timing, energy levels and frequency response will be different for the left ear than for the right. As you can imagine, this means that you will be sitting in a spot that is a null for the first order resonant frequency of the width mode. This position is also a peak for the second and a null for the third width modes. This is a compromise that must be taken. It will suffer the fewest anomalies; only in the low frequency range and only at certain instances. Any other position will compromise all time arrivals, all energy levels and all frequencies, all of the time. (See Symmetrical vs. Non-symmetrical Layouts)
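The centered-listener compromise above follows directly from the pressure shape of an axial standing wave, which varies as cos(nπx/L) across the room. A small sketch with a hypothetical 4 m width shows the room center is a null for the first and third width modes and a peak for the second, just as stated:

```python
import math

def mode_pressure(x, room_dim, order):
    """Relative pressure magnitude of an axial standing wave at position x.

    Pressure follows |cos(n * pi * x / L)|: maxima at the walls, with
    nulls and peaks alternating across the room.
    """
    return abs(math.cos(order * math.pi * x / room_dim))

L = 4.0          # hypothetical room width, m
center = L / 2   # exactly between the side walls
for n in (1, 2, 3):
    print(n, round(mode_pressure(center, L, n), 2))
# 1 0.0  (null), 2 1.0  (peak), 3 0.0  (null)
```

Evaluating the same function at other positions maps out the peaks and valleys along the axis, which is how the mode maps mentioned above are built.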

Positioning for Soundstage
By soundstage, I mean the accuracy in sound representation of the recorded space for width, depth and even height. Once mapping of the room modes is complete, either by modeling or with test instruments, the soundstage must be considered. The relationship of separation between the two speakers and the listener must be precise.  If the speakers are much closer to each other than the distance between them and the listener, there will be a small, narrow soundstage and sound will appear to originate from the speakers. On the other hand, if the speakers are too far apart, you'll have a hole in the middle of the soundstage and again, the sound will seem to come from the speakers. When the speaker/listener positions are correct, the soundstage will become three dimensionally large and solid, well beyond the speaker's edge. There will be a sense of true sound development beyond where the speakers reside and the recorded space will be realized. 
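One way to reason about the separation-versus-distance balance described above is the angle the speaker pair subtends at the listening position. This is only an illustrative geometry helper with hypothetical distances, not a rule from the article:

```python
import math

def subtended_angle_deg(separation_m, distance_m):
    """Angle (degrees) the speaker pair subtends at the listening position.

    separation_m: distance between the two speakers.
    distance_m: listener's distance back from the line joining the speakers.
    """
    return math.degrees(2 * math.atan((separation_m / 2) / distance_m))

# Hypothetical layout: speakers 2.5 m apart, listener 2.5 m back.
print(round(subtended_angle_deg(2.5, 2.5), 1))  # 53.1
```

A markedly smaller angle corresponds to the narrow, speaker-bound soundstage described above; a much larger one to the hole-in-the-middle case. The sweet range is speaker- and room-specific and is found by listening.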

Fine tuning the soundstage is beyond the scope of this article. I will mention that there are ways to precisely adjust the toe-in of the speaker angle using the ears and laser alignment tools. You can also adjust for personal preference of soundstage perspective, meaning whether you prefer an intimate, front row perspective, or one more laid-back, from, say, row T. Note that toe-in not only controls balance, spaciousness, focus and intimacy, but also tonal brightness. It is speaker/room specific, due to the unique interactions of the speaker's energy dispersion pattern and the make-up of the room.

The drawing above illustrates how positioning the speaker/listener footprint off center wreaks havoc on all signals, all of the time. The point that should be understood is how important it is to keep things symmetrical, especially in the horizontal plane. Construction, and even furnishings, can impact how sound energy is absorbed, reflected and diffused.

Optimal speaker/listener location within the room is paramount to high fidelity playback. Keeping the speakers and listener footprint centered between side walls, away from boundaries, and room modes is the first priority in setting up a sound system. I would prioritize stereo separation as second, toe-in as third, and symmetry of furnishings in the horizontal plane as fourth. Without optimizing this footprint for the specific room, the full potential of the recorded experience cannot be realized. Avoiding room modes and optimizing soundstage go hand in hand. They are the foundation for optimal bass response, dynamic range & low-level detail, and accurate tonality & imaging. Getting this right is the most important aspect of the system. Regardless of the quality of the equipment, the quality of the sound will depend on how well the speaker/listener locations are set up in the room. A/V RoomService offers both modeling and onsite testing (voicing) services. Visit for more information.

Friday, September 16, 2011

General Electrical Maintenance of A/V Equipment

By Norman Varney

It shouldn't come as any surprise that audio/video equipment needs a little TLC in order to perform best. Happily, the TLC required costs very little in time, and next to nothing in dollars. As with most improvements in the A/V chain, start at the source and work your way down. Assuming there are no weak links in the chain, an improved signal at the front means a more accurate signal at the end. This is typically the hierarchy for any recording or playback system. With this in mind, we'll outline a full electrical maintenance practice for routine use. By routine use, I mean once every six months for climate controlled environments. However, in environments which may be exposed to ocean air, high dust content, extreme temperatures and/or humidity, you'll want to schedule this more often, say at least every three months, or after a conditional event.

If you have not done anything like this before, you will be pleasantly surprised at the audible improvements in dynamic response, resolution, speed and authority, soundstage size, image dimensions, reduced noise, etc. As always, what is presented here is to help the end user to optimally experience what the artists intended by delivering the most undistorted signal possible.

Note: The following procedures should be performed with the power off and dissipated.
  1. AC. People generally think of power sources as simply a 50 or 60 Hz signal feed for component power supply capacitors. What they don't realize is that the capacitors used to record or play back musical events must be replenished in a nonlinear fashion due to the transient characteristics of music. This may mean pulling bursts of current off the highest and lowest peaks of the 50 or 60 Hz sinewave, within milliseconds. During this process, full wave bridge rectifiers and digital switching supplies can introduce significant noise to the line, up to the 50th harmonic. Ideally, the power supply must be unrestricted if it is to deliver continuous and instantaneous current to the electronics. However, there are plenty of interfaces in the path between components which impede, restrict and slow down the current's transfer flow. When this happens, loss in dynamic response, resolution and cleanliness of the sound occurs.
    1. Service Panel. (For a qualified electrician)
      1. *Clean any visible corrosion or carbon deposits seen on the grounding rod junction, buss bars, breakers (replace if old or if they have tripped several times), or outlets.
      2. **Gas tighten all connections to the grounding rod junction, buss bars, breakers and outlets.
    2. Power Cords. *Clean and tighten both male and female terminations, as well as the receptacles in the chassis of the electronic equipment itself (with power dissipated). 
  2. Source & Accessory Components
    1. Faders, knobs and switches should be turned back and forth weekly to wipe away oxidation and sulfide buildup. Use a contact cleaner when needed.
    2. Vacuum all heat sinks, vents and electrical components inside and outside of the chassis very carefully using a soft brush
    3. *Clean all electrical chassis contact surfaces (pins and/or receptacles) for interconnects
    4. *Clean tube pins and receptacles (and replace tubes sooner than the manufacturer suggests)
    5. *Clean power contacts, including fuses and fuse holders (with power dissipated)
    6. *Clean cartridge pins and leads
    7. Tighten all electrical connections
  3. Interconnects.
    1. *Clean all electrical contact surfaces (pins and/or receptacles) of component interconnects.
    2. Tighten any possible electrical connections on the cable itself.
    Note on unbalanced interconnects like RCA and phone plugs: twist the connector to the right when disconnecting and reconnecting.
  4. Amplifiers.
    1. Vacuum all heat sinks, vents and electrical components inside and outside of the chassis very carefully using a soft brush.
    2. *Clean all electrical chassis input pins and/or receptacles.
    3. *Clean tube pins and receptacles (and replace tubes sooner than the manufacturer suggests).
    4. *Clean all speaker output terminals.
    5. *Clean power contacts, including fuses and fuse holders (with power dissipated).
    6. Tighten any possible electrical connections.
  5. Speaker Cable.
    1. *Clean all terminations.
    2. Tighten any possible electrical connections on the cable itself.
    Note on speaker cable connections: A spade termination connected to a binding post will offer the most contact surface area. It will also allow you to make a tight, if not gas-tight, connection. If you have any type of connection that allows you to "screw it down", tighten it as far as you can using just your fingers, and then use a wrench or pliers to cinch it down another quarter turn (careful not to break cheap connectors!).
  6. Speakers.
    1. *Clean all terminal contacts.
    2. *Clean any power contacts, including fuses and fuse holders (with power dissipated).
    3. Tighten any possible electrical connections on the binding posts.
    4. Tighten all driver units by cinching down until seated. Do so incrementally and in a star pattern so that each driver seats concentrically. Do not over-tighten.

In summary, good signal integrity means the signal flows unimpeded throughout the chain. The three ingredients for this recipe are quality materials; large, smooth, clean contact surface areas; and tight connections. Even in controlled climate environments, connections settle, are moved, are vibrated and resonate, which can cause breaks in the connections, allowing air contaminants and oxidation to occur and restricting current flow, resulting in signal losses, alterations and noise introduced to the original signal. Taking the preventative measures described above will help you achieve better performance from your A/V system, and a more accurate, more enjoyable experience.
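The AC discussion above mentions rectifier and switching noise appearing as line harmonics. Total harmonic distortion (THD) summarizes those harmonic amplitudes as a single figure; a minimal sketch with hypothetical voltages:

```python
import math

def thd_percent(fundamental_v, harmonic_vs):
    """Total harmonic distortion of an AC waveform, as a percentage.

    THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude.
    """
    return 100.0 * math.sqrt(sum(v * v for v in harmonic_vs)) / fundamental_v

# Hypothetical 120 V line with small 3rd and 5th harmonic components:
print(round(thd_percent(120.0, [3.0, 2.0]), 2))  # 3.0
```

A power analyzer reports this figure directly; the computation above just shows what it means when a line is described as having, say, 3% THD.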

                                               Acceptable AC THD and harmonics

    * Cleaning refers to wiping the contact surfaces with Lanolin-free isopropyl alcohol, or a solvent such as Caig Lab's DeoxIT or Cramolin Contaclean, and/or physically scrubbing or abrading the contact surfaces of impurities. As soon as the surface is confirmed free of contaminants and debris, make a swift connection to avoid possible re-contamination. Light duty (gold plated contacts) applicators may include lint-free cotton cloths and swabs. Medium duty (visible coloration, etc.) may include pipe cleaners and nylon or stainless steel brushes. Heavy duty (high current) may include 100 grit sandpaper or a heavy steel brush.

    **Gas tight in this context refers to malleable metals being compressed to the point of deformation to create an intermetallic bond. It also means that all oxides and other surface contaminants are absent at the connection point, and that no air molecules can penetrate the seal.

      Thursday, July 7, 2011

      16 Common Partition Considerations for Noise Control

      By Harry Alter

      This is an excerpt from a much more in-depth article Harry wrote regarding noise control, which will be available (along with many others) on our website in the near future. Though the article's target audience was home theater enthusiasts, it certainly applies to any small room environment such as: project studios, conference rooms, classrooms, condos, etc.

      You may have heard during some investigative discussions about building a home cinema, that noise control is something not to be overlooked, especially if you’re looking to create a truly awesome home theater experience. But what really is noise control, how much do you need and do all those noise control products out there really make a difference? How do I choose a product and when I do … will it really work? 
      Well, we're getting to the bottom of all that in this article, and while we're at it you'll learn a little about what to look for in noise control products, what questions to ask, and what to be cautious of. So without further ado, let's clear the smoke from the room, work our way through the maze of technical jargon, and remove the mirrors so everyone can clearly see and hear what a really great home theater experience is all about.

      Where to start? What better place than "Why". Why do we need home theater noise control in the first place? Two reasons: a) noise reduction means improved sound quality, and b) we don't want to disturb others. The most important reason to design for noise control in the home cinema environment is to create conditions that will first and foremost allow for the recreation of the cinematic experience intended by the artist(s). Pretty obvious, right? "Creating conditions" is really what home cinema noise control is all about. If we fail at creating desirable room conditions, the result can quickly go from disappointing to disastrous. The common aphorism "garbage in, garbage out" holds true throughout the structural and electronic design stages of home cinema. Noise is any distortion and/or distraction that is not original to the audio signal.
      One of the most important reasons we approve or disapprove of any listening or home cinema experience is the result of our own ability to listen and experience sound with a critical ear. This includes all of us who enjoy the home cinema experience. The desire to recreate and understand this experience is probably why you’re reading this article. We love it, because we know when the experience is right and like-wise, we know when the experience just isn’t right. We quickly become a discerning audience that knows the difference between awesome and awful, and as a result, become “critical” about our expectations and how we “listen to our surroundings” during the home cinema experience. I emphasize, “listening to our surroundings”, because what we hear within the shell of a home cinema is largely influenced by how the walls, floor, door and ceiling treat the sound energy generated within, around, and through the space. 

      So let’s begin by taking a closer look at how walls, floors, doors and ceilings influence your listening experience.
      There are three basic ways that rooms (walls, floors, doors, and ceiling partitions) influence sound energy: 

      1. The partition will absorb sound energy.
      2. The partition will transmit sound energy through it.
      3. The partition will reflect sound energy back into the listening space.

      How sound energy reacts with its surrounding room envelope can vary immensely depending on how much sound energy travels via each energy path. Changing or varying the energy path for better or for worse depends on a complex array of products, their material properties, and how they are integrated together to form an assembly.
      To better illustrate how the flow of sound energy affects the room's listening environment, let's bundle items 1 and 2 (absorption & transmission) together as all the sound energy that potentially "leaves" the room, and item 3 (reflection) as all the sound energy that remains in or is reflected back into the room. Let's call the sound energy that leaves the room α (alpha) and the sound energy reflected back into the room ρ (rho). My high school physics tells me that energy can neither be created nor destroyed. So all the sound energy that is incident to your room's shell, before any reflection or absorption takes place, is equal to 100% of a partition's incident sound energy. The following equation describes how these principles come together.
      ρ + α  =  1.0  (100%)
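The energy balance above (the reflected fraction plus the fraction that leaves the room totals 100%) can be stated as a one-liner; a trivial sketch with a hypothetical partition:

```python
def reflected_fraction(leaving_fraction):
    """Fraction of incident sound energy reflected back into the room.

    leaving_fraction: the share of incident energy that is absorbed by or
    transmitted through the partition (0.0 to 1.0). Conservation of energy
    means whatever does not leave is reflected back.
    """
    assert 0.0 <= leaving_fraction <= 1.0
    return 1.0 - leaving_fraction

# A hypothetical partition that lets 35% of incident energy leave
# reflects the remaining 65% back into the listening space:
print(reflected_fraction(0.35))  # 0.65
```

The practical takeaway is that every point of absorption or transmission you gain is a point of reflected energy you no longer have to treat inside the room.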

      Pictorially let’s look at how different wall partitions can treat sound energy. 

      As you can see, Figure 3 provides the best results by utilizing a number of sound absorption characteristics to limit the amount of energy flowing back into the listening space as well as into adjacent rooms. Unfortunately, achieving this is easier said than done. Often the use of too much mass and too little panel absorption provides good sound transmission loss results at the expense of interior room sound quality; i.e., way too much energy is being pumped back into the room from the un-optimized partition assembly design.

      By combining various construction elements and effective products, one can greatly reduce potential design problems or failures.

      The following is a list of elements often considered to optimize partition absorption, transmission, and reflection. 
      1.      Increase stud/joist spacing
      2.      Change stud/joist type (wood vs. metal)
      3.      Increase depth of cavity
      4.      Fill cavity with acoustical insulation
      5.      Increase mass of surface boards
      6.      Introduce multiple layers of surface board
      7.      Reduce thickness of surface boards while maintaining overall thickness
      8.      Vary thickness of surface boards
      9.      Introduce resilient isolation between surface boards and studs/joists
      10.  Introduce damping compounds between layers of surface boards (See RoomDamp2)
      11.  Change the material and/or component properties of the surface boards
      12.  Introduce vibration breaks wherever possible
      13.  Reduce hard surface-to-surface connections between floors and walls
      14.  Seal any and all gaps or penetrations to reduce air movement through the partition
      15.  Introduce a noise-rated door or double door assembly
      16.  Refrain from introducing regions with little air space available (i.e. Center septums or resilient channels fastened over existing gypsum board. These often make things worse instead of better.) 

      Reflected Room Energy:

      An item I would like to address before the close of this article is the sound energy that walls, floors, and ceilings can reflect back into the listening room due to poor partition design. As I noted at the beginning of this article, the best assemblies are those that gain the most sound absorption over a broad frequency range using a variety of noise control options and techniques. A frequent problem is relying too much on mass. A good example is the reverberation times below, which show how a wall can push energy back into the room based on its construction: two walls with very similar STC performance, but very different contributions to the reverberation times within the room. One keeps low frequency energy from being reflected back into the room, while the other pumps too much low frequency sound back into the listening environment, destroying the sound quality.
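      For readers who want to estimate reverberation times themselves, a common starting point is Sabine's classic formula. The room size and absorption below are hypothetical, just to show the arithmetic:

```python
# Sabine's formula (imperial units): RT60 = 0.049 * V / A, where V is the
# room volume in cubic feet and A is the total absorption in sabins
# (each surface's area times its absorption coefficient, summed).
def rt60_sabine(volume_cuft, absorption_sabins):
    """Reverberation time (seconds) for a 60 dB decay."""
    return 0.049 * volume_cuft / absorption_sabins

# A hypothetical 20 x 15 x 8 ft room (2400 cu ft) with 200 sabins of absorption:
print(round(rt60_sabine(2400.0, 200.0), 2))  # 0.59
```

      More absorption (a larger A) means a shorter decay; a bare, massive shell with little absorption keeps the energy ringing in the room.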

      In closing, this is a basic start which I hope you have found valuable toward understanding more about the importance and science of noise control. I'm sure you have many questions, like: how many dB will each of the items listed above provide to my home theater design, and how many is enough? I hope that future articles will delve deeper into questions like these, as well as addressing the importance of controlling flanking noise, impact insulation, HVAC noise and other design issues. Remember that noise control is a two-way street: sound that leaves the space and sound that enters it. Noise control partitions are system approaches incorporating the principles of blocking, breaking, isolating and/or absorbing sound waves and vibrations. These systems must adhere to the unique governing weight, thickness, décor, budgetary and/or even "green" requirements of the project. These systems must be designed to address each unique noise control issue; for example, maybe there is going to be a water pump for a pool adjacent to the cinema, or a child's bedroom above. Different sound energy levels and their frequency ranges must be understood in order for noise mitigation to be designed appropriately. Acoustic computer modeling (if new construction) or testing and modeling (if existing) will increase the likelihood of solving problems through proper acoustic design, resulting in a higher performance cinema and a greater experience.


      Wednesday, June 29, 2011

      Relative Phase Over Frequency Response

      By Norman Varney

      Audio enthusiasts are always concerned about frequency response. We see this data published in the specifications sheets of audio equipment, we often see it displayed graphically in reviews, and we are often interested in the frequency response of our room, etc. This is all fine, but what we should care much more about is phase.

      Our experienced brain is very forgiving when it encounters missing frequencies or intensities of recognizable sounds, and it does not know the difference when frequencies are missing from unfamiliar sounds. For example, you've probably never heard an actual explosion like those in action movies, or, like many, have never been in the presence of a live orchestral performance. When inexperienced, you don't know what you're missing. On the other hand, you hear the kick drum on "Billie Jean" over tiny speakers and recognize it as such. Your brain works hard to fill in the missing amplitude and low frequency information in order to make it believable. Our brain is not able to do such a great job of modeling for phase.

      What is phase?
      Phase, in relationship to audio, has to do with time. For example, velocity is defined in terms of length and time, or feet per second. Frequency is measured in cycles per second (abbreviated Hz.), and wavelength is measured in distance per cycle period. A 1 kHz. tone is about 1.13 feet long and takes about 1 ms to generate, while a 100 Hz. tone is about 11.3 feet long and requires about 10 ms to produce. The standard unit of time is the second (abbreviated s). The standard clock is the Cesium-133 atom, which undergoes 9,192,631,770 oscillations per second.
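      Those wavelength and period figures follow directly from the speed of sound; here is a quick sketch, assuming roughly 1130 ft/s in air:

```python
# Wavelength and period of a tone, assuming sound travels ~1130 ft/s in air.
SPEED_OF_SOUND_FT_S = 1130.0

def wavelength_ft(freq_hz):
    """Wavelength in feet = speed of sound / frequency."""
    return SPEED_OF_SOUND_FT_S / freq_hz

def period_ms(freq_hz):
    """Period in milliseconds = 1000 / frequency."""
    return 1000.0 / freq_hz

print(round(wavelength_ft(1000), 2), round(period_ms(1000), 2))  # 1.13 1.0
print(round(wavelength_ft(100), 2), round(period_ms(100), 2))    # 11.3 10.0
```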

      You might be thinking that time and frequency are just two different mathematical ways to describe the same information in different domains, and you'd be right. However, when we start talking about more than one frequency happening simultaneously, as in a recording or playback system, we have to analyze their relationship to each other in order to determine the accuracy of what we perceive. Phase is both time and frequency dependent. Phase is the term used to describe the progress of a waveform in time relative to a starting point.

      What is phase error?
      Phase error results when two sound waves reach their maximum and minimum values at different times. Any degree of phase shift will alter the combined signal accordingly, as a result of constructive and destructive interference.

      How do we perceive relative phase distortions? 
      In physics, sound is only vibration, but for the human brain, sound requires processing a lot of information in order to make sense of it as a sensation and react to it. Localization is instinctively our primary concern regarding sounds. We spatially map a sound's location using the disparity in time (below approx. 700 Hz.) and/or intensity (above approx. 700 Hz.) between our two ears. This is followed by frequency (pitch) and/or loudness, whichever wins our attention to indicate possible threat. Finally, requiring a tad more information (time), we analyze tone. We process this information a number of ways, looking for clues to discover whether the sound is friend or foe. We will look at some basic characteristics of sound as they relate to phase and human perception:

      1. Amplitude. If we were to play a steady tone of say 500 Hz. on the left speaker in an anechoic (reflection-free) chamber, and then add the same tone to the right, with the same relative phase, the sound pressure at the listener doubles and the result is perceived as about 6 decibels louder. What if we were to delay the second tone one half cycle later in time than the first? We still have double the power being fed in, however it is 180 degrees out of phase from the first, causing cancellation of the two signals and resulting in silence. What's happening is, as the first speaker is moving forward (compressing air molecules), the second speaker is moving backward (rarefaction of air molecules). The combination leaves the air molecules at rest. Now you understand how phase error affects frequency response. Nature begins her sounds with a wave of compression. Electrons, however, flow without regard to our human perception. You are just beginning to see how important phase is to accurate audio.
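      The in-phase and out-of-phase cases above can be checked with a little math. This sketch (my own numbers, not a measurement) combines two equal tones at a given phase offset:

```python
import math

# Two equal-amplitude tones combine to a level of 20*log10(|1 + e^(j*phi)|) dB
# relative to one tone alone, where phi is their phase offset.
def summed_level_db(phase_deg):
    """Combined level (dB re one source) of two equal-amplitude tones."""
    phi = math.radians(phase_deg)
    amp = abs(1 + complex(math.cos(phi), math.sin(phi)))
    if amp < 1e-9:
        return float("-inf")  # total cancellation: silence
    return 20 * math.log10(amp)

print(round(summed_level_db(0), 1))  # 6.0 -> in phase: pressure doubles
print(summed_level_db(180))          # -inf -> opposite phase: cancellation
```

      Any offset between 0 and 180 degrees lands somewhere between those two extremes, which is exactly how phase error reshapes frequency response.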

        2. Spatiality. The easiest and most drastic phase distortion that most people recognize is the confounded sound when the polarity of one stereo channel is reversed. Rather than being organized in space, sounds are difficult to localize and seem disoriented, thin and hollow. Both the soundstage (the apparent physical size of the presentation) and the image (the events that take place within the soundstage) are in chaos when the two channels are 180 degrees out of phase with each other. This is an unnatural phenomenon that you feel, and your brain works hard to make sense of it for comfort, but to no avail. Less dramatic degrees of phase shift will affect spatial cues and cause sounds to be incorrectly located or to wander about in apparent location and size. Spatiality cues typically occur within the first few milliseconds of the signal's introduction.

        3. Timbre. Timbre is the subjective tonality or "character" of sound. It has nothing to do with pitch or loudness per se. When hearing a flute and a violin each playing the A above middle C (440 Hz.), it is the differences in their unique attack, envelope of harmonics (partials) and decay that distinguish them. This is due not only to the way an instrument is played, whether plucked, struck, blown, rubbed, etc., but also to its harmonic make-up (most musical instruments possess up to twenty overtones above the fundamental) and its resonance make-up (the body of the instrument amplifies or dampens certain frequencies). Good timbre is synonymous with good fidelity, whether you are talking about a musical instrument or a hi-fi system. A cheap violin does not have the rich resonances found in a quality one, and a cheap stereo system probably won't distinguish between steel and nylon strings on a guitar, let alone the difference between Ernie Ball and D'Addario strings. It's the intensity of the overtones, at various points in time, that makes these distinctions. Plomp (1970) summarized: a) Phase has maximum effect on timbre when alternate harmonics differ by 90 degrees. b) The effect of phase on timbre appears to be independent of the sound level and the spectrum. Timbre recognition occurs in about the first 20-50 ms of introduction.

         (a) The waveform of an attack transient. (b) Amplitudes of the first five harmonics of the attack transient of a 110 Hz. diapason organ pipe. (From Keeler, 1972). Notice the second harmonic develops slightly faster than the others, including the fundamental. In other woodwinds, the fundamental usually leads.

        Timbre is altered when phase is shifted. Phase distortion to the original signal confuses our brain. It is interesting to note the experiments by Berger in 1963, where he removed the first and last half second of recordings of ten different band instruments and asked 30 band students to identify them. Amid the confusion, only the oboe was correctly identified by more than one third of the group, eleven identified the alto saxophone as a French horn, and 25 thought the tenor saxophone was a clarinet!

        Though the following exercise does not follow "real world" situations, it does a great job of allowing the reader to understand and experience what happens when phase shifts alter timbre.

        While holding your hand flat with your palm facing you, say shhhhhhhhhhhhh while slowly bringing it up to your mouth. Notice how the timbre of the sound changes. You are hearing the original sound conflicting with the sound reflected off your hand. As you move your hand closer to your mouth, different frequencies (predominantly around 1 kHz. - 16 kHz.) are passing through one another in opposite directions, and depending on the interval in time, or point in space, at which you happen to observe the sound, it will appear different (brighter or darker, louder or softer) at certain frequencies.
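        What your hand is creating is a comb filter. The sketch below (with a hypothetical 1 foot of extra path, roughly a hand held six inches from the mouth) estimates where the cancellations fall:

```python
# A reflection whose path is longer than the direct sound's by some distance
# cancels the direct sound at odd multiples of 1/(2*t), where t is the delay.
SPEED_OF_SOUND_FT_S = 1130.0

def notch_frequencies_hz(extra_path_ft, count=4):
    """First few cancellation (notch) frequencies for a delayed reflection."""
    delay_s = extra_path_ft / SPEED_OF_SOUND_FT_S
    return [round((2 * n + 1) / (2 * delay_s)) for n in range(count)]

# Hand ~6 inches from the mouth: the round trip adds ~1 foot of path.
print(notch_frequencies_hz(1.0))  # [565, 1695, 2825, 3955]
```

        As the hand moves closer, the extra path shrinks and the notches slide upward in frequency, which is the timbre change you hear.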

        What causes phase distortion?
        There are two types of relative phase distortions that typically occur during the recording and playback process: electrical and acoustical. And there are two causes of phase distortion: delays and repeats.

        1. Electrical. Any and all types of audio electronics add a time delay to an applied signal, from microphones, to cables, to loudspeakers and all processors in between. Each electronic device in the signal path introduces some capacitance (which resists changes in voltage) and inductance (which resists changes in current) to the moving electrons. The energy stored this way takes time to build and release, and the effect varies with frequency. If the time delay is constant at all frequencies between the input and the output of the device, the device is said to be phase linear or phase coherent.
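        As a concrete, if simplified, example: a single first-order RC low-pass filter (with a hypothetical 10 kHz cutoff) delays different frequencies by different amounts, so it is not phase linear:

```python
import math

# Phase response of a first-order RC low-pass with a hypothetical 10 kHz cutoff.
CUTOFF_HZ = 10_000.0

def phase_deg(freq_hz):
    """Phase shift of a first-order RC low-pass: -atan(f/fc), in degrees."""
    return -math.degrees(math.atan(freq_hz / CUTOFF_HZ))

def delay_us(freq_hz):
    """Phase delay in microseconds at this frequency."""
    return -phase_deg(freq_hz) / 360.0 / freq_hz * 1e6

for f in (100, 1_000, 10_000):
    print(f, "Hz:", round(phase_deg(f), 1), "deg,", round(delay_us(f), 2), "us")
```

        Running this shows the delay shrinking as frequency rises (roughly 15.9 us at 100 Hz. down to 12.5 us at the cutoff), so different parts of a complex signal arrive at slightly different times.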

        2. Acoustical. Acoustical interference occurs when room reflections cause constructive (additive) and destructive (subtractive) phase errors, as can less-than-precise speaker/listener alignment, and multi-microphone leakage. As the direct signal combines at our ears with the delayed signal(s) of itself, we experience distortion.
        a. With room reflections, and a stationary listener, our brain can adapt with some "spectral compensation" to the room, especially at higher frequencies. However, reflections that are within 15 dB of the direct sound will definitely cause audible phase anomalies.
        b. Ideally, each speaker voice coil should be the same distance from the listening position so that the signals from each arrive together. When they are not aligned, the relative signal arrival times differ, changing the sound and the polar response (directivity) of the speaker. Note that the farther off-axis a listener is, the greater the time incoherence. Note also that good designs take into account crossover network phase and delays, and that even a 5 µs change can be audible.
        c. Two microphones, each in a different location, but both picking up similar information can cause tonality errors. For example, a snare top head and bottom head mic both picking up the high-hat, or the bottom head mic picking up the direct sound with reflected sound from the floor.
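        The arrival-time errors in point (b) are easy to estimate. This sketch (with hypothetical distances) converts a speaker-distance mismatch into a timing offset:

```python
# Arrival-time difference at the listener from mismatched speaker distances,
# assuming sound travels ~1130 ft/s in air.
SPEED_OF_SOUND_FT_S = 1130.0

def arrival_offset_ms(distance_left_ft, distance_right_ft):
    """Difference in arrival time (ms) between the two speakers."""
    return abs(distance_left_ft - distance_right_ft) / SPEED_OF_SOUND_FT_S * 1000.0

# One speaker 10 ft away, the other 10.5 ft: roughly 0.44 ms of misalignment.
print(round(arrival_offset_ms(10.0, 10.5), 2))  # 0.44
```

        A tape measure and a few minutes of repositioning can remove hundreds of microseconds of error, far more than the 5 µs figure noted above.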

        What can you do to reduce phase errors?
        There are several things one can do, even if you don't have sophisticated test equipment or knowledge:
        1. Train your ears. Listen to unamplified music for reference. Enjoy the richness of harmonic content, the spatial imaging, and the attack, envelope and decay of individual sounds.
        2. Confirm that all amplifier/speaker channels are the same polarity.
        3. Confirm that all speaker drivers are the correct polarity. Placing a 9 volt battery across the speaker cable leads should push all drivers forward in nearly all speaker designs.
        4. Investigate interconnects, speaker cables and speakers that boast about energy efficiency, time/phase alignment, etc.
        5. Do your best to locate the speaker/listening position for smoothest room mode response in the room.
        6. Confirm that each speaker is the same distance to the listening position.
        7. Treat first order reflection points in the room with absorption or diffusion. This can be done with the "mirror trick". Treat the locations with at least a 2' area to cover frequencies down to about 500 Hz.

        As with many blog topics, a whole book could be written about the subject, and phase is easily one of them. I have only scratched the surface; there are many sub-topics of phase effects that I did not include, such as pitch, resonance, ringing, beat frequencies, etc.

        Time delay spectrometry has only been around since the late 1960s. Prior to that, we didn't have the computer processing required to analyze the relationships between time, energy and frequency. This may be why we are so concerned and familiar with frequency response. Phase distortion is the primary reason why one piece of equipment sounds different from another. It is also the primary reason why most people are denied the full potential of their audio investment and cannot enjoy the full experience created by the artist.

        With regard to relative phase perceptibility versus frequency response, almost any piece of good audio equipment today will offer good frequency response, but most do not have good phase response. The problem for the end-user is integrating synergistic system components, setting them up properly both physically and electronically, and controlling room acoustics. Noticing minor phase errors in a typically reverberant room will be difficult because of the lack of resolution available from the room. Controlling the acoustics properly is like discovering the deep sea: you probably have no idea of all the cool stuff below the surface.

        Thursday, March 3, 2011

        5 Reasons Off-center Room Positioning is a Bad Idea

        By Norman Varney
           Symmetry of the audio scene, especially the front horizontal plane, is important to accurate reproduction. We want to place ourselves in the middle of the left and right image in order to hear the proper soundstage. If we don't balance levels correctly, spatial cues, frequency response, low-level details, etc. become skewed because the energy on one side of us is louder than the other. This is true with headphones, but is even more problematic when sound is introduced to a room. Lately, I've been seeing a lot of designs that are incorporating off-center speaker/listener arrangements in the room. The idea is to avoid the room's fundamental width cancellation node by moving away from it. This is not practical for high fidelity.

          Axial room modes in a rectangular room are fairly predictable using simple math. Since room modes are dictated by room dimensions, we can calculate what frequencies will live where in the room. We want to avoid coupling the woofers and listeners with the existing first, second and third-order modes whenever possible, as they are the most energetic. Avoid placing woofers in antinodes (pressure peaks), and listeners in both nodes and antinodes. Placing speakers in an antinode will excite it, resulting in that frequency (and its harmonics) sounding louder than they should.  Placing a speaker in a node (null) will attenuate that mode, which at times can be useful. Placing a listener in an antinode results in the mode sounding too loud. Placing a listener in a node results in the mode sounding too soft. There is always an optimum position for the speakers/listeners in a room to deliver the best soundstage and bass response. 
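          The simple math mentioned above is f(n) = n × c / (2 × L), where c is the speed of sound (about 1130 ft/s) and L is the room dimension. A quick sketch:

```python
# Axial mode frequencies for one room dimension: f(n) = n * c / (2 * L),
# with c ~1130 ft/s in air and L the dimension in feet.
SPEED_OF_SOUND_FT_S = 1130.0

def axial_modes_hz(dimension_ft, orders=3):
    """First few axial mode frequencies for one room dimension."""
    return [round(n * SPEED_OF_SOUND_FT_S / (2 * dimension_ft), 1)
            for n in range(1, orders + 1)]

# A 15 ft wide room: the first three width modes.
print(axial_modes_hz(15.0))  # [37.7, 75.3, 113.0]
```

          Those are the ~38, 75 and 113 Hz. figures used in the examples below; run it on your own room's length, width and height to map where the trouble spots live.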

          Symmetry is important. Placing the speakers and listeners off-center in the room to avoid the fundamental width mode is not a good idea. Here are some reasons why off-center positioning is not good practice:

        1. The fundamental (f1) mode wavelengths are too large to move away from. The fundamental wave supported in a 15' wide room is about 30' long (38 Hz.). The longer the dimensions, the longer the lowest wavelength. You would have to move off-center about 3.75' to smooth out a 38 Hz. null.
        2. By doing so, you’ll just end up in another mode. In a 15' wide room, 3.75' off-center, you'll find f3 (113 Hz.) at its peak.
        3. By doing so, you’ll end up too close to the side wall, which will cause timing differences between your left and right ears, resulting in severe spatial skewing. 
        4. By doing so, you’ll end up too close to the side wall, which will cause energy differences between your left and right ears, resulting in resolution loss and inequality.
        5. By doing so, you’ll end up too close to the side wall, which will cause frequency differences between your left and right ears, resulting in severe timbre skewing and inequality.

          Let's look at what happens at these low frequency room modes. If we took an instantaneous time snapshot (1/75th of a second) of the first-order (f1) width mode in a room 15’ wide (38 Hz.), we would see a positive pressure point to our left, a null in the middle of the room, and a negative pressure point to our right. At the same instant, the second-order (f2) mode (75 Hz.), which is half the length of the first, would show a positive peak at the left wall followed by a null (located about 3.75’ from the left wall), a positive peak in the middle of the room, and a null (located 3.75’ from the right wall), followed by a positive peak at the right wall. We want to avoid the third-order (f3) mode as well (113 Hz.). You would have to move 3-4’ to one side before you would notice any appreciable frequency smoothing of the first-order mode, which moves us into the f3 antinode at 113 Hz. This particular frequency is contained in nearly all music and dialog recordings. Not a good move (see Fig. 1).
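          The snapshot described above can be modeled with a simple cosine pressure distribution. This sketch checks which modes have nodes or antinodes at the center of a 15' wide room:

```python
import math

# Relative pressure amplitude of width mode n at position x in a room of
# the given width varies as cos(n * pi * x / width).
WIDTH_FT = 15.0

def mode_pressure(order, x_ft):
    """Relative pressure amplitude of axial mode `order` at position x_ft."""
    return math.cos(order * math.pi * x_ft / WIDTH_FT)

# At the room center (7.5 ft): f1 sits at a null, f2 at a peak, f3 at a null.
for n in (1, 2, 3):
    p = abs(mode_pressure(n, 7.5))
    print(f"f{n} at room center:", "antinode" if p > 0.5 else "node")
```

          Evaluate it at 3.75' from a wall instead and you will see why sliding off-center simply trades one mode problem for another.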

        In summary, we must place the audio footprint centered between the side walls and settle for the rare, problematic bass note rather than distort all frequencies, all the time. Placing the auditory scene symmetrically between the left and right walls provides optimum dynamics, tonality, imaging, spatial cues and low-level resolution.
