
Blog Central

Alex Kosiorek, Recording Engineer

Posted by Kirk Harnack [TWiRT] on Jul 24, 2015 2:48:00 PM


TWiRT 266: Recording techniques and tools are still evolving, and some truly exciting immersive surround tech is emerging right now. Recording engineer Alex Kosiorek joins Chris Tobin and Kirk Harnack with lessons in modern recording technique, why 96 kHz sampling is important, and what jobs demand skilled audio engineers.


Watch the Video!

Read the Transcript!

Kirk: This Week in Radio Tech, Episode 266 is brought to you by the Axia IP-Audio Intercom. Imagine a digital intercom system with no central matrix, with AoIP simplicity and flexibility. Actually, don't bother imagining. We've built one: Axia IP Intercom. Also by the Telos Hx1 and Hx2 telephone hybrids, the most advanced hybrids ever developed for use on analog phone lines. And by Lawo and the crystalCLEAR virtual radio console. CrystalCLEAR is the console with the multi-touch touchscreen interface.

Recording techniques and tools are still evolving and some truly exciting immersive sound tech is emerging right now. Recording engineer Alex Kosiorek joins Chris Tobin and Kirk Harnack with lessons in modern recording technique, why 96 kHz sampling is important and what jobs are demanding skilled audio engineers.

Hey, welcome in to This Week in Radio Tech. I'm Kirk Harnack, your host. So glad that you're here and I'm glad to be here too. Chris Tobin is along and we're going to cut to him in a minute. We've got a great guest, a gentleman whom I've wanted to have on for months now and finally we've found time to get together on a Thursday afternoon.

Our show, This Week in Radio Tech, is a show where we talk about everything audio and some RF too, from the microphone to the light bulb at the top of the tower and to the end of the stream. Maybe I should trademark that slogan.

We're going to talk today about audio recording, high-quality audio recording. Alex Kosiorek is our guest and he is going to tell us a lot of interesting things about capturing that great sound that they capture on location in symphony halls and various different venues and how broadcast engineers can help maintain that quality of sound throughout.

All right. Let's jump right into saying hi to our cohost and that is Chris Tobin, from somewhere in Manhattan, New York. Hey, Chris, welcome in. Glad you're here.

Chris: Hello, Kirk. Hello, Alex. Yes. As we were saying earlier, I was doing some broadcast work here in a building and realized the time had gotten away from me. So I grabbed my go-kit and thought I'd make a go of it here on the rooftop. I'm coming to you via cellular, a 4G LTE connection, or whatever the marketing name is. It's on a distributed antenna system in the building, so technically my modem is only about 30 feet from the cell site antenna.

Kirk: Is that why it's working so well?

Chris: Exactly. I've got a nice RF link right into the network. Can't beat it.

Kirk: Cool. Well, are you outdoors or close to outdoors?

Chris: I am outdoors. What's behind me, you only see one of them. The other side, where you see the open vertical and horizontal structures, is the actual catwalk behind me. I'm on a rooftop with antenna structures. This right here is the water tank, made of wood. It supplies water to the building. There are several of them up here. I'm just behind one.

I'm outside. There is a helicopter flying overhead right now. My location is not far from Times Square, New York City. So that will give you an idea of what that's about. I'll tell you later what's on the side of the building. I don't know if the camera shows it. Let's see. If you look straight ahead, there's a vertical structure just to the side of the water tower. I'm not sure if you can make it out, but that's the Empire State Building.

Kirk: Really?

Chris: Yes. This camera's aspect ratio, unfortunately, doesn't give you anything. I can't zoom in manually. Let me see if my finger can get the spot just right.

Kirk: Yeah, I see it.

Chris: That right there is the Empire State Building.

Kirk: Yeah.

Chris: That gives you an idea of where I'm at and what I can see.

Kirk: Yeah. Wow. Are there some buildings that are kind of, well, that are also out there next to it?

Chris: Yeah. There are a couple other buildings. There's a bank building across the way with a very large spire on it. And then off to the other side of me are several other skyscrapers. Near the end of the show, I'll take the camera and point it up and you'll get to see something really cool. It will be just about the right time for the sun setting and the lighting should be just right. But I am definitely outdoors. So you will hear a lot of strange things. I'm hoping that the chiller plant above me doesn't decide to come online.

Kirk: I barely heard the helicopter when you said it was there. So your mic is doing a great job.

Chris: Excellent.

Kirk: Great job of focusing on you.

Chris: Yes. Like I said, this is my go-kit. This is what I've been using for five years now, for those of you inquiring in the recent emails.

Kirk: We should do an episode on your go-kit. In fact, next Thursday, Chris, do you have any plans to be in your regular office or will you be out and about?

Chris: Next Thursday, I should hopefully be in a regular place. Yeah.

Kirk: We've got some catching up to do next Thursday. Have we played the interview I did with Brian Jones yet? I don't think we have.

Chris: I don't think so. No.

Kirk: The one where he's talking about the Ubiquiti NanoBeams and making an office out of his RV?

Chris: No. We have not. That would be a great one.

Kirk: That's a 20-minute interview. We'll play that. While I was in Australia, you sent me a couple of great show ideas. We'll do that. And then we ought to take a look at your go-kit.

Chris: Okay. Yeah. Fine.

Kirk: All right.

Chris: It's a nice little kit, as you can see. It's done very well for me.

Kirk: Yes, it has. Only a few cops have asked you what you're doing.

Chris: Yeah. I didn't tell the building folks what I'm doing either.

Kirk: There you go.

Chris: There's a security camera just above my head.

Kirk: Hey, let's jump right over to our guest. That is Alex Kosiorek. I haven't known Alex well for very long, and I'm getting ready to know him better here in this next hour together. Alex, welcome in. Glad you're here.

Alex: Same here. I'm glad to be here.

Kirk: Alex, you've made a move in the past year or two, but I always knew you from being in the Ohio area. What did you do in Ohio and what are you doing now?

Alex: Well, I used to be the Director of the Recording Services Department at the Cleveland Institute of Music, where we produced a lot of different audio programs and recordings, not only for the institute itself but for broadcast. Prior to that, I was in charge of what was called the Corbett Studio, a full-fledged audio production facility that's part of what's now called Cincinnati Public Radio. I dealt with all of the audio production for the Cincinnati Opera, the Cincinnati Symphony and all that. Now, I'm here in charge of what is called Central Sound at Arizona PBS. It's the audio division of Arizona PBS, one of the larger PBS stations. We produce lots of content, both locally and for national distribution. We take care of all things classical here for broadcast on KBAQ, all the local products, and do audio for video as well for the productions on PBS.

Kirk: Wow. All those positions really put you in a great position to tell us from hands-on experience, not just theory, about the whole recording process and keeping audio quality just fabulous. That's what I'm looking forward to hearing about. We have a commercial to do, but after that are you going to be ready to tell us about that?

Alex: I certainly am.

Kirk: All right. Good, good, good. By the way, I noticed you found a different microphone and that one is sounding quite good.

Alex: Oh, yes.

Kirk: All right. Hey, our show is brought to you in part by--let's see, is it Axia? Is that who I said it would be? I think it was. Let me look at my notes here. Yeah, Axia and the IP-networked intercom system from Axia. You know, the intercom system from Axia doesn't get a lot of talk, at least on this show in terms of advertising, and that's a shame. The intercom system is really fabulous. Here's what I mean by that. You know, Axia Livewire and now Livewire+, that's an audio over IP networking system.

The engineers at Axia were able to add an intercom capability to all that, just a panel in front of you with intercom stations listed and you just push and you talk or you can lock it on and talk or you can listen or you can carry out a two-way, hands-free conversation. Let me tell you about a couple things about the IP intercom from Axia. First of all, I got to see one in action just earlier this week at a radio station, a pair of radio stations in Adelaide, Australia. I was at Southern Cross Austereo in Adelaide and they have two big stations there, MMM, which is a rock station--as the jocks there say, "It's for the blokes, mate"--and then the hits station, kind of a top-40 CHR called HIT107.

And those stations, their morning shows, or as they call them, their breakfast shows, both have producers and the producers sit outside of the control room at a big, long desk where they can see two control rooms and the news booth and they have right in front of them an Axia IP Intercom.

And this intercom lets the producer have instant communications with the announcers on-air, in the on-air studio, whether they're on mic right now with their headphones on or whether they're just walking around or doing some things in the booth, in the control room without having their headphones on. When the producer pushes that button, the producer is interrupting one side of the headphone feed or talking over the cue speaker in the control room.

The quality is just amazing. Now, you don't need super quality from an intercom system, right? But you've got it. It's no extra trouble. When Axia designed it, they made it just another source on the network, with a few special signaling protocols to handle the interrupts, headphone splits, and calling other intercom stations.

So the Axia IP audio intercom is just fabulous in that you can use it as a full audio spectrum source, 20 Hz to 20 kHz, absolutely ruler flat audio depending on what microphone you're using with it, of course. You can get a cheap microphone or use something high-quality. You can also use a mic input or line-level input on the back of most of the intercom panels as well as line-level audio output if you need to feed some headphones or some other quality input.

So it's not just a little mic and little speaker, although you've got that if you want. You can also have high-quality audio input and output. That means you can put one of these intercoms into a newsroom at news edit workstations. You put it anywhere where you need not only voice communication, but where you might need high-quality audio for a contribution from a news editor or a reporter or something like that.

You can handle this audio in lots of different ways. You can run it over a codec if you want to; stations have done that. You can, of course, have it as part of the local Livewire+ audio network in a facility. And the latest intercom software they're working on right now also talks SIP. So you can actually ring a voice over IP SIP telephone or other SIP device with this. That means it's got compatibility with AES67. So that's exciting too.

Lots of possibilities. If you have a facility that is looking for intercom capability, whether or not you've got Axia Livewire already, man, this is just so simple to wire up. It's just part of a network, an audio over IP network. Go to the website at TelosAlliance.com. Look for Axia and look for those Axia IP Intercom panels, all kinds of panels that fit in consoles and fit in the rack. There's even a soft panel that runs on a PC. That's really cool too.

Thanks to Axia for sponsoring this portion of This Week in Radio Tech.

All right. It's Kirk Harnack and Chris Tobin along here at Episode 266. We're talking with Alex Kosiorek. Let's jump right into this. Chris, feel free to jump in at any time with questions. But Alex, we had a number of different topics we were going to hit. Do you want to hit surround sound production first? That's a big bite to take. Do you want to chat about that?

Alex: Certainly. Let me start off with how much I really love immersive surround. This seems to be a topic that's being reinvigorated with ATSC 3.0 and the discussions of going even beyond 5.1 in broadcast for the next phase of television coming up in the next couple of years. It's been dealt with in a number of different respects in radio broadcasting as well, with either matrix surround or a companion surround stream to your live radio broadcast.

So a lot of people say, "It may not be alive and well." It is more than alive and well. We have lots of listeners who actually comment, when we broadcast in surround, that they really enjoy the experience. That's something that we've been involved in for a while here at Arizona PBS. I thought I would discuss a little bit about how we actually capture surround. Most of the work we do . . .

Kirk: Hey, help me out here, Alex.

Alex: Yeah?

Kirk: You mentioned immersive surround. I actually haven't heard the demo yet. You know, I was in the Telos booth during this whole NAB, and our Linear Acoustic division had what I heard was a fantastic immersive surround demonstration, and I didn't have time to jump in there and listen to it. Is there a difference between regular surround, 5.1 or 7.1, and immersive surround?

Alex: Yes. Put as simply as possible, with 5.1 you have discrete channel information sent to each of those five speakers and the subwoofer. With immersive, we have tools now where we can say, "Let's place a sound in a three-dimensional space." Not only do we mean on a singular level, as far as a plane, but with height as well. That allows us to say, "Okay. Where do we want this sound to exist in the space?"

If you go to movie theaters, you'll find theaters advertising Dolby Atmos or Auro-3D; those are two of the technologies that are out there. DTS has theirs, and there are more and more of those coming out. Some of them are even cross-compatible with each other.

It allows us to say, "Let's place a sound up there in the upper-right corner in front of us." And if you have a setup at home that's 5.1, or maybe it's this weird 13-speaker setup or whatever, the devices that are coming in the future will figure out where your speakers are. And then, because we said we want this to appear in the upper right-hand corner, the system will figure out, through this amazing technology, how to actually place the sound there.

Kirk: Ah, that sounds fantastic. I mean that in a fantastical way.

Alex: It sounds very mysterious even. It sounds like, "How could they do that?" It's just quite amazing.

Chris: So the elements are now considered objects within the space?

Alex: It's an object that you move, basically, virtually throughout the space. And then, depending on how the playback system is set up, it will figure out how to put it in that space as it is set up in the home.

Kirk: Does this mean that when the producers are mixing this, a DSP engine recognizes sonic elements, not just the mic each one came from, so that no matter where a sonic element came from, or whether it came from more than one mic, it gets somehow defined and can then be placed in space on a playback system? Is that what you're saying?

Alex: Indeed. And one of the amazing things that I'm just fascinated by, and that I think is actually going to help the consumer, is that I've seen some different technologies where you put a little test microphone in a space with your speakers set up. I'm sure many of you have gone to people's houses where they have all five speakers in the front of the room. You're going, "How does that work?"

Well, these test systems now can say, "Hey, your speakers are messed up. They're not in the true surround positions like they should be." It doesn't matter. The system will figure it out and still get the sound placed correctly around you. It's just quite amazing.

Kirk: Wow. I'm trying to visualize how this works. You know, the guys in our Linear Acoustic division, who were also demonstrating how this works and what tools they have, were telling me that it's really cool and I need to hear it. Now I'm really honked that I didn't go get the demo.
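The placement-plus-rendering idea Alex is describing can be sketched in a few lines: the mix stores a direction for each sound object, and the playback end computes speaker gains from whatever layout it actually discovers in the room. This is a toy two-speaker constant-power pan, not any real renderer's algorithm; the function name and the speaker layouts are invented for illustration.

```python
# Toy sketch of object-based rendering: the mix says where a sound should
# sit, and the playback system derives per-speaker gains from the layout
# it actually finds in the room. Real renderers (Dolby Atmos, Auro-3D,
# DTS) are far more sophisticated; this only illustrates the idea.
import math

def render_object(obj_az, speaker_azimuths):
    """Constant-power gains placing an object at azimuth obj_az (degrees)."""
    spk = sorted(speaker_azimuths)
    for left, right in zip(spk, spk[1:]):       # find the bracketing pair
        if left <= obj_az <= right:
            frac = (obj_az - left) / (right - left)
            gains = {s: 0.0 for s in spk}
            gains[left] = math.cos(frac * math.pi / 2)
            gains[right] = math.sin(frac * math.pi / 2)
            return gains
    raise ValueError("object lies outside the speaker arc")

# The same object renders differently for two different living rooms:
ideal = render_object(15, [-30, 0, 30])    # proper L/C/R placement
skewed = render_object(15, [-30, 20, 30])  # speakers shoved off-center
```

With the ideal layout the object splits its energy between the center and right speakers; with the skewed layout the gains shift so the phantom image still lands about 15 degrees to the right, which is the "the system will figure it out" behavior Alex describes.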

All right. So now that we've established that this is different than the regular 5.1 or 7.1 we've been used to, please jump in and teach us how this is captured, and then what's done with it afterwards.

Alex: Well, right now what we're doing is still concentrating on the discrete surround platforms. So we still work in that 5.1 or 7.1 model. But we're making sure we have all the elements, basically those different microphone feeds, saved. Storage space has gotten so inexpensive. That allows us to store all the information and actually get it ready and packaged for immersive surround, if and when it becomes available to the average consumer. So we have all that. But right now we concentrate on different recording techniques that capture the immersive experience, 5.1 being the minimum. We're also experimenting now with 5.1 plus an extra four channels for height. When you listen to the difference between regular surround and surround plus height, it's really quite mesmerizing. It's very realistic, otherworldly sometimes. You go, "Oh my goodness. I actually feel like I'm really in the room."

Kirk: Height, does height include depth below or only height above?

Alex: Well, the main way that we've been looking at it is that most people in their homes have their standard setup of speakers. We consider that the base layer. So, when we're looking at it from the perspective of surround plus height, we have that base layer, basically where you're sitting, and then height, being the airspace or what's above you. Some of the new movie theater technologies that are being used have that base layer, then height, and then the ceiling, or as I heard from one of the people presenting on this at the NAB, "the voice of God location."

Kirk: You know, I was thinking actually of the Imperial battleships as they go overhead. That's what I want to hear. I want to hear that rumbling overhead.

Alex: Probably with "Star Wars" coming out later this year, that may be the experience, like, "Oh my goodness, the ship really is going to land on my head."

Kirk: All right. Not to preview too much the pictures you're going to show us, but I did see a quickie of a picture that had a microphone array overhead, was that part of what you're capturing to eventually get it to immersive sound?

Alex: So what I'm going to do is switch over here. I have a myriad of photos, but we're just going to look at photos for now of some of the different ways that we capture an event. Keep in mind, most of the content we record is classical, sometimes jazz as well. So a lot of the techniques we're using are specific to that genre. If we were to switch to a different genre, more popular music, we would be using different microphone techniques. So let me go ahead and switch over to that.

Kirk: I can imagine, while you're doing that, that in so many movies and television shows and soundtracks, there are orchestral pieces moving us along. So often, me, Joe Consumer, I barely notice that there is a score behind a lot of the movie action. But there is, so often. And wouldn't it be interesting . . . if you're listening to a playback of a symphony orchestra, I'm not sure that you expect to have things above you.

But in a movie, when you can do anything, you may want to represent some piccolo floating above you that represents something . . . I guess I'm not sure we know yet what we want to do with this technology for placement if it's standard orchestral accompaniment. I'm sure there are sound effects that movie producers are going to love to place in spaces above our heads. Certainly I can think of action and thriller movies where this will be a big thing. Go ahead if you're ready to toss a picture our way.

Alex: Certainly. So this first picture is from when we were recording one of the best organs here in the area. Let me make sure that we're switched over to that picture.

Kirk: So far not. I'm seeing you.

Alex: Okay. Let me see if we can get this to share the right screen.

Kirk: Ladies and gentlemen, he has more than one screen. All right. It's thinking about it. There we go.

Alex: Okay. In the picture here that you see, we have a couple different microphones in front of a fantastic organ located here in Phoenix, Arizona. We were capturing an artist named Isabelle Demers. You'll see there that you have sort of the base microphone array, which I call the stack of microphones, there in the center. It looks almost like a cross. But up at the top of the cross, we're using a different set of microphones.

In this picture, it's a little hard to see, but that's actually pointing at the upper part of the instrument. So you can actually differentiate the lower level of the instrument, the lower pipes, and the upper pipes in that surround mechanism or that surround immersive sound field.

But the other thing we concentrate on with our microphone techniques is to also make sure that it always folds down very pleasantly to stereo and even mono. So we're always thinking about that. It becomes a little bit more comprehensive. In the past, it would always be, "Okay. We're just dealing with a mono source." Then stereo, making sure stereo folds down to mono.

Now we're dealing with not only 5.1, but also 5.1 plus height to make sure that it all folds down appropriately so it's always backward compatible. But you'll see there that there's an array of microphones. It's one of the arrays that we basically created ourselves based upon a couple different studies that we've done.

The microphone at the top is one that we've been trying out for a while. It's called the Sennheiser Esfera. It's this cool-looking stereo XY microphone whose elements are very highly calibrated, so it goes through a decoder and actually gives you another 5.1 layer of audio to work with. It's quite stunning.

Kirk: Now, Alex, correct my thinking here if it needs correction. I noticed on the apparatus that looks like a cross, you've got your baseline microphones plus the microphone or array at the top of it. I was always thinking that the reason you do co-located microphones is to avoid phase issues if you collapse to mono. What kind of phase issues do you have to worry about if you're recording 5.1 and you know it's going to need to collapse to stereo, and perhaps to mono as well? What do you have to be careful about?

Alex: Well, the thing you have to be concerned about is the timing of the sound coming from the sound source to the microphones themselves. Even though we're working in a height perspective, if you look down on the microphones from above, they're pretty much lined up with each other. So the arrival of the sound from the instruments to the microphones is pretty much all the same. That's one of the things we have to be concerned about, to make sure that when it folds down, it doesn't have those phase issues.

Also, especially with classical, phase is sometimes our friend. It gives us a sense of envelopment and a little bit more of the space around us. We do know, of course, that when we're taking material that has some things a little bit out of phase, when you fold it down you're going to get some comb filtering and some other things that cancel each other out. So one of the elements we take care of is that time element, the time of arrival at the microphones.
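The comb filtering Alex mentions is easy to quantify: summing a signal with a copy of itself delayed by t seconds cancels at the frequencies (2k + 1)/(2t). A quick sketch of my own, not a tool from the show:

```python
# Why a fold-down with mismatched arrival times "combs" the spectrum:
# mixing a signal with a copy delayed by t seconds nulls the frequencies
# (2k + 1) / (2t). Illustrative sketch, not a tool mentioned in the show.

def comb_null_frequencies(delay_s, max_hz=20_000.0):
    """Cancellation frequencies (Hz) up to max_hz for a given delay."""
    nulls = []
    k = 0
    while (f := (2 * k + 1) / (2 * delay_s)) <= max_hz:
        nulls.append(f)
        k += 1
    return nulls

# A 1 ms arrival mismatch (about 34 cm of extra path at ~343 m/s) puts
# the first null at 500 Hz, squarely in the musical range.
print(comb_null_frequencies(0.001)[:4])   # [500.0, 1500.0, 2500.0, 3500.0]
```

This is why even a small spacing error matters for mono compatibility: the nulls land well inside the audible band, not just up at the top.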

So not only with that array are we concerned about that, but also when we add in spot microphones. Let's say we were adding a microphone very close up to an instrument. We would actually delay that microphone to make sure it matches, so the time of arrival of sound from the sound source is the same.

Kirk: What kind of tool do you use to match that phase?

Alex: Mostly we are working inside the DAW and we do some old-fashioned clap techniques. What we'll do is go out there with a clapper, go by the instrument itself, and then hit it--whack. Even a pair of drumsticks works just fine. We'll measure that. Almost any DAW can do this. You can go in and zoom in on the waveform. You see the spike. You measure the spike between one microphone and the other and you go, "Oh, that's six milliseconds apart." And then you can add the appropriate delay.

Kirk: Ah, okay. That makes sense. So that's if you want to spot, like maybe a guitar or a piccolo. So you're going to spot something with one microphone and then add that into the mix. That's the correction you'd make to mix it in. Okay. Wow. All right. Learned something new. In that picture you've been showing us, there are a couple of other vertical stands there. What are those?
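The clap trick Alex describes, reading the spike offset between two tracks and dialing in that delay, can be sketched like this. The toy arrays stand in for real DAW tracks, and the names and numbers are invented for the example:

```python
# Sketch of the clap-alignment technique: locate the clap's spike in each
# mic's track, take the sample offset between them, and convert it to the
# delay to apply to the closer (spot) microphone. Toy data, not DAW output.

def peak_index(track):
    """Index of the largest-magnitude sample (the clap's spike)."""
    return max(range(len(track)), key=lambda i: abs(track[i]))

SAMPLE_RATE = 96_000                 # the session rate discussed below

main_mic = [0.0] * 2000
spot_mic = [0.0] * 2000
main_mic[1000] = 1.0                 # clap reaches the main array here
spot_mic[424] = 1.0                  # the close-up spot mic hears it first

lag = peak_index(spot_mic) - peak_index(main_mic)    # -576 samples
delay_ms = -lag / SAMPLE_RATE * 1000
print(f"delay the spot mic by {delay_ms:.1f} ms")    # 6.0 ms
```

Delaying the spot mic by that amount lines its arrival up with the main array, which is exactly the "measure the spike, add the appropriate delay" workflow above.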

Alex: We've also added . . . I'll go back to that picture here in just a second. Those also give us a few more tools to take back to the production studio, to see what else we can do as far as growing the space you're going to be experiencing. In some cases, we'll actually make more than one mix, but we have this system and our techniques basically automated.

So we can say, "Okay. For core stereo, we're using our core microphone stand, where all the times of arrival are the same." But if we're creating something with those extra channels, and we know it's going to go out to a discrete-channel playback system, then we can add in those extra microphones to give that more immersive experience.

Kirk: Cool. All right. Next picture to show right there, I believe.

Alex: Yeah. I'm going to scroll down to a whole different type of microphone array that we use, and it's also using one . . . I'll jump to this.

Kirk: Oh my gosh.

Alex: So this is something that we discovered just about two years ago. It's a recording technique called STAG. Now, for the life of me, I can't remember what STAG stands for. It's like stereo augmented something or other--it's got some weird words. If you're an AES member, you can go onto their website and dig through and find out what this technique is.

It's basically two ORTF-style pairs of microphones, one pointing backward, one pointing forward. The pair pointing forward is usually cardioids. The rear pair is usually hypercardioids. The diaphragms are right on top of each other, so that creates the same kind of near coincidence. Depending on how you set it, you can actually steer the sound forward or backward.

So you'll see in this location it's up high, which is usual in classical recording; we're getting a little bit of height above the instruments to get some distance from where they're playing. Now, in post or even live at the event, we can mix in how much room we want into those microphones, and it folds down absolutely beautifully to stereo and to mono because all those microphones are coincident. I'll go back to the former picture we were at.

Kirk: By the way, you mentioned the term ORTF. I've heard of that. I just looked it up here. It's a way of recording a stereo image. Do you have some ability to manipulate that image post-recording? Is that what it's for?

Alex: Well, regular ORTF as a stereo recording technique has been around for a long, long time. It's basically a near-coincident pair of microphones, spaced a little bit; it's usually cardioids splayed at about 120 degrees, with the diaphragms about 8 inches apart from one another.

Kirk: Right.

Alex: That's something that works incredibly well when you fold down to stereo. Now what we're doing is using a forward-facing ORTF pair and, essentially, a rear-facing ORTF pair. That allows us to still keep all the microphones in a coincident scenario. But now you can mix in the forward-facing microphones, essentially the group, and the rear, which could be the hall or the audience. So you can actually do this with a small array. This looks big because the picture is blown up, but it's actually a pretty small array once you get it up in the air.

Kirk: Okay. Okay. Yeah.

Alex: Here is another cross-section of it. This one looks a little crazier because we actually have another microphone sitting on top of it. That's the Sennheiser Esfera mic that we've been trying out for a while. We're also using the Sennheiser Esfera in a couple of different situations. That microphone system was actually developed for ENG video cameras. Most video cameras going out there for work only have a stereo input.

So you put those two microphones into a standard camera, record the atmosphere, bring it back to the studio, run it through the decoder, and get yourself discrete--not some quasi stuff, but discrete--surround out of those two microphones.

Kirk: Now, that very top microphone, does that have more than one diaphragm? Is it just one?

Alex: Yeah.

Kirk: Okay.

Alex: Let me see if I can get right back to the former picture here, back to the blow-up. So facing forward would be facing to the right in this picture. You'll see this. Actually, it looks like an XY microphone pattern. That's essentially what it is. But what they've told me is they've highly calibrated these microphones to be very matched in frequency response and everything else, so they can actually derive the surround content out of it.

I know that people at the Canadian Broadcasting Corporation have been using quite a few of these, especially for television, because of that "Hey, I can only record with a stereo input on my video camera" situation, to derive surround sound content for sports and other types of broadcasting.

Kirk: Wow. Okay. Wow. This is so far outside the usual broadcast realm of engineering and beyond what we usually talk about here, but it's great to get a picture, to get an idea of the issues you deal with in recording out in the field. Uh-oh, what's that? There we go. What's next to show? Chris, you got any questions at this point?

Chris: No, no, not yet. This is exactly . . . this is cool stuff. I just wanted to say.

Kirk: I'm soaking it in here.

Chris: I have some questions, but so far you've answered every one each time. So I'm good.

Kirk: Go ahead, Alex. What's next?

Alex: The other challenge we have, both in recording and in live broadcasting, is that we're always out in the field. We're in a different place almost every weekend, or sometimes every day. So we have everything in road cases. This is a typical picture of one of our, believe it or not, less complicated setups.

We're recording everything to a computer and we have high-end mics, as you see there on the left. We're making CDs for the artists. We have backup high-definition recorders on the right-hand side, and a variety of equipment to get our job done. That's just for the capture of the sound. We haven't even touched on how we sometimes need to get this all the way to the broadcast plant and all that kind of stuff. You'll see in the lower right-hand corner, we have an Antelope Orion running at 96 kHz.

Some people might ask, "Why are you recording at 96 kHz instead of at 44.1 or 48, the typical broadcast standards?" We have a couple of answers as food for thought. One of them is that we don't know what's coming up three or even five years in the future. We don't know what else is out there. So we want to make sure that we're using a somewhat higher sampling rate. We're not doing it because we think we can hear at 25 kHz. I know I can't. Can you?

Kirk: No.

Alex: But we also do it so that when we're in post-production or doing any manipulation of the sound on location, we actually have the frequency--I like to call it frequency headroom, which is really not a technically correct term--but the frequency bandwidth to manipulate compression and equalization without running into other issues that may occur, such as a very sharp Nyquist filter or something else. A lot of people don't realize that even though things get recorded at 44.1 or 48, the processors--I know the Linear Acoustic processors and the Omnias do this--actually oversample. They double, triple, quadruple the sample rate to make their calculations. So what we're doing is starting off with a higher-end product to make those calculations, those manipulations, more possible in the workflow that we're in. Hopefully, that makes sense.
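The aliasing benefit Alex describes can be shown with a quick experiment: clip a sine wave directly at 48 kHz, then clip the same signal after 4x oversampling, and compare how much energy lands away from the true harmonics. This is only an illustrative sketch (the tone frequency, clip threshold, and 4x factor are arbitrary choices for the demo, not anything from the show), using NumPy and SciPy:

```python
import numpy as np
from scipy import signal

fs = 48_000
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 5000 * t)   # 5 kHz test tone

clip = lambda s: np.clip(s, -0.5, 0.5)   # crude nonlinear "processing"

# Clip directly at 48 kHz: harmonics above Nyquist fold back as aliases.
y_direct = clip(x)

# Clip at 4x the rate, then band-limit and decimate back to 48 kHz:
# the decimation filter removes most harmonics above 24 kHz before they fold.
y_os = signal.resample_poly(clip(signal.resample_poly(x, 4, 1)), 1, 4)

def alias_energy(y):
    """Spectral energy away from the true in-band odd harmonics (5 k, 15 k)."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    keep = np.ones_like(spec, dtype=bool)
    for h in (5000, 15000):
        keep &= np.abs(freqs - h) > 200  # mask out the real harmonics
    return float(np.sum(spec[keep] ** 2))

e_direct = alias_energy(y_direct)
e_os = alias_energy(y_os)
```

On this tone the oversampled path leaves far less out-of-harmonic energy, which is essentially why a processor doing its nonlinear work at an internally raised sample rate sounds cleaner.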

Kirk: Yeah. Maybe a good analogy would be if you're a professional photographer and you're doing photography for a magazine, you're almost always going to be capturing and working in a higher resolution than what the end user ends up seeing in a magazine or on the web or any other format that the consumer is likely to take in your photographic expertise.

But we're working in a higher-res format so that we don't bump into resolution problems that then actually do show up at the consumer end. So, as you mentioned, recording at 96: we don't know what consumer format is coming down the pike. Maybe it will become so simple, cheap, and easy to play back 96 kHz audio at the consumer end that it becomes a standard one day.

But for now, let's capture this as best we economically can so that we can manipulate it, do math on it--and hey, with DSP, we know there's a ton of math going on. And you're right. Audio processors often work at very high sample rates internally, far higher than what came in or what's going to go back out.

Alex: Indeed. The other thing that I like to mention to a lot of people is that for a lot of submissions, even iTunes requests a 96 kHz master. So you have to think to yourself, if they're asking for that now, something is going to come down the pike. I wouldn't think Apple would be asking for that just because.

Kirk: Yeah. All right.

Alex: So here's my last photo that I thought I would share. We make many orchestral recordings; this one is our setup for the Phoenix Symphony in Symphony Hall here in Phoenix, Arizona. For these particular situations, we have the fun and joy of having to fly all of our microphones. We take that seriously because we acknowledge that our microphone stands can become a bit of an eyesore, especially when people are paying $45 or $55 or more per ticket to see the performance. So we're flying microphones.

What you'll see here in the center is a pretty common recording technique. It's called the Decca Tree. If you're familiar with it, I highly encourage you to type in "Decca Tree" into Google. You'll find out this is an array that is usually 2 meters across with a bar sticking out the front about 1 meter forward with a microphone on each one of those edges of that tree.

Those microphones are usually omnis with a little bit of high-frequency boost that gives you a little bit of directionality in the high frequencies. The microphones we're using in this particular case are Sennheiser MKH 800s, but there are other microphones that serve the purpose quite well. There are ones that Neumann makes called M 50s, which are about four times the size of these. We don't use them because they're four times the eyesore.

Kirk: Okay.

Alex: As well as we get a little skittish when we're putting a lot of weight above a conductor and that conductor is the person who's running everything.

Kirk: Yeah. By the way, you mentioned those Neumann M 50s. We'll link to this in the show notes. I had heard of the Decca Tree before, but I didn't know what the heck it looked like or sounded like. There's plenty of information on the web. The Neumann M 50s are specifically mentioned in a couple of places. We'll put those links in the show notes.

Alex: Indeed. One of the other things we like to do when we're using these particular recording techniques like Decca Trees--they're made to make sure that surround works as well as stereo and folds down to mono. We have a strong grounding with that center microphone to make sure that there is a strong center image in what we're recording. That's one of those things that I try to emphasize to people. Always make sure that the techniques you're using can be adapted and work not only in surround and stereo, but also all the way back down to mono, and vice versa.

You'll see a spot microphone there on the stage, by the way. If you look closely, it's basically in the middle of the stage; it's there for the woodwinds. Remember, I mentioned delaying microphones earlier.

We delay that microphone so that the time of arrival of those instruments in that section of the orchestra into the recording is the same as the time of arrival at the Decca Tree. Very important.

Kirk: Folks, you are watching or listening to This Week in Radio Tech, Episode 266. Our guest, along with Chris Tobin and myself, is Alex Kosiorek. He's a recording engineer and consultant, among other things. High-quality audio--he's all about that. He's in . . . are you in Phoenix? Is that where you're at, Alex?

Alex: That's correct. Phoenix, Arizona.
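The time-of-arrival alignment Alex describes for the woodwind spot mic comes down to simple arithmetic: the extra distance to the main array, divided by the speed of sound, converted to samples. A minimal sketch (the 10-meter figure and the function names are assumptions for the example, not measurements from Symphony Hall):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def delay_samples(distance_m, fs=96_000):
    """Extra acoustic path length converted to whole samples of delay."""
    return round(distance_m / SPEED_OF_SOUND * fs)

def align_spot_mic(spot, distance_m, fs=96_000):
    """Delay a spot-mic signal so its arrival matches the main array.

    distance_m is how much farther the spot-mic'd section is from the
    main (Decca Tree) array than from its own spot microphone."""
    n = delay_samples(distance_m, fs)
    return np.concatenate([np.zeros(n), spot])[: len(spot)]
```

A section 10 meters farther from the main array than from its spot mic would need roughly 29 ms of delay, about 2,799 samples at 96 kHz.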

Kirk: All right. We'll have more from Alex. God, we've learned so much already. My head is about to explode. But we're going to talk about loudness and BS.1770, web delivery, all kinds of other issues about quality and workflow, transcoding. All that's coming up in the second half of our show. This Week in Radio Tech is brought to you in part by the folks at Telos and the Telos Hx1 and Hx2 telephone hybrids. Now, these are classic. Well, classic in that we can trace the DSP action back to Steve Church's original design. But we've made so many improvements in these hybrids over the last 30-plus years.

Now, the Hx1 and Hx2 are still POTS hybrids. And there are still plenty of places where POTS hybrids are getting put in. I have seen literally hundreds and hundreds of these Hx1s and Hx2s in radio production environments, in podcaster environments, in individual studios. It's still the easiest way to connect. You don't necessarily have to have POTS from the phone company coming into your facility in order to connect a POTS hybrid.

Let's say that you just need a quick and easy hybrid to connect phone calls with. Well, all you need to do is hook this up to let's say a POTS port on your PBX system. Let's say you're part of a big business and you've got a big PBX there. Hook it up to a POTS port. It will work just fine.

Let's say you've got VoIP service coming into your home, let's say through Vonage, which is a proprietary format. There's no way you can plug that into an Asterisk VoIP system, for example. So just get a POTS output out of your Vonage system, or magicJack, or whatever your connection is; almost all of them still offer a least-common-denominator connection with POTS. You're going to get really good quality out of this. Whatever connection you've got, you're going to get the best quality that can come out of it.

Now, if you have regular POTS lines too, it helps that the Hx1 and Hx2 can be set for the individual characteristics of the POTS system in every country on earth. Something like 140-plus different settings are available to match your line voltage, your pickup current, the ringing cadence, and lots of other characteristics of the phone line, impedance included.

Plus, there's Omnia audio processing in the Hx1 and Hx2. Now, the Hx1, by the way, is a single-line telephone hybrid. The Hx2 is a dual-line hybrid. The Hx2 you can treat as two completely separate hybrids or you can cross-connect them with just a couple of DIP switch settings, cross-connect them so that they work together in a conferencing mode.

So if you want to take calls on the air and you want to use two phone lines, you can use the Hx2 to have a caller on the air, let's say an expert guest like Alex Kosiorek here on this show, and then you'd have listener callers call in and keeping the expert guest on the air, have those callers interact. Even if your audio console doesn't have two mix-minus feeds, the Hx2 will provide that conferencing between them for you. Just amazing, amazing box.

A couple of other features: the Hx1 and Hx2 can be set to auto-answer and auto-disconnect. So you can use them for information lines if you want to, or listen lines. I'll tell you what they're really popular for, and they're still being used for this: an IFB call-in line at TV stations or radio stations that send a reporter out. Quick, call in to your designated phone number. It will auto-answer and connect you right into the intercom system at the TV station. So plenty of uses for these devices, the Hx1 and Hx2.

I've got to tell you, the Hx2 is one heck of a great value for two hybrids. If you need two, look at the Hx2 and look at its pricing from your favorite dealer, and you're going to find that it is just a really, really inexpensive hybrid for all the stuff that it does, and you get two of them in the same box. If you would, check it out on the website at TelosAlliance.com. Click on the Telos tab, then on telephone hybrids, and you'll see the Hx1 and the Hx2. In use everywhere. You've probably heard one today and didn't even know it.

All right. It's Episode 266 of This Week in Radio Tech. I'm Kirk Harnack along with Chris Tobin. Chris, you want to check in from your rooftop location in Manhattan? There he is. He has to go get his microphone.

Chris: Yes, yes. Hang on. The sun is moving, so I have to sort of move out of the direct light.

Kirk: Gotcha.

Chris: Yeah. So I have a nice water tower behind me. I should get a little spigot and get some water out of it.

Kirk: I wonder what it would be like to have a live streaming surround feed, an immersive surround feed, from the top of some skyscraper in Manhattan. Wouldn't that be kind of interesting?

Alex: That's exactly what I was thinking. There's probably plenty of water there. We don't have a lot of that out here in Arizona.

Kirk: Right behind Chris? There's a lot of water right there.

Chris: Two large water tanks. Yes.

Kirk: You know what would also be cool? To have some immersive sound arrays in Central Park. I've never lived in New York City, but having spent some time there, I've got to believe Central Park--yeah, I know, TV shows Central Park as a place where you might get killed after midnight, but Central Park is where you keep your sanity in a place like New York. It's so relaxing. So many corners of the park. Oh my goodness, it's like being in the middle of the country. I would love to hear an immersive sound array from Central Park, livestreaming.

Chris: Central Park exists for that reason alone. It was for people to get away, city folk to get away to the country without leaving the city.

Kirk: Yeah.

Chris: It's all man-made. Everything in Central Park is man-made, including the rocks that you may climb on and the water you pass, the pond and everything. It's all been placed there by human hands.

Kirk: Yeah. Good design, though. It feels very natural. Isn't there a place called the Bramble or the Ramble or something like that?

Chris: Yeah.

Kirk: It feels like you got lost in the woods.

Chris: Prospect Park in Brooklyn is similar. It's actually larger, I believe. Similar effect.

Kirk: Alex, we have a number of other subjects to cover. Anything more that you want to wrap up with on these sound acquisition techniques you've been talking about for the last half hour?

Alex: I think the best thing to mention, even though we got deep into surround sound and immersive content, is to go back to basics for just a moment and make sure everybody knows that creating a really great recording, no matter the genre, really just takes keeping it to the basics, to the core. A set of two microphones makes a stereo recording. You don't need 50 microphones to make a stereo recording. Start off with two, make sure your gain is set correctly, and record into the medium with plenty of headroom.

Especially with digital today, the signal-to-noise ratio and the quality of products has gotten so high at a very economical price. Just do the recording justice by making sure you have all that set up well. Keep it basic and use these--use your ears. If it doesn't sound good, try changing where you put those microphones. Keep it simple. It doesn't need to be highly complex.

Kirk: Chris may have some follow-up questions, but I've got two right off the bat. Does the binaural technique, maybe with an added microphone or a microphone array on top, have any application here?

Alex: Yes. I don't spend a lot of time with binaural, so I can't speak to it too much, or at least not very intelligently, I would say, but I do know that binaural is a different way of getting immersive recordings. The primary reason I don't spend a lot of time on it is that even though a lot of us wear earbuds with our Apple iPhones or what not, it's not the norm. Not everybody is wearing a pair of headphones trying to get this immersive experience going on. You want to make sure it translates to speakers and to a television, and so on and so forth. Oftentimes, a binaural recording won't translate well.

Kirk: Okay.

Alex: That's the issue that's there.

Kirk: My second follow-up question is, would it just be absolutely goofy to have a technique where you mic every instrument close and mix 72 microphones together? Is that just goofy?

Alex: Actually, it's done quite a bit.

Chris: Quite often.

Alex: Yeah. When we watch, let's say, one of the awards shows, the Academy Awards or what not, almost every instrument is mic'd up close and we're mixing it on a large desk or what not, and then we're trying to create that same feel. The main difference between the two approaches is that with just a few microphones--we like to call it a minimal microphone technique--we're allowing the ensemble, the conductor, and the acoustics to create the sound that we want.

If we're in a pit orchestra, or we're doing something for live television where you've got to mic everything close, then we're artificially creating the environment. One is not better than the other; they're just different.

Kirk: Gotcha. All right. Chris, I'm sorry, did you have something to follow up with?

Chris: You're talking about instruments, 72 instruments plus or minus. That's done often. The binaural stuff I've done, I did recordings many years ago. Yes, it's best experienced with headphones. Outside of that, it can be difficult to enjoy. It is an interesting recording technique. It's fun. You do hear things in a time/space frequency relationship that you probably are not accustomed to if you are wearing headphones.

I can tell you about one of the recordings I made, a live jazz quartet outside at a beach. There was a radio station helium tank--they were doing balloons for a giveaway--standing behind me, say about 20 or 30 feet, and they're blowing up balloons and you hear that sound.

Well, wearing the headphones, listening to the music recording, all of a sudden you literally hear the sound of the helium come from behind you. I've had people wearing headphones in a room suddenly turn their heads to the right. I'm like, "Oh, it must be the balloons." That is the time/space, three-dimensional quality of binaural recordings, or now of the immersive ones.

You can create that effect where you visually are looking or listening to something in the foreground in front of you and all of a sudden something comes out to you from a different dimension, I'll call it, and your brain registers something shifted, literally phasing. You now turn your head. That was the coolest thing we did with the recording where I was like, oh wow, now I really understand what some of these folks go crazy with when they do binaural recordings and the--what's it called? Oh Fritz, Fritz head?

Alex: The Fritz head. Yes.

Chris: Yes. I did a lot of reading up on that when I was doing these recordings. Then I tried the same concept as the Fritz head, using two condenser mics velcroed into the collar of my shirt for a Joe Jackson concert at Radio City Music Hall, to record it acoustically, in the same fashion Alex was mentioning, rather than with all these crazy microphones. That too was sort of quasi-binaural, if you will. It was pretty cool. Again, there too, I heard conversations behind me, to the side, in front of me, while listening to Joe Jackson perform.

So yeah, I would say if you're going to record stuff, don't think it's set in stone that you have to do it one way or the other. As Alex pointed out, whether you make a stereo recording with two microphones, or multiple, or 5.1, whatever it is, listen to the mix and try to capture what the event or the moment is about. If it's music, try to capture what that artist is trying to convey.

Alex: I don't mean to pile on another story here, but I remember listening to one of my first binaural recordings. It actually had sirens on it. This was 15, 20 years ago. I was home visiting my family. Here I am in the backseat of my parents' car, about 22 or 23 years old, still thinking I'm a hot shot or something like that. I'm listening to this and going, "Mom, pull over, there's a cop coming up." She's like, "What the heck is going on?"

Kirk: Hey, you know what? We're going to run out of time here in the next 10-15 minutes. Let's move along to a couple of things I would love to hear about. Alex, before the show, we were talking a little bit about loudness. You mentioned the BS.1770 standard. You may have mentioned another one in our conversation too. Can you talk to us a little bit about loudness and measuring loudness?

In the world of most broadcast engineers, we're always trying to crank it up. We're trying to have loudness so people can listen to our station in any environment and it still sounds pretty good, but we know we've got to change the original dynamics of what got recorded. Of course, nowadays with CD mastering being the way it is, or song mastering being the way it is, we don't have to change much. In fact, we have to undo some of what's been done so we can do our magic to it. But talk to us about loudness and how we ought to be thinking about loudness.

Alex: Well, when I mention loudness, it's usually before that end broadcast processor, where everybody is trying to be present on the dial and so on and so forth. What I mainly deal with is how we actually send the audio from one location to another and make it more uniform. It's been an issue in various types of broadcasting, radio and television, and a lot of people have been circling around a variation of what's called the BS.1770 standard. That standard has been revised a few times. I think we're up to BS.1770-3, the third revision.

It basically is a measurement standard that tries to mimic, "Okay, this is the loudness as we perceive it in an environment." You set the volume on a console, you're hearing something coming at you, and it's a certain loudness. We have the scale--they call it LUFS. I'll get into that in a second. But supposedly it mimics what we perceive as loudness. The other types of scales that are out there, peak-reading meters or what not, don't really give you a sense of how loud something is. That's what this new standard is there to help us with.

Kirk: Okay.

Alex: It has also been adopted by various organizations around the world. Here in the United States it's A/85. A/85 is used primarily for television broadcasting, and most stations run at a specified level in loudness units relative to full scale. That's what LUFS stands for: loudness units relative to full scale, full scale meaning 0 dBFS. When you hit zero, you can't go any farther beyond that.

So most of the broadcasters in the television world are broadcasting around -24, -23 LUFS. That's sort of what they say, "Hey, we need to broadcast at that level." It allows us a lot of headroom. A lot of dynamics can come back into the content that we're listening to.

But also, because it allows for that dynamic range to occur--and I'm probably not saying this most effectively--when the audio does get to something like a radio station, where it needs to be compressed to be on the dial at a certain level, it allows us to control that audio much more effectively, with far fewer artifacts, much less distortion, and so on. If you're sending material that's a lot louder before it even gets there . . . have you heard the term hyper-compression?

Kirk: Sure. Yeah.

Alex: So when things become hyper-compressed, you can't really squeeze much more out of them. That's why there are these various techniques now to undo a level of squash that's there. Here in our production studio alone, we have three different types of de-clippers and different tools to get the dynamics back into the music.

So I encourage people to look up this standard, BS.1770. You can Google it. You can put it into YouTube. There are lots of videos that can explain it and teach you much more effectively than I'm probably explaining it now. If you're using a loudness scale, go with a relatively conservative loudness level, -23 or -24, which is pretty much the norm here in the United States for how we're trying to send material, or should be sending material, from one place to another.

It is the PBS standard. It is the standard for PRSS, which is one of the distribution methods for public radio. They and other entities are using the BS.1770 standard, where -23 or -24 LUFS is the loudness level we're trying to get to. That allows for up to 20 dB or more of dynamic range. That's just wonderful.

Kirk: Gotcha. All right. We'll provide links to various documents in the show notes if folks want to have a look at that. Unfortunately, we're going to run out of time here. I want to talk about our final sponsor here. When we come back, Alex, if you would--and Chris, add your thoughts to this as well--think about what you've talked about so far in terms of audio quality and capture, and then give us some comments on what the broadcast engineer ought to be thinking about to maintain quality as best we can.
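The BS.1770 measurement Alex references can be sketched in a few lines: K-weight the signal with the two biquads from the standard (the coefficients below are the published 48 kHz values), average power over 400 ms blocks, and gate out near-silence. This simplified mono version applies only the absolute -70 LUFS gate, not the relative gate added in later revisions, so treat it as an illustration rather than a compliant meter:

```python
import numpy as np
from scipy import signal

# ITU-R BS.1770 K-weighting for fs = 48 kHz: a high-shelf "head" pre-filter
# followed by the RLB high-pass. These are the published 48 kHz coefficients.
SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
HPF_B = [1.0, -2.0, 1.0]
HPF_A = [1.0, -1.99004745483398, 0.99007225036621]

def integrated_loudness(x, fs=48_000):
    """Mono integrated loudness in LUFS (absolute -70 LUFS gate only)."""
    y = signal.lfilter(HPF_B, HPF_A, signal.lfilter(SHELF_B, SHELF_A, x))
    block, hop = int(0.400 * fs), int(0.100 * fs)  # 400 ms blocks, 75% overlap
    loud = []
    for i in range(0, len(y) - block + 1, hop):
        p = np.mean(y[i:i + block] ** 2)
        if p > 0:
            loud.append(-0.691 + 10 * np.log10(p))
    gated = [l for l in loud if l > -70.0]         # drop near-silent blocks
    mean_p = np.mean([10 ** ((l + 0.691) / 10) for l in gated])
    return -0.691 + 10 * np.log10(mean_p)
```

As a sanity check, a full-scale 997 Hz sine should read close to -3 LUFS, the calibration value given in the standard.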

Is there anything that we're doing that we shouldn't be doing or not doing that we ought to be doing? We might be addressing the engineers more at fine arts radio stations where dynamic range and plenty of it is more allowable. Maybe we can also speak a bit to engineers at commercial stations where dynamic range is generally not what we're really after all the time.

I'd like to hear about maintaining audio quality, or maybe where we're going in the future with regard to high sample rates and bit depths. And please, let's get rid of audio data compression where we can, because hard drive space is so cheap nowadays. You're watching and listening to This Week in Radio Tech, Episode 266, with Alex Kosiorek as our guest. Chris Tobin is along as our cohost, broadcasting from Manhattan. One of our sponsors for this show is Lawo and the Lawo crystalCLEAR audio-over-IP audio console.

This console is really interesting in that, okay, it's traditional in the fact that there's this rack-mount box. This is where all your audio inputs and outputs go. This is where your mics can plug into, analog ins and outs, AES digital ins and outs. And there's also networking capability, where you can get the Ravenna standard, which also is AES67 compatible.

So you've got all your inputs and outputs on this one-rack-unit box. You can also get dual redundant power supplies in it as well. That's the mixer; that's where the mixing goes on. That's not the part that you touch. That's not the part that the disc jockey or announcer or program producer touches.

The part that you manipulate is now on a touchscreen, a multi-touch touchscreen that's actually a Windows PC with an app that takes over the whole screen and looks like an audio console. Now, the audio console on screen is nice and large. You can easily get to all the faders and the buttons, no problem. You can move several faders up and down and touch buttons at the same time because it's a 10-touch touchscreen. That's all I could handle with 10 fingers anyway.

But it's an app that runs. It's networked into the DSP mixing engine. So this networked console can be anywhere on the network. If you need to pick up and move to a different studio, no problem. If you need to pick up and move to, I don't know, your desk, no problem. You could do a show from your desk.

What's interesting is that since the console is literally designed in software, there's nothing on the screen that isn't designed on purpose in software, all the buttons are context-sensitive. They're contextual. So if you hit an options button on a mic channel, you get options just for mics, just for what you want to do right there. It's a very interesting concept. I think it's going to be important for a lot of broadcasters to look at going with this technique. You have so much flexibility. I think it's just the beginning of this kind of technique.

Check it out on the web. There's a video where Mike Dosch, the Director of Virtual Radio Projects at Lawo, demonstrates this product, the Lawo crystalCLEAR Virtual Radio Mixing Console, talks about how to use it, how it works and why it's designed the way it is.

If you go to Lawo, L-A-W-O, that's a German word, Lawo.com, and look for radio consoles and then look for the crystalCLEAR virtual radio mixing console. You'll see the page you see on the screen right now. In the upper right-hand corner, there's Mike Dosch, a little thumbnail from the video where he's explaining how this works. Check it out.

Contact your nearest dealer and see if you can take one for a spin. All right. I appreciate Lawo for sponsoring this portion of This Week in Radio Tech and the Lawo crystalCLEAR Virtual Radio Console.

All right. We're back for our final few minutes. Alex, I was asking you to think about your messaging to broadcast engineers about audio quality. Got some advice for us?

Alex: Yeah, certainly. I think that a lot of things have changed now that we're into the digital domain.

Kirk: Okay.

Alex: And a lot of things have become much simpler. You mentioned audio over IP, which is another way of getting audio from place to place. Everything now seems to be connected at least at 16-bit, usually at 24-bit linear audio. I think one of the important things is to always double-check. I say dot your Is, cross your Ts, and make sure you're actually communicating appropriately. I've been in a lot of situations where you have a 24-bit I/O device, but inside the device you've accidentally turned on dither down to 16-bit when you don't need to. Everything else is communicating at 24 bits. Those types of simple checks in your system, to ensure that everything is communicating at the best bit depth and hopefully as linear audio, really help with audio quality.

Whether the source is popular music or classical music doesn't really matter. If we have everything in a linear storage format, and communicated through the entire broadcast chain as linear audio at the best bit depth and hopefully the best sample rate as well, that really preserves quality.
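Alex's point about accidentally dithering down to 16-bit is easy to picture in code. Here's a toy sketch of what a 24-bit-to-16-bit stage does; TPDF dither is one common choice, and the function name and details are illustrative, not any particular device's behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit(x, dither=True):
    """Quantize float samples (full scale = +/-1.0) to 16-bit integers.

    With TPDF dither (two uniform draws summed, +/-1 LSB total) the
    quantization error becomes noise-like instead of signal-correlated."""
    scale = 2 ** 15
    y = x * scale
    if dither:
        y = y + rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))
    return np.clip(np.round(y), -scale, scale - 1).astype(np.int16)
```

The cost is a fixed noise floor in the neighborhood of -93 dBFS, harmless when you intend it, but a quality leak if the rest of the chain is 24-bit linear, which is exactly the misconfiguration Alex warns about.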

The other thing is headroom. For years I was always thinking, "I need to get as close to digital zero as possible." Now, at 24-bit depth and with the chips that are available these days in very economical little boxes, even the ones from M-Audio that cost like $88 . . .

The performance of those things is fantastic. We don't need to be hitting digital zero anymore. Leave plenty of headroom in there, especially through your entire broadcast chain. If you have lots of headroom--6 dB is what I call the minimum--you shouldn't be getting above -6 dBFS if you don't have to, all the way through the entire broadcast chain. Use the BS.1770 standard that I mentioned.
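Alex's -6 dBFS rule of thumb is straightforward to check programmatically. A small sketch (the 6 dB margin is his suggested minimum; the function names are our own, not from any product):

```python
import numpy as np

def peak_dbfs(x):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = np.max(np.abs(x))
    return float("-inf") if peak == 0 else 20 * np.log10(peak)

def has_headroom(x, margin_db=6.0):
    """True if the signal stays at least margin_db below digital zero."""
    return peak_dbfs(x) <= -margin_db

# A tone peaking around -12 dBFS comfortably passes the -6 dBFS check.
tone = 0.25 * np.sin(2 * np.pi * 440 * np.arange(48_000) / 48_000)
```

A check like this on files as they enter the plant catches hot material before it reaches the processor, in the same spirit as Chris's suggestion below about marking meters for staff.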

If you have everything going through your broadcast plant at -24 LUFS, that will help you maintain headroom almost automatically. Keep that through the entire broadcast plant. Then by the time it gets to the transmitter, it gets to the broadcast processor, regardless of brand, it's going to be the best it can be at that point, and then you can manipulate it and you have much more to work with. That's what it is.

Kirk: Yeah. Good advice. Chris Tobin, from high atop a building in Downtown Manhattan, this has been pretty interesting. I've enjoyed this. How about you?

Chris: I've enjoyed it greatly. Everything Alex has mentioned has been spot on. My suggestions are pretty much aligned with Alex. Come up with a standard level that you're going to operate with, -23, -24, set some type of a standard internally with staff. What I always like to do is educate folks on why and how you're doing it and then give them the tools.

So let's picture a radio station that is jazz music, or jazz and classical, or a combination thereof, and you have several groups that work outside as well as inside the radio station. If they're outside, as Alex showed in some of his pictures, they have a MacBook they're using to record, or it's part of a laptop-centric recording system. Make sure that maybe there's a LUFS meter on that laptop, so when they're out in the field, they can already look at the proper reference and know where they are.

So when that recording makes it back to the studio for any other production work, enhancement, or archiving, it's already, hopefully, at the proper level, so that when it goes back into production there, they bring it up on their machine and say, "Oh yeah, we're good," and they just continue on their merry way. The same would be true for the consoles. Try to mark the VU meters or the LED meters to somewhat represent where they should be for -24 LUFS, because I'm sure they're not on that kind of a meter.

But at least demonstrate the workflow throughout the plant, where to keep an eye on things, so that if there are any hiccups, you can catch them really quickly. Also, as Alex said with 24-bit and 16-bit, there's the sampling rate: is your plant 44.1, 48, or are you going to go 96? Whatever you choose, you're going to have to keep it throughout the entire workflow.

So when people come in from outside, maybe from a Pro Tools facility doing Pro Tools work, are you doing 44.1 or 48? Are the pieces and parts coming in at the proper rates? These are things you want to have up front, so when people are working for you, you have guidance for them. They know what they're going to do when they come in. That's just broad strokes. There are a lot of other things you can do. One thing I want to throw in after all this talk: Kirk, you mentioned earlier about orchestras and music scores on a movie, and how you make, say, the piccolo create a sound that's tailored to the score and to the image that you're watching. If you want to get a real good taste--and if you're a sci-fi buff it's even better--listen carefully to the score on the "Doctor Who" series. It's performed by the National Orchestra of Wales. Murray Gold is the composer.

I watched a couple of interviews from the BAFTA Awards, where he received some nominations. He talks about how he was hired to do a TV score, but he didn't want it to be a TV score; it had to be something different. He'd ask, "Well, you want this? You want that?" He'd play pieces. They heard strings, percussion, other things, and said, "That's what we want." He said, "I'm going to need an orchestra."

They have the National Orchestra of Wales producing the score of "Doctor Who." When you listen to that score and the character identity in each different sound, I'm telling you, once immersive technology gets really easy to implement, it's going to be crazy. If you want a feel for how the score brings out everything in the visual, that's one of the shows to watch. I just think it's great for that.

Alex: It's great that you mentioned that. There's a documentary coming out this fall called "Score." It's a movie featuring all these different composers, probably including the one you just mentioned, and a lot of the orchestras involved in this entire process. Especially with television music, there's a resurgence of this right now.

Chris: There's a phrase for it, the Korngold sound of Hollywood. That was Erich Wolfgang Korngold. If you remember the movie "Robin Hood" with Errol Flynn, that music score is his. He took a different approach and brought the orchestral sound and energy to movie scoring. This goes back 50, 60 years. That's why I brought in the "Doctor Who" reference. The movie "Score" will probably talk about it this fall.

If you listen to movies produced in that fashion, and "Star Wars" in the early days was done that way with John Williams and a full orchestra, there's a difference. There's something to be said for the aural effect as well as the visual, and when you marry them together, you can't go wrong.

I think with immersive audio and the techniques and the ability to take an object and place it in different spots in the sphere of your listening (I'm using my hand to trace a sphere around my head, for those who are just listening), we're talking audio in more than just a single dimension. So you have to think of it that way.

Alex: Indeed. Yeah.

Chris: You mentioned earlier, "What's it going to be like to record something on a rooftop in Manhattan?" Well, it's three-dimensional. It takes a lot.

Kirk: One thing that's become clear to me through this conversation, and where things are going, is that audio engineers will have no lack of work, at least high-quality work, in the coming years. Audio engineering artists, perhaps; I'm not sure what you call them, people who deal with this gear and work with sound placement, making sure you have great-quality audio to place. We're in for exciting times ahead, I believe.

Alex: I actually want to plug something really quick, if you don't mind.

Kirk: Yeah.

Alex: When it comes to audio engineering and immersive surround, one of the biggest realms where there's actually a lack of engineers right now, and a lot of the broadcasting companies have mentioned this, is sports. In sports you have an immersive environment all around you. If you're in the stadium, you feel it.

Creating that in the broadcasting realm is not the easiest thing to do, because we're still short of engineers for it. So if any of you out there are thinking, "What area of broadcasting can I get into that will let me dive into immersive surround?" I highly encourage looking at sports broadcasting.

Kirk: Sports audio.

Alex: Sports audio indeed.

Kirk: Good. And remember, there's sports audio all around the world. It's not just football, baseball, and basketball in the US. I just got back from Australia, where there's rugby. There's rugby league, which apparently is completely or at least somewhat different. And there's Australian Rules football, which nobody understands at all. But it's got plenty of opportunities for good audio. Guys, we've got to go. Alex, this has been fabulously interesting. I hope you can find time to come back and visit us another time. We have a lot of things we didn't get to cover.

Alex: I'd be more than happy to.

Kirk: That's great. Thank you so much. Chris Tobin, you're the chief cook and bottle washer at IPcodecs.com, something like that. Is that right?

Chris: Yes. Just support@IPcodecs.com. I'd be more than happy to help you out and make things happen for you. Just for the record, as I said earlier, if you search for the H&M building in Times Square, New York City, that's where I'm at. That's the rooftop.

Kirk: The H&M building.

Chris: I'm on the south side, so I'm looking at the Empire State Building, One World Trade Center, and a few other landmarks, facing Downtown Manhattan.

Kirk: Sounds good. Wish I was there.

Chris: By the way, this is all done over LTE. This is all wireless.

Kirk: I'm amazed at how well you make that stuff work. Remember, next week we'll look at your go-kit. Thanks a lot to Suncast for producing this episode of This Week in Radio Tech, and to Andrew Zarian, who's out today but provides the network, the GFQ Network, with lots of interesting shows that you should check out at GFQNetwork.com.

Alex Kosiorek has been our guest today along with Chris Tobin, who's our usual cohost. Thanks to our sponsors. Please visit them and tell your friends about This Week in Radio Tech. You can find us at ThisWeekinRadioTech.com. If you'll just subscribe to the podcast, it will automatically get loaded onto your favorite device or devices. I think I'm responsible for about four downloads a week on four different devices. We'll see you next week on This Week in Radio Tech. Bye, bye, everybody.

Topics: Broadcast Engineering