Adaptive Streaming with Tim Pozar | Telos Alliance
By The Telos Alliance Team on Nov 6, 2015 11:58:00 AM
Adaptive Streaming with Tim Pozar
Online video services like Netflix and YouTube have used adaptive bitrate streaming for a few years now. The same tech is becoming available to audio streamers like radio stations and other online audio broadcasters. Tim Pozar, Streaming Director for Fandor.com, joins Chris Tobin and Kirk Harnack to teach us about adaptive streaming technology.
Watch the Video!
Read the Transcript!
Kirk: This Week in Radio Tech, Episode 280, is brought to you by the Axia Fusion AoIP mixing console. Fusion: where design and technology become one. By the Z/IPStream R2 hardware audio processor and stream encoder. Stream like you mean it with a Z/IPStream. And, by Lawo and the Crystal Clear virtual radio console. Crystal Clear is the console with the multi-touch touchscreen interface.
Online video services like Netflix and YouTube have used adaptive bit rate streaming for a few years now. The same tech is becoming available to audio streamers like radio stations and other online audio broadcasters. Tim Pozar, Streaming Director for Fandor.com, joins Chris Tobin and Kirk Harnack to teach us about adaptive streaming technology.
Hey. Welcome in to This Week in Radio Tech. I'm Kirk Harnack, your host. I'm glad to be here. I'm really excited about this show. We've got a great guest for you and, of course, our usual lineup of superhero hosts like... well, there's me, and then the best-dressed engineer in radio is Chris Tobin from New York City. Hey, Chris, welcome in.
Chris: Hello, Kirk. I'm doing well, yes. Today is a dress down day. I'm doing some work in a terminal room, but having a good time.
Kirk: Cool. Are you in the city, are you in North Jersey, or where are you at nowadays?
Chris: New Jersey. I'm in Newark, New Jersey today.
Kirk: I'd ask for a weather report, but it looks like it's kind of punchy around there.
Chris: It's raining.
Kirk: Is it really?
Chris: It's supposed to be a 10% chance of rain today, and so far it's a deluge. It's their idea of 10%.
Kirk: We're going to bring our guest in here in just a minute. Hey, if you've tuned into this, then you've tuned into the show about radio and audio engineering and RF engineering for radio. It's This Week in Radio Tech. This is episode number 280. Normally, every 10 episodes we do a War Stories episode where we talk about some crazy experience that we've had that we can learn from. I've asked our guest, who we'll bring on in a minute, to think about some crazy episode in his engineering life that maybe we can all learn from. While we're thinking about it, maybe Chris Tobin can come up with something as well.
In the meantime, though, the show's topic is about something that's new. Actually, there will probably be war stories about this new topic as time goes on. We're going to be talking about adaptive streaming technology. We've had streaming for a few years. There are all kinds of conditions that affect whether a stream can be received by a given listener. We're going to talk about adaptive streaming technology and how to put that to use. Hang on for that. We're going to get right to that in just a minute.
Let me go look at the rundown. This is a new thing for Kirk here. I'm supposed to intro the guest according to the rundown. Let's do that now. Our guest, welcome in from California, Tim Pozar. Hey, Tim.
Tim: Hello. Hi.
Kirk: How are you doing?
Tim: Good. I'm in the San Francisco area, in particular in my office here in the Jackson Square area of San Francisco.
Kirk: Cool. You and I were talking about something that you do. It's a website that I don't think I'd really heard of until we talked. That is a website called Fandor. Tell us about that and what you do there.
Tim: Sure. Fandor, the quick elevator speech is it's a subscription video-on-demand service. It's much like Netflix, where you pay your $8 or $10 a month or whatever and it's all the films you can eat. In this case, the films that we do are the ones that Netflix usually doesn't pick up. It's independent and foreign-language films, film festival films, the artsy kind of stuff you would see Criterion have. It would be the Kurosawa films, Werner Herzog, those sorts of films. If you're a real cinephile, this is the place to be to sign up and watch this kind of stuff.
It's designed to try to also help out those people who create these kinds of films. We try to extend the income for these kinds of films. Normally, these just see the film festival circuit. Then they put them on a shelf and they don't make any more money. If they can put it on our service maybe we can pay them a little bit more for their efforts.
Kirk: Cool. The site is Fandor, F-A-N-D-O-R. You have a free trial program. You definitely might want to check that out after this show is over. Speaking of this show again, coming up we're going to talk about HLS and other types of adaptive streaming. Stick around for that.
Our show, This Week in Radio Tech, is brought to you by some sponsors. One of those sponsors is Axia and the Axia Fusion audio console. Chris Tobin is familiar with Axia technology. He installed some Element consoles. The Element was the predecessor to the Fusion; the Fusion takes Element technology and improves on it.
All-metal construction, for example, and OLED displays that show at a glance, and very clearly, what the input channels are. Confidence meters for the incoming audio, and also confidence meters for the backfeed audio. If your talent is out in the field and says, "I can't hear you," you can say, "I'm sending you audio. I see it right here. Let's find out where the problem is, because it's not the console."
The Fusion console from Axia is simply amazing. I want to point out that this is a console that will last you for years and years. I've got to estimate that this is a decades console that Axia is putting out. Why do I say that? Because the top panel, the thing that gets all the finger work, all the hands touching it, the dirt and the grime, it's made out of double anodized, laser etched, brushed aluminum. It is amazing in that the markings on it, the dB level markings, any other markings that are near any buttons around the faders, near the knobs can't wear off.
They can never ever wear off. They're laser etched and then double anodized. You can't rub them off. You can't scrub them off. If there's dirt on it, yeah, you can wash that off. Guess what? The dirt goes away and the markings stay right there. It is really gorgeous. The bullnose is metal. The over bridge is metal. The end caps are metal. Of course, the under pan is metal. It feels so solid. It will put up with years and years of heavy-duty service.
The Fusion console, like the Element console, and I know Chris Tobin configured a number of Elements, the Fusion console is custom configured the way you want it. Faders come in blocks of four. You can also get a telephone module that comes with two faders and then telephone controls right down the middle of it with clear status symbols on there that you can see exactly what the status of the different phone lines is.
You can get an intercom module that lets you page right away up to 20 different intercom stations and be ready to listen to other intercom stations when they call you. You can dial your telephone with the main monitor module. You can select what you're listening to in the headphones, the monitor selection. It's amazing how you can lay this console out. You can also get buttons that are either film caps, so they're fixed buttons, or you can get buttons that have text that's writable on there.
There's software called... Chris, help me out here. I'm having a senior moment. What's the software called? Pathfinder software will write on the buttons and can change their function if you want to. You can have buttons that scroll functions up and down. We even wrote a whack-a-mole game on the buttons just to show how fast they work and how quickly the button reactions are to the commands that you've given.
The Fusion console, of course, is the thing that sits on the desk or is embedded in the desk, whichever you like. The rest of it is the usual Axia equipment, the nodes for inputs and outputs, the studio mixing engine, the power supply for the console itself. You can get all those things combined in the Axia power station, or you can do a separates system.
Show that picture one more time, the beautiful Fusion console from the website. Go to axiaaudio.com, or go to telosalliance.com. Either way will get you there. Click on Axia and the Fusion console. This is a serious console for serious operations and will last you for many, many years. I've put a bunch of those in including just up the road from you, Tim, in San Francisco at the [inaudible 0:08:48] stations.
Tim: Great.
Kirk: Yeah, it's a beautiful installation there. Thanks a lot to Axia for sponsoring This Week in Radio Tech. I'm happy to talk to anybody about the Fusion consoles. There are videos online, by the way, on our YouTube channel. YouTube.com, look for Telos Alliance. Go to the Axia playlist and you'll see some videos about the Fusion console.
Tim Pozar is here along with Chris Tobin and Kirk Harnack, yours truly. We're here to talk about streaming. Let me go back to the rundown here so I don't mess this up. This is so new for me, an actual rundown. We're going to talk about the status of streaming and where congestion points are. This is a real teachable part of our show. We're going to have some slides here. Tim, why don't you talk us through some of these concepts that you developed to talk to other engineers about? Suncast, go ahead and pop the slides up whenever it seems appropriate.
Tim: Thanks. Actually, I got invited by Steve Lampen, who knows that I'm in this area at this point. I assume that he's been on your show, or at least most of the engineers who watch this know Steve. Steve has been an old friend. I knew him back when he was chief engineer at KJAZ over here; I was chief at KLOK and KKSF. I left broadcasting in '96. Since then I've been trying to figure out a way of making sure that the kind of material we're pushing out, and I jokingly call it audio with pictures in this case for you guys, makes it to our end customers.
They pay $10 a month. They want to be able to make sure that the films that they watch can be an immersive experience. You don't want to get about 10 minutes into the film and all of a sudden the little beach ball shows up and you have to wait for the next minute or two for it to queue up another chunk of film. That just doesn't work. People are going to leave in droves after that kind of stuff.
Originally, we had a lot of that kind of stuff. We had a lot of those sorts of problems. The congestion points are many. There's not one solution to be able to make sure that the audio stream, or the video stream with audio in our case, gets out to the customer intact. I put together a little presentation that I did at AES a couple days ago. I think you still have those slides.
Kirk: Let's go to the first slide, which is the multilayer OSI model. This is interesting what you've added to it.
Tim: Yeah. Jokingly, a number of people have added layers 8, 9, and 10 to the seven OSI layers. You'll notice that eight is financial, nine is political, and there's a little circle that says You Are Here. Usually, those define pretty much everything all the way down. The financial may mean that you're not going to have as fat of a pipe, or you're going to have older technology, or you're going to have older servers or bad placement for where your servers are going to be. It's the same thing with the political.
I noticed when I was researching this, I went to Wikipedia to see if they had something similar to this. Actually, in the next slide you have there, they added a tenth layer. This is stolen from Wikipedia. They add government on top of that as well. For instance, China is notorious for the Great Firewall that they have there. That's going to be another impediment for streaming and such.
If you want, there's another slide I have in there, which is a generalized schematic of the Internet and where things fail. That's perfect. When people usually think of the Internet and they diagram it, they make that big cloud at the top there. They try to think of everything as the man behind the curtain.
We need to figure out how our server is going to work and somehow our client is going to connect. There's this magic that happens within the cloud. If you actually drill into it a little bit, you'll see that you're actually going through a number of different service providers as well as what's called Internet exchanges. That's that little IX thing in the lower right-hand corner there.
Kirk: Is that what you call peering points?
Tim: That is. Normally, there are a couple of different ways that Internet service providers peer with each other. One is called transit. There, they're expecting that their packets will transit through the company they may be buying bandwidth from and go all the way out to the rest of the Internet. That costs money. In many locations there are companies... in fact, I've started one as well, called the San Francisco Metropolitan Internet Exchange, where we have about six or seven of these peering fabrics scattered around San Francisco and the Bay Area.
A company will throw a wire into, literally, a 10-gig or a 40-gig switch. Anybody else who's on that switch, they can peer with. You can have 10, 100, 1,000 people connected to this. They can make some sort of policy or business agreement to exchange traffic over this. This means they don't have to go out and buy a jumper or backhaul it 12 miles away, 20 miles away, or 100 miles away to get to where they want to go. They're very popular in data centers. Equinix has these kinds of things.
Kirk: What you're describing sounds to me a bit like... not that they're rogue, but kind of like independent IX points or peering points that avoid whatever politics, costs, or whatever is at the normal peering points we think of, MAE East, MAE West, whatever else there may be.
Tim: They're no longer around so much, but the more well-known peering points are companies like Equinix, which has data centers in Reston, Virginia; Santa Clara; and places like that. They'll have very, very large peering fabrics. There's another quite famous one in the San Francisco Bay Area called the Palo Alto Internet Exchange, which actually just got gobbled up by Equinix. Those are ones that usually charge quite a bit of money to connect to. It's usually about a buck a megabit. It can be a buck a megabit per month to connect. If you have a 10-gig port, you're paying $10,000 a month for connecting to these things.
In many cases in Europe, and I won't dwell on this too long, but I want to mention it since you said there's kind of this rogue thing. In many cases in Europe, there are a lot of non-profit exchanges. The London Internet Exchange does this. There's AMS-IX, which is in Amsterdam. There's DE-CIX which is in Frankfurt. A lot of these are run as not-for-profit.
The one that we have in San Francisco is actually a 501(c)(6). It's not a 501(c)(3), where it's an educational thing. It's actually a (c)(6). You'll see companies like the NFL are (c)(6)s. It's a non-profit organization that's designed to support businesses. We run an extremely low profit margin and try to encourage people to connect through these things. It reduces the cost of connecting to the Internet and providing services on the Internet.
These places tend to cause problems occasionally. There was a famous battle, if you probably remember this, about a year or so ago where Comcast was blaming Netflix for dumping their traffic through an Internet exchange where in fact they should have been dumping it through a transit connection that they had arranged with them. What that meant was they overcommitted. They were trying to dump, say, 40 gigs worth of traffic on a 10-gig port. You have a lot of loss if you're trying to do that through an under-provisioned network switch.
Internet exchanges can have problems. Typically you're running maybe 10 to 15% of your traffic through an Internet exchange. The rest of it is transit. If their transit goes away, an ISP may decide to move all of their traffic through an Internet exchange. It's like trying to put 100 gallons through a straw.
Kirk: Does this have to do, Tim, with the concept of saturation? Back when Comcast and Netflix were having their tiff... I'm a Comcast customer. Normally, I'm delighted with their service. For me, for two or three months Netflix didn't work very well. I was just about to cancel Netflix, thinking that it wasn't really Netflix's fault. Maybe it was. I don't know. I didn't, and then calmer heads finally prevailed. My Netflix works great now, better than ever.
Tim: Actually, Netflix has a couple other problems. If you go back to that slide, I can talk to you about that as well. In the case of... I mentioned the typical congestion points. I sort of pointed at them. A would have been the Internet exchange that I pointed out. B, the little arrow there, you could have an under-provisioned... a connection where Comcast may have a connection to ISP D. I'm just saying Comcast in the general term. Your cable provider may have a connection to an ISP D. They're trying to push too much traffic through their edge router.
In the case of cable companies, they actually have an interesting thing. They're trying to use basically 50-year-old technology to provide Internet service. In that case, what they're doing is they're taking the cable system that they've been pushing television channels through, usually unidirectionally because all they're doing is broadcast, and they converted these things so they're bi-directional now. What they've done is they've chopped out a couple of these channels to say, "We're not going to carry 12 or 24 megahertz worth of TV anymore. What we're going to do is we're going to dedicate that to data."
That means that they may have 300 or 400 megabits worth of data that they can distribute through a neighborhood. What that means is a neighborhood who may also want to watch Game of Thrones all at the same time because it just got released and they're going to binge watch the whole 13-part series, means that the cable system that they have is going to be, again, saturated. It's going to be quite lossy in that case. You don't get a dedicated pipe when you hook up your... it's basically a shared media within the neighborhood.
That's a big congestion point that we have in trying to deliver films to providers who are typically connected to cable services. We don't see this as much with telephone services like DSL and such because it's not as much of a shared media. You pretty much have a dedicated pipe at that point.
Kirk: I was wondering if my neighborhood may be different or better engineered by Comcast. Every time I've done a speed test in the last three or four years with Comcast, I always get more than I'm paying for, at least on any servers that are on Comcast or servers that I know are not very far from a Comcast peering point. Our own Joe Talbot, who works with me at Telos, for example, tells me that generally DSL has better jitter figures; it's better for VoIP than cable. I must just be lucky. In my neighborhood, the cable service is just fantastic.
Tim: Keep in mind Comcast went out and bought up a bunch of tiny little providers. They rolled this up into a gigantic acquisition play. They've been slowly trying to roll out and upgrade a lot of their facilities. They've tried to move fiber closer to the neighborhood and set aside more channels. They've also been upgrading a lot of their protocols. The protocol going over cable modems is called DOCSIS.
You'll notice that, for instance, you can get cable modems that are DOCSIS 3.0 now. That means they're actually a little bit more efficient along the lines of being able to push bits down a particular channel. It will also tend to aggregate more channels so you can get more bandwidth through them. I'm not trying to slam Comcast too much; we're just using them as a generic cable provider. They've had their difficulties, and it's been at various locations. There's the neighborhood, which we just talked about. There was the little fight between Netflix and Comcast. Actually, Level 3 was a part of that as well. Then there's just network engineering in general.
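For a rough sanity check on Tim's neighborhood capacity numbers, here is a quick back-of-the-envelope sketch. The channel count and per-channel rate are typical textbook values, not anything stated on the show: DOCSIS 3.0 bonds multiple 6 MHz downstream channels, and one 256-QAM channel carries roughly 38 megabits per second.

    # Rough sketch: typical DOCSIS 3.0 downstream capacity for one
    # service group (illustrative values, not from the show).
    channels_bonded = 8      # a common DOCSIS 3.0 bonding group
    mbps_per_channel = 38    # ~256-QAM payload of one 6 MHz channel
    print(channels_bonded * mbps_per_channel, "Mbps shared by the neighborhood")
    # -> 304 Mbps, in line with Tim's "300 or 400 megabits" figure,
    #    and it is split among every active subscriber in that area.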
The real problem that we typically have in trying to deliver audio is the kind of cruddy provisioning that users usually do for their home, and the expectations that they have with Wi-Fi. What will happen is they will put their Wi-Fi next to their cable modem, which may be in their living room or something like that. They may want to watch their film in the bedroom. The bedroom may be two walls away. That means they're going to have relatively low signal, or it may be in competition with their son, who is playing World of Warcraft on the other line and trying to share that as well.
What this means is you don't have a consistent crappy network condition. You have one that varies up and down depending on who's using it and what they're trying to download at that moment and such. This actually gets into what you talked about earlier. How do you adaptively manage a good viewing experience, knowing that you may have 1 megabit one moment and you may have 10 megabits the other moment? You want to make that video, or audio in some cases, sound as good as possible.
Kirk: Tim, let me see if I can say this a different way. Without adaptive streaming, the content provider, Fandor, Netflix, or whoever it may be, or a radio station, my little WIQQ in Greenville, Mississippi... without adaptive streaming, we as the content provider doing the encoding have got to decide, "Okay, what's the worst-case scenario that we want to serve? That's the bit rate that will have to do. What's the buffer size that we need to make sure that the player is using?" We're going to provide a least-common-denominator experience for everybody.
Or, we can do what some radio stations did for a while. Maybe television did this for a while too. Make the consumer choose. Click this bit rate, or click this link for this bit rate, or click this link for this bit rate. At our station in American Samoa, we did that for a while. If you were on the island, you could hear our station at 96 kilobits MP3. If you were off the island, we only let you stream at 48 kilobits, HE-AAC just because of the cost of the streaming here and there and the reliability. Adaptive streaming lets us not have to program for the least common denominator connection, right?
Tim: Exactly. In this case, we're stressing the network a little bit more than just 48 kilobits or 96 kilobits. Our lowest bit rate is defined by Apple. Apple wants a bit rate that's 64 kilobits per second, and it's audio only, AAC with no video. The idea behind that is they want to make sure there isn't a point where you're not hearing anything. The video may freeze on the screen a little bit, but you're still hearing an audio track play back. That's the lowest we'll do. The next one up that we do is 300 kilobits per second. There we do about 192k AAC audio, which means it's about 100k or so worth of video in that MP4 container that we have.
Kirk: Is there any device upon which that looks good?
Tim: No. If you have something that doesn't have a lot of color gradation or a lot of movement to it or anything else like that, like a cartoon that has relatively static images, it will look okay. You've seen this. The first time it shows up, you'll see these big blotches of video and then it sort of fills in. Again, we do this for those people who may be on cell services or something like that who are trying to watch their video on something like a phone. You may be able to get away with something like that for a cell phone.
We have people actually wanting to sit down and watch this on a 52-inch screen, a 1080 monitor. Eventually we'll be doing 2k and 4k video as well. We go all the way up to about 10 megabits or so for our highest bit rates and such. Eventually, when we do 4k video, we're going to probably have to be pushing around 20 to 50 megabits or so to [inaudible 0:28:23].
Kirk: Really? That much?
Tim: Yeah.
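To make the ladder Tim describes concrete, here is what a hypothetical HLS master playlist for such a service might look like. The renditions, rates, and paths are invented for illustration; note the 64 kbps audio-only entry at the bottom, which reflects the Apple guideline he mentions (in the CODECS strings, mp4a.40.2 is AAC-LC, mp4a.40.5 is HE-AAC, and the avc1 values are H.264 profiles).

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=10000000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
    1080p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
    720p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=416x234,CODECS="avc1.42e00a,mp4a.40.2"
    300k/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.5"
    audio/index.m3u8

A client fetches this file once, then picks whichever rendition its measured throughput can sustain.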
Kirk: Tim, we've got to take a little pause. Take a breath. Think about the next couple of things on your slides that you want to talk about. We've been talking about the issues with the congestion points, understanding that, and understanding that if we don't have an adaptive method of streaming at a bit rate that's appropriate for the player's connection all the way through the Internet to the provider, then we either have nothing or we've got to stream at a really low bit rate.
The idea here is to make it convenient and automatic so that the consumer just hits play, whether this is audio or video. This has been going on in video for a while now. Now adaptive rate streaming is becoming available to audio-only broadcasters, like radio stations, and the players are becoming built into operating systems. That's what we're going to get to. There you go.
Tim Pozar is our guest. He's the Director of Streaming Technology for fandor.com. Tim has been in the radio business for quite a while, engineered at radio stations in California. He knows a lot of the same people that you and I know. Chris Tobin is with us here too. Chris, you're just soaking this all in, aren't you?
Chris: Yeah, absolutely. For radio stations, I'm curious in the next portion of the show if Tim can talk about what the most economical way is to get the best streaming audio you can for a radio station who wants their mobile audience to stay in touch and keep the cost down for the radio station, or at least find an economical approach.
Kirk: Funny you should ask. Our sponsor for this part of the show... I just got this box today. I was on my way to the Nashville SBE meeting, where I gave a little talk about adaptive rate streaming. As I'm leaving the house on my way over to the restaurant where the Nashville SBE chapter meets, here comes the UPS driver up the road. The road I live on, there's only room for one car. The UPS truck is about a car and a half.
Anyway, I got out of the way, flagged down the UPS driver, and he had a box for me. I knew he would. This is what was in the box. This is our sponsor for this part of the show. Telos, our Z/IPStream division... let's see, can I get that? There it is. It's way over there. There's the Z/IPStream logo. Obviously, it's hardware. We make this in software too, but this is the hardware box.
This is the Z/IPStream R2, with R meaning rack mount. There we go. Thanks for cutting away from me. I don't know what's happening to my video. This is the box. What this box does, this is your new transmitter for your cluster of radio stations because this will take eight program inputs, up to eight. You can buy this with just a couple of program inputs if you want and pay for more as you go. You can have up to eight program inputs, AES or Livewire audio over IP coming into this box.
From there, you can do all kinds of things. There's the back of it. The AES inputs and outputs are that card over there on the right. Eight stereo in, eight stereo out, AES. There are two network connections and, of course, all the other usual connections. Dual power supply so you've got lots of reliability built in.
What this box does is run our Z/IPStream software that does audio processing, either Omnia-3 or optional Omnia-9 processing, and stream encoding. For each program coming in, you can encode in different ways if you want to. For legacy, if you want to encode an MP3 stream at 96 kilobits per second and send that off to your legacy Shoutcast server, no problem. Just do that.
At the same time, take the same audio from the audio processor for that audio coming in and encode that in AAC at a high bit rate for some other service you may be providing. Then, if you want to encode at an adaptive bit rate stream, Apple HLS for example, or Microsoft Smooth Streaming, for example, either way, you can encode at several different bit rates. At my station in Mississippi, I'm doing a test here. I'm encoding at four different bit rates in HE-AAC.
That box I just showed you will absolutely do that and make those files. It comes out with file chunks. It will put those either on a local server or it will FTP them to another server and make those adaptive rate streams available to your audience. We'll talk about that in just a few minutes. I'm so excited about this box. It's well built. It's amazing. It's heavy duty. It does make some noise, especially when you turn it on. Those fans come on at full speed. They quiet down after a minute or two.
It is the way to go. It's your new transmitter for the Internet for your radio stations. No more Mickey Mouse. No more free software. No more Lame MP3 encoders and bad metadata. All of the metadata is built in too. Lots of filters for your metadata. Activities are all built into that so you can hook up your metadata aggregator programs or right from your automation systems into there too.
Check it out. Thanks a lot, Telos and Z/IPStream, for sponsoring this portion of This Week in Radio Tech. Glad to have them on. You're watching episode 280 with Tim Pozar and Chris Tobin along with me, Kirk Harnack. We're talking about adaptive rate streaming. Tim, we were just kind of getting through the congestion points, why we need adaptive rate streaming. Would you give us now an intro into the notion of adaptive rate streaming? How is it fundamentally different from the serial streaming that we've been used to like Shoutcast or Icecast?
Tim: The concept [inaudible 0:34:20] stuff that we've been doing. Actually, I run Icecast and Shoutcast servers for a lot of community radio stations and such because it's a cheap technology. When you say LAME MP3 encoders, that's actually open-source software. It's not a knock on it; LAME is the name of an open-source MP3 encoder, an open-source alternative to the Fraunhofer encoder that's out there.
The problem is, as you point out, I know a number of companies that are trying to use streaming on the Internet. They're constantly running into the fact that they want a high-quality stream out there so they'll do something like 96 or 128, but they also want to be able to deliver this to, for instance, cars. I wrote a paper about 10 years ago for Motorola where I called the cellular bands the new standard broadcast bands.
For that, what I did was look at the cost of running a 5-kilowatt AM transmitter someplace, knowing what the cost of the land is, the power, everything else, figuring the population of something like, say, the middle of the San Joaquin Valley or even San Francisco, and what you could cover with that compared to the cost of putting a server in a data center and streaming out, particularly since the cost of bandwidth has been dropping. When I first started my ISP, we were paying $5,000 a month for a megabit worth of bandwidth. At the data center, you can buy that for 50 cents a megabit at this point.
It's quite cost effective. There really isn't any excuse not to put your station on the net so people can connect that way. People are expecting that, with applications like TuneIn and various others where they want to be able to listen and stream it through their Bluetooth to their car while they're driving down the road.
The cost has been dropping, but again, the roadblocks to being able to do that, particularly if you're trying to cover drive time in the car, are a little tougher. You have to start looking at things like adaptive bit rate, because driving down the road, you're also going to be competing against everybody else who's using the cell service at that point. Then we as a video provider... again, I outlined earlier what the congestion problems are here.
How adaptive streaming works is typically a client will connect to the server. It will say, "What streams do you have available that I can start using?" This is all automagic. This is all done as part of the negotiation of the adaptive streaming protocol. In the case of HLS, which is, as you said, Apple's protocol, it's actually pretty standard. You'll see it on pretty much every platform out there, like Roku and various other ones. They all use HLS. It's pretty ubiquitous.
What will happen is it will go out and get what's called a playlist manifest. That will list all the bit rates. In there, it will actually show that you can grab this one and it will be 20 megabits, or this one will be 10 megabits, or whatever. Then the client, over HTTP, keeps asking for chunks of a stream. It may start off at a low bit rate.
If you notice, you've probably had this experience where you connected to Netflix. At first, everything looks really grainy. That's because they specifically are asking for the lowest bit rate. What they're going to do is stair step up the bit rates until it figures out, "This is the highest one I can do over this connection that I have with this customer."
Kirk: I'm so glad you told me that. I thought it was my eyes. I'd start a cartoon for my son. It'd be all blurry. I'd rub my eyes for five seconds, and it's fine. That's not me, that's Netflix?
Tim: No, that's Netflix. That's a sensible way of doing this. If you went the opposite direction, in other words, if you asked for the highest bit rate, you may be waiting until the cows come home for that to complete and for the connection to time out before it starts rolling down to lower bit rates and such. This is a more conservative way of doing adaptive bit rate. There are conservative and liberal strategies for doing adaptive bit rate. Picking the lowest bit rate first is what we do as well.
We also do a couple of other tricks for our adaptive bit rate. Normally what happens is our files are not live; our files are static films. We have, again, about 12 or 15 different bit rates the client will choose from. When it does that and says, "I want the lowest bit rate," it will get what's called a chunk list. A chunk list is a listing of every single segment in a film. The way that we do our films, a segment or a chunk is every 10 seconds.
In the case of video, you don't compress the same way you do with audio. With audio, you throw away what psychoacoustically you don't normally hear. You've probably had shows on that and talked about compression and how codecs work. In the case of video, what you do is look at differences between frames. If you have 24 frames per second or 30 frames per second, you'll typically have what's called a key frame. The key frame has the whole picture built into it. Then the next frame, which may be a B frame or a P frame or whatever, is the delta.
In other words, what's changed? If you're looking at me right now, the posters behind me that you're seeing are not going to be part of those difference frames because they're static. My mouth moving will be part of those frames. You can get relatively high compression, particularly if you start de-resing the sharpness of the film and such.
It will go out and get these chunk lists, which are every 10 seconds, which is how often we do these key frames. A key frame will come by every 10 seconds and you'll have these Delta frames in between that. The client will try to go out and get the lowest bit rate. If it decides it needs to shift to a higher bit rate or a lower bit rate, it's going to do it on a key frame boundary.
Kirk: The key frame is the first information in each file chunk.
Tim: Exactly. If you change it between that, it won't know how to construct the frame.
Kirk: For audio-only, it only has to be at the beginning of an MPEG audio frame. Not a video frame but the framing that is done with bit rate reduced audio. Those occur a lot more frequently than once every 10 seconds.
Tim: Right. We try to, again, do a lot of compression. If you do too many key frames, you won't be able to compress it down as much, obviously, because you don't have as many delta frames in there. Also, there's overhead on the client in trying to reconstruct the frames that are between the key frames. There's a tradeoff between compression and how much CPU power you have on these little Roku boxes to decode it. We've spent quite a few years finding that magic sauce that seems to work with most players.
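For anyone who wants to experiment with this, one common way to produce keyframe-aligned, 10-second chunks like the ones Tim describes is ffmpeg's HLS muxer. This is a minimal sketch, not Fandor's actual pipeline; the filenames and bit rates are made up.

    import subprocess

    # Minimal sketch: cut one rendition into 10-second TS chunks with a
    # forced key frame at every segment boundary, so clients can switch
    # renditions cleanly. Filenames and bit rates are hypothetical.
    subprocess.run([
        "ffmpeg", "-i", "movie.mp4",
        "-c:v", "libx264", "-b:v", "3000k",
        # force a key frame exactly every 10 seconds
        "-force_key_frames", "expr:gte(t,n_forced*10)",
        "-c:a", "aac", "-b:a", "192k",
        "-f", "hls",
        "-hls_time", "10",        # target segment duration in seconds
        "-hls_list_size", "0",    # keep all segments in the playlist (VOD)
        "-hls_segment_filename", "chunk_%05d.ts",
        "index.m3u8",
    ], check=True)

You would run one pass like this per rung of the ladder, then tie the rungs together with a master playlist.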
Kirk: Interesting. We're learning about how this works with video, but the same principles, without the worry of the key frames, get applied to audio. When audio goes with video, it's just along for the ride. You're making your decisions based upon the key frames. The audio is along for the ride with the same notion.
With audio-only [inaudible 0:43:16] streaming, which heretofore broadcasters really haven't had available... I'm not exactly sure why. I get it: the bigger money, the technology, the players, the encoders went to video, but now it's making sense to do this for radio too, which is why iTunes Radio is doing this. I believe the BBC is doing this already. My little radio stations, I can bring up the manifest file that you talked about earlier. By the way, that manifest file, that's the one that has the dot M3U8 extension, right?
Tim: That's correct.
Kirk: What does that extension mean, M3U8? That was weird the first time I saw that. Do you know what that was?
Tim: I don't remember exactly what that exactly means. You got me on that one. There's a reason Google exists.
Kirk: Yeah, I'll look that up. Engineers, people who are watching and listening to the show, when you see that extension as part of this technology, that's the file that's either one of two types of manifest files. What you want to do, what you end up doing, the player, you end up pointing your browser or the player to the URI of wherever that file is... my little son just stepped in. Come here, Michael. Come here, buddy. It's a family show. I'm sure there's something important he's got to tell daddy here for a second. Yes, Michael? What is it?
Michael: Guess what. Is it today I get...?
Kirk: Your toy?
Michael: Yeah.
Kirk: Yes it is, right after the show.
Michael: Okay.
Kirk: Love you, buddy.
Michael: What about now?
Kirk: How about right after the show, okay?
Tim: You've got to work.
Kirk: Give me half an hour and you can have it.
Tim: He's a cute kid.
Kirk: Thanks. He takes after his mom. He gets a toy for being good for three days. He had three stellar reports from school.
Tim: Excellent. Congratulations.
Kirk: Now, just as an experiment, this is kind of new to me. I can point my Chrome browser on an Android phone to this manifest file sitting on a server in Greenville, Mississippi. It grabs that. It tells it the different bit rates that are available. I guess the player chooses one of them. Then it goes and gets that file that describes what file chunks exist right now on that server.
They're ephemeral, by the way. Unlike a movie where they stay there on the server until you go out of business, these are ephemeral. We're only keeping three archive files for each bit rate. Each one is five seconds long. We keep 15 seconds of audio at each bit rate on the server. That second manifest file, also an M3U8, tells the player, "Go get this file now." Then it starts playing those sequentially.
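(As an aside, the .m3u8 extension Kirk asked about earlier simply means an M3U playlist encoded in UTF-8.) A live media playlist of the kind Kirk is describing, three 5-second chunks per bit rate, might look like the following; the segment names are hypothetical. The EXT-X-MEDIA-SEQUENCE number ticks up as old chunks fall off the front of the list.

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:5
    #EXT-X-MEDIA-SEQUENCE:7204
    #EXTINF:5.0,
    seg07204.aac
    #EXTINF:5.0,
    seg07205.aac
    #EXTINF:5.0,
    seg07206.aac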
Tim: Right. When you're looking at video, you have the M3U8 file, which you just described. The chunks themselves usually have a .TS extension, which is basically just a transport stream, a chunk of what would be an MP4. An MP4, keep that in mind, is just a container that holds the H.264 video stream as well as an AAC or MP3 audio stream that you may have in the middle of that.
Kirk: Describe the hierarchy again. The TS file is what compared to the MP4 compared to the actual audio and video files?
Tim: The TS file is just the 10-second chunks of the MP4 file. HLS doesn't have the concept... actually, there's a new protocol coming out that is able to split out the video and the audio. A server that's doing HLS just takes these 10-second chunks of the MP4. You really can concatenate all these TS files together and recreate your MP4 if you want.
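Tim's point about concatenation is easy to demonstrate: TS chunks are consecutive slices of one transport stream, so gluing them back together yields a single playable file. A minimal sketch, with hypothetical filenames:

    # Rejoin consecutive HLS chunks into one playable transport stream.
    # (Chunk names are hypothetical; a real player just fetches these
    # same files one at a time over HTTP instead.)
    with open("rejoined.ts", "wb") as out:
        for n in range(720):  # e.g., 720 ten-second chunks = 2 hours
            with open(f"chunk_{n:05d}.ts", "rb") as seg:
                out.write(seg.read())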
Kirk: Okay.
Tim: There is a new standard that's coming out called MPEG-DASH. That one will do things like, give me a video bit rate at this rate and give me an audio bit rate at this rate, whereas you can't do that with HLS. With HLS, it's basically whatever is bound into the MP4 container.
Kirk: Interesting. In talking with our developers at Telos about our software that does Smooth Streaming or HLS, I noticed that... I told them, "Hey, when I play back our stream with . . ." What's the name of the software? VLAN?
Tim: Oh, VLC?
Kirk: Duh, VLC. When I played back with VLC, I was getting a little tiny gap every five seconds. He said, "Yeah, not every player gets this yet." In our software, you can say, "Don't make them .AAC files that actually contain the audio. Make them .TS files." We just tell it to do that and it does. Then VLC played it back perfectly. VLC understands transport stream, .TS files. Then a new version of VLC either came out or is coming out that does understand the .AAC files and butts them right up together without any gap.
Tim: Right. VLC is actually the amazing Swiss Army knife that we use in house for testing a lot of our streams and such. It just works. Then you try to get clients working. We're on Roku. We're part of Chromecast. We're on the web. What other things? We're on iOS. On each one of those, you have to deal with the native HLS implementation. In the case of Apple, they actually have a pretty well-cooked implementation of HLS. We've actually had pretty good luck with that.
The early days of Roku, like on the Roku 1, we had a lot of problems with that. That was partially because the client itself wasn't that well implemented. The other part was that the Roku 1 was just underpowered. We could not deliver any bit rates above one megabit to a Roku 1. That took us a long time to figure out, why these Roku customers were having such a problem with buffering until we said, "Okay. You have a Roku 1. We're only going to send you these bit rates."
Kirk: Interesting. Chris, feel free to jump in any time with your own experiences and questions. In my house, I have three of the Roku 3 players. I love them. They're snappy. They play back right away. They're great. I've got a couple of the old Roku 1s. One of them, believe it or not, I've got a treadmill. I visit it every once in a while. The treadmill has got a TV screen built into it. I have to use the composite output from the old Roku 1 into the composite TV screen. Anyway, while I'm jogging on this treadmill, I want to watch stuff. Oh my god, is it slow. It is painful. Two minutes to buffer some ordinary video.
Tim: Yeah, they just underpowered the processor on that. Particularly now you're working with, again, companies like ours and such who want to be able to push out HD content and such. These little guys just can't keep up with it. Take your Roku 1, put it someplace else, and go out and get a Roku 2 or a Roku 3, or go to Apple TV, or something else like that.
Kirk: Yeah. Okay, cool. Where were we going with this? The world of video is so standardized now with adaptive rate streaming. You buy a Roku, you've got it built in. The Safari browser has this built in. The Chrome browser, at least on this Android, doesn't have it. The player is built in, but when I brought up for the first time that manifest file from my own test setup, it asked me, "What do you want to use to play this back?" It gave me VLC. I thought, "Why not try the dumbest player that this thing comes with?" I tried that and it works perfectly.
Tim: Right. Ever since... I think it was Android 4.2, they started putting in HLS support. At this point in time, pretty much any newer phone will have it. Anything from Apple is going to have it because they developed it. But if you have a 4.0 Android or older, you're going to have a hard time trying to play back anything with HLS.
Kirk: Okay. If I'm a broadcaster I'm thinking, "Okay. Maybe I should at least start experimenting with an HLS stream." Am I going to be . . .
Tim: Are you going to be able to deliver to all the clients out there?
Kirk: Am I going to be able to deliver, or do I need to wait around for MPEG DASH to get more popular? Is HLS, should I just jump in with that?
Tim: MPEG-DASH, I'd wait a little bit for that too. They're actually trying to use a new video codec called H.265, which gets about twice the compression. Those codecs are not well distributed at this point. I'd stick with HLS, using H.264. AAC seems to be pretty ubiquitous as well, and it actually sounds pretty good. If you want to do the lowest common denominator, I would do that for right now. Of course, there are a lot of people that still support, in the case of audio, MP3. You can always fall back to that.
Kirk: Let's talk for a second about the server technology involved. Our guest next week is a friend of the show, Greg Ogonowski.
Tim: I know Greg.
Kirk: Formerly with Orban. At his company now, he does this stream software that is very nice for iOS devices for listening to good-quality HE-AAC streams on the Internet. Anyway, Greg was explaining to me a few weeks ago, and I found it to be absolutely true, that one of the neat things about HLS or adaptive streaming is it doesn't require a server that's designed with server-side streaming software on it. It doesn't necessarily have all these constant connections to all of its clients going on. It's a file server. You're serving files. If you want to roll your own, an Apache web server will do it.
Tim: It's trivial, yeah. With the original video streaming, you probably remember RealAudio and such when they were doing video streaming. What caught on after that was Adobe Flash. They used a thing called RTMP. The decisions about adaptive bit rate were on the server side, not the client side. It was a constantly open connection. You had to buy their product to be able to use RTMP. Of course, Flash was not cheap as well, their servers and such.
The interesting thing about RTMP also is the fact that it used a non-standard port. Every time I went to a hotel, I would run up against a firewall that would only allow me to connect to port 80 or port 443, in other words, the HTTP or web ports. If I tried to do something like 1935, which is what RTMP uses, I'm blocked.
Apple and everybody else are trying to do something a little bit more sane and use HTTP ports. Now, HTTP is a stateless connection. That's why you have to do these chunks. The client will say, "Get me the next chunk," and then it closes the connection. Then it says, "Get me the next chunk," and it closes the connection again, and so on. You could take a video file and chop it up into these segments. Each TS segment would be an individual file. You put a manifest on an Apache server, an NGINX server, or whatever else you want to use, and all of a sudden you have an HLS video or audio streaming server.
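To underline how trivial the server side is, here is a minimal sketch using nothing but Python's standard library: it serves whatever manifests and chunks an encoder drops in the current directory, with the usual HLS MIME types. This is a toy for experimenting, not a substitute for the Apache or NGINX setups Tim mentions.

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve ./ as an HLS origin: the .m3u8 manifests and .ts chunks are
    # plain files, so any web server works. Only the MIME types matter.
    class HLSHandler(SimpleHTTPRequestHandler):
        extensions_map = {
            **SimpleHTTPRequestHandler.extensions_map,
            ".m3u8": "application/vnd.apple.mpegurl",
            ".ts": "video/mp2t",
        }

    HTTPServer(("", 8080), HLSHandler).serve_forever()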
Kirk: It's fair to reiterate that the smarts... once you've got these files on a server, static in the form of a movie or ephemeral in the form of streaming audio, is it fair to say that the smarts really exist in the player, in the client?
Tim: It does. Conversely, with RTMP, the decision making for adaptive bit rate was on the server side. In the case of HLS, HDS, or MPEG-DASH, the smarts, as you call it, are on the client side. This is why we actually have to be very careful when we construct our clients for our various platforms.
We want to make sure that the adaptive bit rate technology and the strategy and things like that will work for rather congested networks or really adverse networks. We even have people trying to stream through their satellite connection, their DirecPC connection and such. We really try to develop our adaptive bit rate clients so they can survive that kind of transport.
Kirk: That brings up a good question. In fact, I was just getting ready to look at the manifest file here. I've got it in a Keynote presentation that I gave. One of the parameters in the M3U8 file for each bit rate is "#EXT-X-TARGETDURATION:". Can you tell me what target duration... in our case, it's set to four. Is that four seconds?
Tim: I assume that's probably a four-second window, yeah. Particularly for live streams, you're not getting the whole chunk list. With a movie, as I said, say for instance you have four bit rates or something like that. In many players, what they'll do is try to download the chunk list, in other words, a listing of all the TS segments, for every single bit rate. In the case of live, you can't do that. You have no [inaudible 0:58:20] the audio. It has to be able to... particularly with audio as well, since there are no key frames necessarily associated with it, it's going to have to figure out when it can switch or how much it needs to grab.
Kirk: Part of the way this works is... tell me if I'm explaining this right. This is what I'm saying in my seminar. You've got a file chunk that is five seconds worth of audio. You've got this at several bit rates, but you've got five seconds worth of audio. The player knows this. The player is told by the manifest file that this is a five-second long file, or maybe 5000 milliseconds. Then the player has to see, "How long is it taking me to download a five-second file? Is it taking me 0.3 seconds, or 1.9 seconds, or, heaven forbid, 4.8 seconds?"
Tim: That's exactly how it determines what the bandwidth is. I'm glad you brought that up. What we do is we monitor that. We actually look every five seconds or every 10 seconds or so. We get what's called a heartbeat back to our servers and such to know how fast they were able to get a 1-megabyte file. That five seconds you have at 96k or 128k or something like that is going to be a predefined file size, particularly if you have a constant bit rate file or stream.
Then downloading it, you could measure how many bytes you've got. You could measure how long it took. That's an easy calculation of what your bandwidth is. From that you could say, "I got that. That was about 10 megabits or so. I'm going to ask for a 5-megabit stream next time."
Kirk: Good. That's how I thought it worked. You're talking video speeds there. At audio speeds, I don't know that the players start out with the lowest bit rate. I hope they do. I'd like to use Wireshark or maybe VLC to help me see which bit rate it picks. Let's say I've got four bit rates available, from 16 kilobits up to 128 kilobits. I'm hoping that the player grabs the file for the lowest bit rate first because that's a tiny file. It's 12k. Bam, gets that one down. It can't be 12k; that couldn't be right.
Anyway, it grabs these files and it says, "Hey, I downloaded that so fast. I'm going to go for the next bit rate or even a much higher bit rate." I hope that's the way it works. We talked about how, for bandwidth's sake, you're not watching no video or hearing no audio for a while while more of it buffers in; we're eliminating that problem. Also, if you grab the lowest bit rate file and bring that in first, your video or your audio starts quicker. The consumer hits play and, bam, it starts quicker than if it had to download a great big file.
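What Kirk is describing maps to a few lines of client logic: time one chunk's download, turn that into a throughput estimate, and step up the ladder conservatively. This sketch uses a made-up ladder and URL. It also settles his file-size double take: a 5-second chunk at 16 kilobits per second is 16,000 x 5 / 8 = 10,000 bytes, so roughly 10k to 12k with container overhead is about right.

    import time
    import urllib.request

    LADDER_KBPS = [16, 32, 64, 128]  # hypothetical audio bit rate ladder

    def pick_next_bitrate(chunk_url, headroom=2.0):
        # Time the fetch of one segment and estimate throughput.
        start = time.monotonic()
        data = urllib.request.urlopen(chunk_url).read()
        elapsed = time.monotonic() - start
        measured_kbps = len(data) * 8 / elapsed / 1000
        # Conservative strategy: only climb to a rung we can fetch
        # comfortably faster than real time.
        usable = [r for r in LADDER_KBPS if r * headroom <= measured_kbps]
        return max(usable) if usable else LADDER_KBPS[0]

    print(pick_next_bitrate("http://example.com/stream/16k/seg07204.aac"))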
Tim: We also do a couple of other tricks. Remember how I said we have about 12 or so different transcodes, or bit rates, that we use. Many times what happens is we don't pre-segment our files. On the file server that we have, it's just one gigantic MP4. When a customer connects, we use Akamai and a couple of other things like Wowza for our streaming, and it literally has to go through and index that file. For us, with a two-hour movie or so, that means it has to index all 12 of those transcodes. That can take quite a bit of time. Also, it can take quite a bit of time to transfer those chunk lists back down to the client.
One of the tricks of the trade that we do... I'm not giving away too much, because everybody probably does this as well... is we use clients that will ask for the lowest bit rate and only ask for that chunk list. Then if it makes the decision to go to a higher bit rate, it will ask for that chunk list. In that case, we're not trying to index all 12 transcodes at the same time for a two-hour film. We're only indexing one transcode. Then if it decides to make a shift, it will only index another transcode.
The idea is that we really want this thing to start up within two or three seconds. Most people are pretty tolerant of a movie taking a second or two to start up. If we're talking 10, 15, 20 seconds or so, I think they're going to start getting a little annoyed.
Kirk: I hear you. I'm annoyed. We're out of time. We're going to hear from our last sponsor and then we're going to try to wrap the show up with either a tip or a little story or something that would be a good kicker to leave our audience with. Chris, you too. Tim, hopefully, you'll have a little something for us.
Our show, This Week in Radio Tech, Episode 280, with Chris Tobin here and Tim Pozar, our guest, talking about multi-rate adaptive streaming for both video and audio, why it's necessary, and how it's implemented. I'm excited to be talking about this to different groups. In fact, I'm going to be at the Baltimore, Maryland SBE chapter on the 18th of November to talk to them about this. Also, the New York City SBE chapter on the 19th of November. Chris, I hope to see you there on the 19th if you can make it.
Chris: Absolutely.
Kirk: Good. Maybe we can go out and have a beverage or two, maybe before. We should do it before. Our show is brought to you by the folks at Lawo (L-A-W-O) Console Company, pronounced Lavo but spelled L-A-W-O. They make those big, incredibly complicated consoles for multitrack mixing, surround sound, TV trucks, and big venues. That's what Lawo does. They also are a proponent of audio over IP in Europe with the Ravenna standard. They're also behind the AES-67 audio over IP standard.
Lawo also has a line of consoles that are meant for smaller applications, for us guys in radio. They make a console called the Crystal Clear console. This console is so cool. Twenty years ago, I was dreaming of a touch screen controlled audio console. I thought this would be really cool. The tech just wasn't there yet. We certainly didn't have multi-touch 20 years ago. Civilians didn't. Maybe the military did, but we didn't have any access to that.
I thought, "Wouldn't it be great if you could have an audio console that was on a touch screen? That means you could modify the way it looked for your particular needs at that time. If you needed... what kind of functions do you need? Is the traffic report ready? You could have a helicopter fly in on the screen and sit over a fader like, "I'm right here. Push this button. Helicopter report is ready." That kind of thing.
Lawo console doesn't have a helicopter flying around, but it does have on-screen multi-touch faders. You can run several faders up and down at the same time. You can hit buttons on and off at the same time. The whole layout is designed for your fingers to touch and to do so accurately and easily with this multi-touch touchscreen monitor.
The heart of the console, of course, isn't the touch screen. That's just the control surface. The heart of the console is a well-proven design that they've used for some years now on their Crystal consoles. It's a one-rack-unit box, custom designed with mike inputs and high-quality preamps, line-level analog inputs and outputs, some AES digital inputs and outputs, and now also Ravenna and AES-67 audio over IP right through the networking jack on the back.
Plus, because it needs to be reliable, it comes with dual power supplies. You can order it that way with dual power supplies. That makes it a really solid platform for audio inputs and outputs and for all the mixing and for even a certain amount of microphone processing, for example. Parametric EQ is built in. You can have that available on your inputs.
You have that device that goes in a rack. It's been pretty silent, not much noise there. It can go in your rack in your studio, or it can go back in a rack room or something. Then in your studio or wherever you want to put it, maybe you want to do your show from your desk, your office, you put the clear part of the Crystal Clear. That's the touch control surface.
If you go to the website and want to see how this thing works, Mike Dodge does a really good demo of it. Go to lawo.com (L-A-W-O), look for radio products, and find the Crystal Clear console. In the upper right-hand corner, click on the thumbnail of the video where Mike Dodge explains and walks you through how this console works.
Of course, it networks with other consoles that are on the same Ravenna or AES network. It's just amazing. Check it out, if you would. I think this is a great idea. You might like it too. Lawo.com, L-A-W-O, and the Crystal Clear console. Thanks, Lawo, for sponsoring This Week in Radio Tech.
We're talking to Tim Pozar about multi-rate streaming. He explained some of the inflection points or the congestion points, I should say, on the Internet and how multi-rate streaming can reduce the problems that your end listeners and viewers have with that. Chris Tobin, let's jump on you here for a second. You haven't gotten a word in edgewise here on this show. Sorry.
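One common way to implement what Tim is describing is HTTP Live Streaming (HLS): the encoder publishes the same program at several bitrates, and a master playlist advertises them all so the player can switch among them as network conditions change. Below is a minimal sketch in Python of building such a playlist; the bitrates, file paths, and codec string are illustrative assumptions, not anything specified on the show.

```python
# Minimal sketch: an HLS-style master playlist advertising one audio
# program at several bitrates. All names and values are illustrative.

RENDITIONS = [
    # (advertised bandwidth in bits/sec, path to that variant's playlist)
    (32_000,  "aac_32k/playlist.m3u8"),   # rides out a congested mobile link
    (64_000,  "aac_64k/playlist.m3u8"),
    (128_000, "aac_128k/playlist.m3u8"),  # for a healthy broadband link
]

def master_playlist(renditions):
    """Return the text of an HLS master playlist listing each rendition."""
    lines = ["#EXTM3U"]
    for bandwidth, uri in renditions:
        # "mp4a.40.2" is AAC-LC. The player compares each BANDWIDTH value
        # against its own measured throughput and switches renditions
        # mid-stream as conditions change.
        lines.append(f'#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},CODECS="mp4a.40.2"')
        lines.append(uri)
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(master_playlist(RENDITIONS))
```

Serve that file alongside the per-bitrate media playlists and segments, and any HLS-capable player handles the switching on its own. That automatic switching is what smooths over the congestion points Tim described.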
Chris: That's okay. That's the way it goes.
Kirk: What tip might you have for us besides don't fall backward into the wall of punch blocks?
Chris: Yes, you don't want to do that. That's not a good thing. One thing to consider: just this week I worked on a project in a performance studio. We were doing recordings using Neumann microphones, condenser mikes. An interesting thing happened: we picked up a local radio station on the mike cables. It wasn't a good thing.
We discovered the trouble was at the panels in the wall, the snake panels. Interestingly enough, we always forget about these things: ground loops. Ground loops happen between two devices that have the ability to create a return path. A microphone technically doesn't have a return path; it's a single-ended device. However, the microphone cable ground is still used through the chassis connection. I'm going to show you a standard XLR here, the right-angle XLR you're familiar with. Look at the back of the connector, and you'll see right there, above the three pins, a little tab. That little tab goes to the chassis.
You take pin one, ground, and tie it to the chassis tab as well as to the shield. That's what it typically looks like: red, black, and shield, and at the very top, that little chassis pin again. That creates a complete ground all the way through. It also creates a ground plane so the RF can be dissipated. Believe it or not, I did 24 of these on a panel, grounding pin one to the shield and the chassis, and the radio station we were hearing at very high preamp gain went away, only to come back on one particular cable. It turns out that mike cable didn't have double shielding, so the shield didn't work properly. That's the tip: definitely tie pin one to the shield and chassis. It definitely makes a difference. You may not notice unless you're using microphones.
[Cross talk 1:09:43]
Tim: Sorry. You do this on one end of the cable as opposed to both ends of the cable, right?
Chris: The other end was a preamp. One end was just a cable, an XLR by itself. We did it at the panel where the microphone connects, so if there's any stray RF riding along the cable, it stops at the panel. It worked really well, actually. I was able to get deep down into the noise floor of the preamp and make nice recordings, and not listen to the local... at the time, I think it was the Rush Limbaugh programming. It made for some fun.
The other tip is from AES. We talked about this, Kirk, on the panel, about the new ways of creating content. I was talking to a few folks who called me in the last day or two about this little device from Digigram called a queue mike. It lets you properly interface to, in this case, an iPhone. It's the four-conductor plug for a standard iPhone headset interface, and it maintains the proper impedance so the phone switches over to it. The nice thing is... look at the time. It's 8:16. It's nicely designed because of the right angle. This is important. I was talking to the news folks who came up to me at AES: the right angle actually saves it from getting broken off, believe it or not. So those are the two tips: proper grounding on your XLRs, and a rugged, right-angle phone interface.
Kirk: That pigtail there is from Digigram?
Chris: Yeah, it's a Digigram device, oddly enough. Let me see if I can throw this for those that will watch the video.
Kirk: It's got a line level input as well as the mic itself?
Chris: Yeah, line and mike. It's designed to get tossed around. I actually was told this is a ruggedized, ABS plastic. Let me make sure I got that right.
Kirk: Chris, you and I have also discussed the similar adapter from Mickey, from our friends at [inaudible 1:11:34], the guys who make the Lucy software.
Chris: I have one of those too.
Kirk: I'm glad you pointed that one out from Digigram. They make good stuff.
Chris: I have both and I've used both in different applications. It turns out the Digigram one had a line input, so it made it really convenient. Also, the way it was designed, you could bounce it around. The Lucy one, I believe you can bounce it around pretty much as well. It's a slightly different design, but they both work really well.
Kirk: Cool. Thank you for those.
Chris: You're welcome.
Kirk: Actually, I did have a follow-up question, Chris, on the pin one to the tab. I like to go by rules of thumb, with exceptions if I have to. Tim brought up grounding it at only one end, like we were often used to doing with shields. Is it your default to hook that up at every place there's an XLR and then maybe snip it if you have to? What rule of thumb might you have for that?
Chris: This was used for microphones. A microphone is a single-ended device, so you can get away with it on both ends. The other end of this was a snake with free-hanging XLRs, which didn't have it on the tab, so it was just on one side. It was creating a ground plane; there's a name for it, I looked it up. It was interesting how, as soon as I alligator-clipped from pin one to the tab, the RF interference dropped by 50%. Then I soldered in a short copper wire, an eighth of an inch, and boom, gone completely. We were down at minus 100 dB on the preamp. It was nice and clean. We heard more of the preamp noise inside the microphone element than we did on the cable.
Kirk: Less Rush Limbaugh and more preamp noise.
Chris: Yes.
Kirk: Okay. Not that you increased the preamp noise, but now that's what you heard?
Chris: Yes.
Kirk: Less Rush, more pre. Good deal. Tim, you so kindly agreed to give us a tip or a story or something. Have at it.
Tim: Actually, I was trying to think of one. You were talking about that console having two power supplies for redundancy. I constantly see people take servers, like the one you have there from Telos, that have two power supplies, and plug both into the same circuit. That's not what you're supposed to do with two power supplies.
Most of the time, it's not the power supplies that fail; it's the circuit breaker. The idea behind having two power supplies is being able to keep the box up when you have to take down a circuit, and to move equipment from one circuit to another without taking the box down. Anyway, keep them on two different circuits, guys.
Kirk: That's a great point. Sure, having two power supplies is a certain level of redundancy. If you really want to take advantage of that, which you should, you need to have your racks wired with two different circuits, preferably one of them on the infrastructure UPS and the other one on a totally separate circuit, a different UPS.
Tim: One of them could even be on shore power. Your UPS can fail too.
Kirk: Guys, we do have to go. I appreciate Suncast staying a little bit late, helping our show go long, and producing it for us. Thanks so much. Coming up next week, Greg Ogonowski is our guest. Then on November the 19th, we're going to have for you the story behind the FM master antenna 50th-anniversary celebration at the Empire State Building. Tom Silliman spoke at that, and Bob Tarsio, Frank Foti, and other guests talked as well. The event was last week, but this is the stuff afterward, the seminar that went on. We're going to have that for you in two weeks. Stick around for that. Tim, thanks for being with us. I appreciate it. I'd sure love to have you back sometime.
Tim: My pleasure. Thank you very much. Will do.
Kirk: Chris Tobin, slaving away in the punch rack room there. Chris, thanks for taking the time to join us. By the way, Chris, your camera looks fantastic.
Chris: It's the same camera I've been using for five years now.
Kirk: Great lighting. I don't know. It just looks great.
Chris: It's a small spot that I travel with. It's the usual setup.
Kirk: Good deal. You must have good Internet there too. Thanks to Suncast. Thanks to Andrew Zarian for providing us the bandwidth and the distribution on the GFQ network. Check out the other shows on GFQ. We've got to go. We'll see you next week on This Week in Radio Tech. Bye, everybody.