Video Special Forces with Colleen Kelly Henry | Telos Alliance
By The Telos Alliance Team on Jan 15, 2016 1:56:00 PM
Video Special Forces with Colleen Kelly Henry
Mission: Acquire and send real-time HD video and audio from a remote venue you’ve never seen before, then format and distribute it to every popular computing platform known to man. It has to work right the first time. There are no do-overs.
Mission Specialist: Colleen Kelly Henry.
Watch the Video!
Read the Transcript!
Kirk: This Week in Radio Tech, episode 287, is brought to you by the Axia Fusion AoIP mixing console. Fusion: where design and technology become one. By the new Omnia.7 FM, HD, and Streaming audio processor with "Undo" technology. Omnia.7 is a mid-priced audio processor with the sound and features you love. And by Lawo and the crystalCLEAR virtual radio console. crystalCLEAR is the console with a multi-touch touchscreen interface.
Mission: Acquire and send real-time HD video and audio from a remote venue you've never seen before, then format and distribute it to every popular computing platform known to man. It has to work right the first time and there are no do-overs. Mission specialist: Colleen Kelly.
Hey, welcome in to This Week in Radio Tech. I'm Kirk Harnack. It's episode number 287. That means we've done a lot of these things. And this show is going to be fantastic. I'm looking forward to this. We were able to pull this show together at the last minute and we've got a guest. She is so interesting and what she does for a living is amazing.
But first, let's get into our cohost. He's right here. Well, he's in . . . I don't know where he is. I guess he's in New Jersey. Hey, Chris Tobin. Welcome in.
Chris: Hello. Yes. It is New Jersey. It's 24 degrees outside. It's just one of those crazy, cold days unlike some parts of the country where it's warmer.
Kirk: It finally got a little warmer here.
Colleen: That's where I am right now.
Kirk: Colleen's in a warm place. Colleen, you've got a weather report that's enviable.
Colleen: Yeah. I'm sitting here basically overlooking a beach in Kauai and it couldn't even be more perfect. I'm not even sure what the temperature is because I haven't even thought of looking it up, it's so nice.
Kirk: It doesn't matter. It's nice. Oh, jiminy. That would be great to be there. Hey, welcome in Colleen. Glad you're here.
Colleen: Glad to be here.
Kirk: So, Chris Tobin and I are going to be chatting with Colleen about massive remote broadcasts. Massive in terms of audience, and in terms of stature and importance: we can't afford for things to go wrong; we really want these things to go right. We'll be talking about the technologies that Colleen has been using, compare that with what Chris is using and, as usual, I'll just stand by and ask a few dumb questions. If you want to join us in the chatroom, you're welcome to at GFQLive.tv; give yourself a pseudonym there and ask a chat question or two.
Our show is brought to you by . . . who did I say first? The folks at Axia? Yeah. The folks at Axia. Axia for a little while now has had this new console, the Fusion console, and Clark Novak is here to tell you about the design of the Fusion console. If you're an engineer, stay tuned. You're going to like this. Go ahead.
Clark: Hi, I'm Clark Novak from Axia Audio. I'm here to introduce you to the new Fusion AoIP mixing console, the newest modular AoIP console from Axia, the company that invented AoIP for broadcast in 2003. Let's take a quick look at some of the unique features found only in Fusion.
After 10 years and more than 5,000 consoles, people constantly tell us how attractive Axia consoles are. But a console isn't designed for show. It's made to work in challenging conditions 24 hours a day, year after year. Here's a look at some of the special design choices Axia has made to ensure that Fusion meets that challenge.
Some companies cover their console work surfaces with paint, which can rub off, or with plastic, which can tear or be ripped. Not Fusion: its work surface is all metal, solid aluminum. Not only that, its double-anodized markings are sealed in. They can't ever rub, peel or flake off, which means that Fusion will still look as good in five years as it does the day you begin using it.
At one time or another, we've all had the task of replacing light bulbs in console switches. Fusion does away with all that. All switches are lit with LEDs made to keep on shining for hundreds of thousands of hours. And those switches themselves are aircraft-grade, specially sourced and tested by us to sustain millions of on/off operations without failure. So you won't ever have to worry about replacing those either.
Fusion's frame is made from thick, machined aluminum too. It's RF-proof, but also lightweight, no worries about whether your tabletops can hold up. Fusion is designed for drop-in installation and it's very low-profile, no giant tub to intrude on under-counter space. Where other consoles use dot-matrix readouts for channel displays, Fusion comes with easy to read, super-high resolution OLEDs above each fader. They show the assigned source, tallies when talkback or other special features are enabled and full-time confidence meters to help prevent dead air. Talent doesn't have to wonder whether that caller is dropped or that satellite feed's ready to join. They can see it clearly before they pull the fader up. No wipers to wear out on our rotary encoders, they're all optical.
Some of the most important parts of any console are the faders. One of the reasons faders fail is from dirt, grime and, of course, liquid that falls through the slots in the modules. Fusion's faders are special, premium, conductive plastic faders that actuate from the side, not the top. That way, dirt that falls through the surface slots falls past the faders, not into them. They stay smooth and silky nearly forever.
That's a fast look at how Fusion consoles are designed to last and built to perform just as beautifully as they look. We'll see you next time.
Kirk: And thanks to Axia for sponsoring This Week in Radio Tech. All right. Kirk Harnack, Chris Tobin and our guest, Colleen Kelly Henry, are here. Let's jump right into it. Colleen, before the show, you and I were talking, we were catching up, because we worked together a little bit back at another network for a few days.
Colleen: Installing Axia, actually.
Kirk: Yeah. They still have Axia there. That's great, as does this network that we're on right now. So what can you tell us about what you're doing now in general?
Colleen: So when we met, I was working as the head engineer for TWiT, which was a live Internet television network. Since then, I went off to work at Google and did a startup and now I'm over at Facebook. One of the responsibilities that I tend to always fall into is massive live streaming. I call it Video Special Forces. There's the regular product that people are using, like Facebook's live streaming app from the phone or uploading regular videos to Facebook or something like that.
But if people want to do something that's sort of outside of the standard product because it's a business need such as our developer's conference or maybe internal live streaming for the CEO to talk to everyone at the company, I build, end to end, the entire video infrastructure to be able to do that from basically where somebody hands me an HD-SDI cable and says, "Here you go. The baseband signal is done. Enjoy it." Then I have to make it appear on every screen on every device reliably.
Kirk: Okay. Wow. All right. That's not so different than what we do in broadcast. At least in radio, we do that with audio. Maybe the audience is smaller, but we do like the broadcast to happen as it should. You were telling me that one of the biggest challenges you have, and we've talked about this on our show before, is the last mile uplinking. So, from the venue . . .
Colleen: The first mile.
Kirk: First mile. Okay. Yeah. Do it that way. The first mile uplinking from the venue, getting it into the Internet infrastructure or headed on its way to your cloud. Why is the first mile such a pain in the butt?
Colleen: Well, if I were to do a broadcast from the headquarters of a company that I worked at, that would actually be relatively straightforward because you can test the actual infrastructure there. If I'm going to do a broadcast, say internal to external, internally, I'll have encoders hooked up that will go across our network. We know how it works. We use it all day every day. If it's a non-confidential one, I'll have a fiber line, like Vyvx, if you've heard of them: a dedicated fiber line. That would be another off-site uplink. If it's hyper-important, I might even get a satellite truck. One thing I would not do in this circumstance is use one of those backpacks that do inverse multiplexing of wireless broadband.
One problem that people tend to run into with those is that they need to double-check whether or not there are repeaters hooked up to the network inside of the buildings, because then you don't actually have a separate uplink. If you have repeaters inside going to the same IP connection, hijacking the cell signals and then putting them over that same connection you're already using, then it's not a backup.
Kirk: Ah, gotcha.
Colleen: But if we're doing something like a developers conference that's offsite, the problem is that it's being set up the day before. It's only a couple of hours before the thing goes live that you're doing clap tests for audio/video sync on stage. You've already shared the embed code. You've got some countdown going. You'd better make sure that that thing is happening.
One thing that we had during a big developers conference last year: we had hyper-redundancy for all sorts of different things, but there was a storm in Washington, DC, at our East Coast satellite downlink. So it went out. Now, we had enough redundancy that that wasn't an issue. I love when these things happen, because it proves it for next year or whenever. When I put on the list the things that I need and how much it's going to cost, they're like, "Do you really need all this redundancy?" And it's like, "Well, yes. Yes, we do. This is why."
So this year, for example, even though there's one satellite truck uplinking, we'll be downlinking in two facilities, one on the East Coast and one on the West Coast. But the major issue here is that you need the hyper-redundancy, because you can't burn in and test and actually trust the environment, or be sure that the networking people on site are going to give you carved-off bandwidth on a VLAN. It's like, I don't trust anyone. I've got to have backup plans for everything, and then I have to have a system that makes it very easy for me to switch between these options in real time, ideally seamlessly.
Kirk: You said something there I want to focus on in just a couple minutes, and that is "I don't trust anyone," because you can't fully trust anyone else. You can only trust yourself and the backup plans that you make. Out of all the backups you make, surely one of them is going to be okay.
Chris Tobin, I wanted to discuss with you for a second, Chris, that what Colleen is going to be talking to us about is typically video and audio. That's the business she's in. Most of the people that you and I deal with, Chris, typically are still dealing with audio, although more radio stations are adding video to things. But Colleen's advice is worthwhile. I'll bet you're going to be sitting there nodding your head, just like I am. If you've got enough bandwidth for video, you've got enough bandwidth for the audio.
Colleen: If you can dodge a wrench, you can dodge a ball.
Kirk: What you don't have necessarily, though, in the types of things that Colleen does is low latency. In radio broadcasting, sometimes we're tasked to have meaningful two-way low latency conversations. So, Chris, you've done some of this too. Have you had to deal with getting great bandwidth reliably and low latency at the same time?
Chris: Absolutely. Every broadcast I've done, including video I've been doing a lot lately. I'm doing a lot of web-related broadcasts. Colleen is right. You've got to have a backup. I've always operated with friends of mine who have worked with me and known me over the years. I've basically operated under the premise trust no one, TNO.
I'll accept your offer and take your bandwidth, whatever you're willing to offer me, but know this, I will have something in the background operating under my control and will not tell you about it because I don't want to hear about why I shouldn't be doing it or why you need to have access to it and make sure that there's no chance for trouble.
Having bandwidth is only one piece of the puzzle. You've got to have bandwidth where the packets can travel properly. You can have 50 megabits and a really jittery network, and it means nothing to your connection. So, it's not just bandwidth. It's a lot of things.
Colleen: It depends on your latency, as you were saying earlier. You did bring up the point, for example, that I tend to be doing live linear broadcasts at high latency, and that's true to a certain extent. But there are also times, like what we're doing now, where we have a live linear broadcast that's high latency for the audience, but a low latency connection between us.
It's much easier to get enough carved-off, good-quality bandwidth for a video conferencing call for four people, but it's very different if you want to have that going out at low latency to 10,000 people. The fact of the matter is 10,000 people can't converse with each other in real time. You only need low latency for as many people as can converse at once.
Kirk: Thank goodness. True.
Colleen: But you do bring up an interesting point as far as the performance of the network. The bandwidth is only one axis of this. You have throughput and goodput, you have latency, you have packet loss, and especially if you're using TCP-based protocols, with high latency and high packet loss it's going to back off.
That bandwidth, even if your connection is theoretically 50 megabits, it will drop way down. So one of the tricks that I use is reliable UDP uplinking. So, if you've heard of Aspera for file transfer, the way that that works is they use a UDP protocol and then on the application layer, they're actually . . . sorry, I didn't mean to be brushing my mic. There we go.
And on the application layer, they're actually doing the error checking and resilience and making sure that they can retransmit it. But it's within a window. So you are absolutely adding latency, and this would not be helpful for video conferencing, for example. But if you're doing live linear, give five seconds to a reliable UDP protocol like Zixi or Aspera or something like that and you can take care of the packet loss and latency: 3% fewer packets getting through means 3% less throughput, for the most part, as opposed to TCP, where it's much worse. You can also do inverse multiplexing. So if you want to have multiple connections bonded, that's another thing that protocol can do.
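A rough way to put numbers on that TCP back-off is the well-known Mathis approximation, which bounds steady-state TCP throughput by maximum segment size (MSS), round-trip time (RTT) and packet loss rate p. This is general networking math, not anything specific to Colleen's setup:

```latex
\text{throughput}_{\mathrm{TCP}} \;\lesssim\; \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{1.22}{\sqrt{p}}
```

With 1,460-byte segments, a 150 ms round trip and 3% loss, that caps a single flow at roughly 0.5 Mbps no matter how fat the pipe is, whereas a windowed reliable-UDP stream loses only about the 3% that never arrives.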
Kirk: So I want to go back and cover a couple of terms that you've brought up here, Colleen. I do want to circle back around to the sat truck and how you deal with vendors that provide that. You mentioned Zixi and Aspera. Chris nodded his head like he knows what those are. Clue me in.
Colleen: Chris, did you want to . . .
Chris: Go ahead, Colleen. You have probably a better application than I have.
Colleen: Okay. So Aspera is one of the more well-known brand names for this concept, which is reliable UDP transmission. So when you're talking about the two major protocols that are in use on the Internet, there's TCP and UDP. TCP is what is known as reliable. It's what HTTP, and most web pages, go over, wink, nudge. There are some exceptions.
But the thing about it is that you can guarantee that it works on everything and that the packet gets there. If you're sending an email, you don't want to miss a couple of lines in the email. You want to make sure it gets there. If it's not getting through immediately and there's some loss or whatever, what it does is it backs off and waits a little bit longer and then waits a little bit longer and then retransmits.
UDP, on the other hand, which is what most video conferencing or things like that are based on top of, is unreliable. It blasts that packet through and it either gets there or it doesn't. That's all on the protocol layer. Now, if you look at the OSI model and go a little bit up, there's the application layer. So you can add intelligence to a protocol where you use UDP as a way of firehosing data through, but the application has these checksums and is actually asking for retransmissions if it needs to.
Aspera is sort of the industry standard of using that technique for file transfer. So Hollywood movies are sent to Netflix, Amazon or something like that with their master grade files using Aspera. But Zixi is something that's much more video native. You would use it for linear livestreaming uplinking, for example.
Those are brands that are both selling technologies that are utilizing this technique, but it's one of the things that I like to use for my uplinking because not only can you add encryption and all sorts of fun stuff, but if you give it a five-second window, not only are you going to get way more throughput and you can shoot that thing around the world, whereas TCP would way back off, but you also get things like the ability to have bonding on the connection. So you can hook up multiple network connections and combine them.
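Here is a minimal sketch of that application-layer idea in Python: sequence-numbered datagrams, a retransmission buffer bounded by a latency window, and NACK-driven repair. Everything here, the framing, the port handling, the class name, is invented for illustration; production protocols like Zixi or Aspera's FASP add congestion control, forward error correction and encryption on top.

```python
import socket
import struct
import time

# Toy "reliable UDP" sender: sequence-numbered datagrams, a retransmit
# buffer bounded by a latency window, and NACK-driven repair. Purely
# illustrative; real protocols are far more sophisticated.

WINDOW_SECONDS = 5.0          # the "give it five seconds" budget
HEADER = struct.Struct("!Q")  # 8-byte sequence number, network byte order

class ReliableUdpSender:
    def __init__(self, dest, nack_port):
        self.dest = dest                        # (host, port) of receiver
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("0.0.0.0", nack_port))  # receiver NACKs back here
        self.sock.setblocking(False)
        self.seq = 0
        self.sent = {}                          # seq -> (send time, datagram)

    def send(self, payload: bytes):
        datagram = HEADER.pack(self.seq) + payload
        self.sock.sendto(datagram, self.dest)
        self.sent[self.seq] = (time.monotonic(), datagram)
        self.seq += 1
        self._expire_old()
        self._answer_nacks()

    def _expire_old(self):
        # Past the window, the latency budget is spent: the player has
        # moved on, so the packet is no longer worth repairing.
        cutoff = time.monotonic() - WINDOW_SECONDS
        for seq in [s for s, (t, _) in self.sent.items() if t < cutoff]:
            del self.sent[seq]

    def _answer_nacks(self):
        # The receiver reports gaps by echoing missing sequence numbers.
        while True:
            try:
                msg, _ = self.sock.recvfrom(HEADER.size)
            except BlockingIOError:
                return
            (missing,) = HEADER.unpack(msg)
            if missing in self.sent:
                self.sock.sendto(self.sent[missing][1], self.dest)
```

The bonded, inverse-multiplexed variant Colleen mentions would open one socket per modem or uplink and rotate sends across them, with the far end reassembling by sequence number.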
Kirk: Okay. I'm getting a handle on that. Tell me about when you're originating a broadcast from a venue and you've got, as you said, several different paths for redundancy, you've got fiber, you've got a sat truck, you've got inverse multiplex wireless, perhaps, are you using different hardware or software encoders for each of these different paths?
Colleen: So the way that I treat all of these is as mezzanine contribution streams. These are sort of the master-grade streams. Then those go up to my infrastructure and then they get live transcoded into the actual delivery formats.
Kirk: Okay.
Colleen: So then I would be using software-based encoding. I might do that on my origin servers or I might use some Elemental encoders or something like that in a data center. Generally speaking, my preference is always to use x264 for H.264 encoding, but there are some hacks you can do if you want to get that onto Wowza origin servers, which are the ones that I like to use. The nginx-rtmp module is a great open-source version of stuff like that.
But I'm actually not . . . since I'm going to do adaptive bitrate streaming for delivery in the end, I'm just trying to get these master-source signals up; then I decode them and I transcode them to 5 Mbps, 3 Mbps, 2 Mbps, 1 Mbps, or 1080p, 720p, 480p, however you want to divide up your signals. And then package those in HLS or MPEG-DASH or something like that and deliver it in an adaptive bitrate way to a player. But another option is you can create all those on site.
But I would much rather have . . . since the bandwidth on site is so precious, I really don't want to push my limits. I'd rather have one 5 Mbps stream up and then pass that through and transcode the lower levels or something like that, as opposed to saying, "Create everything on site and try to uplink it over the Internet." Plus, I can't do adaptive bitrate reliably if I'm creating on site and using a satellite truck. I can only do one stream.
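As a concrete sketch of that transcode step: the snippet below shells out to ffmpeg (assumed installed, and recent enough, roughly 4.0+, for -var_stream_map) to turn one mezzanine stream into an HLS ladder. The input URL and the exact rungs are placeholders, not anything Colleen prescribes:

```python
import subprocess

# One mezzanine stream in, an adaptive HLS ladder out.
MEZZANINE = "rtmp://origin.example.com/live/mezzanine"  # hypothetical URL

LADDER = [  # (height, video bitrate, audio bitrate)
    (1080, "5M", "128k"),
    (720,  "3M", "128k"),
    (540,  "2M", "96k"),
    (360,  "1M", "64k"),
]

cmd = ["ffmpeg", "-i", MEZZANINE]
for i, (height, vbr, abr) in enumerate(LADDER):
    cmd += ["-map", "0:v", "-map", "0:a",                 # one pair per rung
            f"-s:v:{i}", f"{height * 16 // 9}x{height}",  # 16:9 frame size
            f"-b:v:{i}", vbr, f"-b:a:{i}", abr]
cmd += [
    "-c:v", "libx264", "-preset", "veryfast",  # x264, per Colleen's preference
    "-c:a", "aac",
    "-f", "hls", "-hls_time", "6",
    "-master_pl_name", "master.m3u8",
    "-var_stream_map", " ".join(f"v:{i},a:{i}" for i in range(len(LADDER))),
    "out_%v/playlist.m3u8",                    # %v expands per variant
]
subprocess.run(cmd, check=True)
```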
Kirk: So I told you before the show this might happen. Indeed, he's here. Say hi, Michael.
Michael: Hi.
Kirk: Okay. Now, you've said hi to everybody. We're doing a show, okay? We're doing a live show. Show everybody what you got and then you can go.
Michael: Potato chips.
Kirk: Well, they're Doritos. They're kind of different than potato chips.
Michael: I know.
Kirk: Okay. All right. Say hi to Colleen. Say, "Hi, Colleen."
Michael: Hi, Colleen.
Kirk: Say, "Hi, Chris."
Colleen: Hi.
Michael: Hi, Chris.
Colleen: Hi, Chris.
Kirk: We'll see you later, buddy. You can play with that, but you've got to play with that upstairs. He's going to take this upstairs with him. All right, buddy. See you. Glad you're home from school. Bye. I hate to do that to him. He knows we're doing a show. So, Colleen, the follow-up question I was going to ask about the different encoding is how is the audio typically encoded in these streams? Is it always AAC or are there other things that are used?
Colleen: So, typically, you would use HE-AAC. Are you familiar with the difference between HE-AAC and AAC?
Kirk: Yeah. Sure. The spectral band replication is what's involved in the efficiency.
Colleen: Yeah, or HE-AAC v2 where you want to do parametric stereo or something like that. Typically, the sort of stuff that I'm working on is not actually stereo, because it's one mic on stage and all that kind of stuff. So I always do HE-AAC v1, typically, sometimes AAC.
But I'm actually one of the biggest nerds for another delivery standard for video encoding. This gets into the audio encoding because of how you pair it. I have actually been doing some live streams recently in tests and I'll be doing them in production soon using the VP9 video codec, which is a next-gen codec, sort of like HEVC except that it's open source. It's created by a team at Google who used to be the On2 team before they were acquired.
Kirk: Yeah.
Colleen: They typically package their video codec in the WebM container, which is a modified Matroska container. The two audio codecs that you have choices of using are Vorbis or Opus, Opus being the superior one. So, in general, I'm actually moving towards wanting to use Opus as my codec, not actually HE-AAC.
I'll still have HE-AAC around as a fallback for people, for example on a legacy . . . well, people really want to punch me when I say this, but the only place I can't deliver VP9 to, if I want to, is really iOS web. On any sort of native platform you can put a software decoder, and more and more hardware devices are going to have VP9 decoding in hardware. Chrome does it. Firefox does it. I believe that Microsoft has announced that Edge is going to be supporting VP9.
So as that moves towards being the only next-gen codec really available . . . there are always going to be two, the master and the apprentice. There's AM and there's FM. There's iOS and there's Android. But I'm going to have more and more Opus, and I'm going to try to make Opus my primary. Then I'll have Opus and VP9 in WebM in DASH, and then I'll have a fallback for legacy platforms, such as HLS with HE-AAC and H.264.
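That ladder, next-gen DASH with VP9 and Opus first, HLS with H.264 and HE-AAC for everything legacy, boils down to a capability check. Here is a minimal sketch of the selection logic in Python; the codec tags, URLs and STREAMS table are invented for illustration (in a real web player you would probe with JavaScript's MediaSource.isTypeSupported() instead):

```python
# The "next-gen first, legacy fallback" ladder as capability-check logic.
STREAMS = [  # tried in order, best first
    {"url": "https://cdn.example.com/live/manifest.mpd",   # hypothetical
     "needs": {"vp9", "opus"}, "label": "WebM/DASH, VP9 + Opus"},
    {"url": "https://cdn.example.com/live/playlist.m3u8",  # hypothetical
     "needs": {"h264", "he-aac"}, "label": "HLS, H.264 + HE-AAC"},
]

def pick_stream(client_codecs: set) -> dict:
    for candidate in STREAMS:
        if candidate["needs"] <= client_codecs:  # subset test
            return candidate
    raise RuntimeError("no playable stream for this client")

# A Chrome-like client gets the next-gen stream; an iOS-web-like client
# falls back to the legacy one.
print(pick_stream({"vp9", "opus", "h264", "he-aac"})["label"])
print(pick_stream({"h264", "he-aac"})["label"])
```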
Kirk: Okay. You've mentioned a couple times adaptive bitrate streaming. You mentioned MPEG-DASH and HLS. So are the encoding and the segmenting for those done up in the cloud somewhere? Is that done by Wowza or something else?
Colleen: Absolutely. It depends on what I'm doing. But if I were to give you a reference implementation, let's say you wanted to create your own little live streaming network at home. What I would do is I'd sign up for a MetaCDN account, because they use four CDNs, but they act sort of like one, and you get the best performance. The reason I recommend doing it at home is that, unlike Akamai or any of these other CDNs, you can just go log in and make an account, like, "I'm just going to play with it," and they give you your first month free or something like that.
Kirk: You said that was MetaCDN?
Colleen: MetaCDN, yeah. They're out of Australia. They're fantastic. And then I would spin up a Wowza server on EC2. Wowza has live transcoding and a nice little GUI. It does AAC-LC, but you can go in and modify it to do HE-AAC. It transcodes to H.264 or, because I bugged Charlie, the CTO over at Wowza, a lot, now VP9.
Kirk: Okay.
Colleen: So basically, what you would do is you would send up a contribution stream to Wowza, which you would run on EC2 or Google Compute Engine, and then have it transcoded and packaged into DASH and HLS, and you can choose either of them depending on the URL. You can put /playlist.m3u8 at the end of your URL and then you've got HLS. You can put /manifest.mpd and you've got DASH. It's the same encoding, packaged differently.
And then, what I would do is I would hook MetaCDN up to those origin servers. So you could use something like Open Broadcaster Software (OBS), which is like free Wirecast, basically. Take in your webcams or whatever, push it up to your origin server, it gets adaptive-bitrate transcoded and packaged, pull it onto the MetaCDN platform, and now you have streams.
You're going to need a player. The MetaCDN guys, I've been poking them a lot about some things, and they have been nice enough to take Shaka Player, the open-source engine from Google that does adaptive bitrate streaming, and integrate it with a control layer on top of it, which is video.js.
So you can download that player and put it on a web page, hook it up to your streams, and now you have a fully adaptive bitrate platform for free, other than paying for Wowza. But, again, you can swap in the nginx-rtmp module if you want to. But Wowza does have a free demo for up to 10 clients or whatever, so you can still set it up for free if you want.
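If you stand up that reference rig, one easy sanity check is to pull the master playlist off the origin and make sure all the ladder rungs are advertised. A small sketch, assuming a stock Wowza-style URL (the hostname is a placeholder; Wowza's default streaming port is 1935):

```python
import re
import urllib.request

# Fetch the HLS master playlist from the origin and list the variants.
MASTER = "http://origin.example.com:1935/live/myStream/playlist.m3u8"

with urllib.request.urlopen(MASTER, timeout=10) as resp:
    playlist = resp.read().decode("utf-8")

# Each variant is an #EXT-X-STREAM-INF attribute line followed by a URI.
for match in re.finditer(r"#EXT-X-STREAM-INF:([^\n]+)\n([^\n]+)", playlist):
    attrs, uri = match.groups()
    bandwidth = re.search(r"BANDWIDTH=(\d+)", attrs)
    if bandwidth:
        print(f"{int(bandwidth.group(1)) / 1e6:.1f} Mbps -> {uri}")
```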
Kirk: Wow. Okay. Man, making all these pieces work together, that's the stuff you've been working on the last few years, isn't it, learning what has to happen to what?
Colleen: I'm basically a human encyclopedia of all the options of hooking things up and what works together and what doesn't work together and how many B-frames and reference frames you can have in your H.264 encoding for device X or Y or something like that. It's kind of like being a plumber, knowing what fits together, what needs to be in the truck and what order to do it in.
Kirk: I'm going to have some follow-up questions about that after our next break. I'm going to ask you: let's say a broadcaster wants to do some video distribution like this. What kind of consultant would that broadcaster look to for some of the expertise that you have, to help them put these things together? Chris Tobin, man, I tell you what, I'm going to have to process this for a while. What comes to your mind to ask Colleen at this moment?
Chris: I'm curious. When you're doing your events, how do you manage your power, utilities? What precautions or safeties or protection, I guess, do you go to?
Colleen: So, in a sense, I outsource that to somebody that I would stab if they failed and that I trust. So there's a group called the Pixel Corps that I work with a lot. What I do is I hire them for these events to be responsible for the uplinking. As far as the compatibility and type of encoders and sort of signal flow and all that kind of stuff, I take a very close interest in how that's all going to happen and prescribe that very specifically.
I require them to make sure that they figure out all their redundant power with both battery backups that can, for example, hold the line should power go down until onsite backup generators kick in or things like that. But, for the most part, I know I said trust no one, I trust Alex over at the Pixel Corps. But, other than that, I trust no one. But I've got so much stuff to worry about. I will admit, and that is a very good question, that I do actually tend to outsource that one to somebody else that I trust.
Chris: That's fine. I would do the same thing. You can't expect to be able to do all, but power is very important, has to be managed a certain way and you do need to stay focused on it.
Colleen: Absolutely. It's kind of like a combustion engine, right? You need air, fuel and spark. You need bandwidth, you need source content and you need power.
Chris: Right. Absolutely. Cool. That's what I was just curious about. I've talked to a few folks recently about some broadcasts I just did for XM and Sirius. It was interesting when I asked about power distribution, where I can plug in and where the ground reference is, and they looked at me like, "There's an outlet on the wall." Okay. Never mind. I'll bring my UPS in and we'll use an isolation transformer. Thank goodness we did that. At the sports bar, they had a problem because a group came in to do some other things on that breaker. All right.
Colleen: You actually bring up a very interesting point, which is that part of the skill in these broadcasts is that you can test all you want, but all plans change when you actually meet the enemy. So I can be testing for a week, if they want to set up in the events center ahead of time. But they don't have 50,000 people in the audience with 50,000 mobile devices, with 50,000 laptops plugged in.
So at Google I/O, for example, they actually run around with RF detectors to find people with hotspots on that are jamming the spectrum. 2.4 GHz transmissions basically don't work. There was a year when they gave out the original Nexus 7s, and those didn't work on 5 GHz. They only worked on 2.4 GHz. They were basically useless.
So, let's say that you want to have some sort of demo. For example, and that's a big one that I try to stay out of, but I warn people about it, say you want to have a demo on stage that involves using wireless technologies. You've designed and tested, let's say, some casting from a phone to another device or whatever, and let's say the phone doesn't have the ability to use anything but Wi-Fi.
Well, now you've got so much RF noise in the spectrum that the demo itself might not work. But, in a sense, is it disingenuous to do a hack where you hook it up via an Ethernet cable? Maybe, because that's not the product, but at the same time, no one is going to have 50,000 people clogging the spectrum in their home.
Kirk: Right. It seems like years ago now, Steve Jobs did a demo and it didn't work because he was trying to use Wi-Fi and there was plenty of Wi-Fi in the hall. It just occurred to me: why didn't the engineers there just use some slightly out-of-band Wi-Fi?
Colleen: You can do that.
Kirk: Even if it's a little illegal at the moment. What's that?
Colleen: Well, you can actually carve off spectrum. So, in a sense, let's say there's Wi-Fi on site. You can carve off the spectrum and specifically make like an unlisted Wi-Fi network that's only transmitting on a certain part of the spectrum. I wouldn't do anything illegal because my job is not worth getting . . . I don't want to get fired. That's kind of the whole reason for all this reliability stuff is my first job in life is, "I like my job. I don't want to get fired. If it fails, then I get fired." But also, if I'm committing a felony, that also might get me fired.
Kirk: I hear you. Nobody wants to get fired. Hey, and if you're watching this show, we're going to try to keep you from getting fried. You're watching This Week in Radio Tech. It's episode 287. Chris Tobin is with us and our guest is Colleen Kelly Henry.
She is an expert at massive live streaming, typically of video events, large events. She mentioned Google I/O and I know there are others that she's been responsible for making sure that signal gets out of the venue and up to the cloud and then put into formats that the masses can enjoy easily on their mobile devices and PCs. So a lot of that technology is applicable back into radio. What can we learn from what Colleen does? In fact, Chris Tobin and Colleen have been talking about that very thing.
Hey, our show is brought to you in part by the folks at Omnia and the Omnia.7 FM HD audio processor. I've got to tell you, I own one of these myself. It's at our stations way off in the Pacific in American Samoa. I've got to tell you. We wanted to bump up our processing there. We actually have a fairly competitive situation in American Samoa. Our flagship FM station sounds really good.
We had been using an Omnia-3 for some years. And it's very adequate, does great, sounds nice and sweet. We wanted to get a little more punch in there and we wanted to enjoy some of the benefits that the Omnia.7 offers, like really good remote access and incredibly beautiful Omnia Toolbox tools to see FFT spectrum and real time analyzer, all kinds of loudness measuring devices.
And, of course, you can crank around with the undo and the de-clipping so you can clean up the audio that's going into it, audio that may have been a little over-mastered, for example, and then it's got Leif Claesson's incredible AGC and compression techniques and look ahead limiting, final clipping. It does all those things. They're all built in. Plus it has available as an option an RDS encoder.
So we could take out our old, old creaky RDS generator that we had and use the fabulous new software-controlled RDS encoder that's built into the Omnia.7. We did all that in American Samoa for under $6,000, in fact, a lot under $6,000 on a street price deal on the Omnia.7.
The Omnia.7 has most of the same features as the Omnia.9. It's missing a few, of course, and a few are optional. For example, the RDS, well, you pay a little extra to get the RDS. If you want to get simultaneous HD processing, then you can pay a little extra and get that. So you only have to pay for what you need with the Omnia.7.
I've got to tell you, here's the story, I never saw this Omnia.7. It was shipped to Samoa. Our general manager put it in the rack. I accessed a remote computer there. It was DHCP on the network. I used a remote computer to find it. I'm not even sure he could find it from the front panel. He didn't know what to do. I found it. I got into it. I changed it to a fixed IP address and then I went to town on getting it adjusted. We put audio into it and we were ready with the composite audio out to feed our old-fashioned baseband STL system.
Then, one afternoon, I'm on the phone with them and we swapped the cables over and I think we were off the air maybe a grand total of 12 seconds while some cables got moved over and, bam, we got us a beautiful new Omnia.7 on the air and me adjusting it from 8,000 miles away. It's really incredible. They are delighted with it. They have RDS, of course, on the air. It sounds great. It makes a great visual display in the control room too. It looks very impressive.
I want you to check this thing out too. It's the Omnia.7. It's got all of Leif Claesson's ideas and designs in there. Software upgradeable, of course, and the NF remote software package that you use to control it gives you an incredibly beautiful, real-time look at what's going on in the processor, plus it gives you audio feedback.
Over your remote IP connection, you get to hear what's going on with the processor. That audio connection automatically is as good as your IP connection allows. So if you've got a great IP connection, you can hear linear audio from the other end. But if your IP connection is a little slower, like ours is to American Samoa, then you hear compressed audio, but you can still get a good idea of the timbre and the tone of the audio processing that you're doing.
So check it on the web. Go to TelosAlliance.com. Look for Omnia and then look for the Omnia.7 audio processor. It's part of a great family of other audio processors, including the Omnia.9, the Omnia ONE and the big flagship, the Omnia.11. Thanks so much, Omnia, for sponsoring This Week in Radio Tech.
Chris Tobin is here in New Jersey in a room full of punch blocks. Chris, I'm wondering, are you ever going to take those down or are they just there for nostalgia? Are they doing stuff?
Chris: It's half and half.
Kirk: Okay.
Chris: Stuff and nostalgia. They're slowly becoming active, but it's going to take a while.
Kirk: Yeah. I know you know that over time, you can replace those with some IP.
Chris: Absolutely. Trust me. I'm trying to work on that.
Kirk: Yeah.
Chris: It's coming along. We're getting there.
Kirk: Colleen, I have no idea how old you are, but I think you're very, very young and you probably have never dealt with punch blocks, have you?
Colleen: I think those were the things in my closet where you hook up RJ11 or RJ45 connectors if you want to push them down on the thing.
Kirk: You're even looking at a hybrid there. Chris' punch blocks, they're 66 blocks or like them. There's no RJ involved with those. They're just wires and punches.
Colleen: Yeah. I think that's how my ports hook up to the wires in the wall.
Chris: Yeah. That's a split block 110.
Colleen: I've never done an analog-based workflow. I've never worked with tape or film or anything like that. Sorry.
Kirk: Bless you, Colleen. I hope it stays that way for you. Chris and I and other old radio guys, we have very fond memories of splicing blocks, even though we probably cut our fingers on them with razor blades and splicing tape. White grease pencils, Colleen, have you ever used a white grease pencil in part of your job?
Colleen: I have not. But I've definitely suffered for my job when it comes to pain. Crimping ain't easy when you're doing all those Cat-6 cables for TWiT or something like that, installing an audio system. If you remember how ripped up my thumbs were when I was . . .
Kirk: Oh, that's right. Yeah. That's right. Colleen, you also have, and we'll just digress here for a second, you also have a great interest in automotive engineering. You have a great understanding of internal combustion engines, what makes them run and what makes them super-performance.
Colleen: Yeah. My current toy is a Nissan GT-R that makes maybe about 700 horsepower or something like that. That was always my dream car and I finally have it. So I guess I'm done. I beat the final boss. I'll have to find a new hobby.
Kirk: So, going forward, what are some of the technologies that are interesting to you now? 3D printing, is that interesting? What turns you on?
Colleen: Sorry. I keep talking over you because I'm an idiot. One of the things that I worked on for about six months was a lot of virtual reality stuff, where we were doing 3D printing for camera rigs. I did a live stream during F8, which is the Facebook Developers Conference, where we had a 3D-printed camera rig with a bunch of different cameras and then stitched it together in software. Although this little camera that I'm holding up here is sort of a production model that just came out, called a Giroptic, which stitches in real time on the camera itself. You can see the three different camera lenses there.
So here's another camera. I actually took a picture and posted it online yesterday; I clearly have the weirdest gear bag for cameras and such. This actually gets somewhere, I promise. This is a light field camera, a Lytro Illum. Think about it: if video today is a rectangle, this is a cube, in a sense. It collects a volume of photons, and you can actually change the depth information such that you can adjust the focus, or you can change your perspective slightly, or things like that. That actually has a lot of audio ramifications and virtual reality ramifications.
So imagine, hypothetically, that you can build a camera rig that is either outside in or you're in a room with cameras all around or inside out where you have a camera shooting out where you collect all the volumetric information and you can actually walk around within the room.
So let's pretend you have the coolest virtual reality radio station. Ideally, you'd want to have cameras all around the room that have perspectives on absolutely everything in the room. Then somebody could put on virtual reality goggles and you'd also want to have object-based audio, sort of like Atmos, where you know where within the scene the audio is coming from.
And, theoretically, somebody could put on virtual reality goggles and walk around the room and sit next to the person in the room and everything is in the correct space and they're perceiving it in the correct way.
Kirk: Wow. My head is exploding trying to think about the possibilities here and the next questions to ask. First of all, light field camera, from a practical sense, I've heard of these things. What can you do either practically or theoretically with a light field camera that you can't do with a regular camera?
Colleen: So this Lytro Illum right here, which I would not say that you should go out and buy because the quality is not as good as a DSLR, but it's just interesting. But it is a functioning light field camera that you can purchase. But imagine now that instead of using it like Lytro does, which is you can re-focus it later, you can change your perspective slightly, you can do a couple little neat tricks, imagine that you have an array of cameras.
So think of it like . . . imagine that you have a green screen stage, almost, with a radio broadcaster's desk inside of it, and then you have an array of cameras capturing every perspective. And then you interpolate in software this sort of volume of photons.
So imagine, if you would, instead of looking from the outside of a fish tank at a rectangle that is the fish tank, where you can change your perspective a little bit, look up a little bit below the fish or whatever. Now imagine you're in the fish tank. You can actually swim around inside of it. There's a volume to it. So the idea being that you could actually be somewhere else and change your perspective.
Because of that, you can do perfect 3D, in the sense that it's not like left eye, right eye, switching the two and assuming that the person's head is perfectly straight; they could actually tilt their head, and you can adjust for the space between their eyes not being the same on all humans. You combine that with microphone arrays and the sort of . . . Google "OTOY LightStage" if you want to see an image of something that is sort of a similar concept.
So what rustles my jimmies is the idea that you can actually not just create these rectilinear and mono or stereo representations of an experience, but actually create something that you can be inside of. Light fields are one technique of doing that with video. Another one is, of course, you can capture and generate a light field and then create polygon characters of the people doing things and use a game engine to do that as well.
Kirk: I would think that the data rate of presenting motion in a 3D space that you're in would be an incredible amount of data every second.
Colleen: You are absolutely correct, but I don't think we have any idea right now exactly how much it can be compressed. So if you take a look at the new codecs that have come out over the years, they're more computationally complex, but there's MPEG-2 and then there's H.264 and then there's HEVC; or there's VP6, VP8, VP9. Now they're working on VP10, which has light field support in it.
The thing is that these codecs are actually just evolutions where they become more and more complex and they can find more and more redundancy, both spatially, within the picture, within the rectangle, and temporally, which is from frame to frame to frame. I don't think there's been that much research or work done on compressing light fields. For all we know, it could actually be somewhat more data but not that much more data. Let's say that you divide it up . . . part of the light field concept is to imagine a cube around your head.
If the light field is, say, a 1-meter cube light field, that means that you can change your perspective or look around wherever your head is within that 1-meter cube. Now, you pull your head out of it and you're screwed. But let's say that you just had a very good Internet connection. You wanted to do something like DASH or HLS where you segment it. Let's say it's like Legos, where when you move forward, there's a cube that comes in front of your face and then you stream down the data.
Kirk: Yeah, sure. Sure.
Colleen: There are a lot of ways that you can sort of break it up. If you want to start watching "Iron Man" on Netflix, you're only downloading what you need. You're not necessarily downloading what's two hours in. So, do you need to stream, for example, what's behind the kettle in the kitchen if you're not looking behind the kettle yet?
Kirk: Yeah. Okay. So all the info doesn't have to be there, just what you're perceiving needs to be there, obviously.
Colleen: Correct, but it needs to be there exactly at the moment that you're perceiving it, and ideally in under 20 milliseconds motion-to-photon, meaning when you move your head, it better be there.
Kirk: Okay. I get that. Hey, you brought up one thing. You mentioned frames. I wondered, will we ever break through or go past the concept of stoppages in time, 24 or 30 or 60 or 120 times a second? Will we ever reach a complete fluidity in our ability to capture physical motion visually that doesn't involve frames?
Colleen: Sampling, in a sense.
Kirk: Yeah.
Colleen: I don't think that . . . I think it will be theoretically possible, to a certain extent, but you would have to have the entire chain, all the way through to the display, able to do that. Displays refresh, for example. So is there any reason to have a . . . if your display can only do 60 frames per second but you're streaming 120, or some theoretical thing where it's perfectly smooth vector motion, does it matter, if the weakest link in the chain is still going to turn it into frames at some point? But let's say that you could. Does it matter? I don't know.
Kirk: I don't know either. Maybe I was thinking of . . . if you relate this to how we think about atmospheric sciences. We take as much of a snapshot of the entire atmosphere as we possibly can in the weather world and then we do calculus to figure out where all the particles are going to go and then we compare that with the next snapshot that we take and we see where we were wrong and did a butterfly flap its wings in China and cause a tornado in Kansas.
Colleen: Right.
Kirk: We use a lot of calculus in weather prediction. It might be interesting to make everything particle-based and then do calculus. I'm talking about way in the future and it may never happen and it may not be important.
Colleen: I think it's a very interesting question, both philosophically and from an engineering perspective. I'm wondering whether or not it matters in the sense of, let's say, the actual psychovisual perception of your eyes, or psycho-audio . . . what do you call it? Psychoacoustic. You're actually rattling bones in your ears, right? You're actually manipulating liquid chemical photoreceptors or whatever in your eyes.
Even if you could make it better and useful in some way, scientifically speaking, I wonder if, beyond, say, 120 frames a second or whatever, there's any psychovisual benefit to having that level of fluidity, because, in the end, it is turning into some sort of analog signal in your brain. Are you just beyond the visual fidelity of perception at that point? I'm not sure. I do think one thing that's interesting is that we capture at higher fidelity for a lot of effects that may be rendered out into something different.
So, for example, if you have a green screen, you're going to want to shoot that in 4:4:4 color space. But I'm going to stream it in 4:2:0 color space. The reason I'm doing that is that I can pick out those hairs and chroma-key in the background, or whatever I actually want to do in editing, and then once that's done, I'll rip out most of the color data and deliver it to the person, because people can't perceive it.
But during the workflow, it's essential to have that color data while you're doing that technique. So I think, for example, recording without frames would be absolutely amazing. I'm not sure delivering without frames would ever be necessary. I could be totally wrong. I don't know.
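The numbers behind that workflow are easy to check: in 4:4:4 every pixel carries a luma sample and two full-resolution chroma samples, while 4:2:0 keeps full-resolution luma but only quarter-resolution chroma. A quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope for mastering in 4:4:4 but delivering in 4:2:0.
# Each pixel always has one luma sample; what subsampling changes is how
# many chroma samples ride along with it, on average.
def samples_per_pixel(scheme: str) -> float:
    chroma = {"4:4:4": 2.0,   # two full-resolution chroma planes
              "4:2:2": 1.0,   # chroma halved horizontally
              "4:2:0": 0.5}   # chroma halved both ways
    return 1.0 + chroma[scheme]

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    rel = samples_per_pixel(scheme) / samples_per_pixel("4:4:4")
    print(f"{scheme}: {rel:.0%} of the raw samples")
# 4:2:0 halves the raw data before the codec even starts, and the
# discarded chroma detail is largely imperceptible in normal viewing.
```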
Kirk: I was thinking a bit about how the eye in the retina works. To my knowledge, your eyes don't work in frames. I could be wrong. I don't know the physics of it.
Colleen: They don't exactly.
Chris: No, they don't.
Kirk: Now, we've gone off our field, which is very interesting. Let's see if we can relate any of this back to audio. In audio, digitally, we deal with samples. Colleen, you just mentioned editing in 4:4:4 color space, and once you're done with all the fine, high-resolution stuff, you get rid of the color information you don't need; 4:2:0 is a typical way to send that out.
It's not so different for audio in terms of what we compress or we don't compress. So when we deal with audio in a production standpoint, we really try not in any way to deal with audio that's been data reduced psychoacoustically. We always want to deal with the same samples that we got. Do you see some correlation there?
Colleen: Are you kidding me? All video and audio are the same exact thing. It's all just waves. The whole point is you want to make sure you're not screwing up the waves until the very end of the system, and then you rip out everything that people can't perceive and deliver it as compressed as possible, but ideally with as little psychoacoustic or psychovisual loss as possible. Video encoding and audio encoding are basically the same thing.
So even video and audio delivery are basically the same thing. They just have different bit streams. One of the things I was thinking we should probably talk about before the end of the show, for example, was how you can use something like MPEG-DASH for adaptive bitrate video delivery in HTML5. So even though this is typically a video technology, you can use it for live linear streaming on a radio station, but what's interesting there is that you can adapt up and down depending on the bandwidth of the connection.
So whereas most people do it for video, because video is much harder to get through the connection, you could, hypothetically, have the ability to adapt in real time from, say, a lossy, very low-bandwidth audio stream to, say, CD-quality audio to lossless audio. One thing that I don't understand in your world, though, although I can maybe guess why, is why there's not more production done using losslessly-compressed audio codecs, considering that audio is so . . . and you're going to punch me for this, easy.
I mean easy in the sense of bandwidth. A lossless audio stream is relatively straightforward to put over a network. Uncompressed audio you can also put over a network pretty easily, and computationally, making it losslessly compressed is really not that much of a hit. I wonder why, and maybe you can speak to this, there's not more lossless audio used in your world.
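A sketch of what that audio-only ladder might look like, with rung labels, codecs and bitrates that are purely illustrative (the rough 1,000 kbps FLAC ceiling assumes 44.1 kHz stereo):

```python
# An audio-only adaptive ladder, from very lossy up to lossless.
AUDIO_LADDER = [  # (label, codec, kbps required)
    ("voice",    "Opus",   24),
    ("music",    "Opus",   96),
    ("cd-like",  "AAC",   256),
    ("lossless", "FLAC", 1000),
]

def pick_rendition(measured_kbps: float, headroom: float = 1.5):
    """Highest rung whose bitrate, with a safety factor, still fits."""
    best = AUDIO_LADDER[0]
    for rung in AUDIO_LADDER:
        if rung[2] * headroom <= measured_kbps:
            best = rung
    return best

print(pick_rendition(400))   # -> ('cd-like', 'AAC', 256)
print(pick_rendition(2000))  # -> ('lossless', 'FLAC', 1000)
```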
Kirk: That's a great question. I think a lot of it has to do with the least common denominator and no-muss, no-fuss for the end user. There are lossless or FLAC-based audio services out there or there have been, I don't know if there still are, that you can subscribe to. You can either download files or you can stream in very high quality, typically lossless like FLAC. But you've talked several times about adaptive bitrate, either Apple HLS, of course, there's Microsoft Smooth Streaming.
Colleen: No. [Inaudible 00:49:48].
Kirk: These are just now becoming popular. There are some products from Telos and products from other companies too that are geared toward letting radio broadcasters and other audio content creators stream with these.
In fact, I've been doing seminars on Apple HLS. In fact, I just built a PC across the room here where I'm streaming Apple HLS and before this show, I had to take my wife to the airport. I'm driving around town streaming at whatever bitrate the phone was able to do from the PC here using Apple HLS. It's coming and I think it's huge benefits for broadcasters and for listeners.
The thumbnail sketch on that is you get adaptive bitrate within a certain number of different bitrates that the content creator wants to provide. The highest bitrate could be very high. It could be linear. We happen to typically choose . . . if you want to be on iTunes Radio, you need to have 256 kilobits per second available, and then you can have lower bitrates available, typically three different lower bitrates, or you can do four or whatever you want. But yeah, you could do linear.
Colleen: Or lossless.
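And a lossless rung would slot into the same manifest. As a sketch, here is Kirk's ladder rendered as an HLS master playlist; the snippet just prints the manifest text, the rendition URIs are invented, and the CODECS values are the standard tags (mp4a.40.2 is AAC-LC, mp4a.40.5 is HE-AAC):

```python
# Kirk's audio-only HLS ladder as a master playlist.
RUNGS = [(256_000, "mp4a.40.2", "hi/playlist.m3u8"),
         (128_000, "mp4a.40.2", "mid/playlist.m3u8"),
         (64_000,  "mp4a.40.5", "lo/playlist.m3u8"),
         (32_000,  "mp4a.40.5", "tiny/playlist.m3u8")]

lines = ["#EXTM3U"]
for bandwidth, codec, uri in RUNGS:
    lines.append(f'#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},CODECS="{codec}"')
    lines.append(uri)
print("\n".join(lines))
```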
Kirk: Chris, what are your thoughts about the future of streaming there with adaptive?
Chris: Well, if radio as an industry wants to survive, they better take it and run with it because Colleen's correct. It is simple. Audio is a simple thing to do. It can be tricky and sometimes people take it for granted how good audio can be to tell a story.
So, adaptively, if you have the ability to deliver your content using adaptive bitrate technologies, you can actually corner the market in places where bandwidth is limited. Maybe you're in Kenya and the cellular network, the GSM network, has very low bandwidth, but you're able to actually get something out of it. Or maybe you're in parts of the United States where cellular service and terrestrial broadband are so limited that, with the low-bandwidth connection you have, adaptive bitrate actually will get the audio through.
Since you're not putting any video across, you can definitely get audio out. That enhances the experience for the end user, the listener if you want to call it that. The industry needs to start looking at it that way. To answer your question, Colleen, as to why broadcasters don't do something that makes sense, a lot of it has to do with the fact that it's always been done this way.
Colleen: Got it.
Chris: The industry is so risk-averse that trying something new, well, I guess the old saying goes, "I never got fired for buying IBM." That's really what it comes down to.
Colleen: Makes sense. I was actually just using that yesterday as a description for a very similar problem in the YouTube/Facebook software engineering world.
Kirk: Colleen, it appears to me that in your job you might have a whole lot of options and opportunity to try newer technologies, with the caveat that, at least for a good percentage of what you do, the end product has to play back in a browser. As long as it plays back in a browser, you can do whatever you want.
Colleen: As long as it works and is flawless, which is almost impossible, considering it's basically me and maybe one other person and we basically have to build an entire platform. But they don't care how you get there, so long as it works. Now, when you work at a place where you have a bunch of other engineers, other broadcast engineers or software engineers, and they all start to think they know what they're doing, they all start to have opinions, and your options get limited.
At the same time, I find in my world that the requirements drive the adoption of next-gen technologies because, for example, let's use flash as an example. Flash was the way that people were constantly using to be able to stream audio in real time or video in real time. One of the issues is that for example . . . so, RTMP was a protocol that was only available in flash. You could open a connection. You could stream live, very low-latency ongoing. The thing for video, when they wanted to go to HTML5 or even audio, same thing if you wanted to have an ongoing stream that was live, you had to have segment, segment, segment, right?
Generally speaking, people say for security reasons at the company, we've got to get rid of Flash. All of a sudden, it's like, "Okay. Well, let me take a look at the technology space. Oh wait, in HTML5, I can use Opus." Now all of a sudden, "Oh, I can use VP9." Because people are saying, "It's got to be this. It's got to be HD."
Well, you've got to let me use the next-gen stuff because it's the only way that it's actually going to work. I can't do RTMP in HTML5. It just doesn't work that way. I have to start doing DASH. I have to start doing next-gen codecs. So there's more of an acceptance of change because the requirements, I think, are moving faster and so long as you're meeting the requirements, they don't care how you do it.
Kirk: Cool. All right. Hey, we've got to take a quick break and we're going to come back with some tips from Colleen and, hopefully, from Chris as well, caught Chris off-guard with that.
Our show is brought to you in part by our friends at Lawo, L-A-W-O, a German company that makes audio consoles. Usually, they make big, honking audio consoles, but they added this really cool little console, typically for radio called the crystalCLEAR. We've got a portion of a little video here where Mike Dosch is describing the crystalCLEAR. Let's see if we can roll that clip. Go ahead.
Mike: So crystalCLEAR is maybe the first of several different ideas that we've got for how to approach radio in a new way. A lot of customers are very excited about this so let's take a look at some of the features here. It looks familiar. We've got fader channels and faders. In this particular case, we've got sources that we've configured, microphones. We've got network audio inputs. We've got telephones and we've got codecs and all of these are available to the user all at the same time to make a radio show. We have a monitor section.
So this is going to control our loudspeakers. We have headphones. We can talk to our studio guests, for example. Maybe we have people in the other room in front of microphones. They've got headphones on. I can talk to them. Essentially, it's a full function radio control surface that is controlling our Crystal engine and making all of the mixes in the DSP.
Let's take a look at some of the deeper features. If we go underneath a fader channel, for example, it brings up a block diagram to show us what that fader channel is doing behind the scenes. We have output assignments like program one, program two and a record bus. And then we have a couple of interesting functions.
One of them is called auto gain. This is something that's new and it's unique to Lawo. If we push this button, we can now speak into a microphone and as we speak, the microphone gain will be adjusted and it will automatically be leveled to the appropriate level for this particular show. We can do that for the host. We can do it for guests. It's not uncommon for a guest to be a little mic-shy at the beginning of a show. So even in the midst of a show, we can open this up and we can push it and we can adjust it for the more exuberant guest who becomes more confident during the course of the show.
Auto mix is another feature that's unique to Lawo consoles. Auto mix, if we select it here, this channel now becomes part of a group that can automatically set levels depending on what it is that we wish to do. Now, this might be useful for . . . Let's say we're running a talk show and we want the host to always talk down to guests. So, if the host starts talking and the guests are talking at the same time, maybe we want the host to be a little bit louder and the guests to duck down during that time. Well, this can all be setup during auto mix. Another application for it is setting . . .
Kirk: I'm going to stop the video there. You can see the rest of this video online at the Lawo website. Go to Lawo.com and look for radio products and look for the crystalCLEAR mixing console. It's a virtual radio mixer. That exact video that you just saw part of right there is on that web page. It's about a six-minute demo that Mike Dosch takes you through the crystalCLEAR audio console.
It's really amazing. It's multi-touch, 10 fingers at once, and the console really behaves like you'd expect it to. It does what you need it to. It works with their Crystal mixing engine, which has all the inputs and outputs and Ravenna and AES67 networking built in. Thanks to Lawo for sponsoring this portion of This Week in Radio Tech.
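The auto-mix ducking Mike describes is proprietary DSP inside the Crystal engine, but the general idea can be sketched in a few lines. The threshold and gain values below are made-up illustrations, not Lawo's numbers.

```typescript
// Not Lawo's algorithm -- just the general ducking idea behind an automix:
// while the host channel is active, every guest channel is attenuated.
const TALK_THRESHOLD = 0.05; // assumed RMS level that counts as "talking"
const DUCK_GAIN = 0.3;       // assumed guest gain while the host speaks

function automixGains(hostRms: number, guestCount: number): number[] {
  const hostActive = hostRms > TALK_THRESHOLD;
  return Array.from({ length: guestCount }, () =>
    hostActive ? DUCK_GAIN : 1.0,
  );
}

// Example: host talking over three guests -> [0.3, 0.3, 0.3]
console.log(automixGains(0.2, 3));
```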
All right. For this show, Chris Tobin has been with us and Colleen Kelly Henry has been with us as well. Colleen, maybe you've got a tip that you can pass along to us from Kauai today.
Colleen: Can I do it after Chris so that I can think of one?
Kirk: Absolutely. No worries. Or maybe a helpful website you've come across. Chris Tobin, have you got any advice on 66 blocks . . . no, I'm sorry.
Chris: 66 blocks? Sure. My audio, I just got a zap of static, so, I'm using a mic on the webcam and the speaker on the output of the DC. When you do remotes, things like this do happen. So if you want to do a 66 block, sure, why not? We've got a punch hole right here. How's that? You want the actual block itself?
Kirk: For engineers who don't know, because a lot of engineers have not used these kinds of blocks: you pull the wire through, you lay it down across the fork that's in there, and then you use a tool to push the wire down. The tool does three things. It pushes the wire into that fork; the fork itself strips the insulation off and makes contact; and then the tool, if you're using the right blade, cuts the wire off. Hopefully you've got it turned the correct way, so you're not cutting off the important side but the leftover side of the wire.
Chris: Yeah. That's what you do. It's an insulation displacement connection, IDC.
Kirk: That's right, IDC, insulation displacement connection.
Chris: The phone company discovered many years ago that by doing it that way, when the wire goes between the fork, it creates a sealed connection between the two pieces of metal, and oxidation that takes time [inaudible 01:00:15].
Kirk: We don't use those very much in radio stations, but every radio station probably has some of these left over, maybe from the telephone supplier, because they still use these. Sometimes they're cute little bitty ones with just a few phone lines on them. Your phone service may still come into the building that way if you're not already on VoIP.
By the way, they did design some of those blocks to behave pretty well in a Cat-5 cable situation. I wouldn't think they work for gigabit connections, but they work for 10- and 100-megabit connections. They changed the spacing between the rows to make them a bit more balanced.
Chris: Yeah. The 66 blocks are still fine because they were very, very popular for a long time for audio distribution and whatnot. You can use them for audio-over-IP connections and data. For the tip, here's something people must remember. This is guaranteed to work for everybody. Inside are the typical tools that you need for making Cat-5 and Cat-6 connections, RJ45s [inaudible 01:01:24].
Kirk: Yeah.
Chris: But what's important to remember is that you can't always remember all the details. So if I ask you, "I need a crossover cable. We've hooked up two computers, two pieces of equipment that are both DTE devices or both DCE. How do I make a crossover cable?" Quickly, can anyone say? You may not be able to remember. Why should you? You don't do it every day. But you should put this in the box.
Kirk: In the box. Oh my gosh.
Chris: In the box with your tools. But then again, you say, "I don't need crossover cables all the time." That's right, because then you should have the second piece of paper, which is how you make a straight-through cable. See? It shows you the top and the front and where pin 1 and pin 8 are. I've done this for years. People have laughed at me.
One day, a couple of months ago, a gentleman came in, an IT person, actually a company we had hired. They came in to do some work and the guy goes, "Can I borrow your toolkit?" I'm like "Sure, what are you going to do?" He's like, "I've got to make a couple of cables." I say, "Are you familiar with the color codes?" He said, "Yeah. I think I can remember them." I said, "Take the kit and we'll see what happens." He comes back to me and goes, "Wow, I never thought to even think of that." I said, "Yeah, that's good. You owe me money now."
Kirk: You know, I'm glad that kit has the paper in there. That's smart. I cannot tell you how many times I have Googled RJ45 color codes.
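For anyone building that cheat sheet, the standard pin-outs are easy to capture. This is just the TIA/EIA-568 color order written out as data, with the crossover rule noted in the comments.

```typescript
// TIA/EIA-568 pin-outs, pin 1 through pin 8. A straight-through cable uses
// the same standard on both ends; a classic 10/100 crossover cable uses
// T568A on one end and T568B on the other (swapping pins 1<->3 and 2<->6).
const T568A = [
  'white/green', 'green', 'white/orange', 'blue',
  'white/blue', 'orange', 'white/brown', 'brown',
];
const T568B = [
  'white/orange', 'orange', 'white/green', 'blue',
  'white/blue', 'green', 'white/brown', 'brown',
];
```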
Chris: I have worked in many environments at 2:00 in the morning, sometimes on mountaintops or on top of skyscrapers, where an Internet connection was not available, or just not good at the time because we had no power. Because we were there, things had to happen. I learned this a long time ago.
Plus, after managing several dozen different tech ops centers and having people make the cable wrong and do something goofy, I was like, "That's it. This is how we're going to do things." Sure enough, it pays off. That's my tip. Put the information in the box with the tools and you'd be surprised how [Inaudible 01:03:10].
Kirk: Yes. I'll put that to use. I'll Google it and then I'll print it out and put it in my toolbox. I can even tape it to the top of the box.
Chris: No. Put it inside the box. Don't just tape the top.
Kirk: I mean inside, tape it inside.
Chris: Yeah, I guess you could do it. It's got foam, but you can do it.
Kirk: Colleen, what do you tape inside of your go-kit?
Colleen: Well, I always have an HDMI or HD-SDI to USB 3.0 adapter.
Kirk: What brand do you like?
Colleen: Magewell. So, not Blackmagic or anything like that. I find those require special drivers to work with things, but the Magewell ones work with Video4Linux2 or AV Foundation, so they just show up like a normal webcam and it just works. I can use it with FFmpeg or I can use it with Wirecast or other broadcast software, and they're small and light.
They have kind of a funky cable. I kind of liked the plastic black versions that they had originally because there was a standard USB 3.0 cable as opposed to the new one where it's like USB A to USB A, which is very weird. I worry that if I lose the cable with it that I'm going to have the device, but not the right cable, so I tape the cable to it. But I'm not going to take it off to use it with something else because nothing else uses it.
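Because the Magewell enumerates as a plain UVC device, pulling it into FFmpeg needs no special driver. A minimal sketch, assuming a Linux box where the card shows up as /dev/video0 and a hypothetical RTMP ingest URL:

```typescript
import { spawn } from 'node:child_process';

// Hypothetical device path and ingest URL -- the capture card shows up as
// an ordinary Video4Linux2 webcam, so ffmpeg can read it directly.
const ffmpeg = spawn('ffmpeg', [
  '-f', 'v4l2', '-framerate', '30', '-i', '/dev/video0', // Magewell as UVC
  '-c:v', 'libx264', '-preset', 'veryfast',              // live-friendly encode
  '-f', 'flv', 'rtmp://ingest.example.com/live/key',     // push to ingest
]);

ffmpeg.stderr.pipe(process.stderr); // ffmpeg logs progress to stderr
```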
Kirk: So did you have a different tip for us or do you want to make that your tip?
Colleen: I would make that my tip as far as signal acquisition, but I do have a different tip for you, which is MPEG-DASH streaming in HTML5 for audio. I would bet that a lot of your listeners who have Internet radio stations right now are still using Flash. I would recommend they look at using MPEG-DASH, not HLS. HLS only works in HTML5 on iOS, in Safari on the desktop, now in Microsoft Edge, and badly on Android. HLS is sort of an inferior choice. MPEG-DASH is really what you want. MPEG-DASH works on everything with the exception of Safari on mobile.
Kirk: Okay.
Colleen: But, in general, you can have the same bit streams in either one, so it doesn't really matter. You can just have something like a Wowza server or an nginx-rtmp server. You send the audio stream in and you deliver it over HTTP. Players you might want to use would be Shaka Player, Video.js, or the Bitmovin player. They're designed for video, but that doesn't mean you can't use them for audio, HTML5-exclusively, and with adaptive bitrate if you want. If you want more information about that, I would recommend going to the Wowza forums, because they have a lot of good information there.
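As a concrete starting point, here's a minimal sketch of audio-only DASH playback with Shaka Player, one of the players Colleen names. The manifest URL is hypothetical; it would point at whatever .mpd your Wowza or nginx-rtmp packager exposes.

```typescript
import shaka from 'shaka-player';

async function startAudioStream(): Promise<void> {
  shaka.polyfill.installAll(); // patch up older browsers where possible

  const audio = document.querySelector<HTMLAudioElement>('audio');
  if (!audio || !shaka.Player.isBrowserSupported()) {
    // e.g. mobile Safari has no MSE, so you would fall back to HLS here
    return;
  }

  const player = new shaka.Player(audio);
  // Hypothetical manifest URL from your packager (Wowza, nginx-rtmp, etc.)
  await player.load('https://stream.example.com/radio/live.mpd');
}

startAudioStream().catch(console.error);
```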
Kirk: Cool. I did a webinar with Wowza about a month ago and it was pretty informative. I'll look in the Wowza forums for information about that. Cool. Thank you so much, Colleen, for being with us. I appreciate it very much. Thank you, Chris Tobin, for being with us. Colleen came to us from Hawaii. That's a better place than you and I are in, Chris.
Chris: It's a matter of perspective. I've been to that island. It's a very nice place. I did enjoy it and I look forward to going back. Yes, I'm somewhat envious, but I understand. It's well worth it.
Kirk: Thanks a lot to our sponsors as well. Hey, coming up next week, Colleen, I think you know our guest. Our guest next week is Dick DeBartolo, the maddest Mad writer.
Colleen: I know Dick.
Kirk: He's going to talk to us about really fun gizmos, hopefully some we can use in broadcast audio production. Then we're going to get back into engineering shows where we talk about heavy-duty broadcast engineering. I'm going to be putting up a 17-mile IP radio link and running linear audio over it. It's going to be interesting. I'll report back to you, and we'll try to take some pictures and some video. I'll try to take the GoPro cam if I climb the tower to do any of that work.
So thanks again, folks, for being with us. Thanks to Suncast for producing our show and Andrew Zarian for founding the GFQ Network. Tell your friends about it. Join us next week on This Week in Radio Tech. Bye-bye, everybody.