Adaptive Streaming Details with Ioan Rus | Telos Alliance

By The Telos Alliance Team on Dec 7, 2015 12:29:00 PM

TWiRT 283: Adaptive Streaming Details with Ioan Rus

We’ve covered Adaptive Multi-Rate Audio Streaming extensively. But new streaming methods bring other advantages, too. Ioan Rus, Lead Developer for Telos Alliance streaming products, explains how new features in today’s streaming encoders really improve the listener’s experience.

Watch the Video!

Read the Transcript!

Kirk: This week in Radio Tech, episode 283, is brought to you by the Axia Fusion AoIP mixing console. Fusion: where design and technology become one. By the Z/IPStream R2, hardware audio processor and stream encoder. Stream like you mean it with Z/IPStream. And by Lawo and the CrystalCLEAR virtual radio console; CrystalCLEAR is the console with the multi-touch touchscreen interface. We've covered adaptive multi-rate audio streaming extensively, but new streaming methods bring other advantages too. Ioan Rus, lead developer for Telos Alliance streaming products, explains new features in today's streaming encoders and how they really improve the listener's experience.

Hey, welcome into This Week in Radio Tech. I'm Kirk Harnack, your host. Delighted that you're here and I am live in studio with our guest, Ioan Rus. Hey, Ioan.

Ioan: Hi. Hey, Kirk.

Kirk: We'll get to the formal introduction in just a moment. I'm in Cleveland, Ohio in the training lab at the Telos Alliance and Ioan works for the Telos Alliance. He's a developer so he'll talk about code and, I don't know, those little symbols that they use. It's really complicated. And the benefits that accrue to us as engineers from the things that he does. Hey, also with us is our usual co-host reporting from New York City, the best-dressed engineer in radio, it's Chris Tobin. Hey, Chris, welcome in.

Chris: Hello, Kirk. Hello, Ioan. I am just enjoying a nice cup of espresso. Hang on. Let me get my SBE mug. There we go. Ah, very good. Nothing like a good cup of java in the afternoon or evening.

Kirk: I wish I had that. I'm enjoying a refreshing bottle of Gatorade.

Chris: Are you going out for a jog later?

Kirk: No, no. Actually, some of us are going out for a cigar later but don't tell my wife.

Chris: It's only the three of us here.

Kirk: Yeah. I told my colleagues we're going out for cigars. If my clothes smell like cigar, my wife is going to have a little problem with it so I'll be smoking naked.

Chris: There go my brain cells. Wow.

Kirk: At that point you might say, "You know, Kirk, have a good time." I might have to buy a change of clothes. All right. So what a way to start the show. This Week in Radio Tech is brought to you by some great sponsors and our show today is going to be about some details about adaptive streaming including... streaming isn't just the audio. It's all the stuff that goes around that to make it work and that's one thing Ioan is going to fill us in on: how some of those details work. Things like being able to break away at exactly the right moment for ad insertion or changing whole programs, how to change audio processing on the fly, how to do that through a variety of methods, all those things that, it's almost like the audio part, "Hey, we've got that. It's not exactly easy, but we understand how to do that."

But what about the other parts that make controlling a broadcast, a streamcast if you will, palatable and make it interesting for the listener to enhance the listener experience? Because right now, with the way that we've been doing streaming over the past few years, there are a few things that are broken about that and they result in less than desirable experience for a lot of listeners. So Ioan has been working on ways to fix that. I imagine some of those are industry protocols and others may be, "Hey, here's a good idea that we'll put into our products." So that's what we'll be talking to Ioan about.

Our show is brought to you in part by the folks at Lawo, L-A-W-O. You can go to their website at lawo.com. A German company so we call it Lawo, but it's spelled L-A-W-O. They make a console that is really interesting. We've been talking about it for a while here. It is another console that works with audio over IP. It uses the Ravenna standard as well as the AES67 standard. It's the Lawo CrystalCLEAR console and what's interesting about the CrystalCLEAR is that as many consoles are today, it has a rack mount unit that is the DSP engine.

So this is where your audio inputs, your local audio inputs and outputs go. Your microphones can connect to it, local sources like a CD player or maybe you have a mixing, a record DJ console that has an audio output. You can plug that into it. You can plug a number of analog and AES inputs into it. It's got a few analog and AES outputs plus some incredibly high-quality mic pre-amps.

So all that's on the back of this one RU box that goes anywhere in the rack you need it to. But here's what else is in the box. The DSP mixing engine, all the smarts that run the console, that follow the instructions from the console and dual power supplies as well as that Ethernet connection to give you AoIP in and out, and also control from the surface itself.

Now the surface, this is the cool part. The surface of the CrystalCLEAR is actually a touchscreen. It's a multi-touch touchscreen monitor that lets you put 10 fingers on at once. I'm not sure I can control 10 fingers at once, but I could run a couple of faders up and down together, turn a source on or off at the same time I'm running a fader up and down, run my monitor volume up or down, the headphone volume, that kind of thing. So that's what this does for you. It's an app that has been beautifully designed to fit your hand on this good-sized touchscreen monitor. And the app actually runs on a computer that's built into the back of the monitor. I believe it's an HP computer, although they might use something else. It runs as a service under Windows so it's very reliable and robust. And then there's the application level, the presentation level that comes out on the screen that you control and adjust.

Now when you design a console totally in software, there are some interesting things that you can do. And a lot of that has to do with context sensitivity. So if you have a mic channel and you push a button that you want to, say, run the mic pre-amp gain up or down, it's easy to do. You push the options button and only the things that are applicable to that mic channel pop up. So you don't have to wade through a bunch of menus. Almost everything you need is one menu deep. It's right there.

So check it out, if you would. I think this is a great concept in controlling an audio console and the folks at Lawo do as well. Go to Lawo.com, L-A-W-O.com and look for "radio products" and then look for the CrystalCLEAR radio console. Interesting idea and I think it's got a lot of merit.

All right. Here we are at This Week in Radio Tech, episode 283, and Ioan is our guest. Ioan, so you've been working... we talked before the show you've been with Telos doing software development, yeah?

Ioan: Yes.

Kirk: For what, about 18 years now?

Ioan: Sounds about right, yeah.

Kirk: Yeah. What were you hired... Steve Church hired you, right?

Ioan: Right.

Kirk: And what did he hire you to do here?

Ioan: Well, initially, I was hired to work on the Audioactive product line, which was our streaming product.

Kirk: Streaming before its time, as I recall?

Ioan: That's right, yeah. Streaming before its time.

Kirk: You know I, on eBay, I found an Audioactive encoder box and I bought it. It's at my house. I have no idea how to use it.

Ioan: Well, you can send an email afterward and we'll talk about it.

Kirk: Can I have a manual for it?

Ioan: Yeah, sure.

Kirk: Well, okay.

Ioan: Okay.

Kirk: So the Audioactive brand was an early attempt at online streaming and a number of people did buy these and stream MP3 out.

Ioan: Yes.

Kirk: Yeah. So, you've been in the streaming realm for a long time and it seems like a few years ago we began to make a real push toward streaming since the streaming world had caught up with what we knew we could do and there were some new codecs available. So I guess it seemed like next, did you work on Omnia A/X or A/XE?

Ioan: Omnia A/X, Omnia A/XE, and then the follow-up products were the X/2 and 9X/2 and then, well, on the hardware side, we have the Z/IPStream R1 and now Z/IPStream R2.

Kirk: So as we get into this discussion, there are a number of technologies we'll talk about, but one thing that I would like to impress upon our viewers, and this has been true for me as well, is that your radio station, or your webcast, you may be doing streaming now, but you may be doing it with some free software. You may be doing it with some knock-off encoder that isn't a licensed product, so it may sound okay, but not as good as it could be.

It's almost like the early days of FM radio when an AM station, and that was the bread and butter, got a license for FM and then what did they do with the FM? Well, the technology may not have actually been that good back then. The exciters and the audio processors weren't all that good, but then they didn't need to be because there were very few listeners. Well, what we're finding now is that according to some of the latest research, in 2015, there will be 153 million people in the US alone that will be listening at some time or another. Actually, I think this is monthly. No, that can't be right, that's half the U.S. Sometime during the year, they'll listen to streaming audio and something like a quarter of all car drivers have used a cell phone tapped into their stereo system to listen.

Streaming audio is becoming a very real... and it's becoming what FM became. It's going to be eventually the bread and butter. So in saying that, I'm thinking people, engineers and programmers, broadcast owners, ought to be taking streaming pretty seriously now. So how has your work fit into that notion that streaming has become a serious thing now?

Ioan: Well, initially streaming was, "Oh, wow, look what we can do. We can put audio on the Internet." I think the focus now is much more on the listener experience and what that is because unless you can make the listener happy, they won't be around for long. So there are a number of things that we do to make sure that the listener has the best experience. You mentioned some of them; you mentioned using reference codecs. We license Fraunhofer codecs. Also, we do audio processing. We take all of our expertise from doing FM processing and AM processing for all these years and we apply this to our streaming products. So these are some of the things that we do to make sure that the final product, when it arrives at the listener, sounds the best it can.

Kirk: You know actually, on our shows prior, we've talked a lot about processing for streaming so we won't cover that so much.

Ioan: Sure.

Kirk: But just to say that all of the products that you've worked on, well, the Audioactive encoder didn't, but everything since then has included some audio processing. So you make that audio... part of the idea of processing, of course, is to get the level right, but another part is to make sure that the audio can be efficiently coded. Make sure there's nothing about the characteristics that makes the coder just go nuts and thereby reduces efficiency. We're already cutting out what, 95% or so of the data, if you will. I realize it's psychoacoustic, so it's not quite a direct relationship like that. But we're trying to take those few bits that we have and make them really meaningful for the listener's ear.

Ioan: Absolutely, yeah.

Kirk: So, hey, Chris Tobin, you're along. I wonder if you might relate some early streaming experience you've had to us where... and maybe it still continues now where a station that you know of is still... was doing it wrong or, you know, in a really bush league kind of way, maybe they still are and why they need to move into the future?

Chris: Well, I've actually come across a few folks that are still doing it kind of the old-fashioned way if you will. I think the hardest part for folks in the broadcast industry is to get their heads around the metadata and understanding why the newer codecs and algorithms and methods of streaming are actually to their benefit. And what Ioan and you are talking about is what people need to understand. If you're still doing it the old-fashioned way, time to wake up and realize there's revenue to be made. But most of the stations I've talked to and recently traveled to, they're getting up to speed with the newest stuff, but I do come across a few folks who ask about the Audioactive. I was like, "Oh, I still have a software disc of that one too."

Kirk: So you mentioned metadata as really adding to the experience and that's a great point. I have laid my heart bare on the show: I am just so weak on this metadata thing. To me, it is actually very confusing, at least trying to integrate it with automation systems and other software integration or software aggregation programs. At one of my radio stations, we have the Arctic Palm CenterStage Live for handling our RDS data.

Well, maybe I should start with this. A lot of engineers out there have, in fact, interfaced their automation system with an on air encoder and so they're getting the title and artist and maybe the occasional call letters or slogan out there. Can you characterize, Ioan, how is it different? How is metadata for streaming different, if at all, than metadata for RDS?

Ioan: Well, in a sense, it's not that different. It's ultimately just information about what's playing now. What's interesting is that you can do a lot more. You can include information about what's going to be playing up next. You can include more data about the song that's playing now. You can include a link for someone to purchase the album if they're interested.

Kirk: And we know there are services like NextRadio's TagStation that make this really easy. They do the heavy lifting on the databases for you. They get the title and artist and they crunch it from there. What about data formatting? Okay. Let's back up a little bit. What metadata do you really want to put into the stream that you're sending out of the encoder? Typically, what items go in that?

Ioan: Well, one of the challenges is actually... even before it gets to our encoder, there are many playout systems out there and, unfortunately, there's no standard metadata format. Every one of them will send some variation of artist, title and other available information. So the first challenge is how to ingest the metadata from a variety of systems out there. We do this with what we call little filters and each filter will match a particular playout system. Now, once you get the metadata into the system, you can then include that in the stream.
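Ioan's per-system ingest filters can be sketched in a few lines of Python. This is a hypothetical illustration, not Telos code; the playout formats, system names, and function names here are invented:

```python
def parse_pipe_delimited(line):
    """Hypothetical playout format: 'Artist|Title|Album'."""
    artist, title, album = (line.split("|") + ["", "", ""])[:3]
    return {"artist": artist, "title": title, "album": album}

def parse_key_value(line):
    """Hypothetical playout format: 'artist=...;title=...;album=...'."""
    pairs = dict(p.split("=", 1) for p in line.split(";") if "=" in p)
    return {k: pairs.get(k, "") for k in ("artist", "title", "album")}

# One filter per playout system; every filter emits the same normalized shape.
FILTERS = {"systemA": parse_pipe_delimited, "systemB": parse_key_value}

def ingest(system, raw_line):
    """Route a raw metadata line through the filter matching its playout system."""
    return FILTERS[system](raw_line)
```

Once everything is normalized to one shape, the rest of the chain (stream tags, RDS, web upload) only has to deal with a single format.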

Kirk: So in a typical stream, we're putting, of course, title and artist. That's the most common thing we talk about.

Ioan: That's right.

Kirk: You mentioned that you could put into the stream upcoming songs or maybe even, well, would you put in past songs?

Ioan: You could. This depends on your player. It depends on what kind of experience you want the listener to have. So your player and the player's user interface will dictate what kind of metadata you actually want to get to the player.

Kirk: Now, before the show, we were talking about a newer, more capable metadata engine that you're working on. What we've had in the past has been this idea of a number of different filters to take this metadata from an automation system or even from an aggregation program, I suppose and then encode that into the proper places in the stream.

Ioan: Right.

Kirk: Now, what was my question? I was going to ask about that... oh, yeah. So, title, artist, station identification, or is that a separate...

Ioan: You can include that as well.

Kirk: Okay. What about things like contesting information or just, "Hey, listen tomorrow morning at 6:00 AM for the word of the day."

Ioan: You can actually include just about anything you want in the metadata. As a matter of fact, with the metadata you just mentioned, you even have access to GPIO commands, which allow you to press a button on a console somewhere and insert custom metadata into the stream.
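That button-to-metadata idea can be sketched simply. This is an invented illustration of the concept, not the R2's actual GPIO handling; the pin numbers, messages, and function names are all hypothetical:

```python
# Hypothetical mapping from GPI contact closures to custom metadata messages.
GPI_MESSAGES = {
    1: "Open line now! Call the studio line",
    2: "Word of the Day coming up at 6:00 AM",
}

def on_gpi(pin, send_metadata):
    """Called when a contact closure fires; pushes the mapped text
    into the stream's metadata channel via the supplied callback."""
    message = GPI_MESSAGES.get(pin)
    if message is not None:
        send_metadata({"type": "announcement", "text": message})
    return message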

Kirk: Okay. So this is pretty interesting. The new metadata engine that you're talking about here that's in one or two of our products?

Ioan: It's currently in R2.

Kirk: In the R2 products. We'll talk about that in a few minutes. So this sounds like it has the capabilities of some other people's software that they would run on a PC separately in the radio station, maybe an aggregation program in which you could... can you schedule things like public service announcements? Or what if every 20 minutes, I want the metadata to say, "Hey, listen to Johnny Rocket. He'll be at the Ford dealer tomorrow at 5:00"?

Ioan: Sure, you can do that.

Kirk: So you could schedule announcements? Okay. Now, this GPIO, GPIO means contact closure, right? So you push a button, you could push a button somewhere and make a given message show up?

Ioan: Yes, absolutely.

Kirk: Okay, okay. I wonder what would be a good application for that. Chris, if you had a button you could push and make a phrase show up, what would you give the disc jockey the power to do?

Chris: Oh, that's dangerous. I don't think I'd give them the power to do anything. Yeah, I know. Now if you're doing call screening, you want buttons that pop up and say things like, "Take the call," "Drop a line," but giving the DJ power, that's very dangerous.

Kirk: You know this is going on the stream so...

Chris: Even on the streams, I regulate the same way. We're accustomed to doing broadcast; you still need to be somewhat careful.

Kirk: Well, yeah. I'm thinking that maybe... sometimes I listen to a talk station in Cincinnati, WLW, and they always say, "Call us." They give the phone number. "If there's ever an open line, call us at blah, blah, blah." I notice they act like, and maybe it's true, they just never have an open line. Theirs is always full. As soon as they drop a call, it fills right back up again. Maybe you could push a button and say, "Hey, there's an open line. Try now," to get your callers to call in.

Ioan: Well, that's a good idea, actually. But in addition to sending messages, you can use metadata to signal things to the player so that your custom player could do something or react in a certain way when you push that button.

Kirk: Okay. So if a broadcaster had a custom app, that custom app could react to a hidden message, a closure, a GPIO in the stream. Well, that would be interesting. Wow, okay, okay.

Chris: I'll give the DJ the ability to take over your iPad.

Kirk: Yeah, there you go. "Hey, we've erased your C: drive." Oh, okay. I can think of more evil uses than the non-evil uses, I guess. All right.

Chris: Well, it's the same principle as metadata in a cable box, right, for television with the double D and all that stuff.

Kirk: Similar, no?

Ioan: Yeah, I think so.

Kirk: So, let's see, what else? Well, okay. So on this metadata engine that's been going into the products, how do you garner the ideas for what to put in? What makes this thing tick? Is it customer requests or what?

Ioan: A lot of it is customer request and then what we do is we take the customer requests and then try and see if there are any patterns, general patterns that could be applied to other uses. So that's generally how we work.

Kirk: Now, I think earlier we were talking about this newer metadata engine and you mentioned that you could filter the data coming in from the automation system along with scheduled announcements and you could actually format it for the stream, but also send that same data elsewhere, too.

Ioan: Yes.

Kirk: What would be a use of doing that?

Ioan: Well, for example, if you have a web page that displays information, you can upload your metadata to your web server.

Kirk: So the same engine parsing data and formatting data could both do this instead of having to run two disparate systems to get that done?

Ioan: Yes. You can also use it to feed another system. So if you have RDS, you could use the functionality built into the system to massage the metadata coming in to filter certain tags that you don't want streamed. You can do all sorts of things too.

Kirk: Any chance of correcting case mistakes? So many songs are written... they've entered the data in all upper case, all caps.

Ioan: Yes.

Kirk: I would love to be able to fix that on the fly.

Ioan: You can.

Kirk: Really?

Ioan: You can, absolutely.

Kirk: Okay, okay.

Ioan: The capabilities are that the new metadata engine actually breaks your incoming metadata into separate fields. Then you have access to each separate field. And, for example, if you have a blank field, you can fill it in with some data, default data. You can change case, you can change the field value based on rules that you apply so you have full control over your metadata.
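The field-level control Ioan describes, filling blank fields with defaults and fixing all-caps tags, could look something like this sketch. The rule set, field names, and functions are invented for illustration and are not the R2's actual engine:

```python
def fix_case(value):
    """Re-case an all-caps tag; leave mixed-case values alone."""
    return value.title() if value.isupper() else value

def transform(fields, defaults=None):
    """Apply per-field rules: fill blanks with defaults, then fix case."""
    defaults = defaults or {}
    out = {}
    for name, value in fields.items():
        value = value.strip() or defaults.get(name, "")
        out[name] = fix_case(value)
    return out
```

Blind title-casing is crude (it would also re-case an acronym like "YYZ"), which is exactly why per-field, rule-based control like the engine described above matters.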

Kirk: What would you say is the learning curve for an engineer or programmer, whoever is going to be setting up this filtering?

Ioan: Well, there's certainly a bit of a learning curve, but everything is done with a very simple interface. The concept behind the metadata tool is that of a signal flow diagram... so you sort of understand how data flows from ingest through transform filters all the way to the output. So it's fairly straightforward.

Kirk: So, Chris, earlier, one theme that we hit upon and Ioan actually mentioned it first, I think, was about making the experience better for the listener. It seems, as broadcasters, we've got to do whatever we can to make listening to our content effortless, whether you're on a PC desktop or an appliance like a Grace Digital radio or a C. Crane Wi-Fi radio or your smartphone or who knows what's coming with the connected dashboards. Chris, I got to believe that that's key, making it so it's effortless for the listener.

Chris: Oh, absolutely. You need to engage the audience. What people have overlooked in recent years is that broadcasting, radio and television, but we'll stick with radio, has always been somewhat of a passive environment until we did the phone calls, as you said. You know, "We have calls, our lines are open," contesting, blah, blah, blah. But when there is no contest, there's never been an engagement. Now we come along to a medium that allows engagement and we haven't chosen to pick it up. So with this new technology and what you're working on, it gives you the ability to engage and that's what's key. So you think of all the social media applications and platforms and everything that goes on, all the wild and crazy stuff; now broadcasters can do their own version or work in tandem with the others. But without engagement, you might as well just turn off the lights and go home.

Kirk: The engagement, I wonder if you could do any contesting with this metadata if you had your own app? I don't know, could you make it flash, could you turn on a button in the app that would say, "Call, answer now," or even choose an answer. Have your cell phone squirt that data back? Well, there's a whole world of possibilities here.

Ioan: Absolutely. Of course, this depends on having a custom app that would have those features, but this would be a great use for that GPIO command. You press the button, say, "Okay, contest starts now." This would enable the contest button in your app and you're good to go. The first one to press the button or the tenth one.

Kirk: Radio stations could have their own "The Voice" competitions. You could vote through the app. Oh, my goodness. There are a lot of possibilities.

Chris: You could have a countdown clock of callers on the app.

Kirk: You could. I was just one...

Chris: The first 10 callers and you watch it go from 10, nine, eight, seven.

Kirk: One fly in this ointment, though, is buffering. If listener A is getting this about two seconds behind real time and listener B is on a 20-second delay due to buffering or whatever method they're using to listen, there could be a lot of disparity in when listeners experience what they're hearing or seeing.

Ioan: Well, since the metadata goes along with the audio and it's synchronized to the audio, whenever the announcement goes out they would sync up.

Kirk: Yeah, yeah.

Ioan: So as long as you take that into account in the player design, you should be okay.

Kirk: Okay, okay. Wow, I think the possibilities are mind-boggling. All right. We want to talk a bit about... Let's see what time it is here. Let's go ahead and hit our second ad for the Z/IPStream R2 and then we're going to talk about a little bit about HLS or multi-rate streaming. We've done a few shows on that, but we'll talk about how we do that in this new product from Telos and we'll see how you can still do your legacy streaming and do a newer kind of streaming at the same time. So you can keep those older listeners or listeners that had been with you for a while. They're used to getting your content one way and then we'll see how the new ways of doing it are even better.

So you're watching or listening to This Week in Radio Tech, episode 283. I'm Kirk Harnack along with Chris Tobin and our guest is Ioan Rus. He's a developer and in charge of the streaming product line at Telos Alliance. One of our new products is this Z/IPStream R2. People ask us this question, "What should I use for streaming, software or hardware?" And in the software world, I guess people are used to even picking up a free product here and there, like Edcast, for example, or some free audio processing program. And you could even use the Shoutcast player. I guess there were plugins to make it stream back out.

But some other folks like hardware. And at my office, I've got an older product from Telos, the ProSTREAM. Well, of course, the Audioactive was a hardware device for MP3 streaming. So people ask us, "Hardware or software?" Well, we have a new piece of hardware to tell folks about and describe. It's called the Z/IPStream R2, R meaning rack, rack mount. That's how you know it's hardware; it's our naming convention. And I'm not sure what the 2 stands for. Second generation?

Ioan: Two stands for kind of the next generation.

Kirk: Next gen, okay.

Ioan: If you count Audioactive, this is much more like third or fourth-generation hardware.

Kirk: I got to play with this box last week for a while and it's a one-rack unit box, the Z/IPStream R2. It has a couple of different ways to get audio into it. By the way, analog is not one of them. AES Digital inputs and also Livewire inputs so you can go in either way. If all you have is analog, well, heck, I think we'd be happy to sell you an xNode and you can run your audio and analog into the xNode and then go Livewire into your Z/IPStream R2.

But this Z/IPStream R2 is really... imagine, if you would, a transmitter for the Internet. Now imagine that you have up to dozens of transmitters for the Internet in this box. So one-rack unit box, put audio into it, up to eight different programs can go into it, eight stereo programs go into it. But then, you can process and then stream these programs in many different ways.

So let's think about... let's say you're a cluster of radio stations. Let's say you're here in Cleveland, Ohio and you've got, oh, I don't know, six stations under one roof. So you're going to bring six stereo programs into the R2. And let's say that you have to do legacy MP3 streaming to keep some of your listeners happy. And you want to send out a stream that's also maybe HE-AAC v2 at a very low bit rate, like 48 kilobits. And then you want to have the next thing available, which would be Apple HLS adaptive streaming or Microsoft Smooth Streaming. You can do that too. You can take that one program, audio process it, and then make all these different streams and send them out either directly to listeners, not as likely, you need a lot of bandwidth, or to a CDN, content delivery network.

So, Ioan, when we came to you and said, "Take the software that we've been making and put it in this hardware box," what are some things you're excited about this R2? How does it make life better for engineers?

Ioan: Well, it makes life better in many ways, but one of the key points is that it has a lower total cost of ownership. If you look at managing a computer and dealing with the IT department and having to deal with software updates and constant reboots and things like that, the cost of owning that streaming computer over a year or so is going to be much higher than running a very simple rack-mounted custom product. So that's what we're offering in the R2.

Kirk: So when you plug this box in, it has a little LCD screen on the front.

Ioan: It's an LCD screen on the front.

Kirk: It sets your IP addresses there?

Ioan: Exactly.

Kirk: And after that you don't touch the front anymore, do you?

Ioan: You don't touch the front anymore. Everything is done through an HTML 5 web interface. You can fully configure the product remotely, you can do it from home on your iPad if you have the connectivity and you're good to go.

Kirk: I got to play with all these boxes, like I said, last week. We've got Telos, this is still the commercial, we've got a video coming out about the R2 in just a few days. It will be up on the Telos website and on our YouTube channel. Check it out. It's amazing and the total cost of ownership thing is really interesting. A lot of times, we as engineers think, "Oh, I can build a computer for $300 or $400 and install Windows or Linux or some operating system on it." Man, I've done that for years at my office and I've got to tell you, I have put a bunch of time into my streaming computer at my office and the hard drive will fail or the fan on the CPU will fail. It's just one thing after another and I know I've spent a ton of time and I've had a fair amount of down time.

At my house, for example, I provide NOAA Weather Radio for Nashville streaming. So if you ever tune in through Weather Underground or something to NOAA Weather Radio, you're tuning in into my stream. Unfortunately, the people I stream through only take Edcast as the streamer so it's pretty inefficient, it's MP3 at the wrong bit rate and la, la, la, there are lots of problems. And I have spent so much time keeping that computer up to date. Well, an appliance like the Z/IPStream R2, you don't have to do that. You never have to update it unless we tell you that you have to, I suppose. But unless you want new features it just runs, it's an appliance.

Very cool, very cool. Check it out on the web, go to telosalliance.com and look under Telos for our streaming appliances and look for the Z/IPStream R2. I showed you actually on the show a couple of weeks ago, I held it up briefly, but we'll have the video coming out soon. All right. This week in Radio Tech with Kirk Harnack. Chris Tobin is along and Ioan Rus, a developer here at the Telos Alliance.

All right. Let's get into scenarios for streaming. A lot of radio stations started out streaming at X bit rates, maybe even with RealAudio, and then maybe they went to Windows Media. Some people are still doing that. But on their website for the radio station, you might see, "Click here for our 32-kilobit stream," and, "Click here for our 128-kilobit stream." Now, my parents would not know what to do. They have no idea what that means. Yet, some stations still have to do that. What are we doing to get away from that problem and make it just, "Click here," and you'll hear it?

Ioan: Well, first of all, in addition to not knowing which one to click on, the other problem is that the available bandwidth, especially with smart phones, is quite variable. So you might have great bandwidth when you're at home on your Wi-Fi, but you step outside and then all of a sudden you don't have good connectivity anymore. So to solve that, adaptive streaming comes into play. What's really cool about adaptive streaming is that you encode the same program at multiple bit rates and then it's the player that makes the final decision as to which bit rate it will listen to.

Kirk: Only the player has an idea of what its available bandwidth could be, and if you have 10,000 listeners, you have 10,000 different bit rate scenarios.

Ioan: Yes.

Kirk: Okay. How does the player decide? How does it know what's available and how does it decide which bandwidth to play?

Ioan: Well, I'll go back to how the adaptive stream is produced. The way it's produced is we use a specially designed encoder, which encodes the same program at multiple bit rates, but it's done in a frame-aligned way so that the player is able to switch between those seamlessly. So there are no glitches, no problems. In addition to the audio content, we also produce what's called a manifest file. This is usually a simple text file with odd-looking characters, but you can sort of read it if you really try. This manifest file tells the player what bit rates are available, what streams are available, and what the player can request from the server. So this information goes to the server, the client downloads the manifest file, it knows what bit rates are available, and then it's able to request the bit rate that is best suited for the current connection.
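As a concrete illustration of the kind of manifest Ioan describes, here is what an HLS master playlist listing three renditions of the same program might look like. The URLs, bandwidth figures, and codec choices are hypothetical, not from any particular Telos product:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=32000,CODECS="mp4a.40.5"
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.5"
mid/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
high/playlist.m3u8
```

The player downloads this file first, sees the three advertised bit rates (here 32 and 64 kbps HE-AAC and 128 kbps AAC-LC), and then requests segments from whichever variant playlist best fits its current connection.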

Kirk: So unlike traditional streaming, I'll use the word SHOUTcast and Icecast interchangeably, but that kind of traditional streaming, where there's this constant connection, right?

Ioan: Right.

Kirk: And so a streaming server actually has to have special software to manage dozens, hundreds, or thousands of individual connections and each one takes a bit of space, memory space. So it sounds like this other kind of streaming, adaptive bit rate streaming is more file-based.

Ioan: It is, actually. That's the part that's surprising. Even though it's live, the audio is produced in segments, so you may have a three-second segment or a 10-second segment; you have the option of specifying this. You end up with a file with a chunk of audio, and this file gets sent to the server. And on the server side, you can use just a plain HTTP server to serve your content. You don't need a specialized streaming server.
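The segmenting Ioan describes can be sketched in a few lines: a continuous audio byte stream is sliced into fixed-duration chunks, each of which would be written out as one file for a plain HTTP server to serve. This is a minimal illustration, assuming a constant-bit-rate stream; the function name and numbers are made up for the example, not taken from any real encoder:

```python
def segment_audio(raw_bytes, bytes_per_second, segment_seconds=3):
    """Slice a continuous byte stream into fixed-duration segments.

    Each returned chunk corresponds to one segment file that a plain
    HTTP server can serve; no specialized streaming server is needed.
    """
    seg_size = bytes_per_second * segment_seconds
    return [raw_bytes[i:i + seg_size]
            for i in range(0, len(raw_bytes), seg_size)]

# Example: 10 seconds of audio at 4,000 bytes/s cut into 3-second
# segments yields three full chunks plus a 1-second remainder.
segments = segment_audio(b"\x00" * 40000, bytes_per_second=4000)
```

In a live encoder this loop runs continuously, appending each new segment file and updating the variant playlist so players always see the most recent few chunks.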

Kirk: So the end user, and I've played with this, I'll describe the experience. I use TuneIn and a number of other apps to listen to content, and in most of the apps, listening to a standard, normal SHOUTcast or Icecast stream, you hit the Play button and then you wait. And oftentimes, the program will tell you how much is buffered up: 20%, 35%. You're waiting and you're waiting until it buffers up. How much time are these players typically buffering up?

Ioan: Well, the player doesn't need to buffer up. You can start...

Kirk: I mean with the traditional streaming.

Ioan: Oh, with the traditional streaming...

Kirk: Yeah, how much time am I typically...

Ioan: It depends, it depends. A common number that's been used for a long time is about six seconds but, of course, some players can buffer more or less.

Kirk: And if you're using SHOUTcast and Winamp, you do get something called burst-on-connect, where as soon as you connect, it fills up the buffer as fast as it can. It doesn't stream at the normal rate to fill up the buffer and make you wait for that; it fills up the buffer quickly and then starts playing as soon as it can. But with adaptive streaming and these files, the scenario is different from waiting for it to buffer up. How's the user experience better with that?

Ioan: Well, the user experience should be better because when you request the stream, the server already has a few segments available. So you will download the appropriate segment and you can start playing as soon as you get that initial.

Kirk: As soon as you get the first file.

Ioan: Yeah.

Kirk: So let's say that you pick a file that ends up being, I don't know, 150 kilobytes in size. It will start, it will download that usually pretty quickly.

Ioan: Sure, it depends on your bandwidth.

Kirk: Yeah, but it starts playing right away. I've described this before. If you're a Netflix subscriber or other streaming video subscriber and you go start a program... and I start cartoons for my kid all the time. And cartoons are interesting because a modern cartoon was produced on a computer and so the graphics look like vector graphics. You can tell right away if they're close to perfect or if they're at really low bit rates. You can really see it, whereas with a movie of some outdoor scene, it's a little harder to tell. So I'll go to Netflix for my son and I'll start some cartoon and the first few seconds are blurry looking. Now I'll be thinking, "What's wrong with my bandwidth?"

And it downloaded, in this case, the player is programmed to download the lowest bit rate or a low bit rate first, so the kid doesn't have to wait there for that first cartoon to start; it starts playing right away. And then it determines, "Hey, I've got lots of bandwidth," and the next chunk it downloads will be the higher bit rate. Now, I don't know if audio players typically work that way; the designer of the player could have some control over that.

Ioan: Yes.

Kirk: But he could design it so it downloads a low bit rate first to get you started right away, immediately, and then scales up to the higher bit rate files if he wants to.

Ioan: Yes, absolutely. That's a pretty good strategy to start with the lowest bit rate. If the player knows that it has a good connection already, maybe based on the fact that the Wi-Fi is available, then you could go with the highest.

Kirk: So it could make some assumption, "Hey, I probably have a good bit rate."

Ioan: Yes, yeah, yeah.

Kirk: Well, if it had the Wi-Fi at my hotel, it wouldn't have a good bit rate. All right. So once this adaptive bit rate stream is started, and again it's not a stream per se, it's a file, and then five seconds or some seconds later another file, and another file.

Ioan: Exactly, yeah.

Kirk: And the player knows how long it took to download that file.

Ioan: Yes.

Kirk: And it knows how long that file lasts so it can then do the simple math to figure out, "Hey, I'm not having a problem here," or "Ugh, it's taking four and a half seconds to download five seconds worth of audio, maybe I should ask for a lower bit rate."

Ioan: Right.
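Kirk's arithmetic can be sketched as a tiny rate-selection rule: estimate throughput from how long the last segment took to download versus how long it plays, then pick the highest advertised bit rate that throughput can sustain. This is an illustrative heuristic under simple assumptions, not the algorithm any particular player actually uses:

```python
def pick_bitrate(available_kbps, last_kbps, segment_seconds,
                 download_seconds, headroom=0.8):
    """Pick the highest bit rate the measured throughput can sustain.

    The last segment carried last_kbps * segment_seconds kilobits and
    took download_seconds to arrive, which gives an estimate of the
    connection's throughput. `headroom` leaves a safety margin so a
    brief slowdown does not immediately stall playback.
    """
    throughput_kbps = last_kbps * segment_seconds / download_seconds
    usable = throughput_kbps * headroom
    candidates = [b for b in sorted(available_kbps) if b <= usable]
    # If even the lowest rate exceeds usable throughput, take the lowest.
    return candidates[-1] if candidates else min(available_kbps)

# Kirk's example: 5 seconds of audio took 4.5 seconds to download.
# At 64 kbps that's ~71 kbps of throughput, ~57 kbps after headroom,
# so the player drops to the 32 kbps rendition.
rate = pick_bitrate([32, 64, 128], last_kbps=64,
                    segment_seconds=5, download_seconds=4.5)
```

The same function, fed a fast download time, would climb back to the 128 kbps rendition, which is exactly the hands-off behavior Kirk describes: the user never fiddles with anything.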

Kirk: Point is the user doesn't have to fiddle with anything.

Ioan: Exactly, that's the beauty of it.

Kirk: If you're listening to an HLS or adaptive bit rate stream and your audio goes away, you probably just don't have any data available at that point, right?

Ioan: That's right.

Kirk: Okay. All right. The other interesting thing is many corporate firewalls in the past have restricted streaming media, television, Skype, radio stations, audio streaming media. And adaptive streaming works like HTTP, like a file transfer.

Ioan: Exactly. Since it's just an HTTP download, as long as you can browse the Internet, you should be able to listen to HLS.

Kirk: The firewall doesn't know the difference.

Ioan: It doesn't.

Kirk: This may be a way to get around those evil IT administrators. I'm not making any friends in the IT department, am I? But it's interesting to note that that means this kind of streaming will work from any hotel room, where, oftentimes, you do have restrictions.

Ioan: Yes.

Kirk: Wow, cool. The uptake of adaptive multi-rate streaming right now, I guess my Android phone will do this. Any Apple product will do Apple HLS, Safari.

Ioan: QuickTime.

Kirk: QuickTime. Actually, I haven't had the best of luck with iTunes yet, I don't know why.

Ioan: VLC will also play adaptive.

Kirk: Okay. All right. When you're listening to an adaptive stream, now we talked about making three or maybe four different bit rates available. Would you use the same codec at different bit rates? Let's say you had a couple of lower bit rates. Could you use an HE codec for those and use regular AAC for the higher ones? Can you switch codecs?

Ioan: You can switch codecs but, of course, this depends on the player.

Kirk: Oh, okay.

Ioan: Some players will have absolutely no problem with that. Some other players may not switch quite right.

Kirk: Chris Tobin can chime in on this. One of the difficulties of low bit rate encoding, Chris, as you know, has been that the codec can be optimized for voice, as the Skype codec and telephony codecs typically are, or it can be optimized for music, as typical psychoacoustic codecs like MP3 or AAC are. And at low bit rates, they tend not to work so well for the type of audio that they're not designed for. At higher bit rates, it doesn't matter so much. So if you've ever been put on hold at a company and they're playing music on hold and it ends up sounding like white noise, you know that the codecs between you and it are not designed for music. You've experienced that, Chris, haven't you?

Chris: Oh, absolutely. Yeah, in the early days of streaming, at the places I worked, we discovered a few things that were not good at low bit rates. We did learn and come up with workflows where we decided whatever we put into the pipe, we need to make sure it's properly, if you will, processed for the bit rate, whether it's high or low bit rate decoders at the far end. So I think broadcasters now need to start thinking seriously about how they process their audio for voice and music so that they can avoid the white-noise music-on-hold effect that you get at the low bit rates. Because the new AAC stuff, the low bit rate stuff, is really, really good, so if you put the right stuff into it, you should be able to get through most of your low bit rate environments.

Kirk: One of the other things though that falls apart is, let's say you're doing a sports broadcast and you happen to be using some codec that's more intended for voice, but there's a lot of crowd noise. Well, crowd noise, to a codec, is a bit indistinguishable from music. So crowd noise can really tear up the audio, because there's a lot of wideband energy there, whereas voice is much more staccato and narrow bandwidth.

Well, the point I'm getting to is this, there's a new codec out from Fraunhofer. It's an extension of HE-AAC, it's called xHE-AAC. Boy, I think they need to come up with a better moniker. That is hard to say, xHE-AAC. I think Fraunhofer worked on this for years to try to figure out how do we get voice or music at a low bit rate. And I first heard this codec, oh, my goodness, probably four, five years ago at the AES convention in New York. And they were demonstrating it down to about 24 kilobits per second and it sounded pretty good.

And the key to it was instead of trying to make one codec work for both, they actually analyze the audio coming into the encoder and determine whether it is predominantly voice or predominantly music. And they actually switch encoders instantaneously, based on the type of audio coming in; you can't hear it switch. And it can switch just, I don't know, frame by frame, I suppose. So it's optimized for the type of audio coming in, and then it can get down to a lower bit rate and still give you a pretty reasonable experience. So this xHE-AAC codec, I understand that we've got this... we're putting this into some of the products at Telos.

Ioan: Yes. The first one to have this is the R2; we showed that at IBC.

Kirk: Oh, you've already showed it.

Ioan: Yeah, and X2 and 9X2 are getting that as well.

Kirk: Okay. So we want to stream at higher bit rates, and I guess this xHE-AAC isn't just for lower bit rates; it scales fully up to the 300 kilobits per second area.

Ioan: Right.

Kirk: Okay. So it sounds as good as any other codec at that bit rate. But when you get down to 16 kilobits per second and regular AAC or HE-AAC is going to sound pretty rough at that point, xHE-AAC will sound better.

Ioan: Yes, absolutely.

Kirk: Okay. There's a radio station in Norway, it's Radio Haugaland, I believe.

Ioan: Yeah, you pronounced it better than I did.

Kirk: Did I? And you can download their app, it's actually for Android. I don't know if they have an iOS version, but they have an experimental 16 kilobit per second feed. And you're not stuck with 16 kilobits per second. They're forcing it to that low bit rate to let you hear what's possible at that low bit rate. And I've got to say, I'm not sure I'd listen to it all day long, but it's not bad. If I was driving in the car with my connected dash and all of a sudden got into a really bad data area, and I was listening normally at 128 kilobits and it had to slide down to 16 for a little while, it wouldn't be bad. I mean, I'm still connected, still listening. Care to make a prediction about the demise of traditional streaming, and adaptive streaming being all we have? Do you think we'll reach that point?

Ioan: I'm not very good with predictions. I'll let you do that, Kirk.

Kirk: I'm a big fan of adaptive. So many advantages for the user. One other advantage I want to talk about is this idea of audio replacement. One of the horrible experiences that I just suffer through is when I listen to a stream of a station that either has to replace some commercials due to AFTRA considerations or due to other licensing considerations, like, okay, we can't play this old Paul Harvey five-minute broadcast or we can't play the Rush Limbaugh Show on our stream, we have to replace it with some other audio. Those cutaways, especially for the spots, those cutaways and rejoins are usually so ragged. And the level is different, the texture is different, they're not processed the same, and it really sounds like a high school, bush league radio experience. What does technology allow us to do that we've put into the Telos encoding products?

Ioan: Well, we do a number of things. Since we do audio processing, we'll take care of equalizing levels, first of all, but we also have the ability to do sample-accurate switching of program content. The products will allow you to switch at a pre-defined time. You can actually specify in advance the precise time, and you do this using PTP time. So you can specify the precise time when you want the switch to occur, and in that case, it will be sample accurate. And you have the ability to switch to another audio stream or another audio feed, or you have the ability to switch to a file playback.
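The sample-accurate scheduling Ioan describes comes down to a bit of arithmetic: given a clock both sides agree on (PTP, in the products he mentions), a scheduled wall-clock time maps to one exact sample index. The function below is a hypothetical sketch of that conversion, not the actual implementation:

```python
def switch_sample_index(stream_start_s, switch_time_s, sample_rate=48000):
    """Convert a scheduled switch time into an exact sample offset.

    Because both ends agree on the clock (e.g. via PTP), the switch
    lands on the same sample no matter where it is computed.
    """
    if switch_time_s < stream_start_s:
        raise ValueError("switch time precedes stream start")
    return round((switch_time_s - stream_start_s) * sample_rate)

# A switch scheduled 2.5 seconds into a 48 kHz stream lands exactly
# at sample 120,000.
idx = switch_sample_index(stream_start_s=100.0, switch_time_s=102.5)
```

This is why specifying the time in advance matters: the encoder can line the cut up on a sample boundary instead of reacting late to a trigger.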

Kirk: So where could the switching take place, player, the CDN server in your encoder? Where are we talking about things happening?

Ioan: This is happening in the encoder.

Kirk: In the encoder, okay.

Ioan: Of course, with metadata, you have the ability to switch downstream, but then you have, you know, some additional issues that you have to take care of. You have to make sure that the inserted content is also processed equivalently.

Kirk: Right, it wouldn't be processed. If it was downstream, it wouldn't be processed by our processor.

Ioan: Exactly.

Kirk: But at least you can now get... Why was some of the stuff so ragged earlier? What was wrong with the technology in making such a ragged cutaway and rejoin?

Ioan: Well, one of the issues is that the cutaway was done on a stream, on a bit rate stream, for example, MP3. MP3 has the issue of frame back pointers. I'm not going into the details, but the point is that when you actually cut a frame you're actually missing some of the data of that frame, which kind of introduces audio artifacts and little squeals and little glitches in the audio.

Kirk: Sure.

Ioan: So you cannot just cut and insert an MP3 stream.

Kirk: Which is why with MP3, if you lose a packet, you don't just have dead air for that time. You may have introduced a squeal or other weird artifact in the preceding or succeeding audio.

Ioan: That's right, yeah. Now, AAC is a little better. As long as you cut on frame boundaries, you can do a little bit better. But even there, you would want to do it the right way. And the right way is to actually do the switching prior to the encoder.

Kirk: Yeah.

Ioan: What we do is when you switch from one audio stream to another, we do a crossfade between them to make sure that the audio levels are appropriate and you don't have audio glitches this way.
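The crossfade Ioan mentions can be sketched as a simple linear ramp between the outgoing and incoming feeds over a short window. This is only an illustration of the idea, assuming equal-length sample windows; real products often use equal-power curves rather than the linear one shown here:

```python
def crossfade(out_samples, in_samples):
    """Linearly crossfade two equally long sample windows.

    The outgoing feed ramps from full level to silence while the
    incoming feed ramps up, avoiding the click a hard cut would cause.
    """
    n = len(out_samples)
    assert len(in_samples) == n and n > 1
    return [
        out_samples[i] * (1 - i / (n - 1)) + in_samples[i] * (i / (n - 1))
        for i in range(n)
    ]

# Fading from a constant 1.0 signal to silence: the output starts at
# full level and ramps smoothly down to zero.
faded = crossfade([1.0] * 5, [0.0] * 5)
```

Done before the encoder, as Ioan recommends, the faded audio is then processed and encoded as one continuous program, so there are no frame-boundary artifacts at all.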

Kirk: Chris, what's been your experience with this problem of audio replacement? I can see if your automation system itself was doing that, you could probably not have too many problems. If you take care of your timing, your audio levels should be good. Downstream does seem pretty tough, though. And I know that for some companies that's been their model, thinking that it was okay or good enough, when really it's been pretty challenging to listen to a lot of times.

Chris: Well, I think, unfortunately, ad insertion in general, whether it be in radio or television, has too many [chiefs 00:51:47]. I can tell you from working at facilities where we did our own ad insertion at the automation level, the experience, the audience engagement, was similar to what they expected on the radio and on the stream. When we moved to, I'll call it, IP-based ad insertion, exactly what you said occurred. Strange things: timing was off, some audio did get clipped, other audio may have had some artifacts, and the experience and the engagement went down.

I don't think it's going to change anytime soon unless a lot of folks who do ad insertion at that level really come to realize it's important that the end user, or end listener or viewer, really does pay attention. And once that happens, with the newer technologies, even AAC, you can do the crossfade technique that Ioan is talking about, which makes total sense, and then you'll get to see and hear improvements. Things will change, but that's a toughie. Trust me, I worked in a place where we were doing great numbers, the Comscore numbers, everything was what you wanted, and then somebody got a bright idea and said we can make more money supposedly by doing it this way and doing the classic geo-tagging, and, well, things just never were the same after that.

Kirk: I got a feeling that that downstream switching, though, probably does have an opportunity to get better. I know that there are some standards. Probably the biggest player in that industry has set some standards to try to make that work better.

Ioan: Yes, yes, that's absolutely true. By paying attention to the overall loudness of the audio coming in, as well as the replacement audio, you can do much better in switching.

Kirk: It would be interesting to see how that comes along. As an engineer, I think pretty linearly and I'd love to see replacement audio happen before the audio processor and just have one encoder, not, I guess, stream splicing. I would love to see that, but we'll see where they go. If stations make a bad decision about that and end up with a bad-sounding stream, well, they suffer, as Chris said, and the listeners show it.

Wow. Hey, we're going to wrap up our show in just a few minutes. Hopefully, Chris Tobin will have the tip of the week for us. We'll pay attention to that and, also, I wanted to see if there's anything else about this new product, the Z/IPStream R2, that you wanted to mention before we get off of that subject. I think of it as multiple transmitters. If you have a cluster of radio stations in a market and you want a solution, man, it just seems like here's one appliance that does all these things. And I guess we already hit on it a little bit: you can produce your legacy streams as well as get into the new world of adaptive bit rate streaming.

Ioan: Yes.

Kirk: Yeah.

Ioan: There are a couple of things that I wanted to mention. One is that once you bring the audio in, you can process it in different ways, you can encode and send to a SHOUTcast-style or Icecast-style server, you can contribute to Akamai or any of the other CDNs. You can also do adaptive at the same time. The R2 also includes support for Triton Digital, so if you're a Triton Digital customer, you don't need that separate computer or separate box. You can just use the R2 and you're set to go. Finally, I also wanted to mention the dual power supply. I'm kind of happy about that.

Kirk: Yeah.

Ioan: It makes the product very reliable.

Kirk: And a lot of folks want dual power supplies in case one of the power supplies fails. But a better reason for a dual power supply is if you've designed your engineering plant to have two AC power sources. Maybe one is on one UPS and the other is on another UPS, or one's on commercial power and the other's fed by a generator, whatever it may be. Then you can not only have two power supplies but two power sources.

Ioan: Yes.

Kirk: Well, that's a real benefit. And if you lose power in one of them, the R2 will let you know.

Ioan: Yes, it will.

Kirk: It will absolutely scream at you. All right. Hey, our show has been brought to you in part by the folks at Axia and Axia Livewire, a terrific architecture to build studios with. And Chris Tobin has a great deal of experience with that and I want to let Chris tell you about the benefits that he's personally felt by using audio over IP to build the studio. Chris?

Chris: For those of you watching, you know what I just said. For those of you listening, I can tell you I've got a great iQ, and that is an Axia iQ console. I tried something different for the ad, how is that? Well, you know what, we're talking about IP, we're talking about things you can do in the workflows and how things have changed. So everyone is aware of Axia Audio and Livewire, the protocol, and how things become much better.

Believe it or not, when I introduced a facility to IP technology and the concept of routing audio around the plant on a wire that only had a couple of pairs in it, and it was actually balanced audio but it was encoded in packets, basically the statement to me was, "Well, it either works or you're fired." And that's a true statement. So we did make it work, and it worked flawlessly, and we did a lot of things that we didn't realize we could do. First, the workflow changed. We could suddenly be in places that before required an ISDN line, maybe POTS. And then there was the idea of, "Well, what if we do a remote broadcast and we need to change something? Oh, yeah, how do we do that? We're in the field." Oh, you can do that too over IP. So you can do a lot of little things.

We've all witnessed the Axia ad, so we'll talk about Pathfinder, and Pathfinder can be programmed to do many things. And one of the nice things about it was you could remote in and all of a sudden control your IFBs for that remote broadcast. From the remote side itself, maybe you had other remote locations feeding into the studio and somebody says, "Hey, the levels are off." You could, from your primary remote location, log in to your system and make adjustments. You could switch program buses, you could move things around while you're at the remote location. So now you're not tethered to a physical plant.

So I used to call it distributed audio. So basically, you now have distributed your audio workflow across the planet. So depending on where you are and what you're doing, you can do a lot of things. And I say this because, in Hilversum, Netherlands, the broadcasters there some time ago decided to lay fiber, interconnect studios remotely, and use IP to bring uncompressed video back to a central point to produce programs. Interesting, isn't it? Well, NEP, the production folks, you may have heard of them; if you're in the TV world, you definitely know them. They bought a company in the Netherlands and they have now begun using a similar workflow in some of the places they're going to be here in the States. They have offices around the world.

So, from the world of audio, the video and IP, all of a sudden the workflow is no longer tethered to one location. You could be almost anywhere you can imagine. And then you say to yourself, "Well, what's that got to do with Axia and Livewire?" Well, if you know anything about SDI video, you know about the SDI xNode.

Now, I say this because today's radio plant, we'll use the word plant, is more than just audio. It's a multimedia extravaganza because you're doing web. What does the web have? Oh, video. How do you get the video to the web? In most cases, using a professional camera. What kind of output does it have? Oh, it's that SDI thing, because it's got embedded multiple channels of audio. "But I'm an Axia plant. How do I make this work? Am I going to have another box, another desk, a switcher, and de-embed and figure out how to get the audio into a node?" That's right. If you have an Axia network and the SDI xNode, you now have a multimedia plant on one platform.

Oh, but maybe you're an AM/FM or TV facility and you're all owned by the same place, but for years TV had their Grass Valleys and their [inaudible 00:59:47] and SSLs, and the radio stations had their other stuff. And when it came time to bring things back and forth for programming reasons, well, that was an interesting mix between the two. Here we go again, IP. And then you have AES67. It allows you to bring in multiple different types of things. So now, all of a sudden, your imagination is no longer limited by that piece of wire, physical wall, or anything else you may be running into. So now with one platform, you can do audio, video, and great programming.

So think back to how you do stuff and think about IP, and the audio over IP of Axia Livewire gives you the opportunity to really look at many things. Video, audio, the mix of the two, consoles that give you the ability to do almost anything, it's all software-driven. We'll call it the software-defined console. You've heard of SDNs? Well, it's an SDC. So now you've got an SDC and an SDN with the video as well. All of a sudden, you've got acronyms like crazy. Oh, you could feel like you're in Washington with all the acronyms.

"Hey, we've got a three-letter plant." Well, we've got more than three letters but, well... You see what I'm saying? You're already thinking. I can tell you're already thinking back in your head, what? Oh, by the way, have you ever heard of a television broadcaster called Telemundo? They've just recently introduced to their plant, their facilities, ENG trucks that are based on IP. That's right, using Ka-band uplinking; it's all IP, from the video, from the truck, up and around.

So let's think about this. You're a Telemundo facility, or not. Maybe your facility has a Ka-band uplink because you're experimenting with it, and you've got an Axia plant back at the facility. You suddenly have the ability to take that video, bring it back into the SDI xNode, break it out, bring it across your audio network, and send it wherever you like. And you've done it all through IP, all the way from the start, from the OB van, all the way back, and then off to wherever you like.

And then you do streaming, oh, that's right. You can do a Livewire plug into your streaming box, the R2, and all of a sudden you're still on a platform that's IP, but it's a common platform, and the workflow goes from the camera or microphone in radio, through all your systems, out to wherever you like to go, all in one platform, both video and audio. So, see? In one short moment, we've discussed video, audio, multimedia, and the ability to bring back the engagement that you need for the audience, both in visual and audible capabilities, all based upon IP.

Kirk: Ioan, I'm not sure what Chris just said, but it sounds pretty amazing, doesn't it?

Chris: Well, think about it if you can.

Kirk: Yeah.

Chris: That's based on Axia Audio Livewire.

Kirk: I hadn't thought about using the SDI xNode, which we really haven't talked about much in a radio plant, because you're right, radio is getting more video. And we're going to see SDI xNodes from Axia in radio plants, aren't we?

Chris: Yes. And a lot of folks are probably saying, "Well, we use web cams with the USB connection, blah, blah, blah." Yeah, but I bet you 90% of folks were trying to do professional video with that web cam, breaking it out to an SDI card to get it into their switchers and move it around. We'll call it the cheap version or the economical version, the do-it-yourself version. It's a web cam into a computer, into an SDI card, out to your network.

Well, here, I'm talking about taking your SDI cameras, so you spend $800, get a nice camera, plug it right into an SDI xNode and you're off to the races. I mean, total cost of ownership is actually better than a PC or something that constantly breaks like you mentioned earlier, Kirk. And you've got the fans cooling in that, the power supply goes, it's crazy. I mean, this is fanless-type technology. You need something more reliable and you've got to be on the go, so this is the way to do it.

Kirk: I'm convinced. This IP is the coming thing.

Chris: It is.

Kirk: Yeah, it sure is. Thanks a lot. That's a great endorsement for Livewire and IP audio in general. And, of course, I think the way that Livewire does IP audio is so convenient because of the stuff that goes around it, being able to make routes easily. In fact, just today, we had an automation computer fail on us at my radio stations in Mississippi and we needed a way to make some switches. And the way things failed, we didn't have a button to do that. And so, hey, from here in Cleveland, Ohio, I remoted in and literally in 35 or 40 seconds had us back on the air just by hitting the right button. And the better news is I showed the general manager what I had done. So he says, "Yeah, you know I could do that again if I had to. I could switch it back if we need to do that again." That IP system is amazing stuff.

Chris: That's exactly it. That's what I was trying to get at. Yes, it was a different approach and an unorthodox way of selling an item. We're talking about it, endorsing it and what we want to say, but it's more about what you can think about, think of how to use it in the workflow. That's what it's about.

Kirk: Yeah.

Chris: You've got to engage the audience, you want people to watch and listen, you'd better do it right.

Kirk: Well, thanks a lot to you, Chris, and thanks to Axia for sponsoring This Week in Radio Tech. It's time we talk about a tip of the week or so, and I've got something that I actually want to ask Ioan about, a tip that I've used quite a bit. Sometimes when you set up a streaming encoder and you send your stream to a SHOUTcast server or, more generally, a content distribution network, you don't always get all the stuff right. Maybe your login credentials are wrong, maybe you typed your mount point incorrectly, or maybe you've got everything running correctly and you get a call that says, "Oh, your stream is down." And you test it and, oh, yeah, sure enough, the stream is down. But is it the CDN that's down or is it my encoder that's down? One of the things that's built into at least our products, the Z/IPStream products, is a built-in confidence monitor, a little SHOUTcast type of server. What I do at my radio stations is put these on really odd port numbers, remote ports to go in, and I've got them on my phone. Actually, you know what, I've got them saved in TuneIn. I can install TuneIn on any phone, log in to my account, and my secret URLs are there in my preferences, my preferred radio stations. So I can go check on not the CDN stream but my stream that I'm sending to the CDN and see where the problem is. What do you think about that?

Ioan: I think it's awesome.

Kirk: Genius, isn't it? I know you meant to say genius. Thank you for building in this confidence. And you know this little confidence server built in the R2 product, that can serve a bunch of people, can't it?

Ioan: It sure can. It can easily serve hundreds if it needs to.

Kirk: Okay. And even in our smaller product like the Z/IPStream R1, the little one-rack unit box, that can serve a couple of dozen people without any sweat.

Ioan: Right.

Kirk: Cool. All right. Chris, did you happen to come up with the tip of the week for us?

Chris: I have a tip, yes, actually two tips. The first tip is more and more AES is popping up in plants, so invest in a product that can give you AES audio out of a speaker in a handheld device. There are several brands out there that can do it. I'm not going to say which ones. Just remember that, because I ran into an incident recently where somebody was testing equipment and they realized, "Oh, the only audio in and out is AES." I was like, "Oh, you'd better have that little box." So that's the first tip.

The second tip is something I recently came across. A friend of mine had an extra IP codec and he didn't know what to do with it. It was a low-end, simple little box; they no longer use it for their major events. So I suggested, "Why don't you put the output of your mod monitor into that box, and using Lucy software, you can remote in, so to speak, on your phone and check in on the station. And since it's off the mod monitor, if you get a call that you're off the air, you can listen to that feed." I know it sounds crazy, but I've been testing it. I've been using it on my phone and listening. I'm like, "You know what? This is pretty cool." And it's just something if you have a box laying around. If not, you could even use what you have, maybe a second channel or return channel on it, and use the same concept. Those are my two tips.

Kirk: It's great to be able to disambiguate what the problem is, because so often, sometimes anyway, as engineers we'll get a call saying we're off the air, but where's the problem? And I'm a seven-hour drive from my radio stations in Mississippi, so I've got to be able to tell remotely, or at least narrow down, what's causing us to be off the air.

Chris: Right. And a lot of folks are going to say, "Well, remotely, I could do that now in my phone. I can dial in." Yeah, but if you're at dinner or at lunch and you get this call, you can quickly call up the app, listen right away and go, "No, we're not off the air. You must have turned the monitor off in the studio." You could do the same thing with the output of your stream too. So consider that.

Kirk: Yeah. Good deal. Well, Chris, thank you so much for being with us on This Week in Radio Tech. I appreciate your insight into streaming and where we're going with it and your experience with it and also your tips. So I appreciate it very much. Our show has been brought to you by the folks at Lawo, also by Z/IPStream and the new Z/IPStream R2, and by Axia and the whole world of Livewire audio over IP networking that, hey, engineer, fellow engineers, makes your life a whole lot easier. Our guest has been Ioan Rus. Ioan, thank you so much for being with us.

Ioan: Thank you, Kirk.

Kirk: I appreciate you explaining a lot of these details of streaming, both regular and multi-rate streaming, and the metadata that goes along with that.

Ioan: It was good to be here. It was a lot less painful than I expected it to be.

Kirk: Well, let's go smoke a cigar, how about that?

Ioan: Sounds good, sounds good.

Kirk: Thanks a lot to Suncast for producing the show and to Andrew Zarian for founding this terrific network, the GFQ network where you'll find lots of terrific shows. We'll see you next week on This Week in Radio Tech. Bye-bye, everyone.

Telos Alliance has led the audio industry’s innovation in Broadcast Audio, Digital Mixing & Mastering, Audio Processors & Compression, Broadcast Mixing Consoles, Audio Interfaces, AoIP & VoIP for over three decades. The Telos Alliance family of products includes Telos® Systems, Omnia® Audio, Axia® Audio, Linear Acoustic®, 25-Seven® Systems, Minnetonka™ Audio, and Jünger Audio, covering all ranges of audio applications for radio and television, from Telos Infinity IP Intercom Systems, Jünger Audio AIXpressor Audio Processor, Omnia 11 Radio Processors, and Axia Quasar Networked Broadcast Mixing Consoles to Linear Acoustic AMS Audio Quality Loudness Monitoring and the 25-Seven TVC-15 Watermark Analyzer & Monitor. Telos Alliance offers audio solutions for any and every radio, television, live event, podcast, and live streaming studio. With Telos Alliance, “Broadcast Without Limits.”
