Audio watermarking for audience measurement (Arbitron/Nielsen PPM®) is in the broadcast industry news. Third-party research is offering more detailed measurement of watermarking's effectiveness on differing audio content. Further, this research is demonstrating that improvements in reliable watermarking detection are possible. We hear from MIT Ph.D. Dr. Barry Blesser, along with Geoff Steadman, President of 25-Seven Systems.
Intro: This Week in Radio Tech, Episode 256, is brought to you by Lawo and the crystalCLEAR virtual radio console. CrystalCLEAR is the radio console with the multi-touch touch screen interface. By the Telos Z/IP ONE, the world's most advanced IP audio codec, and by the Axia Fusion AoIP mixing console. Fusion, where design and technology become one.
Kirk: Hey, welcome in, it's This Week in Radio Tech, our 256th episode. This is the show where we talk about radio technology, everything audio, from the microphone to the light bulb at the top of the tower and everything in between, too. It's getting to be where we're talking about a lot of digital stuff and streaming, and technologies that were hardly even around a few years ago. I'm Kirk Harnack, the host of the show, and along with me is one of our usual co-hosts from... I don't know where, I have no idea where in the world is Chris Tobin, the best-dressed engineer in radio. Hey, Chris, how are you?
Chris: Well, hello, Kirk. Today I'm in a radio station's wire room, 66 blocks behind me, and I'm just playing with an Eimac 4CX300A Tetrode, for those of you who pine for the days of vacuum tubes.
Kirk: What kind of transmitter would that be out of, likely?
Chris: That came out of a 10 kilowatt — I think it was a Harris — tube transmitter. This was the IPA stage, intermediate power amplifier stage.
Kirk: It puts out in the neighborhood of 300 watts or so, and that feeds probably the grid of the PA tube.
Chris: Yes. Yes.
Kirk: The PA tube has about 18 or 20 dB of gain, probably, in a 4CX tube.
Chris: Yeah, about that. Yeah, it would be probably like a 4CX1500 or a 4CX3000, I forget what the other... no, 5CX3000.
Kirk: Yeah, 5CX3000.
Chris: Yeah. Good stuff.
Kirk: A 4CX, how about a 4CX5000? That was a real popular tube.
Chris: That was a popular tube. Yeah, I played with a lot of the 1500s for the 3 kilowatt stations.
Kirk: Oh, yeah. Yeah, yeah, yeah.
Chris: But they had an A and a B.
Kirk: A and a B. Yeah. Yeah, yeah.
Chris: And there was a very big difference between the two.
Kirk: Yeah. There's a lot of variability. But as we mentioned, the 4CX5000 tube, and man, it seems like I took care of a bunch of those slightly bigger transmitters, those were very consistent, to me, from transmitter to transmitter and tube to tube. They just behaved about the same. Those things, those 4CX5000s were solid. They were hosses.
Kirk: Well, we’d better move on, or it's going to be This Week in Tubes.
Chris: Ah, This Week in Tubes is not a bad thing, but yeah, most folks will have no idea what we're talking about. That's okay.
Kirk: Also joining us and the subject of our show today, really, is about audio watermarking, and we’re really glad to have with us a fellow by the name of Geoff Steadman, president of 25-Seven Systems. Hello, Geoff. Welcome in.
Geoff: Kirk, Chris, how are you? Nice to be here.
Kirk: We're glad to see...
Geoff: It's strange to see myself on a delay here. I do have the dump button behind me in case I say something bad.
Kirk: In case you say something bad?
Chris: It's the Internet. You can say anything you'd like.
Kirk: Oh, that's right.
Geoff: That's right. We're not regulated by the FCC here, I guess.
Chris: No, our HD2 contracts haven't started yet, so we're good.
Geoff: [Inaudible 00:03:49], right?
Kirk: As president of 25-Seven Systems, you have overseen and probably initiated the development of a number of products that 25-Seven builds. I think most engineers would know you guys for profanity delays. As you alluded to, there's a dump button behind you. Can you give us a little of the elevator speech about 25-Seven Systems?
Geoff: Sure. Well, in some ways I am fond of saying that 25-Seven Systems was the delivery mechanism for a pre-existing team. The crew that makes up our group, we'd all worked together at various audio companies for decades, really. We came up with the idea for Audio Time Manager, which was our first product, and basically it was, “Let's get the band back together. Let's get the band back on the road, back in the studio, and see what we can do.” That was the excuse to form the company.
We were an early Livewire partner. Actually, we were the first hardware manufacturer that became a Livewire partner with Audio Time Manager. From there, we really started getting to know the folks at Telos. We produced several products before... as you probably know, skipping forward several years, we became part of the Telos Alliance just at the beginning of 2013, but we've been working with the folks in Cleveland for many, many years.
Time compression was really our main claim to fame, and ATM was the first product we put out. And then PDM, which has really become the defining profanity delay out in the marketplace today, so we're quite proud of it. It's actually been on the market since 2008, but that's sort of by way of biography.
Kirk: So if my understanding is right, the claim to fame, the intellectual property that has helped 25-Seven get its products together and come out with some innovative things, is this notion of time management, stretching and squeezing audio over time in a way that is basically imperceptible, so that radio stations can do what they need to in the time domain with getting audio programming out to listeners. Isn't that pretty close?
Geoff: Yeah, that's close. I mean, I'd say that that's kind of the piece of hard-to-do tech that we really started with in terms of bringing something to the table. I wouldn't say that it's a limitation. We'd all worked doing other things. There's a deep engineering bench and a lot of history underlying that. That's actually what really allowed us to take on the whole Voltair initiative. But yeah, I think it's fair to say our roots are in... You know what, actually, biographically, what I'll tell you is if you remember the Orban Odyssey, and actually, you can see one right behind me here...
Kirk: Oh yeah, sure. The Orban Odyssey, sure.
Geoff: The Orban Odyssey team was really the 25-Seven team. It was a great group, and so really, we needed an excuse to work together again, and that's where the genesis of this company really is.
Kirk: So there's a bit of your roots. We're going to tease a little bit and then take a commercial break. But what we're here to talk about on this show today is about the next technology that you guys have been putting to use, and that is to understand watermarking and understand how to monitor it and maybe how to improve it so that, well, we'll get into that explanation in just a minute.
By the way, I should also give full disclosure. Our show is brought to you in part by the folks at the Telos Alliance, and 25-Seven Systems is part of the Telos Alliance. Since Geoff is also part of the Telos Alliance, he's actually my boss. He's one of my bosses, so we've got to treat him nice here on the show. As we always would.
Hey, our show is brought to you in part by the folks at Lawo. You may know Lawo for making big, fancy German audio consoles, and also for being a proponent of the RAVENNA AoIP audio system. Well, the folks at Lawo... that's spelled, by the way, L-A-W-O, so if you want to check out their website, go to L-A-W-O, lawo.com. You ought to take a look at their radio products for radio stations, smaller mixers. The crystalCLEAR console is something that we were introduced to a little bit less than a year ago by Mike Dosch. He's the director of virtual radio projects at Lawo. They showed this console again at the NAB show just a few weeks ago.
The crystalCLEAR console is a bona fide radio console, and it works as so many consoles do nowadays. The architecture is that there is a DSP unit in the rack. That has all the audio inputs and outputs on it, and it has a network connection for a RAVENNA or AES67 AoIP network. Plus there are power supplies and the DSP is in there, and so you have this rack-mount unit, it's 1RU. It's got a couple of mic preamps built in, XLR connectors, you plug them right in. It's got some connectors for stereo analog line-level audio. Plus AES3 inputs and outputs are on the back as well, and the network connections are on the back, too, along with the power-supply connections.
Now, for the control of this console, you can get the Lawo Crystal console, and that's a traditional-looking radio console, sits on your desk, it has physical faders. But the crystalCLEAR console is this new idea in a console that is a multi-touch touch screen, and on the touch screen is running an app that looks like a console. In fact, it looks really nice. Because it's multi-touch, you can move two or three faders at the same time. It's a 10-touch touch screen. You can move 10 things at once if your brain and fingers work together that well.
Because the console is a totally software-defined console, there's no hardware they have to work around, so every button, every function on the virtual console is totally contextual. It doesn't have to do anything that isn't related to exactly what you're doing right then and there. The console plugs together like a traditional console. It's got mic inputs and line inputs and line outputs and AES in and out. It's got all that stuff.
It's got GPOs on it for tally lights and if you want to mute the speakers and things like that. Actually, speaker muting is built in when you turn a mic on. You define how each input behaves, and you define the outputs, where they're going to come out.
Check it out if you like. If you have the notion that a multi-touch touch screen would be a pretty cool interface to run a console, then you really ought to check out the Lawo crystalCLEAR console. Go to the Web, go to Lawo.com, look for radio products, and look for the Lawo crystalCLEAR console. There's a video on the page about crystalCLEAR where Mike Dosch totally goes through the operation of the console, shows you how it works, and I think you'll be intrigued. Thanks to Lawo for being a sponsor for This Week in Radio Tech.
All right. Just before the NAB show, the folks at 25-Seven and the Telos Alliance came out with some news about a product that had apparently been sold a little bit privately, a little secretly, amongst friends, and that is a product called the Voltair. That got the interest going in just how does this PPM, how does audio watermarking work, and how do radio stations get measured in the top markets, those markets where the audience is measured not by diary but automatically and electronically, with little pager-like devices that listen to the environment and are aware of what audio their wearer is being exposed to.
The idea is to get that accurate and get it right, and in hopefully any reasonable kind of audio environment, whether it's noisy — you've got the car windows rolled down, you've got kids screaming, you're nice and quiet all by yourself — whatever it may be, we'd like to be able to measure audience. So the whole idea of this audio watermarking relates to measuring audience.
I just want to mention in preface, for engineers who are wondering, "Well, why is this really important? Is it so important?": it's important in terms of a lot of things, but a lot of it comes down to money. Millions of dollars of advertising, and the respect of the radio industry, are placed based upon how many people are listening.
By measurement or by diary or whatever methodology is used, it's important that the industry be able to present to advertisers, people who are going to spend money on advertising, just what the audience is. Do we know the age range, do we know how many times a day they listen, do we know how long they listen? If we're going to do any kind of measurement, we want it to be accurate.
People in radio have complained for years and years and decades that diary measurement may not be all that accurate. It depends on people remembering what they listen to. Telephone callout, which used to be done, sometimes the mom would answer the phone, and say, "Oh, you want to talk about who's listening to the radio. Here, let me let you talk to my kid," and they'd pass the phone off to somebody else.
For quite a few years, scientists have tried to figure out how can we automatically measure listening? The science of audio watermarking was developed and advanced quite a ways with this. So first of all, we have an NAB paper that was videotaped, and we're going to watch that.
Geoff, is there anything else you want to say about setting up this talk from Dr. Barry Blesser?
Geoff: Yeah, Kirk. Let me just go back to sort of the 10,000-foot view here and give a little bit of context about Voltair, because there's been a lot in the news, and it may seem to some that this all of a sudden happened and came on the scene in an instant. For someone who's really spent the last four or five years of his life working on this, I can tell you that there's a deeper story, and I think it bears mention, and it's this.
Before 25-Seven was part of the Telos Alliance, we had a number of engineer friends come to us, and one in particular said, "Can you help me explain the following? I've got a show host, and the alarm light's always going on during her show. But the phone lines are lighting up. We know that she's got a great following. It's a great program. All the management believes in this host. Yet we don't think the ratings are really tracking reality here, and we're wondering about some of the nuances."
This was really our on-ramp in terms of starting a study and delving into some of the technical properties of the currently used watermark system. At the time it was from Arbitron. We did a bunch of research, we published a white paper in 2009 that really made the rounds. We explored a lot of publicly available information and we developed some theories based on our own science. We knew there was something there, that there was kind of a worthy project.
What I'd say is that 25-Seven as constituted at that time wasn't up to the task in terms of creating what is now Voltair. We needed some help, and when we started talking to Frank Foti and the crew at the Telos Alliance and we were in negotiation to become part of Telos, that's where we started to understand that in a parallel universe, on the Omnia side of the world, there was a similar inquiry, born of similar questions, taking place. In a sense, Voltair was really a place where we combined forces. It's a 25-Seven-branded product, but it's really the first product that is a collaboration of Telos companies and talents as well. There are some very, very deep benches, as you well know, in terms of processing know-how, manufacturing, and audio expertise.
When we started working on Voltair really in earnest was in 2012. We'd done a lot of research, but that's where we really started working hard on it. About a year ago, actually more than a year ago now, we actually had our first units on the air, and we were testing them and trying them out, and we had pilot units. We didn't really want to talk about it because we wanted to really do our homework. We really wanted to give this the unobstructed attention that it was due without having sort of the big circus of, “Ooh, ah, look at our hot new product.”
As our tests started looking better and better, we spent several months really hardening this product so that when we were ready to hit the ground, we would hit the ground running. We put Voltair out into a number of test markets, and we said, "Here's our theory, here's what we think is going to happen, here's the behavior, here's the fact that we've had many months of trouble-free runtime. Why don't you guys try it in your setup," because we all know that the world of broadcast is the laboratory. You have a lot of equipment here, but in no way can this office approximate the complexity of the broadcast ecosystem, with all of the codecs, all of the sound sources, all of the formats, all of the programming decisions, all the STLs, all the transmission. It's a big, big ecosystem. It's not any one single thing, so we really wanted to vet this product.
We put it out last summer with a few friends, and they wouldn't give it back. What came in were P.O.s. We didn't do a big, loud, beat-the-drum press release on it, but we put it on our website the first week of September of last year. We were public, we just weren't crowing. When it got time for NAB, we had these papers in process starting in October (you need to apply for these with NAB quite early), and that was really the point at which we would become very, very public. But leading up to that, in some ways it became a secret that no one wanted to talk about but everybody was talking about in broadcast. So yes, by the time we got to NAB this year, we had over 300 on air.
When we watch Barry's talk, he's talking more about the science behind much of his contribution to this project, and probably less about that sort of high-level, “What does this knob do and how do you run it?” But what I do want to impress on your audience is how much science and know-how went into this, and the models we built and the field experience we've had, and especially the response from real broadcasters who saw real effect.
We don't have access to their ratings data. They don't share that with us. But we think every time someone orders one or more and is back for the second or third or even fourth time, we're taking that as a vote of confidence that yes, they're seeing some results and the effort and value in what we've created here with Voltair.
Chris: Uh-oh, we have no audio. You there, Kirk? Somebody finally shut Kirk's mouth, I guess.
Geoff: Oh, that's not a nice thing to say, but then I guess you could say that.
Kirk: How about that? Is that better?
Chris: Much better.
Kirk: Gosh. Jeez. Been doing this for years, you forget you've muted the mic. So Chris Tobin, were you in the audience when Dr. Blesser gave this talk?
Chris: I did not get to be there live. I did watch it on YouTube, though. It was a very nice talk.
Kirk: Okay, well, great. Oh, you've seen it. Well, it's a review for both of us and brand new for most of our audience. Let's go to Dr. Barry Blesser at NAB with this paper about audio watermarking. Go ahead.
Monitoring for Ratings: Putting Yourself in Your Listener's Ears
Observations on Radio's Watermarking Ecosystem
Dr. Barry Blesser, Director of Research, Telos Alliance
Barry: Thank you, everyone. I'm delighted to be here. I'm especially delighted because this is one of the most unusual episodes in my half-century career, and it's for the first time I get to share it in the public space. So a quick short history.
It began six years ago, when an industry colleague approached 25-Seven and said, "Something appears to be wrong with one of our announcers in terms of the ratings. Can you come look at it?" At the time, I had never heard of watermarking, and I knew nothing about PPM. I said, "It's a waste of time," but they dragged me along. I listened to my colleague, and my curiosity was highly stimulated, and I said, "I want to figure out what's going on."
So 25-Seven, with my guidance, did a pro bono research investigation trying to make sense out of it, because I love a challenge. We eventually published our research on our website, and it's a white paper, and then all manner of discussion started. The phone didn't stop.
We became a public clearinghouse. Radio engineers shared their experiences with me, including anecdotal insights, audio samples. Callers told me that Arbitron had the reputation of not listening to feedback, shutting down dialogue, and they wanted somebody to listen. Shortly thereafter, I was invited to be a consultant to the House Oversight Committee hearings to determine if Arbitron's PPM technology discriminated against minority stations. They didn't. They discriminated against the formats.
Now, six years later, I can share what I learned. After being purchased by Telos two years ago, 25-Seven turned their knowledge into what we now call the Voltair product. My goal has always been to provide our industry with knowledge and tools so they can proactively manage their relationship to audio ratings. They are a major stakeholder. Knowledge is power. Knowledge requires tools. Knowledge requires science, and knowledge requires transparency.
Today, I'm going to be presenting my analysis of watermarking as a scientist, without regard to the implications of what I'm going to say. Science is reality. The technology I'm going to describe and the analysis may or may not be appropriate for a particular application. It may or may not satisfy people's expectations. I have nothing to say about expectations and applications.
However, by necessity, if I'm actually going to look in-depth into how this systems works and how to analyze it, I'm going to assume the audience has a reasonably significant technical background. Some of you may get some of it. I will try to do the translation. I'd like to ask for those people that actually understand it, when you go home to your organizations, you translate it for the benefit of those who do not have the technical background required to appreciate the implications.
As a side note, PPM, which is a registered trademark of The Nielsen Company, belongs to a larger class of audio watermarking systems based on multi-channel, multi-level frequency-shift-keyed encoding with perceptual masking. My analysis applies to any system within that class. I am not picking on any one particular embodiment of this technology.
For months, Telos and Nielsen have engaged in meetings and discussions to find a win-win path for all the relevant stakeholders. Along the way, Eric Rubenstein, Nielsen's chief legal counsel, asked me to read the following statement. "Telos is working with Nielsen to support their efforts to test the effects of Voltair on audience measurement with the results to be validated by the MRC." Now we can do some technology, which I always enjoy more than the politics.
Radio's Watermark Eco-System
Okay. The first thing to understand is that watermarking is a system. It has to be analyzed as a system. This is a cartoon-like description of the system. It starts with program management selecting content. It goes out into your transmission chain, including watermarking. It then goes through the transmitter. It comes to the receiver. It goes into some acoustic environment, and some decoder and ears pick it up.
It then may go to a computer, which may do other kinds of data manipulation, which I call Assembly Rules and Edit Rules. It's a system. We need to understand this is a system. It's not trivial, even though it looks like it is, because it's a time-varying, nonlinear, two-channel system and there are multiple versions of this around in the entire ecosystem.
Let's now take a look at... by the way, I'm compressing 5 years of research into 30 minutes, and even though 25-Seven is a time-compression company, I'm not going to be successful. I could try talking faster. Okay, the first thing to understand is we in the industry are managing a dual-channel system. We forget there's a second channel because we never had any visibility into the second channel, and we never had any ability to influence it, so we just quietly forgot about it, but let's now look at this second channel.
Dichotomy of Communication Signals
The top line here is your main channel that you're all familiar with. Somebody creates some speech or music, you have human listeners, and there's noise corruption, which may correspond to watermarking tones and the environmental noise. The purpose of this channel is to attract listeners. Okay. You know that.
The second channel is a watermark encoder, which creates its own signature payload. Its target is not the listener, its target is a portable decoder. The type of signal is frequency shift tones in multiple channels. There could be other technologies, I'm just talking about this one. In this case, the key, the noise corruption is the music and speech and the environmental noise. It doesn't care. The purpose of this is to register listeners.
You've got to think about it as a two-channel system, because your programming is to attract listeners, and the second channel is to catalog their existence so you can get advertising revenue. Doing one without the other gets you nothing. If you have a perfect watermarking system and no listeners, you're off the air. If you have a zillion listeners but the advertisers think you have none, you're off the air. So to think about the two channels, and that's really the purpose of my talk.
Okay. So let's start looking at the second channel. I'm going to be using what I call the common language of the second channel, and it's a time frequency spectra. Most of you are comfortable with time domain representations a la your oscilloscope, and you're comfortable with frequency analysis. There's a combination hybrid which is the central key. You have to look at time and frequency amplitude simultaneously.
This is a typical message that might come out of a watermarking frequency shift-keyed system. You can see here these big lobes. If I could turn this around, you'd see they're sequential. At any given time slot, there's one frequency, and then at the next time slot there'd be another frequency. You can look at this... any sophomore in college could do a signal processing technique to extract this. You could get the watermarking payload perfectly if this was the picture. Unfortunately, this is not the picture.
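The "any sophomore could do it" decode Barry describes can be sketched in a few lines. This is only an illustration of the clean case, not Nielsen's actual scheme: the sample rate, the 400 ms symbol slot, and the little four-tone alphabet are all hypothetical values chosen for the example.

```python
import numpy as np

FS = 48_000     # sample rate in Hz (hypothetical)
SYM_S = 0.4     # one symbol per 400 ms slot (hypothetical)
# Hypothetical four-tone alphabet; real systems use more levels
# and their own (unpublished) spacings.
FREQS = [1000.0, 1002.5, 1005.0, 1007.5]

def encode(symbols):
    """One steady full-level tone per time slot, as in the clean picture."""
    t = np.arange(int(FS * SYM_S)) / FS
    return np.concatenate([np.sin(2 * np.pi * FREQS[s] * t)
                           for s in symbols])

def decode(signal):
    """Correlate each slot against every candidate tone and pick the
    strongest; a matched-filter / DFT-bin detector."""
    n = int(FS * SYM_S)
    t = np.arange(n) / FS
    payload = []
    for i in range(0, len(signal) - n + 1, n):
        seg = signal[i:i + n]
        energy = [abs(np.dot(seg, np.exp(-2j * np.pi * f * t)))
                  for f in FREQS]
        payload.append(int(np.argmax(energy)))
    return payload

payload = [2, 0, 3, 1]
assert decode(encode(payload)) == payload  # trivially recoverable when clean
```

With full-strength, full-duration tones the payload falls right out of the signal. The rest of the talk is about why the real, masked, noise-corrupted signal is nothing like this clean case.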
Okay. Now, that picture I just showed you before wants to be injected into the audio stream. Well, guess what? If you just injected it at full level, nobody would listen to your programming. I had a demo of what it sounds like. It's really uninteresting. It's just tones, steady-state tones, for half a second and then the tone changes a little bit in frequency. So we use the principle of masking.
Masking in Time & Frequency
Masking says if you have a large audio signal, in this case the square block with some kind of envelope, the ear becomes temporarily deaf before it, believe it or not, even though the signal came before, and after it, and at frequencies above and below this. So in other words, you cannot hear anything under this quilt that I've laid on top. You're deaf. So this is a perfect candidate for two activities, and I'm going to mix together the fact that there are two activities involved in this.
Obviously, the watermarking system can stick tones and other signaling under this, because you can't hear it, and so it's great. You just can't hear it if you get under it. Just as a complete jump to the next lecture some years later, it turns out this is related to the codec industry because they do the opposite. They sit there and say, “Anything under this blanket I don't have to encode. I'm just going to throw it away.”
So oddly enough, watermarking and codecs are first cousins of each other, and very often they will fight each other for this turf under the blanket. One says, “I'm going to throw it away,” and the other says, “I'm going to put it in,” and so it opens up a new class of things to think about. Who owns the under-the-blanket area?
Example of Time-Frequency Map
Okay. Now, let's look at what happens in this time-frequency spectra. This is a typical piece of audio that might be broadcast. It doesn't look familiar because most people don't look at it this way. Now, if you go back to the previous slide with the blanket, the watermark encoder takes that blanket and has to lay it over this complex time-frequency map of the audio program, and it has to figure out where in this soup can I stick the tones?
Now, oddly enough, perceptual masking, which is what I'm describing, has been around for a very long time. I was taught that in the '60s as a student. Unfortunately, the academics were not interested in what we're interested in. They were doing a scientific modeling of the auditory system for masking, and they were using clicks, sine waves, tone bursts, all these scientific signals that were easy to reproduce and easy to describe. To my knowledge, there is no literature that tells you how masking works with real audio as in speech and music. There is no literature I've been exposed to.
However, there is a massive quantity of research, especially in the last 10 to 15 years, in the codec industry. There, the research, unfortunately, is proprietary. It's not academic; they don't publish it. But they have really been pushing the envelope in understanding how to get the masking to take advantage of these complex codecs.
The same thing should be done for watermarking, because it's the same issue: I want to understand where to stick it. If you were the encoder, and I gave you this picture and said, "Okay, where are you going to stick the tones?" you might scratch your head. This is a non-trivial challenge for the designer, and you can't escape it. I have not heard anything about the upgrades to various watermarking technologies in terms of including new masking technology. I don't know how available it is in the public space, but certainly there are about a dozen different organizations actively engaged in studying how to do this.
Okay. Let's move on. Now it gets really exciting. Remember I said before the frequency shift keying is a matter of putting out a sine wave at a frequency. In a typical frequency-shift-keyed system, say with 16 levels, you might have 16 possible frequencies. They might be spaced 30, 50, 100 hertz apart, and one of these signals, which I call a symbol, may go on for 400 or 500 or 600 milliseconds.
Full Duration 400 ms Symbol
If you look at this piece of spectral content, it has a nice peak at 1 kilohertz, as you'd expect. I give you a half-second of 1 kilohertz and I say, what's its spectrum, and you say 1 kilohertz. It's easy. What I've shown here in these blue lines in this hypothetical watermarking system might be neighboring target frequencies. The decoder has to determine: is it 995.5 hertz, which has one meaning, is it 1,000 hertz, or is it 1,004.5 hertz? It's got to decide, because in some of these technologies, the choice of tone frequencies is very closely spaced.
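A back-of-the-envelope way to see why that discrimination is hard: the frequency resolution of a tone burst is roughly the reciprocal of its duration. The sketch below just prints that rule of thumb; the 4.5 Hz spacing comes from Barry's hypothetical 995.5/1,000/1,004.5 Hz example, not from any published spec.

```python
# Rule of thumb: a tone burst of duration T seconds has a spectral
# main lobe roughly 1/T Hz wide, so candidate frequencies spaced
# closer than about 1/T apart cannot be told apart reliably.
for T in (0.400, 0.100, 0.025):        # symbol durations in seconds
    print(f"{T * 1000:.0f} ms burst -> ~{1 / T:.1f} Hz resolution")

# With tones 4.5 Hz apart, you need a burst of roughly
# 1 / 4.5 Hz, about 220 ms, or longer to separate them cleanly;
# a 100 ms or 25 ms sliver simply cannot resolve the difference.
```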
100 ms Duration Symbol
Well, we know that any sophomore could figure out what tone was being transmitted. However, remember I said we've got a masking system which is not leaving that tone on; it's turning its amplitude up and down. What happens if I take that 400-millisecond signal, and let's say there's no masking for 300 milliseconds, and it comes out as a 100-millisecond burst? What does the spectrum look like?
Well, two things happen. The amplitude drops dramatically. In this case, it drops by about 14 dB, and the signal-to-noise margin in the neighboring channel goes from a really nice 15 dB to two-point-something dB. In other words, the margin of tolerance to extraneous junk, whether it's junk that came along with your program or junk that got introduced in the listener's environment, goes down.
25 ms Duration Symbol
And guess what? If the channel is producing a 25-millisecond sine wave representing the 1 kilohertz, there is no technical way anything can decode that. You have a fraction of a dB of margin, and unless you had a pristine system with no interference of any kind, that signal is lost forever. It is undecodable.
1.0 kHz Tone (400, 100, 25 ms)
Okay, just to give you a perspective, here are those three cases I just described. This is the 400, this is the 100, this is the 25. You can see very clearly; these are real signals with a real spectrum analysis. Basically, this one gets flattened out; there's nothing left. So the duration of the signals becomes the key piece for decodability.
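The effect of symbol duration on spectral level is easy to reproduce. Here is a minimal sketch, assuming a 48 kHz sample rate and a simple rectangular gate standing in for the masker (both are my own arbitrary choices, not from the talk, and the exact dB figures depend on them; this sketch lands around 12 dB of loss per 4x shortening, in the same ballpark as the roughly 14 dB figure cited above):

```python
import math

FS = 48_000        # sample rate in Hz (an arbitrary choice for this sketch)
F_TONE = 1_000.0   # the symbol frequency discussed in the talk

def tone_level_db(burst_ms, analysis_ms=500):
    """Magnitude (dB) of the 1 kHz Fourier component when a 1 kHz burst
    of `burst_ms` occupies part of a fixed analysis window; the rest of
    the window is silence, as if the masker had gated the tone off."""
    n_total = int(FS * analysis_ms / 1000)
    n_on = int(FS * burst_ms / 1000)
    re = im = 0.0
    for n in range(n_on):
        x = math.sin(2 * math.pi * F_TONE * n / FS)
        re += x * math.cos(2 * math.pi * F_TONE * n / FS)
        im -= x * math.sin(2 * math.pi * F_TONE * n / FS)
    mag = math.hypot(re, im) / n_total
    return 20 * math.log10(mag)

# The three cases from the slide: shorter bursts, much lower peaks.
for ms in (400, 100, 25):
    print(f"{ms:3d} ms symbol: {tone_level_db(ms):6.1f} dB")
```

The point of the sketch is only the trend: every time the masker chops the symbol shorter, the decodable level at the target frequency drops sharply.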
Injected Tones: Singer + Piano
So what happens in the real world? This top thing... I'm sorry the picture's not so good... that's a singer with a piano. It happens to be my son, so I don't have any copyright issues. You got to worry about it, I know there are lawyers in the room. My son signed the waiver.
Okay, now, if you say, "Let's go back to that time-frequency map," and ask, "How much of Channel 1, which is around 1,000 hertz, could be injected?" It's going to analyze this for how much masking can be produced using whatever masking algorithm. The first thing you notice is that the amplitude is low, but it's also been amplified by 20 dB, and you notice these watermarking tones are going up and down, sometimes very rapidly. In other words, the information is being turned on and off, but from the previous slide, if I don't leave it on long enough, I get nothing. When you get up to Channel 5, it's really low. It's there. Some of these encoders can code down to minus 70, minus 80 dB. You just can't get it to the other side.
Sparse Time-Frequency Map
Okay. Now, there's a class of signal out there which has a time-frequency spectral analysis which shows there's nothing up here... 1 to 3 kilohertz, you've got these tiny little needles, and the encoder says, "Oh, I can turn a tone on, oh, I've got to turn it off, oh, I can turn it on, I can turn it off." You get these little needles. As anybody knows, if I give you a half-millisecond of 1 kilohertz, it sounds like a click because that's what it is. So this kind of program material will not encode.
Now, at this point you might say this sounds pretty bleak. Well, there's another side to this, which is very optimistic. The typical systems have massive redundancy. So the same message may be sent out thousands of times in a 15-minute interval. You don't have to get many of them to assemble it. So you get Symbol 1 at this moment from this channel, Symbol 2, you put it together and you assemble it, you're great. The system can have massive errors and work perfectly.
What's the assumption? The assumption is that these errors are random. If they're really random, like Gaussian noise, everything will work fine. Unfortunately, statistics can lie if you don't have the assumptions right. The statistical assumptions about all program material taken together are one thing. The statistical assumptions about your typical male announcer with a cold are completely different. Each piece of program material has its own statistics. You can't take global statistics and apply them. If you happen to be unlucky, like the Hispanic stations in 2009, their statistics worked against them.
Zoomed Map of Difficult Audio
Now, you can say there is energy, and if you do these maps you can often see a lot of energy, but they're needles. If you get needles, you're out of luck, or rather the decoder is out of luck. If you zoom in on one of these things, you get an even more graphic picture. This is a zoomed-in version of a real signal off the air.
Now, if you happen to be an encoder at 1 kilohertz, all you care about is this stuff, and as you can see, it comes along, turns on, and immediately turns off. Then it's off for a while, comes on, turns off. So you'll get stuff coming out of the encoder, you'll get stuff arriving at the decoder, but there's no theoretical way that anybody can separate 1,000 hertz from 1,005 hertz. The mathematics will not allow you to do that, because the time-frequency product is always approximately 1. So if you want 5 hertz resolution, you need 200 milliseconds. You just multiply the two together, and if the product is significantly less than 1, you're SOL. Sorry for the foul language.
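The time-frequency product rule cited here (duration times frequency resolution is roughly 1) makes for a one-line calculator. A sketch, using the 5 Hz / 200 ms example from the talk; the 25 ms case ties this back to the "needle" symbols discussed earlier:

```python
# Time-frequency rule of thumb: resolving tones spaced df hertz apart
# requires the tone to persist roughly T >= 1/df seconds.
def min_symbol_ms(freq_resolution_hz: float) -> float:
    return 1000.0 / freq_resolution_hz

def best_resolution_hz(symbol_ms: float) -> float:
    return 1000.0 / symbol_ms

print(min_symbol_ms(5.0))        # 5 Hz resolution needs 200 ms
print(best_resolution_hz(25.0))  # a 25 ms needle resolves only ~40 Hz
```

With only 40 Hz of resolution available from a 25 ms burst, telling 1,000 hertz from 1,005 hertz is hopeless, which is the mathematical point being made.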
Okay, so let's do a quick summary of what's going on. You have to follow the time-frequency spectrum through the system, both on the second channel and on the main channel, because the main channel is what turns on the second channel. You have to look at the program material in terms of its masking and what kinds of tones are going to be injected. You have to examine the spectrum of the watermarking tones.
Then at the decoder, you have to consider the fact that the program itself is overlaid. So the same program that turned on the watermarking tones is also sitting there to clobber them, so it's kind of an arms race. Then you have to consider the listener's acoustic environment. The question is, can the decoder find enough tones — it doesn't have to get a lot — to assemble them into a plausible message, or even an approximate message, such that the edit rules can figure out what's going on.
Okay. We came up with a product called the Voltair, and its purpose in life is to do two things. It's to give you some visibility... this is an emulation. I had no access to anyone's intellectual property. I just sat there and looked at the signal coming out of the encoder and said, "Okay, what would I do?" Then I came up with the fact that there are theoretical limits. So the Voltair is actually based on the assumption of doing the best that is theoretically possible. It doesn't tell you that that's what a particular system is actually doing.
Watermark Monitoring & Processing
What you have up here is before processing, after processing. By changing the time frequency spectrum relationship between the signal and the watermarking in the critical frequency region, you can get dramatically better results. I was going to talk about this, but you can grab somebody in our booth later if you want to know more about it.
Tones in 10 Channels: Robust
Okay. When the system is working well, as in you choose the audio perfectly, you do the Voltair enhancement, you have a quiet environment, everything is perfect, this is what you get for the watermarking signal. You'll notice that in this case every channel's got energy. It's a bit chopped up, but it's mostly on, so this channel's got a little bit of an envelope. This will decode perfectly. You'll get hundreds of accurate messages, no problem whatsoever. The system can work great if the assumptions are there.
Okay. Now I'm going to jump to what do we want to do with all this. This is all academic. Now you say, "How do I make sense out of this in the real world?" We're not interested in a particular listener, even though I've only simulated one listener in the Voltair. You're interested in your hundreds of possible listeners out there, and you're trying to figure out what enhancement to use, or you've got a more-or-less knob on your processor and you want to say, "How much should I do?"
Probability for Single Decoder
Well, if you were to take all the listeners that you're getting credit for, or let's say you take an individual listener, and what I'm going to call Parameter X — it could be anything in your chain — and you plot it: as Parameter X goes up, you get 100% probability of getting this listener. As Parameter X goes down, you get a zero chance. Depending on the program material, depending on the acoustic environment, depending on everything including the phase of the moon, you'll get some S-like curve which says at one extreme you get nobody, at the other extreme you get 100%. Nobody's surprised.
Sensitivity Parameter Change
Okay. What you get from the mother ship is all your listeners collapsed together, and you could do a mental experiment and say, "Okay, I've got a possible 100% of decoders out there. What percentage will I capture for a given level of Parameter X?" You don't know, because none of us have access to it. But if you make a measurement here, you notice you've got so many listeners, and if you make a measurement here, you get a different number of listeners.
Well, in the Voltair we built in a GPIO, which allows you to toggle from one value of enhancement to another. You could also do a GPIO on your processor. Anything you wanted.
Now, you could toggle this every odd and even minute. You take all your odd minutes, let's say a value of 12, and even minutes, a value of 13, and then weeks later maybe you get the results. You now do an analysis, and the beauty of this, because you're toggling every minute, you don't have to deal with changes in school vacation, you don't have to deal with weather, you don't have to deal with all the anomalies out there, because the world stays relatively constant over one minute. It doesn't stay constant if you do an experiment one week, and then six weeks later you try a different experiment.
If you do the statistical analysis, you can now draw a straight-line approximation. You know where you are on the curve. You know if you're up here, such that if you crank up more Parameter X, you're not going to get anything. Or you know if you're down here, you're in deep doo-doo, or if you're up here you can then decide. But I wanted to go further.
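The odd/even-minute toggle described above is an interleaved A/B test, and the analysis is straightforward once the ratings data comes back. A sketch with synthetic per-minute decode counts (the means, spreads, and setting values are invented purely to illustrate the method; nothing here comes from real panel data):

```python
import random

random.seed(1)  # reproducible synthetic data

# Hypothetical per-minute decode counts for two enhancement settings,
# toggled by GPIO every minute. Setting "13" is simulated with a
# slightly higher mean; all numbers are invented.
minutes_per_group = 7 * 24 * 60 // 2   # one week, split odd/even
odd_minutes = [random.gauss(50.0, 8.0) for _ in range(minutes_per_group)]   # setting 12
even_minutes = [random.gauss(53.0, 8.0) for _ in range(minutes_per_group)]  # setting 13

mean_12 = sum(odd_minutes) / len(odd_minutes)
mean_13 = sum(even_minutes) / len(even_minutes)

# Because the settings alternate every minute, slow-moving confounders
# (weather, school vacations) hit both groups equally and cancel out.
print(f"setting 12: {mean_12:.1f} decodes/min")
print(f"setting 13: {mean_13:.1f} decodes/min")
print(f"estimated lift: {mean_13 - mean_12:+.1f} decodes/min")
```

The design choice is the whole trick: toggling fast relative to the confounders is what makes the comparison fair, exactly as argued above.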
Approximating Total Decoders
I said, all right, let's do a three-point GPIO system, where you make a measurement based on the minute number modulo 3, and you get three data points. Guess what? With three data points, we can approximate a parabola. It's extrapolating. It's not truth, but you can actually approximate what you would get for the percentage of listeners had you set Parameter X over here.
Now, maybe you don't want to set Parameter X up there, but you now have a measurement tool, using Voltair, that will give you an approximation for what percentage of listeners could you have gotten had you manipulated Parameter X. It doesn't have to be just X. It can be Parameter X, it can be Parameter Y and Z. You can do all the experiments. You can do an experiment like this with your STL codecs. You can do this with your processing.
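Fitting a parabola through three measurements is classic Lagrange interpolation. A sketch, with invented data points standing in for the three enhancement settings (the settings and percentages are hypothetical, chosen only to show the extrapolation step):

```python
# Evaluate the unique parabola through three measured points at x,
# using the Lagrange form.
def parabola_at(points, x):
    (x1, y1), (x2, y2), (x3, y3) = points
    return (y1 * (x - x2) * (x - x3) / ((x1 - x2) * (x1 - x3))
            + y2 * (x - x1) * (x - x3) / ((x2 - x1) * (x2 - x3))
            + y3 * (x - x1) * (x - x2) / ((x3 - x1) * (x3 - x2)))

# Hypothetical decode percentages measured at three values of
# Parameter X via the three-way GPIO toggle:
pts = [(11, 62.0), (12, 68.0), (13, 71.0)]

# Extrapolate what Parameter X = 14 might have captured. As in the
# talk: it's an approximation, not truth.
print(parabola_at(pts, 14))
```

The flattening of this made-up curve (gains of 6, then 3, then nothing) is the kind of shape that tells you further increases in Parameter X won't buy more listeners.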
Voltair is allowing you to measure the entire system chain for this second channel. Remember, you're in a two-channel business. You're not just in the program business.
Conclusions & Follow-up Discussions
Okay. So let's do a quick summary of where we are. Radio watermarking is a complex ecosystem. It's not a box. A box is not a system. And all the stakeholders are involved in the total system. Nobody owns the system, because it includes all kinds of things. Nobody owns the program director. Nobody owns the music sources. Nobody owns the listener's environment. Everybody can be a stakeholder to improve it.
Audio processing tunes the program. Voltair can tune the watermark channel. It's a direct analog. You've got boxes that tune Channel A, and now you have a box that tunes Channel B. Programmers can tune the system by evaluating the content. It's now a level playing field for everybody to be a participant in the management of the system that's so important to them.
I did manage to squeeze it in. I know there are going to be lots more discussions, so I'm going to be available, because we're going to have another talk in a few minutes. I got the timing perfect for questions, which I wasn't sure I was going to. Thank you very much.
Kirk: All right. That was Dr. Barry Blesser, MIT Ph.D., and on the staff at 25-Seven, and does his own consulting as well. You're watching This Week in Radio Tech. I'm Kirk Harnack, along with Chris Tobin. He's joining us, and he's familiar with audio watermarking, having worked in a large market. Also with us is Geoff Steadman, president of 25-Seven Systems, and that's the company that has produced and is now making available this Voltair product to measure what we think is good decoding of audio watermarking and to enhance audio watermarking. We're going to be back and have questions for Geoff, and Chris will be with us here in just a minute.
I've got to take a moment and tell you about our sponsor, Telos. Telos, maker of the Z/IP ONE IP audio codec. Now this is a broadcast codec. It's intended for professional operation. I've got to tell you, the Z/IP ONE codec is a lot like a Swiss Army knife. I bet you hear that a lot. Boy, wouldn't you love to be the Swiss Army knife and get a dollar every time that was mentioned? The IP codec in the Z/IP ONE is designed to do a variety of different things.
First of all, it's designed to be easy to use. You can send an intern out to a remote location to do an outside broadcast, a broadcast remote, and hook into existing wired Internet, hook into Wi-Fi, hook into something like WiMAX or 4G LTE, and be able to go on the air and do a broadcast. That happens all the time. Right here in Nashville, where I am, we've got a radio station that does two or three full program-length music broadcasts every week.
The Z/IP ONE can also do other things. It can be, say, a temporary link between two cities. If you've got a disaster going on or news going on in a foreign city and you need to get reports back in, set up a couple of Z/IP ONEs. A lot of them were set up during, for example, the Sochi Olympics from Russia. Z/IP ONEs were extensively used.
Then another way to use them is as a full-time program link, like a studio transmitter link. Our friend Dave Anderson, who's been on this show before, is using something like 36 Z/IP ONEs to send program material to various transmitter and translator sites and to get audio from remote studios back to the home office, if you will. Some of these are working over dedicated links. Others are working over the public Internet. The Z/IP ONE is designed to do exactly that.
I've got a Z/IP ONE right here in the office. I'll show you how easy it is to use, if I can move the microphone out of the way. I'm going to make a quick call to another Z/IP ONE that's actually in Sydney, Australia. I punch the button, and right now the Z/IP server, a free service to use, is negotiating a port with Sydney, Australia. It's working on that, working on that, and in about 10 seconds it ought to make a connection, which I'm not sure I've got it brought up here.
There we go. Made the connection, and let's see if I do have it available here to pop it up. Ah, yeah. Here's the audio. Put that in front of the microphone. I know it's hard to tell quality across that, but that's stereo audio, low latency, coming from Sydney, Australia. Total time of that from Sydney is probably under 200 milliseconds one way, including encoding, decoding, and the whole trip. Of course, a domestic connection's going to be a lot quicker, not going halfway around the world.
The Z/IP ONE has saved many broadcasts, let's put it that way. I know of several, even from the Sochi Olympics, that the Z/IP ONE absolutely saved and enabled broadcasters to go on the air. It's in service 24/7 at a lot of stations. Check it out if you would. Go to the Web to TelosAlliance.com, and go to Telos and look at the IP codecs, and check out the Z/IP ONE. Now shipping with analog, AES, and Livewire audio inputs and outputs.
All right. We're back on the show. Chris Tobin and Geoff Steadman are with us. Chris, what are your thoughts about Dr. Blesser's talk, and what questions would you like to pose to Geoff to follow up on that good talk?
Chris: Well, I thought the talk was spot on, and I will say that many years ago, Geoff and I were actually talking about something of this sort on a street corner in Manhattan, near Fifth Avenue, somewhere near Mr. Neil's house, if memory serves me right. We talked about it, and then some time after that, I was doing experiments with audio processing and my PPM-encoded audio, watching the decoder function and not function, depending on where in the frequency range of about 800 hertz to 2 kilohertz I would process, and either destroy the audio, depending on who's listening, or enhance it or not. In the early days, you were trying to learn what PPM was about, because we were basically living and dying by it.
I thought his information basically backed up what I and a few others I worked with thought about it. Say you had a program director who was very concerned about PPM, and who had heard rumors that audio processing could possibly impact its ability to properly encode and decode. We did that many years ago, so it's good to see that somebody else is following through and is looking to find a way to, I guess, qualify the system that's being used to measure audience, because I know I've watched that encoder alarm box light flash on occasion with certain programming over the years, and I was fascinated by why it would happen when everything we knew or understood to be normal was where it should be.
Kirk: Well, actually now, let me turn some of that into a question for Geoff. Geoff, you know, plenty of engineers familiar with audio watermarking and PPM technology know that there's a monitor box that you get to match the encoder; you've got a decoder box. There's a light on there that's red or green and tells you whether the signal is there or when, at times, it's not there. Either it's failed or the audio that you're running is not allowing reliable coding. But it's kind of like... I wouldn't really call it a confidence light. You don't really know to what degree the reliability is out there. What can you tell us about the Voltair and its ability to monitor under simulated real-world conditions?
Geoff: Well, first, just to your point about the existing monitor. It's always been the beef of engineers that it was considered better than not having anything, but it's not really sufficient for any nuance, given what's riding on it. This really was a major part of the Voltair development. If you think about it, the Voltair is really two boxes in one.
One is a real-time monitor system where you can really get a very rapid understanding of how well things are encoding and what the encodability is of different types of audio. I joke to a lot of people, and you're probably tired of hearing me say this, but the best format for radio for successful watermarking would be the white noise format and the dead air format would be the worst. Then between those absurd examples, you've got sort of everything in between. But that's really sort of the world we live in. The variations, the variability, the different spectral content, the different periodicity of audio makes a huge difference.
I think stations have always wanted that granular view of what's taking place so they could take action if it's just a matter of rearranging the clock, or changing out some processing. In some ways, Voltair has become a tool that we can use to build other tools, because now we do have some visibility.
The other thing I'll say about existing confidence monitoring is that in the sense that you're monitoring an off-air signal, but you're not running it through a transducer, you’re not running it out in the environment in which a listener would be demodulating radio, in other words, being several feet from a sound source or turning it up or turning it down, depending on their own personal taste. It's like a straight wire. It's like listening through headphones. There's no acoustic space introduced in that.
That's one of the things we've built into our monitoring system in Voltair was this simulator to basically place that audio content in an acoustic environment. It doesn't affect the audio output. It's a simulator. It's to get a sense of how is my audio going to fare in a restaurant-bar kind of acoustic where the program is just 10 or 15 dB below the content. What's that fight going to look like out there in the signal-to-noise world? Those are a couple very important things that we built into Voltair on the monitoring side.
With that knowledge, of course, also came the idea that there were things that we could do to the audio to enhance it. I think the analogy of that two-channel world that Dr. Blesser was talking about where we have the human listener, and we've also got these very important meter listeners that are a sort of box, if you will. We're sort of processing to two different ends, and really the part of the discovery we can do now is what's the happy medium between those two different types of listeners, now that we've got some tools that we can do some investigation.
I think the response from a lot of our users has been this is some sunlight in what's been a dark room in terms of getting a handle on the very important signal that's going out over the airwaves.
Kirk: Geoff, I wonder if you could answer just a few questions on a yes/no basis, some questions that have come up in various discussion groups. I think probably yes/no is going to be the quick way to get through these questions, quick enough.
Is it the case that the Voltair just injects white noise to make audio encoding more possible?
Kirk: Okay. I take it yes/no, it's a bit more sophisticated than [inaudible 00:59:08].
Geoff: It's a lot more complicated than that, and listen, if it were that simple, everybody would have been doing it nine years ago.
Kirk: Is it the case that the Voltair can give a station ratings for listeners that it doesn't actually have?
Geoff: No way, because we can't conjure panelists up from thin air. If you do not have anybody listening to your station, Voltair's not going to help you. If you have poor programming and a poor signal and a miserable reach, there's not anything that we can do. What we're trying to do is, where you do have those panelists out there, make every minute with them count and get registered, and that's where we think we're really moving the needle. I get folks who think it's a knob that you can turn your ratings up with, and that's just a simplistic notion; we've never said anything like that.
Kirk: You know, it's really an interesting science, because we're trying to inject a signal into something that we're broadcasting that we don't want people to hear. We don't want anybody to be irritated by this technology, and yet we do want a device the size of a pager that they're wearing to be able to hear what is masked, what's underneath. We're fooling the human ear, but we're not fooling a device that they're wearing. They're two seemingly different goals, although the way Dr. Blesser explained it, there are things that we can do so that these are not at odds with each other. Because of the masking effect, we can do this.
Let me ask you one more question that's come up in chat rooms. Isn't all you're doing just some compression and level lifting in the 1-to-3 kilohertz area, which is the area where these audio watermarking tones are injected?
Geoff: Well, that's the channel, but it's a lot more complicated than that. Part of this gets into some of the secret sauce, and I would tell you, given the team that built this and the models that we built to create this product, it was a big reach. There's a high barrier to it. I know there's been a lot of folk wisdom out there over the years, about, well, if you EQ things in the 1-to-3 kilohertz range and add some reverb and stand on your head and howl at the moon, you'll get this effect six weeks later in your book. There were some attempts, I think, some paths that people went on with changing out their regular processing, that were effective. The problem was they never had a way to evaluate what that effect was, and that's actually one of the things that Voltair's helping us with now. I can tell you Voltair has created a lot of interest, and there are places that we can take it in terms of what we can learn next. A very exciting next chapter in what we're doing is the whole cross-section between Voltair-style processing and regular Omnia general audio processing. How can one inform the other for the best effect?
In summary, it's all a trade-off. In watermarking, it's a trade-off between the audibility and the robustness of codes, and there are so many factors involved that really, we get into a highly statistical game. You know, they say radio ratings is a game of inches, and in that game of inches, microns count. This is a place where we're seeing some real effect in terms of being able to actually tune some of these trade-offs and try to make the overall system better and more accurate.
Kirk: I want to hear Chris Tobin's last comment in just a minute, but I want to point out, or reiterate, something that Dr. Blesser said in his talk. He really pointed out this masking effect; he pointed out those spikes, those needles, he called them, in the time-frequency spectral display. He pointed out that there are opportunities in typical audio, but what's needed is time. You can't have a 2-millisecond spike of audio and expect to be able to put some watermarking behind that. It doesn't work.
When he showed the various spectral displays for putting in a tone over a certain amount of time, if the time is too short, it's just this big blob, and a decoder can't discern that from something else, so time is important. I think it's important to point out, as you did, that you can't just plug in some white noise. People wouldn't like that, and you can't just boost the audio in that area. You can do that, but is it necessarily going to solve your problem of not having enough places, enough opportunities, for the watermark to go in?
I guess the last thing I'd like to say about that is that what I hear Dr. Blesser say, and what you say, Geoff, is that the Voltair is a device that lets you measure, on a minute-by-minute basis, what the expected decodability of the watermark is, and it has tools in it that let you simply increase the opportunities for watermarking. It doesn't guarantee that you'll have more, but on most audio content types it can give you more opportunities for the watermark to go there, which you've got to have in order for it to get decoded in the first place.
Geoff: Yeah. I mean, it's visibility and it's some control, some strategies that you can take. Some strategies may simply be programmatic, but in terms of helping you debug your plant, we've had a lot of tales already of some real success stories from engineers just on that monitor front. Certainly the enhancement processing, changing that trade-off, it's a neutral, and I think we're learning. As we bring this out, a lot of people come to us and say, "Well you must have all the answers." No. Because again, the laboratory is the entire broadcast ecosystem with all of the variables, all of the different components that make up the complete system, but it's nice to have the opportunity to learn and measure, and things are just getting rolling. It's really a quite exciting time to be in broadcast.
Kirk: Chris Tobin, I imagine you've got a comment or two, and I know that one thing that engineers have stated publicly is that the monitoring part of a Voltair gives you the opportunity to evaluate how your own processing choices may affect the encodability and decodability of a signal. But I'm sorry, I'm putting words into your mouth. Go ahead and express your own comments.
Chris: Well, I will agree with that, and I will say that Voltair is definitely an opportunity to evaluate the system and get a better understanding of what's going out there. I also agree with Dr. Blesser in saying that this entire process is a system. It's not one component; it's from start to finish, a complete circle, if you want, and it's always been that way, from day one.
I think also, in the industry in general, it's time to start thinking about how you process your audio and why you need to process it in an arcane manner. If time is important, and it seems like that's always the case, then maybe it's an appropriate approach, and Voltair gives you the tools. As Geoff points out, it's not going to solve every problem you have. You don't have an audience? That's not something Voltair can help with, nor is it PPM's job. But at least it gives you a baseline or a benchmark to say, hey, this is interesting: it's not doing what it should because the masking can't be accomplished, because I'm doing something that affects the algorithm. It's a systematic approach. I don't think there's a silver bullet involved.
I know there have been a lot of comments. I heard several dozen interesting comments at NAB regarding Voltair and people's opinions of where it can and can't go and why some can and can't use it. But I think with PPM in general, given the science behind it, if you really pay attention to Dr. Blesser's speech and read the papers, both from 2009 and most recently from 2015, you'll come to understand why it's important, at least in the case of Voltair, from what I can see... and somebody else may come up with a box of a similar ilk and take a similar approach from a different perspective.
I think it's a smart move, and it's about time that we look at the encoder's ability to encode with a device that we can look at and say, okay, yeah, you may be missing the boat. Maybe not. Because that box in the rack right now with the little green light, quite frankly, I think you could fool it into thinking that everything is good and clean when the reality is it's not. It's just a tool. That's how I would look at it. I would take advantage and say, "Hey, this is something good to do."
Kirk: This Week in Radio Tech, we're talking about audio watermarking, the science behind it. I'm Kirk Harnack along with Chris Tobin, and Geoff Steadman is our guest. We're just about to wrap up. We're going to look for a final thought from our guest, Geoff Steadman, in just a moment.
Our show is brought to you in part by Axia, and the new Axia Fusion AoIP audio console. How did engineers at Axia come up with the Fusion? Well, they took the very popular... I mean there's 5,000 of them out in the field... the very popular Element audio console, and they chose the very best aspects of that. Then they changed the things that appealed to some broadcasters. Feedback we've gotten over the past 10 years, “Hey, we want an all-metal top, no plastic on the working surfaces. We want markings that will never wear off. We don't want any paint, no printing. We want markings that will be there 10 years from now, 20 years from now.”
The Fusion console incorporates a complete metal top. It's not sheet metal, this is all brushed aluminum metal that has been routed from blocks of aluminum. The end panels are metal. The markings are all laser-etched and then double-anodized. They're not coming off. They can't come off. The console will look as new in 10 years as it does the day that you get it.
It's also got something that users are finding very helpful, and engineers, too, and that is a high-resolution OLED display for every single channel. So not only are you looking at a channel number and a channel source, but you also get real-time incoming audio levels, so that's like a pre-fader level, and real-time outgoing audio level for mix-minus or a headphone feed. Any kind of back feed going out based on that channel that shows up on the OLED display.
The buttons' shape has been reengineered, although the internal workings have not, because we have found those to be extraordinarily reliable over the last 10 or 12 years now of building consoles with these buttons. The faders on the Fusion console, just like other Axia consoles, are side-loading faders. That is, dust, dander, dirt, things that fall in from the top slot, they don't go into the fader. They land on the top of it or they go around it, but they don't go inside the fader. Just a brilliant design from a high-end fader manufacturer.
Then you can get all kinds of modules that go into the Fusion, so you can customize the Fusion in literally tens of thousands of ways. You can make the Fusion exactly like you want it. You can make it fit your space, and you can give it the number of faders you need: a lot, a little, somewhere in between. You can give it a large, full-function monitoring module, or you can give it a smaller monitoring module. You can still get to everything, but you may have a couple more buttons to push.
Lots of options are available... oh, and built-in intercom. Intercom is totally available: intercom panels with 10 buttons or 20 intercom channels, all available for the Fusion console. It's an amazing console, and it is absolutely gorgeous. We've already got some installed in France, and in San Francisco in a gorgeous installation. We're going to do a little exposé on that in the coming months. Just beautiful.
You ought to take a look at it. The Fusion console. It's at the Telos Alliance website, TelosAlliance.com; look for Axia, and then look for the Fusion AoIP audio console. Hey, when you marry it up with the Axia xNodes, you've got RAVENNA and AES67 compatibility in addition to Livewire. It's the only AoIP system on the planet that gives you compatibility with these three different AoIP standards: Livewire+, RAVENNA, and AES67. Check it out on the Web at TelosAlliance.com, and look for the Axia Fusion.
All right, Geoff, this has been a very intriguing show, and I'm glad you and Barry Blesser have introduced our audience to the notion of, and some of the background in, audio watermarking. What final comments might you have about this subject, the science behind it, and where we might expect it to go in the coming years?
Geoff: Well, it's a wide open field. We are really just getting started. What I would like to say, I guess, in a kind of a closing way, is that technology's never finished. We are always working on continuous improvement and you can only do that with visibility and knowledge and science. That's what we're really striving for, and I'm really grateful to what's become my new corporate home, to have this opportunity to work on products like this and to try to make our industry better.
That's really what's at the core of all that we've done with Voltair, and where we're headed next, I think, on some of the topics and intrigue that will be spawned off from this. I can tell you right now that the intersection with our Omnia processing is following close on the heels of the work we've done for Voltair. It's going to be really exciting. I'd say check back with us, check back on the Telos Alliance website and see what we're up to.
Certainly Voltair's going to stay in the news, I think, for a while. It's been a little bit of a circus, and there have certainly been some real circus barkers out there, but it's some very intriguing tech and I'm proud to be part of it. I thank you guys for putting on this program.
Kirk: Geoff, I thank you for being with us, and thank Dr. Blesser for working on this as well. I just want to see radio broadcasters get all the credit due for listeners. If you're depending upon a system to measure listenership, you don't want to get 90%, 95%, even 98% of the counting done. You want to get credit for everybody.
It's like running for office. If I were running for councilman, I'd want every vote for me to be counted. In broadcasting, especially because it can mean a lot to the future, to the continuation of advertisers seeking out radio, and to the competitive situation, station to station, you want to make sure your station gets counted for every single listener. That's where I think this kind of technology, and understanding watermarking, goes a long way toward making sure that happens.
It's the best tech we know of right now, and if we can just make sure that we get counted for every listener, every listening experience, that is absolutely critical. Chris Tobin, you have a final thought on audio watermarking?
Chris: I think, again, it's just learn the system, understand the technology. It's been around a long time, it's been used in many places, and I think it's time to reevaluate your audio and transmission methods. As Geoff pointed out, technology doesn't stop, it just keeps evolving.
Why not? This is probably the perfect time for it.
Kirk: Cool. All righty. Thanks very much. Geoff, thank you for being with us from Boston, Mass. Appreciate you here, Geoff. Also thanks to Chris Tobin for being here. Chris is a consultant in the broadcast industry, both to radio and TV stations. Chris, if folks want to find out where they can hire your expertise, where would that be?
Chris: At email@example.com. I've had a few folks from British Columbia drop a note to say hi and that they like the show, so I'll just say hi back to them.
Kirk: Well, good deal. Geoff, I believe there is additional technical information on the Telos Alliance website. I believe there are one or maybe two white papers from Dr. Blesser there, and plenty of information about audio watermarking. Isn't that right?
Geoff: Yeah. There's stuff on their site, and there will be more content as time goes on. We're rowing as fast as we can go here. Yeah, check our website.
Kirk: And hey, viewers and listeners, if you know of other engineers, especially those who are in measured markets, PPM markets, be sure to tell them about this show. We do it live on Thursdays, and within a day or two the show is posted on the ThisWeekinRadioTech.com website and also on the GFQNetwork.com website. You can get it there. Usually a few days after that, it's also posted on YouTube, Dailymotion, and other places as well. So you can always find it online and review Dr. Blesser's presentation about the technology behind PPM.
Thanks very much to our sponsors, Lawo, the Telos Z/IP ONE, and the Axia Fusion console. Also, thanks very much to Andrew Zarian and the GFQ Network for bringing all of this to you and making the technology available to put this show together. I appreciate you very much, Andrew.
We'll see you next week on This Week in Radio Tech. Goodbye, everybody.