[Chad Davis] 12:02:51 Hey, everybody. Welcome to our February 2026 webinar. This is actually, believe it or not, our 25th webinar, which is not a big round number, but for us is still a little bit of a milestone, because it means we've been doing these now for over two years. So if you are on [Chad Davis] 12:03:11 live, uh, and you have access to the chat: one, please start adding in questions, because honestly, we're going to drive this by the chat if we get enough questions. I've got plenty to ask, but we will rely on the chat more [Chad Davis] 12:03:25 than on my questions. And two, tell us kind of where you're dialing in from, and say hi to everybody in the chat. So I'm Chad Davis, Chief Innovation Officer, Nebraska Public Media. This webinar is being brought to you by Public Media Innovators. [Chad Davis] 12:03:42 Um, and we also thank NETA for supporting Public Media Innovators. And you'll notice up here Amber Samdahl, my very capable co-host. She's gonna be right in the chat today looking for questions while I engage with our panelists. [Chad Davis] 12:03:57 Um, she also does all of the editing and all that stuff after the fact. So, in terms of extra housekeeping: Public Media Innovators has a newsletter. Maybe you want to subscribe to it; if you do, drop a link in the chat. We've also got a website. You can check that out as well; a link will show up in the chat. And I think that covers us for housekeeping. So now I want to thank [Chad Davis] 12:04:22 our folks from NPR. You know, Amber and I both come from visual communication backgrounds, and we always talk, when we do our weekly planning meetings, [Chad Davis] 12:04:31 about how, gosh, we just need more radio representation in this group. So this is, like, a happy moment for us, because we actually get to really dig in with folks from NPR, on a topic that is on everyone's mind,
[Chad Davis] 12:04:48 which is AI. So we've got Erica Osher, Tony Cavin, and Sharahn Thomas, only in the order that they appear in the little strip of boxes on my screen. And I want to start, I guess, really by just having them introduce themselves. [Chad Davis] 12:05:05 And the question I want to ask is: tell everybody, when you introduce yourself, kind of what you do, and then [Chad Davis] 12:05:14 just how you've used AI this week. So, you know, we're on Thursday. What have you used it for at some point already in the week that's almost done? So, uh, Erica, you want to kick us off? [Erica Osher] 12:05:25 Yeah. Hi, everybody. I'm really excited to be here, and thank you, Chad and Amber, for hosting us today. It's awesome to see how much participation we have, um, from so many different places. [Erica Osher] 12:05:40 So, I'm Erica Osher, I am VP of AI Labs for NPR. I have been in the NPR family for more than 14 years. I came from the sponsorship products and revenue strategy side of the business, um, and have been working on AI [Erica Osher] 12:05:56 very closely with Sharahn in particular, um, and some other folks, for more than two years now, since the launch of ChatGPT. Um, and am now in the VP of AI Labs role. What does that mean? It really means trying to figure out broadly what [Erica Osher] 12:06:13 AI means, what generative AI means, for NPR and also for our station partners in the network. And that's everything from the external piece: how does this impact how NPR and our content and our stations all exist in the world? [Erica Osher] 12:06:30 Um, how is our content being used and changed? How do we want to interact with these big tech companies that are building all this hugely influential technology, and what do we want to advocate for in those spaces? And what kind of partnerships do we eventually want to build with those companies, or not?
[Erica Osher] 12:06:46 Um, how do we want to monetize our content, um, and protect our content in these spaces? Like, do we want to block things? Do we want to allow licensing? All those big questions are really hard to answer, but we're trying to figure those things out. And then it goes into [Erica Osher] 12:07:03 um, the kind of product side of things: what AI technologies do we want to integrate into our technology stacks and our digital infrastructure? How do we want to do that scalably, securely, and cost-effectively? Because these things can get very expensive very, very quickly, so we really want to make sure that we have the right ROI and the right approach to these systems [Erica Osher] 12:07:24 as we integrate them, to do things like have better content metadata, better formatting of our content and distribution. There are all those types of things to figure out. [Erica Osher] 12:07:35 It's also then thinking through the governance piece: what policies do we need to have in place internally for our own use of AI? What platforms and technologies do we want to provide access to for the organization? What enterprise tools do we want to onboard? [Erica Osher] 12:07:53 How do we enable people to use AI safely and discourage shadow AI usage, people using unapproved tools that aren't safe for our data? And then not only how do we want to do that [Erica Osher] 12:08:07 cost-effectively and not spend tons and tons of money on all these different systems, but also, how do we want to train people on that? And figuring out what the right cadence is to be able to roll those things out and train and support them appropriately. [Erica Osher] 12:08:23 And then there's a lot of collaboration with the newsroom, a lot of collaboration with our labor teams, um, to figure out what this means for the content teams?
And how do those things differ when it comes to content, and how can we support that work? [Erica Osher] 12:08:36 Um, so big, long answer, because it's a big, broad role, but I'm very excited about the opportunities that we have here. For me, I use AI every day. I used it this morning to see if I could use Gemini to make a slide deck for me that explains Google's prompting best practices, that I could share with folks [Erica Osher] 12:08:59 in the NPR template. It went okay. [Chad Davis] 12:09:05 Um, lots to click on there. We'll dig into many of those things you talked about. Thank you for the detailed explanation, because, like, [Chad Davis] 12:09:13 you have a role that a lot of stations, frankly, just can't afford to have, so it's great to know which wall you're standing on and what things you're watching beyond the wall. So, um, Tony, uh, you're literally next vertically on the strip for me, so you wanna, like, introduce yourself, and then, like, AI, what have you used? [Tony Cavin] 12:09:34 Sure. My name is Tony Cavin. I'm the managing editor for Standards and Practices at [Tony Cavin] 12:09:40 NPR, which, as I'm sure you know, really comes down to essentially protecting the reputation of our news organization, making sure we're doing [Tony Cavin] 12:09:49 things the right way, doing things in a transparent way. And obviously, when it comes to AI: what can we introduce that will not violate our standards? [Tony Cavin] 12:09:57 How can we make sure we aren't violating our standards? And at the same time, we want to be transparent with our audience, let them know if AI played a major factor in something we did. [Tony Cavin] 12:10:12 We want to let them know that. We are still at the very early stages of bringing AI into the newsroom. So I'm going to cheat on your question and say that the way I have dealt with AI this week
[Tony Cavin] 12:10:25 was to help some of my colleagues work on the guidance we're putting out for the newsroom on what can and cannot be done with AI. And no, I didn't ask Gemini to write that for me. I probably should have; it would have been a lot more efficient. [Tony Cavin] 12:10:40 But nonetheless, um, that's really where we're at with this: still trying to figure it out. It's like somebody just gave us a chainsaw, and it will be great to cut down trees, but we really want to make sure we don't knock down the garage while we're doing it. So that's essentially where we are. And the one other thing I [Tony Cavin] 12:10:57 thought I should mention is that when I wrote the initial AI guidance for the NPR Ethics Handbook, which is online and public at npr.org/ethics, [Tony Cavin] 12:11:08 I think the most important point, which I wrote then and still stands now, is that anything you see, hear, or read [Tony Cavin] 12:11:16 on NPR.org will be the product of human beings. Those human beings may use AI as a tool, but we are going to make sure that it is human beings who produce NPR's, what people now refer to as, content. [Chad Davis] 12:11:32 Excellent. Thank you. I was responsible for our policy in Nebraska, and, you know, my first draft, of which almost none still exists, was like a combination of ChatGPT edited by Claude, having [Chad Davis] 12:11:47 dropped into the prompt a bunch of other organizations' policies, from those that had them, you know, in early 2023. So, like, yeah, using AI is totally legit for getting started on a heavy lift like that. Um, Sharahn, like, introduce yourself, like, tell us about AI. [Sharahn Thomas] 12:12:05 Yeah, sure. Hey, everyone. How are you? I'm Sharahn Thomas. I'm Vice President for Content Operations, and I am the content AI lead, which brings me to, as Erica was saying, our partnership on AI initiatives, specifically
[Sharahn Thomas] 12:12:22 as it pertains to the content division. But, you know, as part of the leadership team at NPR, I am invested in all the things that Erica listed in terms of our own AI exploration and aspirations, in terms of partnerships, and just what it means for how we further the mission and do our coverage. [Sharahn Thomas] 12:12:41 Again, in terms of what I call my day job, you know, I'm focused on the day-in, day-out [Sharahn Thomas] 12:12:50 content production practices as they relate to editorial and technology. So a lot of my partnerships already in the day job are with product teams and tech teams, IT, audio engineering, just to support the news magazines and podcasts, [Sharahn Thomas] 12:13:07 the news gatherers, every day. But, you know, where I'm interested in this connection on AI is that part that Erica spoke about: how we learn to utilize generative AI to support our journalism, a lot of the back-of-house sort of work, you know, [Sharahn Thomas] 12:13:28 and research, making their lives easier. There's a lot of news to cover. People are moving quickly, and so if there are efficiencies to be gained by AI, I'm very interested in trying to explore those for our journalists. [Sharahn Thomas] 12:13:44 Um, but with the strong partnership, like with Tony representing standards, what's most important also is that we safeguard our content and the reputation of the organization and our journalism, and uphold the standards that NPR has become known for. [Sharahn Thomas] 12:14:01 And I don't see them as diametrically opposed. You just have to be smart; like Tony said, the humans need to be in charge. And so that's where our focus is: bringing in the tools, wanting to see our journalists learn to use them in the ways that we feel are best for us to use them.
[Sharahn Thomas] 12:14:20 Um, and not just for those efficiencies. I also firmly believe, you know, after the last two years of being involved in the AI [Sharahn Thomas] 12:14:29 work and exploration, that, like anything, you have to know the thing in order to figure out how to best [Sharahn Thomas] 12:14:38 leverage it for, you know, your own goals and missions, unless you find that it just cannot be so, and I don't think that's true. But that's the other really important thing for me: seeing that our journalists have a safe way to get their hands on tools that we have vetted from a legal and IT standpoint, [Sharahn Thomas] 12:14:57 and that we're giving them the sandbox, you know, to do the exploration, and probably come up with very smart new ideas about how to present our storytelling in new ways. That's what I want to see. I would love to see that [Sharahn Thomas] 12:15:14 someday, but right now we're at the beginning stages. So that's what my role is and my involvement with AI. And how did I use AI this week? Yesterday, we're evaluating a tool, so I set up [Sharahn Thomas] 12:15:28 what I hope to be a repeatable sort of search mechanism around transcription tools, of all things, because transcription remains elusive in trying to find just the right tool, and with AI tools there's a lot out there on the market. So I set up a chart [Sharahn Thomas] 12:15:46 that I hope we'll keep being able to add to, and keep comparing on the, um, criteria that I've set in there, and do some side-by-sides so we can make some evaluation. So that's what I did. [Chad Davis] 12:16:02 Excellent. I'm going to stick with you, actually, for the first question. So, you know, [Sharahn Thomas] 12:16:06 Sure. [Chad Davis] 12:16:09 everyone kind of alluded to how fast this was changing, and I want to kind of get into what is hype and what isn't.
And I think, Sharahn, if you could just tell us, like, looking from your vantage point, what about AI [Chad Davis] 12:16:26 has actually changed the newsroom, say, in 2025? And, maybe corresponding to that, what looked like it was going to change things but just ended up being hype? [Sharahn Thomas] 12:16:38 Yeah. So I think what seems to have changed, and I mean just looking across the industry and taking in what other people are doing, is that it is so integral to almost any tool or technology these days. Whether [Sharahn Thomas] 12:16:57 it's ours or not, I mean, any third party that we're talking to right now, and that we're looking at, from, you know, the AP to other colleagues in the space, um, AI is there. And so [Sharahn Thomas] 12:17:11 that is interesting, and I think, in that, we're still trying to figure out what the hype might be, because everybody has jumped on it, [Sharahn Thomas] 12:17:22 smartly or maybe not. I mean, some people are probably retreating from certain things that they're already doing. I feel like I did see something in the last week where the Washington Post or somebody, besides their other woes, said, like, we're backing away from some usage that we were employing. [Sharahn Thomas] 12:17:37 And so I'm saying, like, I think it still needs to be borne out exactly. But I'm trying to think, because clearly there's been a lot of… [Sharahn Thomas] 12:17:48 I mean, maybe the most basic thing is just, out of the box, thinking that it's step-saving, time-saving. It may not be, you know. It takes investment to get to that point, I think, to know it, because it is such a new technology. You have to expend the time [Sharahn Thomas] 12:18:05 to learn it.
Learn the ins and outs, as I was saying. And this is why it's important for journalists, I feel, to do that, because otherwise you won't know it until you put your hands on it. You won't know the intricacies, whether it's prompting, or just realizing that everything it gives you [Sharahn Thomas] 12:18:24 back is not sacrosanct, is not necessarily true, so you need to be able to vet that. And in the end, if you're doing that kind of vetting work, [Sharahn Thomas] 12:18:35 you know, you're gonna have to pick and choose the things that really make sense to you, because you would have already done that due diligence as a journalist through some other method. So is that just adding to your plate? That's one of the things I would say right now. And so, again, just to be clear, it's [Sharahn Thomas] 12:18:51 not all hype on that and the time savings, but I think you just have to be smart about where you're expending the time to learn, and realize that your takeaway may be that that's not the thing to use. And that's fine, and that's well worth it, because you found the other thing that really does make sense. [Chad Davis] 12:19:08 Yeah, a lot of trial and error. Erica, because a lot of the folks who are on today either are in local newsrooms, or adjacent to local newsrooms, or run local newsrooms: is there anything you could point to where, like, you know, NPR had the luxury of being able to try something, but, like, hey, we wouldn't recommend [Sharahn Thomas] 12:19:10 Yeah. [Chad Davis] 12:19:31 you guys try this locally? You know, is there any one thing you might center in on? [Erica Osher] 12:19:34 Oh… [Chad Davis] 12:19:39 And I'm happy to stump Tony next if you want to think about that. [Erica Osher] 12:19:39 I think… No, I mean, there's not, like, a specific one.
I think I've been pretty wary of some of the really small startups, to be honest. Or, like, the kind of bloatware in particular, right? So there are a lot of companies right now that will offer you very expensive services [Erica Osher] 12:20:03 to build these managed AI systems on top of other models. [Erica Osher] 12:20:08 And I've had a lot of demos of that technology, and feel free to disagree with me, but I have not found it to be worthwhile, um, because, [Erica Osher] 12:20:21 A, the cost is so much higher, and B, I think that the technology companies themselves, the ones who are actually making the models and in charge of the models, are better at creating the interfaces, and have more capability to create the interfaces that you're going to want. And these services still require integrations and a lot of support and hand-holding that doesn't replace the work that you would have to do anyway just using, kind of, like, baseline ChatGPT or Anthropic or, um… [Erica Osher] 12:20:47 or Gemini, so I don't think that they're worth it. Um… I also worry about a lot of those companies. A lot of the demos that I've seen, especially around, like, multi-agent workflows, so, like, things where, oh, you're gonna have all of these different agents that are gonna do all of these different jobs that you have: [Erica Osher] 12:21:05 I'm super skeptical that those things will actually serve us very well, because generally they don't have good enough QA mechanisms to make sure that each step along the way is going to be good. And they get too close to kind of, like, taking over the human behavior, um, and the human judgments that I think we as journalism organizations really value. [Chad Davis] 12:21:25 You weren't thinking about OpenClaw, were you? [Erica Osher] 12:21:28 No, no OpenClaw, no DeepSeek! Make sure your data is not being stored in countries that you don't want it to be stored in.
And please don't integrate lots of different things onto your personal devices with your work information. [Chad Davis] 12:21:36 Yes. Excellent point. [Chad Davis] 12:21:42 Absolutely. If anybody takes away anything from this webinar, that is probably the biggest thing I would ask you to take away. We have questions in the chat coming in. I want to do one for Tony, and then we're going to go to chat questions and see how we end up on time. Tony, how do you balance, [Chad Davis] 12:21:58 and I know you work a lot with Erica on this, but how do you balance [Chad Davis] 12:22:02 kind of the speed of innovation versus, um, you know, caution when it comes to protecting the standards? Like, how do you see that? [Tony Cavin] 12:22:12 Well, probably much to everyone's chagrin, when I do the [Tony Cavin] 12:22:20 little introduction to NPR I do for new employees, one thing I always like to point out to people is that anyone who's a journalist can tell you [Tony Cavin] 12:22:27 who got something wrong. There are all these famous instances of different media that have screwed things up, the most famous, perhaps, being Harry Truman holding up the front page of the Chicago Tribune with its "Dewey Defeats Truman" headline. [Tony Cavin] 12:22:41 Nobody remembers who was first. And I've used that to say I think I'd rather go slow. I'd rather pass up some of the pluses of this wonderful tool to make sure that we aren't screwing up, [Tony Cavin] 12:22:56 because it's much harder to recover from the reputational harm of making a mistake than, you know, the advantage you get from introducing this quickly is worth. So, for example, Erica and Sharahn set up a system where we took a bunch of people from the newsroom [Tony Cavin] 12:23:15 and just had them play with the AI that we'd purchased, not for anything that would ever go on the air, but just to play around with it for what was probably three, four weeks, to see what worked, what didn't work.
[Tony Cavin] 12:23:27 You know, and you think about cell phones. We all have smartphones now. You know, we use them, we exchange pictures all the time. When I got my first cell phone, it never occurred to me that I was going to have the ability to shoot high-res video wherever I was [Tony Cavin] 12:23:42 and share it with anybody anywhere in the world. It's going to take us a while to figure out what the most practical uses, the most effective uses, of this thing are. And I think, because we haven't conceived of that yet, [Tony Cavin] 12:23:57 a lot of what we're looking at is, oh, it can somehow replace a journalist. It can do writing, it can do research. And I think, [Tony Cavin] 12:24:04 as we play with it, as we figure out what it can and cannot do, [Tony Cavin] 12:24:09 we start to realize what's safe and what's not safe. And we have made very clear to everyone at NPR that they are responsible for the end product that they put out. You can't point the finger [Tony Cavin] 12:24:25 at AI and say, well, it plagiarized. You are responsible for that. You're responsible for any bias. You know, this AI scraped the internet, and it's not like the Internet is somehow free of bias, so you need to be very aware of the bias that it's introducing. You need to be very aware of the possibility that it's plagiarizing. [Tony Cavin] 12:24:44 And you have to be very aware of, obviously, the thing everybody knows, almost the cliché at this point: hallucinations. So, bottom line, I'd much rather go slow and get it right, even though others, with maybe more resources or more time or simply fewer concerns, [Tony Cavin] 12:25:02 are willing to introduce these things faster. I want to be the ones who got it right, because I think in the long run there will be a sort of dichotomy between those newsrooms that [Tony Cavin] 12:25:13 leave humans in control,
and what I expect to be essentially a tsunami of AI slop that will be out there very soon, because it will be easy to make a lot of money using AI to do the sort of aggregation that people were doing constantly [Tony Cavin] 12:25:27 on the internet when it first became possible. It's a long-winded answer, I'm sorry, but… [Chad Davis] 12:25:32 No, no, it was good. It was an important perspective. I'm glad we took the time for it. Erica, anything you want to add to that? [Erica Osher] 12:25:40 Um, no, I think Tony said it well. I think, um, there's a lot of pressure. Especially, like, [Erica Osher] 12:25:49 LinkedIn is really bad for, like, making you feel like you're failing and you're far behind because you haven't launched all of these different products using the thing, and I think checking yourself against that is just really important, making sure that you're going back to kind of basic [Erica Osher] 12:26:05 product principles, basic ROI principles, [Erica Osher] 12:26:09 and really understanding, like, what is the actual value that we're getting out of this? Is AI actually even making it better or not? Or is generative AI actually better than more traditional forms of AI in many cases? Like, Alexa is a really good example of that, where the old version of Alexa is a lot better at setting a timer for you [Erica Osher] 12:26:28 than the new version of Alexa. So, um, you know, I think that's the main thing. [Chad Davis] 12:26:32 Right, machine learning versus generative AI, really. Yeah, yeah, right. [Erica Osher] 12:26:35 Exactly, exactly. So, thank you for clarifying what I was saying there. And I think it's just making sure that you're grounding yourself in your own reality versus getting sucked in [Erica Osher] 12:26:47 to the hype cycle as much as you can. Yeah. [Chad Davis] 12:26:49 Got it. Cool. Well, the chat is lit up.
This is possibly the most active the chat has been this early since we've started doing these. NPR folks are always quick to raise their hands, so I give them a lot of credit on that front. So we're going to just dive in, and I'm going to take the questions, I think, mostly in the order that they were submitted. So we will [Chad Davis] 12:27:11 get through all of them. Keep putting them in there; Amber's harvesting them for me. So, uh, the first one came from Rick. [Chad Davis] 12:27:18 And Rick is developing resources for teachers and students on AI and education. [Chad Davis] 12:27:26 He wants to know: how can stations educate and inform our various publics about the possibilities or dangers of Gen AI? Like, how should stations be talking to their communities about this? Maybe that sounds like an Erica one, maybe a little bit. [Erica Osher] 12:27:41 I would love to talk about this. I've never stopped thinking about it, and media literacy, media and AI literacy, is so important. I don't have an answer for this yet, but I've been thinking a lot about how we can work together as public media to further this mission: [Erica Osher] 12:27:57 how we can work with educators, how we can work within our communities. Because I think we have this really unique opportunity to do that, to help [Erica Osher] 12:28:07 people across the broad demographics that we have, right, really figure out what this means for them. How do you understand how to check sources? How do you understand how to figure out what's real and what isn't real? How do you understand, um, you know, how to avoid getting scammed [Erica Osher] 12:28:23 by fake phone calls and things like that, which are only going to get more and more prevalent? How do you understand how to even think about the news?
And I think… again, I don't have an answer for this, Rick, but I would love to talk about it more, and I'd love to work with you and anybody else who's interested on what that strategy could be, because I think it's [Erica Osher] 12:28:41 essential to our mission, and will be increasingly core to our mission to do that work. [Chad Davis] 12:28:48 Um, I think we're gonna… let's alternate. Like, Amber, you want to take the next one, so everyone can hear your voice too? [Amber Samdahl] 12:28:55 Sure. Well, just giving voice to Greg Peterson's question. Greg writes: should we be investing the time and resources to train people in just how to use AI? We all know that a poorly implemented query will result in a poor answer: [Amber Samdahl] 12:29:11 garbage in, garbage out. [Sharahn Thomas] 12:29:15 I'll take part of that, and anybody can chime in, because I was speaking about this. Yeah, I do think it's important. I mean, that's just one part of the use, right? Like, I think [Sharahn Thomas] 12:29:28 prompting or querying is just one… I would say it's the surface. It's the beginning of your exploration. And then better learning [Sharahn Thomas] 12:29:44 what the toolset is, or the feature set, of any particular system that you're using, and how to manage it or not manage it, is the key. And then, when you go deeper, like we're doing, our techs have [Sharahn Thomas] 12:30:01 done, you know, code-assist work and deeper types of work, so they can build [Sharahn Thomas] 12:30:07 other systems for us. I think it's the partnership and just understanding all parts of that. Like, anything that's being built at NPR is going to, you know, amplify the content that we make and the journalism. It's the back end of doing that. So that's the deeper part. It's not for all of us, necessarily, to learn it, but our technologists will need to learn it.
They learn different parts, but we need to know the front-end parts, how it looks [Sharahn Thomas] 12:30:36 again from a journalist's point of view, the storytelling part. So I do think the investment of time in the simpler things, like the prompting or just learning about the tool, is worth it. [Sharahn Thomas] 12:30:51 I will contend that until you put your hands on it, you won't even be able to know that that's not the thing, like, eh, you know, I don't need to spend any time with that; but this, if I do a little bit more, is going to give me, you know, what I need to [Sharahn Thomas] 12:31:06 do my work a little bit more efficiently. If that's the baseline of what most people want out of it, [Sharahn Thomas] 12:31:14 that may be worth it. [Chad Davis] 12:31:17 Tony, um, in the same vein, I have a follow-up, which is: how do you train folks on the policy? You're working to put the guardrails in place for the brand. How do you train staff? [Chad Davis] 12:31:28 You know, because that's actually almost as important as just using the tools quickly. [Tony Cavin] 12:31:29 Well, we… [Tony Cavin] 12:31:33 I mean, the guidance we've put out is an integral part. We have a rule that no one gets access to the AI tools we're using until they've gone through the training, and the guidance that standards has put out is an integral part of that training. Now, will this work? There are no [Tony Cavin] 12:31:50 ironclad guarantees. You know, it's like any other… frankly, like any other legislation. You can say the speed limit is 55 miles an hour, and a lot of people are still going to drive 70, but not everybody's going to drive 70. And so, you know, we'll do our best to enforce that. It's not bulletproof by any means. [Tony Cavin] 12:32:09 But I think it does help, because [Tony Cavin] 12:32:13 I find our journalists at NPR, at least the most outspoken ones, and I think most of them…
[Tony Cavin] 12:32:20 It's not like they're dying to get their hands on this to cheat. They're very concerned that this will somehow water down the authenticity of what they are doing, and they want to do the right thing. So, you know, in some ways what's [Tony Cavin] 12:32:36 fortunate for standards is that we have an audience that is very receptive to what we're asking them to do, because they too are very concerned about the potential reputational harm that would come from misusing this tool. So maybe the metaphor I used was bad, the allegory of the speeding ticket, [Tony Cavin] 12:32:56 because everybody wants to speed. This is a case where most everybody wants to drive the speed limit, so you don't really have to have a patrol car hiding under the underpass. [Tony Cavin] 12:33:06 I think we are finding, you know, going back to what somebody asked about prompts, [Tony Cavin] 12:33:12 and I'll defer to Erica and Sharahn on this, but I think what AI is going to prove useful for, much more than these prompt sort of things, is sifting through large amounts of information. For example, somebody showed me a demo a while ago. You recall that a few years ago there was a lot of talk about the problem of shoplifting [Tony Cavin] 12:33:31 in stores in Central City or downtown, whatever they call it, San Francisco. [Tony Cavin] 12:33:36 What they were able to do was look at the shoplifting losses from CVS stores around the country and compare those across various metropolitan areas. That's the sort of thing that AI can do very, very well and very quickly, as long as you figure out how to put that data in. And I think that's where you're going to find value, in that sort of data journalism. [Tony Cavin] 12:33:56 Today the New York Times has a long explanation of how they went through the Epstein files. It was not without envy that I read that they had 24 people assigned to this, which is not something we can do.
But this was just 3 million documents, and AI obviously plays a role in going through something like that. [Tony Cavin] 12:34:15 So I think that's where you're going to start seeing a use for it. And my one concern is that. [Tony Cavin] 12:34:22 not everyone has the resources the New York Times has and can't put that many people on it. So we may find, you know, like they talk about the digital gap between people who have access to the Internet and those who don't. I'm hoping smaller media, of which many public [Tony Cavin] 12:34:37 stations are, don't encounter a digital gap or an AI gap, where they just don't have the resources to keep up with what others are doing. And I think the response to that will no doubt be just doubling down on covering their local communities. [Chad Davis] 12:34:51 Cool. I have one more follow-up. This one's for Erica. Um, we've talked a lot about editorial and newsroom, but, like, how do you also guard against shadow usage? You've used that term earlier. Outside of the newsroom, like on administrative parts of the organization, how do you govern for that? [Erica Osher] 12:35:09 It's very difficult. There are a few things that we're trying to do, and the way that I keep thinking about it is enablement versus policing. I don't want to police people. I think it's very hard to police people's use, especially when people use their personal devices. There's not really a good way to do that. [Erica Osher] 12:35:28 And it also creates the wrong relationship, I think, with staff and management. [Erica Osher] 12:35:34 Um, so what I'd prefer to do is a combination of education and enablement, right? So education on the dangers of using unapproved tools. We use the enterprise version of Gemini, for example. If you use the free version of Gemini, you're giving Google all of that data.
You're training its models based on everything that you put into it, and that's true with every single one of those products. If something is free, you are the product and your data is the product. [Erica Osher] 12:36:03 Um, so that's always a good thing to remember. Also, you don't know where that data is going and how that data might be used, or who it might be sent to, like government agencies, law enforcement, all of these other places. So we're really trying to make sure people understand the risks and why we want people to use only approved tools. [Erica Osher] 12:36:21 And that's just messaging and constant reinforcement; we're gonna have to work really hard to keep training people on that. Um, and on the enablement side, this is why we've been working really hard to think about what AI features we can turn on in the tools people are already using, um, and what we can approve as broadly as possible to solve as many of the major business needs and user needs that we have across the organization. [Erica Osher] 12:36:43 Even if it's not everybody's preferred tool, to at least have a solution and make those things broadly available to people with the right training on how to use them. [Erica Osher] 12:36:51 And then finally, it's making the actual approval process better. It's arduous to get a new tool approved. For many of us, especially, you know, universities, I know that can be pretty challenging. For us, that can be pretty challenging. People often don't know what they're supposed to do to get through that. [Erica Osher] 12:37:08 So I think there's a lot of work that we are doing, and that we can be doing, to make those processes really easy and much more transparent, so that people will go through them versus just downloading the thing on their phone and trying to go offline.
[Chad Davis] 12:37:26 Before we go to the next question in, like, 5 seconds, Lauren Komarowski up in Connecticut asked if there's a way to share some of these training materials. So just before we drift too far from this particular topic, I wanted to flag that. Nancy asks, I've been trying to develop an AI tool that would sweep social media to find, quote, ordinary people, close quote, [Chad Davis] 12:37:46 to interview. I haven't had any success so far, though I know the New York Times has developed a tool like this. I think, Erica, this is maybe a you question. Do you have any ideas on how to accomplish this? [Erica Osher] 12:37:58 I think there are some vendors out there that do this, so that's an option. Um, I don't know how well they work or not. I think part of the challenge, and I've played with this a little bit, is whether or not those [Erica Osher] 12:38:11 systems are blocked. So Google, for example, has a deal with Reddit, so you can access a lot of Reddit data through Gemini products. Um, but because, for example, Meta's platforms largely block scrapers, it's hard to build a scraper that will actually comply with their terms of service and go through in an appropriate way to scrape that data. [Erica Osher] 12:38:38 Because they probably want you to use Meta's own AI systems to do that. So that's where this has been kind of challenging: figuring out what a compliant way is to do that. I don't know how the New York Times did it. I don't know, Chad, if you've had any luck doing anything like that yourself, or anybody else. [Chad Davis] 12:38:53 Um, if you have had luck with that, put it in the… put it in the chat, and we can do some follow-up. Um, no, um, but what I do think is good is if we start building maybe some sort of collective public media database of problems. Um, I was swapping messages with Suzanne Smith.
[Chad Davis] 12:39:09 yesterday in Florida, and she was asking me some things, and I was like, nope, no one's asked me that before, but this is the very thing we need. We need problem statements. Um, AI, kind of in a vacuum, is just a gadget, but when you have a problem you can apply it to, it becomes useful, and AI can generally be applied to a lot of problems. [Chad Davis] 12:39:25 I think we flag this and some of the other stuff that's floating around and just start to maybe develop a project database of needs from within the system. And then if one station uses it, certainly another does. So, um… maybe we could start, through Public Media Innovators, to collect some of this and try to figure some of this out, too. So, um, cool. Amber, you want to take the next question? The one from Phoebe? [Amber Samdahl] 12:39:49 Sure. And just to add on to that also, there seems to be repeated interest in a shared collective documentation of all the things that people are learning from across the system. I think it's a really great request and something that we should work on. [Amber Samdahl] 12:40:05 Collectively. Um, the next question is from Phoebe. Do you all have thoughts on public media stations using AI-generated voices specifically? I know many stations and news outlets use other iterations of generative AI tools, but I know as someone who works in radio. [Amber Samdahl] 12:40:23 Using AI voices, whether for newscasts or promotional spots, can feel like a bit of a red line for many of us. But, um, I'm unsure whether that's reflective of the way the larger industry feels. Have you all experimented or talked about the use of generated voices? [Sharahn Thomas] 12:40:40 We have talked about it. It's a red line for us. We're not doing it right now. As Tony said, in our guidance, you know, we wanted to stand by, and are standing by, the position that if you're hearing it, if you're reading it, it's a human journalist that has created that.
[Sharahn Thomas] 12:40:58 You know, there may be… I think the question involves, like, promotional purposes or fundraising. This is probably an area where, the way we've worked it out with our labor relations, we may get to some sort of scenario, but it would be very explicit about that, as a promotional or marketing tool, not journalism. So that's. [Sharahn Thomas] 12:41:22 That's sort of how we're parsing it. But I mean, we're nowhere; that's not on the radar as an immediate thing that we're really actively exploring and working on. We're really cautious. I will say, like, you know, we're doing a lot these days in some video exploration work, right? And we're. [Sharahn Thomas] 12:41:41 We're utilizing some of the tools that are on the market for that, and they do a lot. They have a lot of AI features, and you know, they will adjust all kinds of things, insert words for you, just, you know, just all of it. And it's difficult. You don't have fine-grained control, [Sharahn Thomas] 12:41:58 the ability down to a feature level to, you know, turn things on or off. And so even introducing that, and using our guidance, but really walking through… we're just in the testing phases of those AI features. So again, they're not anything that anybody's using. [Sharahn Thomas] 12:42:15 Right now, um, we're just learning what they do when you see them. So that's how cautious we are about where we are right now with that kind of use case. [Chad Davis] 12:42:26 I think there's a point I want to just interject here, too, not disagreeing with anything there: NPR is a medium that is entirely built on trust of voice. I think when you think about it from the video side, visual communication. [Sharahn Thomas] 12:42:29 Yeah. Yeah.
[Chad Davis] 12:42:41 you know, I don't think we've published anything with AI voiceovers, like, you know, documentary short-form video, things like that, that have, like, an ElevenLabs voice attached, but we've experimented with that, and the tech is there. Like, it's good enough. So I think there's the sort of radio perspective, but just to flag that there's also, I think, a TV or video, linear video [Sharahn Thomas] 12:42:57 Yeah. [Sharahn Thomas] 12:43:03 Yeah. [Chad Davis] 12:43:06 perspective as well, in different types of content. Yeah, small stations are on a budget. Like, I don't see how we don't cross into this gray area soon, because it just… [Sharahn Thomas] 12:43:10 Yeah. No, I agree with you. Yeah, no, just to say… [Sharahn Thomas] 12:43:21 Yeah. Yeah. Right. No, I'm glad you mentioned that. I mean, I totally agree, like, there's a difference in perspective between where NPR is and where others are. We've seen this, I mean, not even just in station land, but in my own experience with smaller [Chad Davis] 12:43:24 is gonna result in some cost savings for production, so… [Sharahn Thomas] 12:43:41 print publications. And again, I will say, in a way, it's very liberating, but fascinating, because the need has driven [Sharahn Thomas] 12:43:51 people to, I think, experiment more. Upholding their values journalistically, I'm sure, but they're pushing the envelope more because it's out of necessity, in a way. It's a way for them to cover broader ground. And so, in a way, I really admire it and respect it when I have seen it, because they have been very creative in some of the things that I've heard about. I mean, I'm sure there have been. [Sharahn Thomas] 12:44:14 Hard lessons learned as well.
But just to say, to your point, Chad, overall, I think it will vary, what people need to do, because of the way that they're resourced to, you know, to try these things. [Chad Davis] 12:44:29 Cool. Uh, so I'm going to keep trucking through the questions. We could… I'd easily spend 15 minutes just diving into that one. But, um, so Emily's up next, and this kind of builds a little bit on what you were saying, Erica, where, you know, if the tool is free, you're the thing that is being bought and [Chad Davis] 12:44:46 sold, um, and Emily acknowledges that, uh, so that means reporters can't be using these tools as much, um, because you wouldn't want, you know, for example, your questions or the documents that you're using as sources in your reporting [Chad Davis] 12:45:01 to be fed into these platforms this way. So what to do about that? Like, how can reporters use AI tools, you know, given this caveat? Is it just pay-to-play? [Erica Osher] 12:45:15 There are also some organizations that offer [Erica Osher] 12:45:20 the right types of terms to nonprofits at much lower rates, too. So I would look into those; there's Google for Nonprofits, there are Google education programs, you know, things like that. There's also, you know, I think Anthropic released a new nonprofit rate that had a pretty significant discount, too. [Erica Osher] 12:45:37 So, look at the nonprofit rates, and always ask. They might not always publish their nonprofit rates or talk about them broadly, but usually if you ask, you can kind of push a sales rep to share that information with you. [Erica Osher] 12:45:50 And then another thing, too, is to look at the terms, even if you don't have a lawyer. Some beta tools will have terms that say things like: humans at this company may look at your prompts. [Erica Osher] 12:46:06 Um, for product updates, and for research.
And if it's something very sensitive, that's something to be mindful of. Another thing: for particularly sensitive data, there are cases where we might not want to use these kinds of cloud products, right? Places where the data is going up to cloud servers, and there's inherently less security in some ways with cloud. [Erica Osher] 12:46:25 So, there are local solutions that you can use that are just on your machine. You can use a model hosting system, like, there's one called Ollama that's very popular, and you can download models onto your own machines. [Erica Osher] 12:46:41 And do processing and use generative AI technology in those ways. Um, there are additional costs in terms of hardware associated with that, but for particularly sensitive data, it might be worthwhile to do that type of thing, um, so that you can feel protected. [Chad Davis] 12:46:58 Sure. And hopefully, like, everyone, you know, if you've got an IT department or access to one, they can help figure out some of the hardware setups on those. Cool, thanks for that. Amber, you want to take the next one? [Amber Samdahl] 12:47:09 Yeah, Derek has a question that, I'll just say, within our organization at Wisconsin Public Media, comes up a lot. Are there ways and approaches you are taking to track and/or mitigate the environmental impact of your AI usage? [Amber Samdahl] 12:47:24 What's the conversation around the environmental impact? [Erica Osher] 12:47:31 I can start, and then I'm curious what others have to say here. It's very difficult to have a good solution for this. There are things that we're looking at implementing to understand how many tokens we're using, how many calls we're making, right, for different things. So then you can kind of do some optimization. [Erica Osher] 12:47:53 There's a lot of reporting data about that.
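[Editor's note: the local-model approach Erica describes just above can be sketched against Ollama's default local HTTP endpoint. This is a minimal illustration under stated assumptions, not an NPR workflow: it assumes Ollama is installed and running on your machine with a model already pulled, and the model name and prompt shown are placeholders.]

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server; prompts sent here
# stay on your own machine rather than going to a cloud provider.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Encode a single non-streaming generation request for Ollama's API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def ask_local_model(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a model pulled (e.g. `ollama pull llama3.2`), calling `ask_local_model("llama3.2", "Summarize this memo: ...")` would return the completion without the text ever leaving the machine; the hardware cost Erica mentions shows up as the RAM and GPU needed to host the model locally.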
Like, Google has released some things about water usage. A lot of the, you know, I think a lot of the AI companies have. [Erica Osher] 12:48:02 Take it with a grain of salt, um, you know, when they're reporting on themselves and it's not peer-reviewed. Um, I think the real thing to think about is: what's the least expensive, lowest-consumption model that you can use for the task that you need, right? Like, don't use the most expensive, most [Erica Osher] 12:48:23 resource-hungry version of the technology to do something that you don't need it for. Don't make videos for no reason, just because you can. Like, do you actually need AI to do this? Are you actually seeing ROI from using AI? And I think responsible use is probably the most effective strategy, because I don't know a good way to measure it, really, that I think is actually reliable. [Erica Osher] 12:48:45 I don't know, Tony or Sharahn, if you have more to add there. [Sharahn Thomas] 12:48:49 You know, I feel like you're more knowledgeable, just like you said, on what's possible. I think. [Sharahn Thomas] 12:48:53 Just to say, this question comes up internally with the staff, and so we basically said the same things. We're mindful of it, we're cautious of it, and trying to look for the ways that we can be, you know, good stewards, good users, and that sort of thing. But it doesn't seem like there's an easy, clear-cut [Sharahn Thomas] 12:49:15 position or answer at the moment. [Tony Cavin] 12:49:18 I think part of the problem is it's so hard to quantify. [Tony Cavin] 12:49:21 You know, how do you know what you're actually doing? Where does your power come from? Is it from hydroelectric? Is it from nuclear? Is it from coal-fired plants? So many of these factors go into this decision. And what would be [Tony Cavin] 12:49:36 the cost of not doing this, of replacing it?
Do you have more people driving into the newsroom in automobiles, coughing out carbon? I mean, I'm not trying to make excuses for AI and its use of energy. I think that's an important issue. [Tony Cavin] 12:49:51 But I do think it's so hard to quantify that it's difficult to get beyond the sort of general "don't use it when you don't need it" and have rules that will minimize the energy use, or even help you understand how that compares to doing it a different way. [Chad Davis] 12:50:10 Yeah, and I mean, I think there was also a lot of concern in the early days of AI, 2022 and early 2023, about what this was doing, the data center build-out and all that stuff. We're seeing the systems adapt to that. We're also seeing that it may not be as dire. [Chad Davis] 12:50:26 I think you've got to balance, like, how do you train yourself to use some of these tools so you're doing it effectively and efficiently eventually. [Chad Davis] 12:50:34 You know, that's a factor to weigh as well. The next question is from Josh. How do you land on the officially sanctioned AI tool? Like, Erica, you said Google Gemini is what y'all have kind of put your bets on. [Chad Davis] 12:50:50 Like, how did you get to that, you know, to avoid shadow AI use? He says, specifically from an IT and security perspective: for instance, using Microsoft Copilot because we are already on the Microsoft ecosystem. Is that valid, or is there any security risk with other popular ones, like [Chad Davis] 12:51:10 Gemini or ChatGPT? I think we're assuming paid versions of those. [Erica Osher] 12:51:14 I'm not a lawyer or an IT security expert, so I'm just going to put that disclaimer before this answer. The process we go through — and Tony had asked a similar question right below, so I'll kind of address both — is an extensive legal vetting process. We're lucky to have really great lawyers on staff at NPR who look at the terms very extensively. It takes a while, but it's worth it.
Um, especially on the data privacy review side, because anything that's going to use personally identifying information, we need to be very clear about what that is, how we're going to store that data, where it goes, [Erica Osher] 12:51:48 and how it integrates. We also do an IT security review on every single piece of software that we are allowing people to use, to understand what certifications it follows and, uh, you know. [Erica Osher] 12:52:02 I'm not going to speak exactly to what that process is, but they have a whole kind of questionnaire that they go through with vendors, looking through their documentation to make sure that we're comfortable with it. Um, we do insurance reviews as well, to make sure that companies have the right amount of insurance. That's something that often gets overlooked but is really important in your risk mitigation strategy. [Erica Osher] 12:52:23 Um, and then that's all just on the security vetting side. But then there's also, um, you know… the reason we chose Google — I think the thing is, what does it already integrate with that you're already using? How easy is it going to make adoption? For you all, if you're using Microsoft a lot, it totally makes sense to use Microsoft if you're happy with what it does, because then you don't have to ask people to go somewhere else. [Erica Osher] 12:52:47 And if you're asking somebody to go somewhere else to do a thing, your adoption rates are gonna plummet just from that. So, the more you can use things that are naturally integrated, I think the better, as long as it's meeting your business needs. And Gemini is one of many tools that we are using. We're also using things like GitHub Copilot and other tools. [Erica Osher] 12:53:06 Um, and we use other models and access other models through some of those dev tools, too.
So, um… you know, that's kind of a long-winded answer, but I think you have to look at all of those different factors and come up with your own rubrics for that, and we're still learning what those are. [Chad Davis] 12:53:22 Sure. Um, by way of follow-up, um, just a few minutes ago, Emily asked — I think it was part of the previous thread we were discussing, but it kind of ties in here — about using paid tools, and, um, she asked: it doesn't seem clear to me that the paid version is necessarily more private. Can you talk about [Chad Davis] 12:53:40 like the extra settings that you get when you get a paid version? Just use Gemini as an example, because they're all kind of at parity there. [Erica Osher] 12:53:47 Yeah, if you look at the Gemini API terms, for example, they say that they can look at your prompts and your queries, and that they'll use them to train their models. You have basically no privacy and no protections. And so even with a cheaper paid tool, [Erica Osher] 12:54:02 usually the main difference is: if you are paying for it, they don't train their models on your work. If you are not paying for it, they will. [Erica Osher] 12:54:09 Um, and that is pretty true across the board. [Chad Davis] 12:54:10 And you can do, like… there's a private-browsing-equivalent function, too, when you pay. Like, you can have temporary chats. I know with ChatGPT, you can. I think you can with Google, too, right? [Erica Osher] 12:54:17 Yeah. [Erica Osher] 12:54:22 Yes, um, yeah, so you have temporary chats, um, and you can also tie it to your own data retention policies and your information rights management policies, things like that. It might also have better settings for how you share data within your organization. [Chad Davis] 12:54:38 Right.
[Erica Osher] 12:54:38 So that's another thing to be aware of: you might not want your HR team's data to be easily accessible to other people, so you want to make sure that you're using systems where you can wall things off appropriately. [Chad Davis] 12:54:52 And maybe also just know there are different types of licenses now, too. So, like, you know, there are government licenses that obviously are going to require a lot more security, with a lot more data retention factors accounted for in there, too. So, you know, maybe it's not just getting the off-the-shelf paid version; maybe there's a governmental license for the organization that has extra layers of protection. So maybe explore those as well. [Chad Davis] 12:55:21 Amber, you got a question for us? [Amber Samdahl] 12:55:24 Yes, a question from Lynn. Increasingly, reporters are being called to do more with less. Outside of their regular audio, they've got to shoot, edit, produce reels, etc. At some point it does become a time issue. What is your guidance on using AI to assist in handling tasks like video editing, etc.? [Amber Samdahl] 12:55:43 It's not replacing the journalist, but it could help save a lot of time. Maybe, Sharahn, you could start. [Sharahn Thomas] 12:55:49 Yeah, sure. Um, on the specifics of the video editing part, as I was mentioning, in our work so far with video, we have not utilized the AI features. To the extent that I know it, no, we have not cleared that for use, but like I said, you don't have the ability in every tool to turn them off completely. So, um, our guidance has been that no one is permitted to use them. [Sharahn Thomas] 12:56:21 But we're exploring.
We want to get to the point where we're testing it, so we really understand what these features do and how they can work, and we would want to be able to put them in place and make use of them for the staff, in compliance with, again, our standards and guidance and what we have found in the testing, first and foremost, and be smart about that. [Sharahn Thomas] 12:56:45 And you know, to the point, there has been so much over the last number of years where journalists are out there and they're doing [Sharahn Thomas] 12:56:55 multiple things to make their content, you know, writing text stories, doing the audio recording and sit-downs, and now, yes, video in some sort of way. And so it's another driver of why we want to figure out the smart usage of AI to benefit them, but it's just a process to walk through and get there. [Sharahn Thomas] 12:57:23 And our work even in Gemini is just really in the early stages. I'm just trying to think if there's been anything substantive at this moment that I could really say that we've done, but I think it's because we're just so early in it, with just giving out the guidance and opening it up more, that, um, we will see what other tools we bring [Sharahn Thomas] 12:57:50 in-house as we're trying to do that. So we're doing a bit of vetting right now, even for the last year that we've been doing the video work with the tools that we have, and just, like. [Sharahn Thomas] 12:58:00 What do we know so far? How is it working for us? But again, one of the criteria that we're not really able to assess quite yet is the AI parts, because we just haven't [Sharahn Thomas] 12:58:10 tested that yet.
[Amber Samdahl] 12:58:12 Just a quick follow-up: are there specific features you're interested in testing, or that you're going to prioritize testing first, that you think would have the biggest impact based on what you've learned so far? [Sharahn Thomas] 12:58:15 Yeah. [Sharahn Thomas] 12:58:23 Yeah, I think, um… certainly the features that are aligned to what we're already saying in our guidance, and that our standards team is open to investigating further once we've tested them. But I would say right now it would still be consistent with anything that's back-end. [Sharahn Thomas] 12:58:45 Not necessarily outward-facing. So, transcription, or, um, translation. I can't sanction everything — we've seen everything in these tools, like, they can fix the gaze of someone's eyes. That creeps me out, quite frankly, because contextually, there's a reason why that person may have gazed away, and so I think that changes [Sharahn Thomas] 12:59:10 the context of the journalism in that story or that interview in that moment. And so it's that kind of thing, it's parsing through it. And I don't have the list in front of me — the member of my team who's doing the deep dive and learning [Sharahn Thomas] 12:59:25 um, the ins and outs of the tools has more of that — but I would just say it's anything in the step-saving category, and you quickly get from step-saving to, like, this is the outward-facing product. So that is the thing that's the challenge, in my view, right now. [Sharahn Thomas] 12:59:40 We had a question come up, like, are we okay with a rough cut where, sort of, just by voice, [Sharahn Thomas] 12:59:49 the computer system, whether that's actually AI or not, will know, okay, then [Sharahn Thomas] 12:59:55 take that view versus this view and give you a rough cut of that video, right? And we haven't had to produce a lot of video that's really on tight timelines. We've been doing this.
[Sharahn Thomas] 13:00:07 Up First Winter Games pod, and the question came up, could we do it in that? And the argument was being made about whether it was actually AI or not, which is tough. And then the timeline — I mean, we just didn't get to the point where we could okay that, because we just don't know enough about it. But those are the types of things that I could see that make a lot of sense, right? It's just… simple back-and-forth, guest-host interview, shot changes, and… [Sharahn Thomas] 13:00:33 Why would you make a producer have to go through that? You know, taking out ums and ahs is another type of technology; I think we just have to see how effectively it does that, that it's not clipping, and, you know, that it doesn't convey something else, maybe unintentionally, because you're relying on it to just do that. That's… that's the work. [Chad Davis] 13:00:56 Um, we are at the top of the hour, but we often go long. NPR crew, do you guys mind sticking around for a few more questions? Because we have a few more. Good. Okay. Thank you. Appreciate that. I'm going to jump ahead. We'll be a little more selective here. And this really is going to be more, I think, for Tony. [Sharahn Thomas] 13:01:04 Okay. [Chad Davis] 13:01:13 Phoebe asks, can you talk a little bit about how and when to disclose the use of AI? Should stations always disclose when they've used AI tools in any way? And think about it too, Tony, from both the newsroom perspective, but also if someone's doing promos or underwriting spots or, you know, [Chad Davis] 13:01:32 creating their, you know, board book or something like that. [Tony Cavin] 13:01:37 The problem with so many of these things is that I do think you sort of know it when you see it. It's case by case. [Tony Cavin] 13:01:45 You know, the thing that always immediately comes to mind as the example is: we used AI to go through this huge pile of data.
And then we had reporters look at what the results were and figure out blah, blah, blah. [Tony Cavin] 13:01:58 But I can't give you a firm, hard-and-fast rule about [Tony Cavin] 13:02:04 what percentage of your work. If AI plays a major role in what you've done, then I think you should disclose it. If AI has done something that you [Tony Cavin] 13:02:15 would either not have been able to do, or not have been able to do in a timely manner, then you probably want to disclose it. If at some point you ask Gemini to give you some suggestions for questions you might ask a guest, I don't think you need to disclose that. [Tony Cavin] 13:02:31 The other place where we do disclose, and often it's not our AI: we're very strict that if we use an image on a digital piece, on a web piece, that is AI-generated, often to illustrate — you know, people have been doing this. They've seen, [Tony Cavin] 13:02:49 you know, President Trump riding an elephant, or whatever it is, no matter how absurd. We will still make very clear that it's AI, because we don't want someone to be able to copy that image without it saying it's AI. So we want to do it in a way that makes that very clear. And there have been times, frankly, [Tony Cavin] 13:03:07 when I've asked them not to use certain images, because I don't think we can protect them that way, and I don't think you really need them to tell the story. [Tony Cavin] 13:03:16 It's, as with everything, a judgment call. And the other thing, just to jump back to the previous question for a moment. [Tony Cavin] 13:03:25 You know, when I first learned to edit audio tape, a long time ago, we did it with razor blades in those days, and we physically cut the tape, and you could still take out ums and ahs and whole words and sentences and distort. I mean, we have always relied
[Tony Cavin] 13:03:40 in radio on the judgment of the reporter and the editor to make sure what we're doing is factually accurate, and we're not cleaning something up or distorting its meaning, or simply changing the meaning. And so those rules really don't change. AI is just a faster way. [Tony Cavin] 13:03:57 And you use fewer razor blades, but you're still able to accomplish essentially the same thing. And so I think… As with so many things, the rules we've always followed are still applicable. We just have a new technology that we're applying them to. [Chad Davis] 13:04:14 Yeah, I think, um, you know, we've talked about it. Well, with any emerging technology, if you. [Chad Davis] 13:04:21 If you approach new technologies from a values standpoint, and you're sticking with the values that you've always had, which are not technologically based, then you probably already have a pretty good rubric for deciding when to use a new piece of tech or. [Chad Davis] 13:04:36 Or not. One of the things I'll just put in as a nuance, one of the things we've talked about from the beginning in Nebraska is just distinguishing between editorial content and corporate content. [Chad Davis] 13:04:47 And corporate content just doesn't carry with it the same disclosure components that I think the editorial would necessarily require. Totally agree with you on the editorial stuff. [Chad Davis] 13:05:01 Tony, Amber, did you want to ask anything on this one too? [Amber Samdahl] 13:05:03 Quick follow-up question on the disclosure. Do you have guidelines for disclosure internally for the use of AI? I know you mentioned, like, shadow AI usage. Are there guidelines that you all share with the staff for sharing with each other? [Tony Cavin] 13:05:19 No, we have guidelines about what you can and cannot do. [Sharahn Thomas] 13:05:19 We don't? [Tony Cavin] 13:05:22 Um… Well, that has… that hasn't really risen as an issue of, you know, telling your colleague that I used AI for this.
I think if you're working as a group, the assumption is everybody knows everything in that group, unless there's a reason not to, like somebody's got some. [Tony Cavin] 13:05:38 very secret source, and they just don't want a lot of copies of that, but. [Tony Cavin] 13:05:42 Um, it may well come up, it hasn't come up yet, and frankly, I don't anticipate it coming up, but who knows? [Sharahn Thomas] 13:05:48 I mean, we… and just expanding a little bit on what Tony said, in terms of the disclosure, like in terms of developing a disclosure policy, like Tony said, we're not there yet, but we've got some. [Sharahn Thomas] 13:06:04 Some basic principles that the standards team applies. But we've talked about it, because as we are starting to introduce. [Sharahn Thomas] 13:06:12 tools, we want to ensure, and just kind of run it through the pipe, so to speak. Like, is there a reason that someone creating the piece needs to just make sure that those editing and additional hands that may see it. [Sharahn Thomas] 13:06:27 Standards included, but, like, you know, we have an editorial review team, and they're aware, because they may ask questions to just drill down a little bit more. Obviously the journalist may have thought it was okay, their editor may have. [Sharahn Thomas] 13:06:42 Thought it was okay, but then there's something else, from tone or whatever, that the standards or editorial review team may catch. But again, we haven't enacted anything, you know, where it's in writing, but we've definitely… we're having those conversations, um, and we'll just see, but we're… early stages, so it hasn't really, as Tony said, there's just not a huge need at this moment. [Erica Osher] 13:07:07 On the corporate side for the general company, in terms of how we're talking about this, because there's also a lot of, like, internal corporate uses, right?
Like, if you're making a slide deck for the executive team to pitch something, right? Like, um… I think we've kind of suggested that teams and divisions kind of set their own norms around disclosing with each other, um, and that, like, largely that should be up to kind of the managers of the teams. [Erica Osher] 13:07:35 About how… what kind of transparency they want and expect from their staff. [Erica Osher] 13:07:41 Um, I think what we're trying to do, and my boss, Ryan Merkley, is a big proponent of this, is to create a culture of openness and sharing, and encourage people to not hide the fact that they're using Gemini. And, you know, or that they're using AI. If you make an AI-generated image in your presentation, like when I do training decks. [Erica Osher] 13:07:59 I use AI-generated images in those training decks. And I have a big disclaimer on the front of that deck that says that all of those images were created with Gemini, because I want to kind of create that expectation and not act like I'm ashamed of it. [Erica Osher] 13:08:13 So I think that's the big thing, is just trying to get people away from feeling like, oh, it's bad if I used AI for this, and more like, hey, I used AI for this, and then I did this other thing. And I think that's kind of the best thing that you can do, again, like, to encourage openness, not policing. [Chad Davis] 13:08:31 That was our early strategy in Nebraska, too, back again back in 2023, was like, let's just take the shame away from this right from the get-go. Let's, like, not be ashamed about it. This is experimentation. This is evolving with media, so I'm 100% with you on that. I'm going to do just a little bit of.
[Chad Davis] 13:08:48 kind of editing here on the questions, but just to acknowledge Brenda, like, your piece on changing the perception of public media and breaking stories is probably a little… it's a little bigger than we can dig into with the time that we have for this webinar, but we'll think about that kind of as a… maybe a future topic. [Chad Davis] 13:09:04 Severn, um, your question about the ADA web guidelines, this is where I referenced Suzanne Smith earlier. Suzanne kind of pinged me on part of this. I think this is just something that is beginning to come about. And this, if you are a TV station. [Chad Davis] 13:09:21 especially one tied to a government license or a university license, you might want to do a little bit of digging into this, because there will be some of what we think of as descriptive video. [Chad Davis] 13:09:34 requirements, uh, coming down the pipe for some of you in April, for some of you a year from April. So maybe just dig into that. I believe there was a link that was in the chat. You can grab that. So, uh, I didn't want to let it go, but, like, again, bigger than we can really kind of tackle here. I think Amber may be able to. [Chad Davis] 13:09:51 Copy the link back up and over. And then, Mark, you had a comment, but we're going to kind of glide over that. Thanks, Amber, for sticking that in. Did you want to, Amber, take the next question on our list? [Amber Samdahl] 13:10:05 Sure, and thanks everyone for sticking around. We're getting down to the end. Next question, from Derek. What is your strategy to counter the onslaught of slop that is proliferating? [Amber Samdahl] 13:10:22 It's a big question. [Chad Davis] 13:10:23 Maybe, uh… I don't know, I guess there's both creating it and then interpreting it, isn't there, Amber? Like… Erica, you want to start? [Erica Osher] 13:10:32 Sure. I mean, it goes also back to kind of what Rick was saying, too, about education. [Erica Osher] 13:10:39 You know, there's a lot of initiatives out there, right?
Like, there's, like, you know, the SynthID that Google has, which identifies things only made by Gemini. There's C2PA. There was a really great, um… Decoder episode, I think it was. It was either a Decoder or Vergecast episode about the problems with C2PA that I would strongly recommend. It was phenomenal. [Erica Osher] 13:11:00 The problems with things… and so C2PA, basically, sorry, is a provenance. [Erica Osher] 13:11:07 standard that, you know, purportedly was created by Adobe to basically watermark stuff from creation all the way through how it's published, and show things like, hey, was this distorted in Photoshop? Was this AI generated? All these different questions. [Erica Osher] 13:11:23 The problem is that all of those types of technologies require every single participant to implement it, right? So a technology like C2PA really struggles, because it's not in Apple phones, and most of the images, like, most of the images created in America now are shot. [Erica Osher] 13:11:39 on Apple phones and Apple cameras. So… It's very tricky to do that. There are no provenance initiatives right now that I know of around audio, which I think is actually a really interesting opportunity for NPR and for public media. [Erica Osher] 13:11:55 Everything is basically just images. There's a little bit starting with video; there's nothing, really, with audio. Um, so… I think this is something that we're all… you know, BBC Verify is really impressive in terms of what they've been doing there, but this is, like, a very difficult problem for us all in terms of detection. I think our reporters, and maybe Tony can talk about this a little bit more, have been doing some work to learn on that.
[Erica Osher] 13:12:19 Um, but it's… it's a real scary challenge, unfortunately, and something that I think we all just need to keep advocating for, kind of those developments of those standards and actual implementation by the companies, not pretend implementation. [Chad Davis] 13:12:35 Sharahn, can you talk a little bit about how maybe, and Tony can weigh in on this too, how the newsroom. [Chad Davis] 13:12:42 is vetting information, you know, Erica just talked about BBC Verify, but how are you vetting information in a world of deepfakes now? [Chad Davis] 13:12:49 And just what's your… what process do you guys have? Maybe, Sharahn, you want to start? [Sharahn Thomas] 13:12:53 Yeah, I think, Tony, I think maybe you might be able to speak more about, like, the, um. [Sharahn Thomas] 13:12:59 I forget the team, I want to call them the disinformation team, but it's not that. They're doing a lot of work, and in fact, I think next week our training team is actually going to have that team presenting to the rest of the staff about some of the things that they have learned. [Sharahn Thomas] 13:13:18 In terms of their discernment and assessment about things that come in there and that they're looking at. I will say that also… I had a conversation just yesterday, because we've reached out to a couple of different entities, and this was with Verizon, which, oddly, my IT team brought to us a couple of years ago and said, would you like to… Talk to them about detecting deepfakes. And so we're having that conversation, too, from a technological standpoint. What can be used as far as tools, um. [Sharahn Thomas] 13:13:50 Um, but that… yeah, the conversation right now is early, and the training and trying to fan that out across the rest of the content division, so… the teams that have been most confronted with it have become more the subject matter experts, um, in terms of their learning and what they're doing.
And we're at the point of sharing that out in a different way and finding other resources to just augment that, I think, is where the conversation is… I don't know, Tony, if you can add anything, just practically, what you've. [Sharahn Thomas] 13:14:22 what you know of what that team has done? [Tony Cavin] 13:14:24 Yeah, I mean, they're learning by doing, basically. Last week. [Tony Cavin] 13:14:31 I spoke at a panel that WNYC had set up to talk about this issue, and I must say I learned more than I imparted. I got the better end of that deal. But there was a guy who works for one of the television networks, NBC, I believe. [Tony Cavin] 13:14:45 who broke down how to look at video and sort of determine whether or not it's fake. And what it really came down to. [Tony Cavin] 13:14:54 In the end, I was surprised, because I thought, you know, as someone who's not all that familiar with how to pull the metadata out of a video file and that sort of thing, I'm gonna have to really focus on this to figure it out. [Tony Cavin] 13:15:07 But really, what it comes down to is using other points of reference. Does somebody in the newsroom know that particular area? You know, we had stuff. [Tony Cavin] 13:15:17 Um, we did a big piece maybe two, three months ago about some of the more violent arrests that ICE had carried out. We had a lot of video from social media on that. [Tony Cavin] 13:15:29 And standards insisted that they verify it, and that was how you end up doing it, you know, with Google Street View. Is this really where it purports to be? With people who know the area, which, you'd be surprised, you know? Who knew that so-and-so grew up in, you know, in Des Moines, and they happen to be in a newsroom in Los Angeles, or whatever. [Tony Cavin] 13:15:48 So a lot of it is really basic reporting principles. I understand there's a technical side to it, and I'm still learning that. But I do think, as with so many things.
[Tony Cavin] 13:15:59 It's the basic reporting questions. Does this look like it could really be here? Does that background match? Can we find a way to prove this, just as you would do with any other unverified piece of information you got from somebody? I know that sounds like an evasive answer. [Tony Cavin] 13:16:15 But I do think it's really what it comes down to when you're trying to do this stuff. [Chad Davis] 13:16:20 Um, Christian had asked, like, a different question at a different part of the webinar, but I think it kind of serves as a bit of a follow-up here, and had asked about. [Chad Davis] 13:16:30 If NPR had thought about potential opportunities brand-wise to differentiate public media as sort of a human-driven enterprise amidst all of this, given the amount of generated content coming out of commercial media. [Tony Cavin] 13:16:47 I don't know if you've thought about it institutionally, but I've thought about it. [Chad Davis] 13:16:47 I don't know, Erica, maybe. [Tony Cavin] 13:16:51 Individually, I think that, I mean, I mentioned this earlier when we were first starting, that I do think that there will be a division between… you know, traditionally in the newspaper world, there was a division between broadsheets and tabloids. You know, you bought the New York Times. [Tony Cavin] 13:17:06 or the Wall Street Journal, if you wanted to get serious news, and if you wanted to find out who in Hollywood was dating whom, there were other newspapers you could look to for that. And I think you will find there's going to be a similar dichotomy between sites which will probably generate an awful lot of clicks. [Tony Cavin] 13:17:24 Um, which we already see, and we will just see more of, which will use AI because they're absolutely about minimizing cost and maximizing profits, and sites that are doing serious journalism, and I expect public media to play a very large role in that, precisely because we're not profit driven. [Chad Davis] 13:17:42 Good point. Good point.
I have one kind of quick fact question, and I have a wrap-up question for each of you. We're almost done, folks. Thanks again for sticking with us. Alessandra asks, what AI training tools or programs are you using at NPR? How are you delivering the training and workshops? [Chad Davis] 13:17:58 Sharahn, you kind of talked about training a second ago in your answer, but you want to start with that? [Sharahn Thomas] 13:18:04 Sure. Yeah. It's a really collaborative process with what I would call the editorial training team. But I would say even before that, in just the use of Gemini as an organizational tool, it was led a lot by Erica and those in the AI Labs team, just training team by team. [Sharahn Thomas] 13:18:27 And it started very small, by, you know, first divisional leaders getting a preview, being able to weigh in to say how they thought that tool would be useful or not, or just ask questions about it through the lens and perspective of their teams. [Sharahn Thomas] 13:18:45 And everybody, I think, Erica, correct me if I'm wrong, everybody was bought in, and they asked their questions and gave good feedback, and I think that became the beginnings of Erica designing a training. [Sharahn Thomas] 13:18:59 module to walk people through that tool. And, like Tony mentioned, we have taken a slower approach with rolling out any tools to the content division. [Sharahn Thomas] 13:19:15 Um, and I will also say our Office of General Counsel the same, you know, those who are at a little bit higher stakes, for very good reasons, it's been a slower walk. But the training has looked like. [Sharahn Thomas] 13:19:28 doing the testing first, knowing exactly what about that tool we're going to utilize, or we think is most practical or germane to that team's work.
And then for us in the content division, our trainer, the editorial trainer, has taken that. She created. [Sharahn Thomas] 13:19:47 Uh, you know, the curriculum to walk people through, based off of the standards guidance that was crafted. So the training follows what's okay and what's not, but then also gives you the broader strokes of the tool itself. [Sharahn Thomas] 13:20:02 And that's how we're gonna hit that. Yeah. [Erica Osher] 13:20:03 Can I just add on? Is that okay? I saw there's a lot of questions in the chat, too, about, like, can we share training materials? And this is definitely something we want, something we've been talking about a lot: how can we kind of support stations in training? How can we do that? It's one of my OKRs, some of my objectives, to be thinking about that and working on that. [Chad Davis] 13:20:05 There we go, yeah, good. Yeah. [Erica Osher] 13:20:24 Um, so very much want to. I think what's tricky about training with AI, specifically when it's tool-specific, is that the tool changes pretty much every time you use it. So, like, we've been doing trainings, and then today, or yesterday, music showed up in the Gemini dropdown. [Erica Osher] 13:20:40 Of tools that you could use. And this is true for all of them. They're constantly being released. So the materials themselves that are more static are kind of the guidance materials, the rules of the road, things like that, prompting best practices, those types of guidance. [Erica Osher] 13:20:56 The types of normal software training that you would do, where you would, like, maybe show screenshots of, here's where you click on the thing. [Erica Osher] 13:21:04 And here's how you do it step by step…
[Erica Osher] 13:21:07 I don't think those are worth creating for a lot of the AI tools, because they will be out of date by the time that you finish them. So I think, like. [Erica Osher] 13:21:17 We do a lot of live demos, because I find that to be much more effective. So you start with kind of standard slides about the rules, and then you move into the live demo, where you're walking through it with people and showing them step by step yourself, in the moment. [Erica Osher] 13:21:31 Um, and teaching them how to think about the tools versus teaching them where the buttons are. Um, and I think that's gonna be a really important thing to think about when you do training here, which is really hard. Um… And then, um, yeah, so that's something that we've been thinking about a lot, but it's very challenging for those reasons, to be honest, and that's true kind of across the board. I don't think that's unique to Gemini at all. [Chad Davis] 13:21:57 And thanks, Tim, for putting the Google News Initiative link in the chat. Appreciate that. Okay, we're going to wrap up. And so for the wrap-up, what I'd love for you to do is tell us, like, what is your main source for staying up on AI news and developments? Just pick one. [Chad Davis] 13:22:16 Uh, and then, um, for any station that is, like, wanting to get serious about AI in the next. [Chad Davis] 13:22:23 30 days, what is the one thing that they could be doing? And it doesn't have to be at the station level; you could answer it from the individual level, too, if you're just, you know, working at a station and you want to get serious. Maybe go in reverse order from how I started. Sharahn, do you want to maybe take a first stab at this? [Sharahn Thomas] 13:22:41 Yes, because I'm going to say my main source is Erica. No, I'm. [Tony Cavin] 13:22:50 That's what I was gonna say. [Chad Davis] 13:22:52 You aren't alone. [Sharahn Thomas] 13:22:54 No, it's great.
You know, we have a small team with AI Labs, that's great, and everybody is, you know, and Erica's a key person in sharing information. But also, in seriousness, you know. [Sharahn Thomas] 13:23:08 NPR has been a member of, like, the EBU. I will say that, you know, we did some work, some research work, um, with the BBC and EBU last summer, and so we do get information, and I find that. [Sharahn Thomas] 13:23:23 for whatever reason, the Europeans and also our friends to the north in Canada have some good information. So when they send stuff about what they're doing, particularly as it pertains to, again, newsrooms, that tends to be, like, very valuable for me to look at in that way. But I will tell you that I spend a lot of time just. [Sharahn Thomas] 13:23:43 just… I was doing this the other day on Sunday, because there's so much in my schedule all the time, so when I just get bits of time, I'm sort of broad-stroking, just research, like, looking at what's out there and what I can come up with, which is not the best place. That's why Erica's my source. I'm sorry. [Sharahn Thomas] 13:23:59 And I'm sorry, I forgot the… what was the other part of the question again? [Chad Davis] 13:24:01 Well, yeah, what's one thing at a station, if you want to answer at that level, or a person working at a station could do to get serious about AI in the next 30 days. [Sharahn Thomas] 13:24:11 Oh, okay. Um… I don't know. I mean, I wish I had it just a bit. It's just the research. I think you just… but I know there's a lot out there, and that's just not a good, easy way of saying it. Um, to get serious about it. [Sharahn Thomas] 13:24:29 I would say… This is the way… I mean, before I got deeply into this work, I just got curious about it, and I don't know where people are, but if it's really at the beginning, and you're just trying to… I think, like, I go back to getting some personal experience with something.
I think, and being willing to pay, even if it's. [Sharahn Thomas] 13:24:50 individually, not the whole station, but just to say, this is the tool, I'm going to pay the little bit just to get a little bit more privacy and a bit of control over, you know, what it's learning and what I'm able to do with it, and just get some experimentation that way, so then you can start to inform. [Sharahn Thomas] 13:25:07 Better thinking, better questions on how you move up. That's what I would say. [Chad Davis] 13:25:11 I think that's a totally legit answer. In Nebraska, and I think Wisconsin, too, we both pay for people to have those pro accounts when they want them. There's not a heavy vetting process. If you want to play, you want to experiment, like, we'll kind of foot the bill for that. [Sharahn Thomas] 13:25:24 Yeah. [Chad Davis] 13:25:28 So just treat that as precedent, if you will. Thank you, Sharahn, for taking those two questions and plowing the road for everybody else. They've all had time to think it through now. Tony, you want to tell us your two answers? [Tony Cavin] 13:25:38 I had time to think it through, but I think Sharahn stole a lot of my answer. I do think, joking aside, it's not just Erica, but most of my colleagues; this is not the main focus of what I do. [Tony Cavin] 13:25:53 At this point, it's one of the things I do. And so I learn about what's going on primarily from colleagues. We have a Slack channel where we share a lot of this information. I don't really feel that I have the time to go spend a lot of time looking. [Tony Cavin] 13:26:09 for stuff. So I'm letting others do sort of the outreach for me. And I'm taking advantage of that. [Tony Cavin] 13:26:16 And I also think, by the time, you know, some of the larger questions. [Tony Cavin] 13:26:20 You see, I'm more interested in them than in the nuts and bolts of how it works and what you can do with it.
And a lot of that is already in the mainstream press, which we're looking at anyway, just as journalists, on a day-to-day basis. Um… I agree with Sharahn in terms of how to get started for somebody who is just starting out. And I think even if you don't have the resources, minimal as they might be. [Tony Cavin] 13:26:46 even playing with the free versions of these things helps you start to understand what they can and cannot do, and it's like anything else. You could sit in eight seminars and listen all day long to what it can do. But until you get the hands-on. [Tony Cavin] 13:27:02 and start playing with it, you won't really feel like you're starting to understand it. And then, if you are unable to do the paid version, you will still have a fairly good sense of what the possibilities are, what the limitations are. So I would argue in favor of that. [Chad Davis] 13:27:19 Erica, bring us home. [Erica Osher] 13:27:20 Yeah, the algorithm certainly helps these days, because, unfortunately, now that Google and all the other sites that I use know that I'm working in AI, they flood me with AI things constantly. So it's more trying to sift through the hype. And then I listen to, like, the Hard Fork podcast by The Times. I listen to, um… [Erica Osher] 13:27:44 Decoder and Vergecast a lot, those are helpful, and, uh, they're good about, like. [Erica Osher] 13:27:49 breaking through the hype a lot, like, they're very, like, kind of cynical about a lot of it, which is helpful. Um… And then there's a few people. LinkedIn is, like, the worst part of this, but on LinkedIn there are some people I follow who are, like, uh, from some of those European folks, um… [Erica Osher] 13:28:05 Some of the other kind of, like… there's this guy, like, Flo, uh, what is his last name? But there's a guy who used to be at the CBC who's, like, an AI influencer kind of now, who I actually really trust. He's very much a journalism person, so finding people like that.
[Erica Osher] 13:28:19 Um, and then, agreed about playing. And then I think also… narrow down your use cases. Don't think about it too big, um, because then it gets too scary. Um, chunking is always the best way to approach any big problem, so start small, whether it's, like, I want to start just by defining the rules, or I want to start by just figuring out this one workflow issue. [Erica Osher] 13:28:39 Um, versus deciding that you're going to figure out everything with AI all at once, because you can't. [Chad Davis] 13:28:44 Right, yeah. Starting with the problem, I think that's what I just took from what you said, and I think that's a great tip to kind of wrap up with here. Thank you. I saw a comment from Reagan in the chat. Public Media Innovators will try and, like, see what we can kind of do to address that. It's a good. [Chad Davis] 13:29:02 Good point. Conferences are a tricky thing these days to try to put together, but we definitely try to cover it through these webinars. There was a whole raft of questions, not from the chat, but that I had, about SEO and AI that we didn't get to today, but we did kind of cover this. [Chad Davis] 13:29:20 In our December webinar with SEO and SEM folks from PBS, so you might check that out. I think Amber's gonna… there you go, drop that into the chat for us. Our next Public Media Innovators webinar is, I think, going to be on March 19th. We're still putting that one together, but, uh, it should be March 19th. [Chad Davis] 13:29:37 at 1:00 PM Eastern. I will just be back from South by Southwest. Amber and I are also going to be presenting at the Game Developers Conference. I think Erica will be back from South by Southwest, too; we're going to hang there. So, um, join us next month, uh, and we'll be announcing that topic soon. And thanks again, everybody who stuck with us to the end, and thanks to our NPR crew. [Chad Davis] 13:29:57 for sticking with us, too. Appreciate it. See y'all. [Erica Osher] 13:29:59 Thank you.
[Sharahn Thomas] 13:30:00 Thank you.