AI, Quantum, and Cybersecurity: Protecting Critical Infrastructure in a Digital World

Episode 73 September 08, 2025 00:57:17
AI, Quantum, and Cybersecurity: Protecting Critical Infrastructure in a Digital World
PrOTect It All
AI, Quantum, and Cybersecurity: Protecting Critical Infrastructure in a Digital World

Sep 08 2025 | 00:57:17

/

Hosted By

Aaron Crow

Show Notes

In this episode, host Aaron Crow is joined by Kathryn Wang, Principal of Public Sector at SandboxAQ, for a wide-ranging and candid conversation about the critical role AI and quantum technology are playing in today's cybersecurity landscape. 

Kathryn and Aaron break down complex concepts like quantum cryptography and the growing risks of deepfakes, data poisoning, and behavioral warfare - all with real-world examples that hit close to home. They dig into why cryptographic resilience is now more urgent than ever, how AI can both strengthen and threaten our defenses, and why your grandma shouldn’t be left in charge of her own data security.

From lessons learned in power plants and national defense to the nuances of protecting everything from nuclear codes to family recipes, this episode dives deep into how we can balance innovation with critical risk management. 

Kathryn shares practical advice on securing the basics, educating your network, and making smart decisions about what truly needs to be connected to AI. Whether you’re an IT, OT, or cybersecurity professional—or just trying to keep ahead of the next cyber threat - this episode will arm you with insights, strategies, and a little bit of much-needed perspective. Tune in for a mix of expert knowledge, humor, and actionable takeaways to help you protect it all.

 

Key Moments: 

 

04:02 "Securing Assets in Post-Quantum Era"

07:44 AI and Cybersecurity Concerns

12:26 "Full-Time Job: Crafting LLM Prompts"

15:28 AI Vulnerabilities Exploited at DEFCON

19:30 AI Data Poisoning Concerns

20:21 AI Vulnerability in Critical Infrastructure

23:45 Deepfake Threats and Cybersecurity Concerns

28:34 Question Everything: Trust, Verify, Repeat

33:20 "Digital Systems' Security Vulnerabilities"

35:12 Digital Awareness for Children

39:10 "Understanding Data Privacy Risks"

43:31 "Leveling Up: VCs Embrace Futurism"

45:16 AI-Powered Personalized Medicine

About the guest : 

Kathryn Wang is a seasoned executive with over 20 years of leadership in the technology and security sectors, specializing in the fusion of cutting-edge innovations and cybersecurity strategies. 

 

She currently serves as the Public Sector Principal at SandboxAQ, where she bridges advancements in post-quantum cryptography (PQC) and data protection with the mission-critical needs of government agencies. Her work focuses on equipping these organizations with a zero-trust approach to securing sensitive systems against the rapidly evolving landscape of cyber threats.

 

During her 16-year tenure at Google and its incubator Area120, Kathryn drove global efforts to develop and implement Secure by Design principles in emerging technologies, including Large Language Models (LLMs) and Generative AI.

 

How to connect Kathryn : 

https://www.linkedin.com/in/kathryn-wang/



Connect With Aaron Crow:

 

Learn more about PrOTect IT All:

 

To be a guest or suggest a guest/episode, please email us at [email protected]

 

Please leave us a review on Apple/Spotify Podcasts:

Apple   - https://podcasts.apple.com/us/podcast/protect-it-all/id1727211124

Spotify - https://open.spotify.com/show/1Vvi0euj3rE8xObK0yvYi4

View Full Transcript

Episode Transcript

[00:00:00] Speaker A: So you tell Gemini summarize my calendar. It'll look through all your calendar invites and, oh, somebody just hid a executable command within the description and ends up. We talk about OT ends up infiltrating into the network and into the smart home, opening the windows, turning off the lights, and turning on the broiler. [00:00:21] Speaker B: You're listening to Protect it all, where Aaron Crow expands the conversation beyond just OT delving into the interconnected worlds of IT and and OT cybersecurity. Get ready for essential strategies and insights. Here's your host, Aaron Crow. Hey, everybody. Thank you for joining me for another episode of the Protected all podcast. I'm super excited to have my friend Kat on the on the line with me today. We're gonna have some fun. It's gonna be a little spicy, which is exciting. And so, Kat, why don't you introduce yourself, tell us who you are and a little bit about your background. [00:00:54] Speaker A: Thank you very much. A A Ron. That's how you pronounce it, right? [00:00:56] Speaker B: That's right. [00:00:59] Speaker A: By the way, before I introduce myself, I have to say this is very exciting to be here, so thank you for having me. And I really had to work when I read this Protect it all podcast title because you've got inner capitalization. This is like very meta. You've got OT and IT and all of that stuff all in one word, brother. Like, mad props on making me work hard for the title. [00:01:24] Speaker B: It's all the little things that are in there. It's everything about it, right? In today's world, it's easy to focus on one or the other, but this is really just around all different things and all the things that we need to do to make sure that we protect the world around us. Right? [00:01:36] Speaker A: I do especially love the call to ot. It is a huge, huge problem in my mind. It's something that nobody really saw coming, and maybe some did, but not enough of us to know to safeguard it for the world of today. So my name is Kat Wang. I am a principal of the public sector division within Sandbox aq. We sit at the strategic intersection between AI and quantum technology. So not quantum computing, but quantum technology. We do cool things, things like quantum simulations, right? For material science or pharmaceuticals to find cures for undruggable diseases. We do quantum navigation in support of DoD and then avionics, aerospace, and then also from a cryptography side post, quantum cryptographic resilience in the cyberspace, which is what I focus a lot on within the public sector. So thanks for having me. [00:02:23] Speaker B: Now tell all the dummies out there, including myself, what the hell all that means, like, what is quantum? What are we doing? Like how are we protecting, like what is this going to mean to me as an individ, as an asset owner? Protecting it? Ot like what? Give us the download, you know, 92nd dummy version of that. [00:02:41] Speaker A: So the way that we understand functions today of a classic computing is very binary, right? Zeros and ones. Think about it as an on, off switch. Either it's on or it's off, or thumbs up, thumbs down. Quantum basically assumes that these two states are not the only states that it can be in. You can actually do a 360 rotation, up, down, et cetera. The, the, the idea is, you know, these, these quantum physics like things can exist in multiple states at one time. And the reason why this is important from a quantum computing standpoint is instead of being a very linear model, like if I want to get from point A to point B moving like up and down, going in a linear path, we can start thinking about it from a more lattice oriented path, right? Like a, you know, a three dimensional cube perspective. And you can, what that allows you to do is start to do computations in rapid, rapid contexts. Now what that means from a cyber perspective is from cryptography, a lot of the ways that we protect our information, we encrypt it. Today, that's going to be at risk if you are not using post quantum, like once quantum computing computers exist. Really if you are not using resilient algorithms that help to prevent quantum computers from decrypting your data, then literally everything that a cyber adversary has stolen for the last 20 years can be decrypted much faster, order of magnitude faster as opposed to like a hundred years, we're talking in months, right? So when that happens, then the entire security layer that we have, the security infrastructure that we have breaks down. And that's, that's very bad. Means that really the only way that you can be secure in a post digital transformation in a post quantum world is with this. [00:04:30] Speaker B: Yep. [00:04:31] Speaker A: Yeah, welcome, welcome back to the 1890s, am I right? So that's what it is in, in not so many words. So that's what I, I help to do in the public sectors is preparing our federal civilian entities, our, our defense entities with the ability to inventory every piece of high value asset that they have. And of course there's going to be, I call cryptographic debt very similar to tech debt. Like as you grow, there's going to be stuff that's plugged into your, your platforms and accessing your network that you don't know about and being able to get visibility into that, see what type of cryptography you're using and if it's vulnerable today, allows you to fix that and then also wrap it with NIST standards, for example, for post quantum world. [00:05:14] Speaker B: Well, and with that, just like in OT and in everything, there's going to be a, you know, prioritization of things because you know, as we know in this, in today's world, especially in the United States, everything is classified like, and there's different levels of classification and there's compartmentalization and there's different, you know, all of that has changed. Right? So we're going to, we're not going to fix everything instantly and we're going to have to understand what are the most important things. Let's make sure that those are protected. You know, the nuclear codes and you know, the things on submarines and all that type of stuff is different than, you know, my recipe that's in my Google Drive that is encrypted. You know, my mom's, you know, chocolate chip cookie recipe is not nearly as critical as, as the, you know, the nuclear code. [00:05:56] Speaker A: So the Crow family mother dough recipe needs to be protected. Okay. When the apocalypse comes and people need to have their like crusty bread, you're going to be thankful that you use post quantum safe algorithms to lock that shit up. [00:06:12] Speaker B: Okay, that's right. [00:06:13] Speaker A: Yes, that's right. I agree with you. What we're ended up talking about though is like a prioritization within a prioritization. Like if you're going to build a house, you're going to have a list of priorities at a macro level. But then each one of those priorities is going to list another list of priorities. And therein lies the rub. This is the reason why our cyber warriors, our cyber defenders are constantly trying to keep up. And it sucks. Like it's so hard, right? And on top of that you, you're fighting for budget, you're fighting for priority and reality. Like if, if this was any other scenario and if leadership and, and the rest of the, the organization was building in mechanisms to account for security and speed, then we would already be. Every time you spend a dollar on something, you put a dollar over here for something else for security to, to, to, to protect that. But there's just too much. So that's part of the reason why when to the grassroots cryptography not only for today's vulnerabilities, but tomorrow's. It's like at some point let's Say we were doing all the right things. Every zero day that existed came from something that we didn't see coming. Right. Many of them we do, but some of them, a lot of them we don't. And it's because there are. The economics are horribly flawed in favor of the adversaries. They have everything to gain and we have everything to lose. They have to win once. We have to defend successfully every time. So that is losing economics. So when intrusion happens and all other cyber defenses fail, your cryptography cannot. Because if they take that treasure out of your castle, the only thing that's keeping them from using it is the lock around it. [00:07:57] Speaker B: Sure, yeah. And you see that now looking at AI and it's hopefully, I'll say hopefully. Hopefully not as, as, as widespread in the, in the public sector, in the, in the government space, in the three loader organizations. But you see, I think there was recently a, a dating app or something that was, that was vibe coded using AI and all these women's profiles that were using this app that was released all. There was no security whatsoever. Whoever developed it, they were not a cyber or they didn't have a cyber lens behind it. The application was cool, had a, it had a fun ui. But all of the people's information, I think mainly women were that were in this app. All of their public, all their personal information that they put into this app. Again, I think it was like a dating thing was released because there was no security around it. And when you're looking at something, you're just assuming, oh, well, this is safe, like it's on the App Store, it's on, you know, there's no reason why I shouldn't be able to put those things in there. Right. So, you know, those are also things that we have to really be considerate around because as AI comes in, it can be a tool to help us to code, to make things function. But something working and something being ready for prime time, ready for release, or ready to be available to be used in the scenarios that we're talking about here. When it comes to national security, when it comes to, you know, my electricity working, when it comes to, you know, getting on a train and making sure that, you know, the safety measures are, are not impacted. Like, we have to be really careful around those types of things. And AI can help us, but it can also hurt us if we're not thinking about cyber as part of the operation and functionality of those systems themselves. [00:09:35] Speaker A: Okay, so I don't even know where to begin with all the gold that you just Dropped. So, okay, let me take it high level for a second. The. I've discovered in doing these types of conversations a lot that there's really three key things that I want anybody who this podcast to take away from them. The first is know what your data is worth and we can touch on that as it relates to that dating profile and any other app that we use, you know, you know, on, on our, on our, in our daily lives. The second is secure the basics. Right. And the third is educate your network. We inherently put all of the responsibility, the onus on the consumer, the user, when we are the act. The, the, the most uneducated segment of them all that seems horribly misbalanced. Like you're telling me, you have to tell, you know, if your grandma decides that she's gonna join a dating app, you're gonna force her to be responsible for the data that she puts online. And those terms and conditions are debilitatingly long for a reason. Mostly for the legal jargon and the dpa, the data protection rules that exist within them. But like this, something's gotta give. So that's why my third pillar is educate your network. AI. You wanna, you wanna poke that bear? [00:10:51] Speaker B: Yeah. [00:10:51] Speaker A: Okay. I have had so much coffee today, I am so ready to poke this. I'm gonna walk right up to it and punch it in the nose. So here's my hot take on AI. It is. I really go between days where, like, my life is so much better with AI in it, and then days where I'm like, oh my God, it's gonna happen. Skynet's coming to take ever. [00:11:16] Speaker B: We're all gonna die. [00:11:18] Speaker A: So it's who. I'll start with some very real examples. And I'll start with the psychology of it and how America is approaching it versus the rest of the world. So while Fortune 500 was focusing on AI to replace the workforce, our cyber adversaries said, cool, we're going to use the same AI to supercharge our workforce. So cyber adversaries didn't get leaner with AI, they got meaner. And we did the complete opposite. Right? It's like showing up to a knife fight with a Roomba and good intentions. It's like, okay, well we have about this here. It's not really AI. Well, when it kind of is, it's like, no, it's not, but, well, it's going to do. No, it didn't. Whatever you thought it was going to do would replace the human being that you just had. It's not doing that. And by the way you haven't applied any zero trust principles to it. So now it is more of a liability than the value that you got out of it. I am being very, very crude and very generalistic right now. So I will say take that with a grain of salt. But that, that is fundamentally, I think the problem here. We did not approach AI with the same creativity and productivity for a force multiplication of our workforce, which is what it really needs to be. We took it and used it as a cost cutting mechanism. Right. My friend Ali King likes to call it corporate ozempic. When you think you're getting leaner, you're actually losing the majority of your muscle mass. [00:12:50] Speaker B: Well, it'd be, it'd be like back in the day when we were going to the moon and the NASA, you know, the NASA engineers, when they started using, you know, calculators instead of slide rules. It's not like we fired engineers because they were faster. We gave them a calculator so they could be better at their job. We didn't look at it as a way to, oh well, I only need two engineers now. I don't need 10 because that guy can do work way faster. So why are we looking at AI any different? Right? I remember being in calculus class and my freshman year of college and I have this big TI92 calculator that's the size of a, a tablet nowadays. I'm just dating myself because this is back in the early 90s, but we were, I was able to use that in my class, in my math class. And you could actually put in a calculus equation in that program and in that calculator and it would give you the answer. But it didn't matter that I knew what the question was and I knew what the answer was, but I still had to show my work to get from A to B. Right. So it helped me to be able to check my work, it helped me to be able to be more confident in my work. But I still had to understand the 59 steps between the question and the answer to be able to show that I can get from A to B. And that's the way I see, that's the way I use AI. It's enhancement. It should be a 10x multiplier of your workforce. Not trying to. Well, I don't need those people anymore or I can have fewer of them because I can just AI Everything like that is so short sighted in all avenues. [00:14:15] Speaker A: And I have, it is a full time job. And this is coming from someone of sound mind and judgment who works in this space. Right. Not some. Another typical user that I would say who kind of dabbles a little bit in AI and has a free version of a Gemini or. And Claude. I spend a considerable amount of time understanding how to write LLM prompts. And what I have found from it is like, these prompts salt the responses. So if you are not careful with how you write in your prompt, they're going to give you the answer they think you want to hear. It's gotten so bad where I actually turn AI off in my search results. I don't want to see what they provide because in many cases I'll ask something very basic, like, does mint pair well with Stone Fruit? This horrible example. But like, something like that. And then AI will say, yes, it does. Okay, sure. And then I'll go. I'll actually go through all the listing results and it gives me a much better, more like, robust, like, overview of the reason why it does or why it doesn't. That's a, That's a very benign situation. There are other times where you ask a question, if you are not careful with how you're leading the question in the prompt, they will. LLMs will put that back into your response. And then when you ask them to validate that with sources, like, the sources are subpar. They would not hold water in any, like, paper if you were going to write on that or just it's blatantly wrong. [00:16:00] Speaker B: Well, and it's. So again, you and I, I'm very similar. I'm. I'm using AI all day long, every day in almost everything that I do, whether it's looking for, I'm doing research on whatever. Right. But I have all the different AIs. Like, I'm using them against each other, right. Intentionally. Like, I'm going to try it on this one and I'm going to put the same prompt in another one and see what comes out. Then I'm going to adjust my prompt and do it again in a different window that doesn't have the history of the one that I just did to see if I get a different outcome. And then I'm taking the outcome from one and then pasting that into the other one and telling it, hey, this came from this other AI. Poke holes in it, see what you can find. Like, because this is what's going to happen and this is where from a cyber perspective and from an overall risk perspective, we need to understand there's all these examples of here lately. There's too many to list. But we've seen you know, grok go off where you know, it went off and it was like creating images of Nazis that are, you know, multiracial and, and then you've got the other side where it's, you know, SharePoint that's, that's or not SharePoint but Copilot within Microsoft that's perusing or previewing all of your emails. And a bad actor realized that they could put in a prompt in a hidden context or hidden text that the AI would then exfiltrate data from the AI. You logged in as you. So it has access to your files, it has access to your email like these are. And the same thing just came out about Google, the Google one that's built into Gmail. Same thing is happening there, right? So we're just seeing these different attack vectors that we never thought about and everybody's like oh well that's really helpful. I've got a Siri on my phone and I wanted to summarize my text messages. Like do you, are you sure about that? [00:17:45] Speaker A: Or maybe they do. Maybe their text messages are super benign. Who knows? [00:17:48] Speaker B: That's right. [00:17:48] Speaker A: I love that you brought up, I love that you brought up the Google Gemini example because I, I actually so much. It's August 29th. So much happens that is reported on now granted like my phone, your phone that's already hyper optimized to be able to show us these things on our, our news feeds. Right. However, during black hat DEFCON a lot of these announcements came out right. August 17Six hackers found a way to utilize Gemini to exploit a hidden malware prompt in the A description of a calendar invite. So you tell Gemini summarize my calendar, it'll look through all your calendar invites and oh, somebody just hid a executable command within the the description and ends up we talk about OT ends up infiltrating into the network and into the smart home, opening the windows, turning off the lights and turning on the broiler. Like talk about an OT Iot I O T like this is, this is a, a absolute shit show. It shouldn't happen. Like we, if we think we can expect or predict any every permutation in which AI is going to operate, we'd be wrong. Right? And then not one day later, ah, hackers found a way to upload a poisoned file into ChatGPT and basically established a lot of connectors like you said before, got access and using your login credentials and was able to initiate action without any user interaction and exfiltrate data. Three days after that August 9th same thing. Jailbreaks and agent flare do a zero day exploit, enable data theft from the AI agents. Like this is agentic AI vulnerability and that was already integrated with cloud and idle IoT systems. So you can imagine the amount of fuckery that could happen from a situation like that. So that's August 5th through August 9th. That's four days and half of the arguably more of the world's hackers one place when that all went down. So it has been 24 days since then. 20 days since the 9th. I shudder to think of all the different things that we are not that we're missing, that we're not accounting for, that's not getting reported on, that's happening in the background continuously that we don't know about and won't figure out until later. [00:20:14] Speaker B: Well, and the frustrating part about most of those things is it's turned on by default. Like it's not like I went into Google and I configured and I said, hey, I'd like it to summarize my emails or I'm prompting and asking it to. It just started doing it automatically, right? And it's doing it in Microsoft, it's doing it in Google Workspace, it's doing it in your Gmail account and it's by default doing those things. And, and in theory they're like, well, it's not chatgpt, so it's not public, it's just in your container, your company's container. But that doesn't matter. Like there's still risks to those things. And we have to be asking these questions differently because you know, what got you here won't get you there. We're dealing with different problems and different attack vectors. We have to look at these problems differently than we have in the past. All these things. And I'm not saying that these things can't be solved and we can't get the value out of this. Like we talked about earlier, the calculator, the, the value add, we can absolutely do that. We just have to be thinking about it. Like you said a minute ago, how will this impact is there? What is the risk? Not if there's a risk. What is the risk from a cyber and a availability and thinking about all those things and what does this bring in? What risk does this bring in unintentionally that I'm not thinking about? That's how zero days happen. That's how all of these types of things happen, is because we weren't thinking about it, we were thinking about the outcome. Meaning we want the best possible Case scenario not what are the off ramps that happen that when it doesn't go as planned, what are the bad things that can happen on that same path? [00:21:44] Speaker A: So you're absolutely right. This podcast is not to poo poo on AI. Or they're required like we need to have AI in our lives to function. I can talk a little bit more about the criticality AI provides from a survivability like a humanity survivability standpoint leader. So that is key. But like we also need to consider what truly needs to be connected to. To AI. Does it truly need to be connected? And, and once we draw the line in the sand of what the risks are, whatever risk is too much risk. Let's, let's analog baby. Let's. Let's not worry about it. The. What really scares me though like of all of the things that AI can and cannot do because the cannot do part is part of the the same risk profile as what it can do is this idea of data poisoning. We talk about the China problem a lot in my line of work. You probably too. And I will say this, like many things about people who understand the, the, the methods, the ttps of the prc. But one thing that is so apparent with all of the typhoons that we're seeing is their ability to play the long game, right? Like we, we have an incredible military force assembled and no it is unmatched with any other nation state in the world. But the question becomes can we carry on? Can we support three different wars at one time with our our resources and our attention severely divided. But also like what are the risks where we no longer can respond and we talk about operational technology and critical infrastructure. But then if a lot of our platforms, and I don't just mean DoD like DoD has a very strong approach. The SecDef army that I just kind of better understood that their plan around AI and AI adoption within US Army I think they're approaching in the right way. But the critical infrastructure that underpins their ability to operate is highly vulnerable. So why if you pick a very strategic location, let's say an Air Force base, it depends on energy or power in order to open certain gates or to launch their planes. They'll just be sitting there unable to mobilize. And the data piece of it. This is why the first thing that I really want people to take away from this is know what your data is worth. Some people say prioritize the data. Certain things are worth protecting and other things have a shelf life. They don't matter. That isn't necessarily true, in my opinion, when it comes to the massive quantities of data that we could start aggregating and then turning that data into information. So a good example of this is, you know, I talked about by a lot or behavioral warfare and cognitive warfare. Behavioral warfare is a fascinating one. What is the value of knowing what the temperature is in an office building? Most people would say probably nothing. But behaviorally, if you were to change that thermostat and do it in a way that people could no longer change it back, a lot of things start to happen. So like psychological studies have shown that you disrupt sleep patterns and normal homeostasis physically, like biologically. The other thing is, is you then disrupt interactions with each other. Like people start to get a little bit more irritated and angry, possibly aggressive. [00:25:28] Speaker B: Right. I. [00:25:29] Speaker A: Right. I know your kid comes in and changes thermostat. Are you going to be happy camper? No. [00:25:33] Speaker B: No. [00:25:35] Speaker A: So all of those things happen. And, and here's the thing like that takes an adversary coming in and changing it all purposefully and then disabling the, the changes. And we all know, and sometimes these military facilities, they are inaccessible or it is very difficult to get any technician out there to fix that. That is in a scenario where the adversary actually, you know, does something. We already have situations where these, these dibs, these military bases have problems with thermostats because they weren't installed properly and they already are sitting in a hot box and being very uncomfortable with these types of working situations. Right. So that's one example of disabling sensors or executing behavioral warfare. But what if cyber advertisers decided to actually change the data itself? Like change, like slowly salt or manipulate the sensor data within, I don't know, something like critical infrastructure and start to disrupt that. What happens? And let's look at imaging data. Turns out that there is only one pixel difference in some images that will turn AI from recognizing a missile launcher to a civilian truck. It really calls to question our entire concept of truth. Right. So there's, there's, there's so many examples where data contamination and poisoning of AI data, the stuff that's already trained on, is a real risk. And that would be a classic PRC move in my opinion, for the long game. [00:27:12] Speaker B: Well, and, and we know firsthand now, you know, how do we. [00:27:17] Speaker A: The, the. [00:27:17] Speaker B: It's all security in depth, right? So we look at all these critical places you're talking about, whether it's a power plan or it's a base or whatever. You know, it's so easy now using AI to create frauds, fakes, deep fakes, voice, video. All these things are getting so good. We know we're going to have to. And that's not even talking the quantum thing. Like we haven't even stepped into that realm yet. We're just talking about things where, you know, you can literally now especially think about. So I've had 72 episodes, I believe, of this podcast on, on the Internet. That's video and audio. I talk a lot. I'm a professional Mike. Good quality mic, right? So they have high quality versions of my voice there, there. It would not take very long to, to create a clone or a fake, a deep fake of me right there. I know firsthand celebrities that are getting deep fakes and, and it's not hard to do, right? So imagine that at scale or, you know, you think about a penetration test or a physical, somebody, a bad actor trying to get access to something they can have, they can represent your boss and say, hey Cat, Aaron's going to be at this facility tomorrow. Make sure that you put him on the visitor list. So they don't even have to hack the visitor list. They just have to social engineer it so that you put them on the list and then they're let in the front door, right? There's, there's an infinite number of these scenarios, but this is the type of thing as a, as a, a defender, as a war fighter, as a, as a cyber protector, any of these types of things, we have to start thinking about these things. And unfortunately, what I see is so many of these organizations, these, these people, these, these groups, these companies, they don't have enough money, time or effort to really even focus on it. You walk in and they've got a firewall with an any, any rule. So it doesn't matter if Quantum's coming. They already have the front door open. They, you don't need Quantum to get in there. You just walk in. [00:29:10] Speaker A: Quantum has just as much like the thing is, we gotta stop, we gotta stop thinking about things as positive or negative. They are not right? They are neither of those things. And the Quantum on top, like the, the fuel of Quantum on top of the fire that already is AI is what I call the AI Quantum collision. It's like feeding gremlins after midnight. It's like, it's so, so bad. And what you're saying from a deep fake perspective, this is happening today. Like, I don't know if anybody heard about the crypto, the deep fake crypto heist in Hong Kong in 2025, earlier this year Hong Kong finance worker joined a seemingly ordinary call. Right? Zoom call. Everyone looked familiar. Voices all sounded right. Turns out that every single participant on the the video call was a deep fake. It was generated using stolen audio clips and AI. So it ended up resulting in an 18 and a half, I think, million dollar crypto high siphoned out of the company's accounts, legitimately approved by someone who thought that they were getting the authorization to do so by their executive leaders. Right now, that was an instance where it was all audio. I think the cameras were off in that case. So every time I get on a video call with someone, I can't see their face. I'm very skeptical, like, who. Who are you? But you know what? In real, it turns out it doesn't even matter because in 2024, I was attending in October, so almost a year ago now, I was attending the Singaporean International Cyber Week conference. And I walked up to a booth and they said, do you want to be part of a little demonstration, ma'? Am? I guess. Sure. Actually, Aaron, do you have that photo? Um, do we have. Do we. [00:30:53] Speaker B: Yes. [00:30:53] Speaker A: Are we able to bring it up? [00:30:54] Speaker B: Yeah, I'll bring it up. Keep talking. I'll bring it up. [00:30:56] Speaker A: Okay. So when that happened, this was a continuous like monitoring kanman platform that allowed you to, at the it, at the soc level, make sure that every connection and interaction that you were having was truly authentic, given the amount of deep fakes that are occurring for social engineering and other types of cyber crime. So they had a toggle. They said, who, who do you, who do you want to be? Well, there's like four men and then one woman. The one woman use case was Taylor Swift. And for today's purposes, since she just didn't engage, I thought this was a good pop culture, you know, homage to her. Now that she's getting, she's engaged. They, in real time, took a typical camera that was on top of a. Wasn't even a laptop. It was a camera that was embedded into a smart tv, pointed it at my face and had me talk for five minutes while they superimposed Taylor Swift's entire face onto mine. And it looked exactly the same and the mouth matched up perfectly. This is what I looked at like that day and this is what they did when they superimposed her face onto my body. So if you go to the next one, I think there's another one that shows you that the whole of the, the, the setup that they had going there, but you could be Keanu Reeves, you could have been, you Know, a former president, like they had a lot of different options. It's a lot, it's insane that you can just have these done and your tablet can do this today. Right. You can have those deep fake apps on your phone, as I think on your, on your iPad. And children play with this all the time. Yep. [00:32:42] Speaker B: And it's not, it's not super complex, but they're able to do it and like you said, in near real time. So this is the other one showing the actual system that they had there. [00:32:52] Speaker A: Yep. So that's the, the, the entire cohort standing behind me. That's me standing right there. And then this is the, the face that they cut out just to be able to show. So this, this technology, what they're, they were saying that it could do is essentially detect if it was a deep fake that was happening on a coalition. So this is a year ago, this is in Singapore. This is another reason why, like having global perspective and understanding how other governments are approaching quantum AI supply chain, all of these key threat vectors within cyber right now is so important because understanding how other people are approaching it will give us inspiration and maybe, you know, incentive to follow suit and prioritize some of these things in kind. [00:33:39] Speaker B: Yeah, I mean, it's terrifying to think about that right as you can, literally as you're standing there, put your, your body, you're standing there, remove everything, put your face over it, it moves to your mouth, all that type of stuff in, in near real time. And again, this was a year ago and we know how fast things are changing now. So if they could do that a year ago, what could they do today? Make it even more realistic. So these are, these are all the types of things. And this is not supposed to be a, I don't think either of us are trying to say, hey, you should be terrified of life. You should have a healthy fear and questioning attitude. This is something I've been teaching and as I build up teams, I've always done this all the way back to, you know, beginning of my cyber career of have a questioning attitude. Don't take it at face value just because the expert, I'm doing air quotes. If you're just listening, the expert says that it's a certain way. Ask for proof, trust, but verify. Like, ask for, how did you get to that thing? I want to see it myself. I want to understand how this thing works. I want to understand. Have you considered all the risks, like, or are they just saying, well, it worked? You know, 99 times out of 100, it was fine. Okay, what about that, that other time? What. What did it do then? Like, those are the things that we need to think about because nothing is perfect and there's always a risk. Risk is never zero, and it never will be zero. So we need to think about all of these big picture problems and think about that as we're solving. So, you know, you look at ot, you look at critical infrastructure, you look at DOD and a lot of these places, sometimes we just need to keep it simple. Like, do we need AI to control the thermostat? Do we need to have a smart thermostat? Like, there's a benefit to it for sure, but there's also a risk. So we just need to ask the question. It doesn't mean the answer is no. It just means are we understanding what the risks are and what the reward is? And is that risk worth the reward of automating, of putting the thing online? Because, you know, you can have your, your, your refrigerator be connected to the Internet. Okay? Do you really need it to be done? Like, mine are not. Not because I'm like a weirdo. It's just because I have refrigerator that it works. And I just don't see a use case where I need to have my freaking fridge connected to the Internet telling me how much milk I have or don't have. Like, I have three kids, they're going to tell me themselves. I don't need to connect to the Internet for that to happen. Right. [00:35:58] Speaker A: Security. The basics, too. Like, at some point we're going to have to buckle and say, all right, fine, God damn it, fine. If you really need to have this connected to the Internet or have AI injected into this, forcefully injected into it, then secure the basics. Like, if you cannot control whether or not this happens. I get it. Just make sure that you monitor everything that's coming through. From a cryptography, like a data asset standpoint for data resilience. Because, like, everything comes back to data. I also want to say economic security is national security. Data is not just like a digital clone of our identities. It's not just our credit card numbers, our bank account statements, or, you know, our daily habits. It is intellectual property, like the amount, the speed at which certain innovations can happen. Now just by hitting the market. We saw this between Deep Seek and Nvidia. Like, the speed at which a lot of these things will catch up, especially from other states, nations in relation to the US is already shortening substantially. Substantially. Why should I, as a nation state or cyber adversary, invest in my own IP when I could Go out and steal it from someone else. So secure the basics. It's, it's a, it's a huge priority for any nation that is developing their own innovation. [00:37:30] Speaker B: It's. It's one thing. So a lot of my career I've been spent in power utility and, and I've supported every kind of power generation, substation, curriculum, manufacturing, anything critical infrastructure I've been part of. Right. And one thing I really loved, and at the time, I didn't understand it. When I first got into the industry, you go to a nuclear power plant, like I supported, I was, I owned the, the ot cybersecurity and the o. The, the networking and really the, the. The functionality of the plant from that perspective, not how the plant ran, but you know, how do we get data in and out? How does all that kind of stuff. Right. And in a, and you walk into a power plant, especially a nuclear power plant, it looks like it's from the 50s. Like the panels look like what you saw on, you know, when we launched a man to the moon, apparently. And in the 60s, right. [00:38:19] Speaker A: A lot of ground today, Aaron. A lot of ground. [00:38:23] Speaker B: But, you know. Exactly. Conspiracies, right. So in the 60s, like. So a lot of. They have, you know, primary, secondary and tertiary systems. They have. Their primary system is analog, and it's always going to be analog. That's what they're licensed on. Right. So they also have digital stuff in there, but it's only in a secondary or tertiary setup. Right. So if all the other things fail. So like what you talked about, how can I display this information and, and, and fake the numbers? Right? That's, that's what stuxnet was, right? [00:38:54] Speaker A: It was. [00:38:54] Speaker B: Stuxnet was the control room, the operators, everything looked fine, but in the actual outside, it was spinning out of control because it wanted to cause damage. Right. But everything on the screens that the operators were seeing was green. It was thumbs up. Everything's running as expected. So it was saying nominal, and it was actually experiencing not nominal. So it was adjusting those numbers. But when you have, when you design a system like what we have in nuclear, and obviously you can't design tertiary systems and everything because it's expensive, it's complex, etc, but in our critical environments, we need to think about that because if, if the digital system, it's very easy to manipulate the numbers that are coming in, but when it is hardwired and, and there's a, there's a physical connection to it, the position on a valve, it's really hard to fake that. Right? Almost not impossible, but almost impossible. Right. You have to have physical access and you have to physically go change set points and you know, re. Re. You know, calibrate things so that the calibration is off. Like, there's a lot of steps that you could do that in a, in a power plant or any place that has a lot of those types of sensors. It would take a huge effort and you would have to incrementally do it so nobody noticed it. Like it would. It would be a big effort. But when I digitize all of this and I could do it with AI or a nation state attacker can just go do it with a, with a malware, it becomes a lot easier. So all this to say we need to think about these problems, these big problems a little differently and prioritize the functionality of what we're protecting. And is it again, going back to my, you know, chocolate chip cookie recipe or is it, you know, the nuclear football? Like those are two different scenarios we should approach. I shouldn't spend the same amount of effort or money or resources to protect one as the other. It's the same as with. I can have two PLCs. They have the same risks to my environment. One is controlling a turbine, one is controlling the ice machine in the break room. They have the same risk, but they're, they're not the same priority or impact to my business. So although people would get pissed off if the ice machine stops working, it's not going to shut my business down. [00:40:55] Speaker A: Right. Right. Do you ever watch, I'm. Did you ever watch the movie Live Free or Die Hard? [00:41:01] Speaker B: Yes. [00:41:04] Speaker A: It is shocking how much pop culture these days are echoing. I mean, there's certain things that are just wildly off entertainment purposes, like Mission Impossible, but the, the most recent one, Dead Reckoning. I almost said Red Dead. Red. Red. Dead Reckon. [00:41:20] Speaker B: Red Dead Redemption. [00:41:22] Speaker A: Redemption. That's the one. But there are certain things about that that, at least in theory, are very real and very big concerns. Right. Although I feel like it is very derivative of Skynet and Terminator. Either way, I do, I do think that some level of awareness, this is where the third pillar, educate your network. There's some very practical things that you can do and it's really hard to know where to start because on the one hand, the children. The children are on these devices, they do not understand. They're vulnerable demographic, they do not understand any of the risks that are coming in. And we do not introduce these concepts to them until, in my opinion, way too late. Like even the concept of media literacy we talked about cognitive warfare too. And informational warfare is another form. Behavioral, informational, cognitive. Like, these are all things that kids need to be aware of. And I hate that we're putting it on them, but they do need to know. But that's one thing that I think we need to invest more in. And anybody watching this, you are now, you are now part of this fight. If you care, please make sure that you tell your children or you tell your immediate, like, network. The very basic foundations of having two factor authentication, having strong pass phrases, not reusing passwords, really making sure when you download something that you read the reviews or determine whether or not you truly need that does whatever you're thinking about modernizing from a digital standpoint, truly need to be modernized. Don't have a TV in your bedroom. Don't put Alexa in your bedroom. Like, they're always listening. The, the education piece of it is really critical. I have, I have people over the age of 65 constantly contacting me. Like, Kat, is this fishing? Like, I don't know, but unless you absolutely need to click on it, don't, don't. [00:43:29] Speaker B: And, and they've gotten, you know, diving into that really quick. I've talked about this a few times, but, you know, I, I've been doing this for 30 years. You know, I, I'm, I see it firsthand. I'm building fishing, you know, scenarios, tabletop exercises. And I almost got hit this last year. I got a phone call and a voicemail, and it was from the Travis County Sheriff's Department or Williamson County Sheriff's Department, where I live. And it was a. Sounded like a good old Texas boy. I called him back. I had a conversation with him. The phone number that came up was from the, the police department because I, I checked all of that and I did my due diligence before I even talked to him. I called that number. They had somehow gotten that number and, and, and spoofed it. And, you know, they, they had this elaborate scheme to, you know, try to get money. And it, it had me for. Because again, my spidey senses are going off from the beginning. But they didn't push too hard in the beginning. They were like, hey, you have to show up at the, at the, at the county courthouse. You have to take care of this or there'll be an arrest warrant issued and we're going to have to come after you. Like, okay, what do I do? Well, you have to go to the courthouse, like, or go to the, the sheriff's department, like, okay, well, that's Reasonable. A hacker is not going to send me to. Or a bad actor is not going to send me to the sheriff's department. They're not. It's very unlikely the sheriff's office is going to be the one doing this. Right. So I go through the process like, yeah, but you have to show up with, with money because you have to pay it while you're there. And if you don't, we'd have to take you into custody. But if you do, then we'll just take it all care of it. Right. And it was all because they say that I, I had a, a summons for jury duty, which I did, but I filled out the form and, and got excused from. From jury duty. I think I was traveling that week or something. So they knew enough that I had a jury duty summons and that I was and I did not go. Well, I guess they probably just assumed I didn't go, or they were trying, hoping that I didn't go. Anyways, long story short, they got me to the point where I took money out of my bank account, went to an ATM and got cash out. And it wasn't until the next step where, because again, my, my spotty sensors are going off the entire time, but they just, they didn't push too hard. And then nothing was out of orange because I was going to take this money and go to the sheriff's office. So there was, there was no risk. And then when I took the money out, they're like, okay, now go to the CVS and buy a gift. Gift card. And I'm like, you're kidding me. No, I'm not doing that. Idiot. [00:45:54] Speaker A: Wow. Okay, so I have a. [00:45:57] Speaker B: But I see this all the time, right? And, and I still almost got hit by it because I, they hadn't. They. They were smart enough to not push it in the beginning. So anyways, it's just going to say everybody, nobody's against this. It can happen to anyone. We have to be on guard at all times. [00:46:13] Speaker A: So a little confession. That was me. Sorry. No, it was. I know. So this goes back to my exact point about what data is protecting you. You protect all these things that you think are high value, and then you don't protect other things that you don't like. The fact that they knew you had a jury summons makes me believe that they infiltrated some kind of local sheriff's department or, or a judiciary system like summons database, because they know that they have your name, they have your address, they have your county that you're. You're in. They Know when you were supposed to show up and they have like they everything else is publicly accessible information. They could see what is the nearest sheriff's office which is near cbs. Right. There's probably only a few different banking institutions, including local credit unions that you're probably part of. Like the amount of social engineering that kind of comes together. Once you have that piece of information, people will be like, well, who really cares about jury summons? This is why they will be find any opportunity to use that data to their advantage. And this is why we have to know what our data data is worth. [00:47:21] Speaker B: Absolutely. It's. And it just goes to show every. So you know, I had, when I worked in the power industry, I had security clearance because I had to be able to talk with all the government agencies and give us intel on things. I was also badged at a nuclear facility which has its own background process. And both of those databases were hacked by China. As we know. This was, you know, years ago. So all, everybody that had security clearance or was going after security clearance, they have all of our information. Like the funny thing is, is with the nuclear side, not only did they do a, you know, government level, you know, top secret or security clearance, you know, background check, but they also had a. You had to go through a psych eval, you had to have it. You answered a 500 question questionnaire and then had to have a one hour interview with a, with a psych psychiatrist to make sure that you weren't crazy. I guess somehow I passed. I don't know how, but you know, that happened. I don't know, I must have tricked the system. I sat on attack or something, I'm not sure. But you know, I was able, you know, so they have not only my personal information but all of my psych eval. Like all that stuff had to have been in there. Which means now what do they not know about me? Like they probably know more about me than my wife knows about me after 17 years. So it's just, it's, it's terrifying. And that to be said, assume they know. So how can I protect against that knowing that they know the answers, Knowing that they know my Social Security number, where I live, where I drive, what cars I have, that I went to jury duty or didn't assume all of those things and then look at every situation or when you're designing a system or you're trying to defend the system, assuming the worst case scenario, not the best case scenario. [00:49:05] Speaker A: Yeah, agreed. [00:49:06] Speaker B: Awesome. So 10, the next five to 10 years I asked this question. Everybody next five to 10 years. What's one thing come up over and we've already talked about a lot of this. What's one thing coming up over the horizon? You see that's, that's terrifying and maybe one thing that's exciting and you know, it could be less than five or ten years like Skynet's going to take over next year. You know. What's your predictions? [00:49:25] Speaker A: I'll give you one answer that applies to both what excites what our greatest challenge or threat is also possibly our greatest opportunity. And all it entirely depends on how we as a, I don't know, a global as as far as humanity goes approaches the situation. There is a global population decline. Yeah, a massive one. Anybody who hasn't read Peter Zion's book the End of the World is just the beginning. It's a real pick me up. But it's very important. It's very important. And they go through the systematic dismantling of every major industry that underpins humanity today because of this global population decline. In in short the way the birth rates have gone historically until now have been increasing over time. And anthropologically you do see this inside of societies like a certain level the birth rates and the population will peter out. They will reach a kind of like a, a tipping point. And now we're seeing that come down. And what happens then when it comes down is a lot of the people that are around do certain jobs are just simply not going to be there anymore. Economies are going to buckle and certain countries will be in severe trouble there. There are some people out there that are not too quote unquote worried about the China problem because of this particular point is the Chinese along with Russia and unfortunately some cases Japan. Other societies where the population decline is so severe and so significant that their country simply won't be able to sustain with the amount of people that they have anymore. So how does that turn into both our biggest opportunity and our biggest challenge? That's where I want to see AI. I want to see AI making smarter predictions about how to harvest food so that we are not so dependent on a workforce to do a lot of that harvesting. Some of that is already in place. But we need to see that increase tenfold. We need to see more in the way of efficiencies for transportation. And I'm talking about massive transportation on the shipping containers that get to and from and just the general production of goods in general, a lot of automation happening in the manufacturing space, in the agriculture space. That needs to happen. We need to then as you know, species learn how to level up in every sense of the word. Like think about all of the most interesting, fun, a utopian aspects of the sci fi movies like Next Generation, Star Trek, Next Generation, great example. Right. Where the concept of money doesn't really exist anymore. We're back on academic achievement, a contribution to the, the, the exploration and, and the growth of the, the human population outside of this planet, that type of stuff. We see this and in some cases with Occupy Mars, I want to see people skilling up, utilizing AI to think more strategically about big world problems that have scale. We do this at our company today. So the ability to do quantum simulations to find cures for undruggable diseases. This will save lives in ways that we cannot do right now. Because the pharmaceutical industry is very expensive with a very high failure rate. Right. You're not paying for your meds be being so expensive because of the drugs that worked. You're paying for the meds that didn't work. So if we can reduce that and we keep the small dwindling population that we have globally healthier and living longer with a better quality of life, then that, that helps to reduce, you know, the, the impact that we see at a larger scale. So we need to stop thinking about AI is like it's going to help us create efficiency in the workforce. No, we need to turn every single human being that can read and write into a scientist, into a strategist, into someone who could solve big problems in order for us to survive as a, as a species. [00:53:55] Speaker B: I love it. And I see one of the big values in that, in that space too. Right. You know, you think about, I was just listening to, I listen to a lot of podcasts, you know, just listen to podcast about health and longevity and you know, I'm not so short in the tooth anymore. You know, I got the gray hairs, all the things. Right. And, but if you think about the way medicine has been done and there's a reason why they call it a practice, you know, we, we've really had to build medicine around the masses even though it doesn't specifically work for me. Because you and I may have the same problem, but something may work for you and not work for me or vice versa. Right. Imagine the fact if we could feed all that data personalized into an AI and build a custom solution for me that would, would benefit and help me to get to the same place to fix my ailments or whatever that may look like and you know, fix my hormone levels and I mean, you're not Six foot, you know, two man with a red beard. Like I am like. So obviously there, there are differences between the two of us. So obviously some things are going to work for me enough for you. So if we can imagine that at a larger scale, like those are things, I'm excited about what tomorrow looks like. Right. The ability to use that for good. But obviously we have to be careful because then it could also be bad and that it does something, it says it's good, we inject it, and then I fall over dead. Right. Some people may like that. But that's for another time. [00:55:22] Speaker A: Out of my watch, Aaron. Not on my watch, no. And you're 100% right. Like bringing it back to protect it all cyber underpins that. This is how we protect it from being fat. And we're never going to be perfect, but just lock that shit up, get that, get those crown jewels in a vault and make sure that that vault is secure. I've seen, I've seen people hack into some of the most secure vaults in the world with a wet paper towel and a clothesline. [00:55:49] Speaker B: Yep. [00:55:50] Speaker A: Clothes hanger. Right. Like, this is, this is the world that we live in. You don't have to be high tech to be a, you know, a hacker. So you're absolutely right. There's, there's great, great opportunities ahead. Let's just make sure the security keeps up with the speed. [00:56:06] Speaker B: Absolutely. Awesome. Hey, thank you for your time. What, what is a call to action? How can people get more information about what you guys do? Maybe dive into more about Quantum and, and all the cool stuff that you guys are doing and working on? [00:56:16] Speaker A: Yeah, so I am very active on LinkedIn. I, I try to. Shockingly, I, I feel find myself on panels, on stage or on podcasts twice a week these days. So it's a great way to kind of keep up on the, the events that I'll be attending. If you want to link up, send me notes and, and messages. I'm happy to connect and yeah, just thank you everybody for being part of this and caring enough to want to invest in the security of your future. [00:56:44] Speaker B: Yeah, absolutely. Thank you so much, Kat. I'm sure we will see each other again. And until then, have a good weekend. [00:56:50] Speaker A: Thanks, Aaron. [00:56:52] Speaker B: Thanks for joining us on Protect it all, where we explore the crossroads of IT and OT cyber security. Remember to subscribe wherever you get your podcasts to stay ahead in this ever evolving field. Until next time, Sam.

Other Episodes

Episode 13

June 24, 2024 01:00:58
Episode Cover

Unlocking the Future: Hands-On Learning and AI's Role in Cybersecurity Education with Philip Huff

Welcome to Episode 13 of Protect It All! This episode features Philip Huff, a professor at UA Little Rock and a cybersecurity expert. He...

Listen

Episode 68

July 28, 2025 00:53:50
Episode Cover

Lessons Learned in OT Security: Regulation, Collaboration, and the Rise of AI Threats with Kam Chumley-Soltani

In this episode, host Aaron Crow is joined by Kam Chumley-Soltani, Director of OT Security at Armis, for a candid conversation that dives into...

Listen

Episode 6

March 05, 2024 00:51:48
Episode Cover

The Future of AI: Determinism, Security, and Beyond

Sevak Avakians, CEO of Intelligent Artifacts, discusses the limitations of neural networks and the need for a new approach to artificial intelligence. He introduces...

Listen