Episode Transcript
[00:00:00] Speaker A: You're listening to Protect It All, where Aaron Crow expands the conversation beyond just OT, delving into the interconnected worlds of IT and OT cybersecurity.
Get ready for essential strategies and insights.
Here's your host, Aaron Crow.
Awesome. Thank you for joining the podcast Protect It All today. This is Cybersecurity Awareness Month, so we wanted to dive into some basic topics around cyber hygiene and some of the things that are in the zeitgeist of cybersecurity for this month. So today I've got with me Mike and Clint. Mike, why don't you introduce yourself, tell us who you are, and then we'll kick it over to Clint.
[00:00:44] Speaker B: Yeah, thanks, Aaron. Mike Welch, managing director at Morgan Franklin Cyber, focusing on utility, power generation, oil, water, and gas critical infrastructure. Looking forward to this conversation with both of you guys today.
[00:00:58] Speaker A: Awesome. Thank you, Clint.
[00:01:00] Speaker C: I'm Clint Bodungen. I'm a director at Morgan Franklin Cyber, Director of Cybersecurity Innovation. I'm also the founder of ThreatGEN, and I specialize in OT as well as, now, AI.
[00:01:16] Speaker A: AI. Everybody loves talking about AI and all the things. And I mean, heck, Clint and I have even had, you know, some fun conversations around the use cases, and it's coming, right? So honestly, let's just start there. How are we using, and how can we see, the use of AI in both the enterprise and, more so, on the OT and critical infrastructure side? How do you see that being used, Clint, from the perspective of people that maybe have a limited-to-no OT program, maybe they don't have anything and they're just trying to get started, how can it help them? And then maybe the more advanced folks that have a program, they've got the 80/20 rule and they've already done the 80%, but they're trying to get that last 20%. How can it help those guys? Because I think it's a different use case depending on where you are on that road map.
[00:02:07] Speaker C: Yeah, it is. And first of all, I'll clarify that, you know, at this stage in the game, I don't think anybody should be using AI to make final decisions, and especially not to make any changes in any type of configuration, firewalls or anything like that. Right. Especially OT. But I think there are two use cases you'll see. Starting from the basics, I think generative AI, when used properly and securely and privately, to where you're not spewing your private data out to the cloud...
[00:02:42] Speaker A: That's key there. We need to make sure that we... Exactly. You know, you don't copy and paste into ChatGPT online, right?
[00:02:49] Speaker C: We're not talking about ChatGPT here for private data. But maybe we'll talk later about how you can do it locally.
But for the basic use case, I do think that generative AI, and we're talking specifically here about generative AI, not the traditional machine learning kind of thing, that's the hot topic right now. So generative AI can really help in your basic process with helping you analyze data, helping you understand what decisions to make, right? It can look at data.
Generative AI is really kind of a superhuman when it comes to data analysis. That's its superpower, right? And that's how it enables basic users: I have all this data, and to your point earlier, kind of talking in the green room, my data is key. If you don't have the data, then AI will make its own data up, and you don't want that, right? So having the proper data is going to help you analyze the data, assets, whatever, and it's going to help you make informed decisions. Where do I need to put my blocking and tackling? What do I need to tackle next, what vulnerabilities do I need to tackle, that sort of thing. From an advanced perspective, it's really the same thing in terms of the use case, right? It helps SOC analysts, it helps those more advanced, more technical folks analyze data. It can help supercharge your SOC program in analyzing data, analyzing threats, analyzing threat intelligence. Because that's what it does. That's it: it analyzes data much like a human does, but it can do it quicker, faster, more efficiently, and in many cases, not all cases, more accurately. And then there's a case in the middle where generative AI can help your GRC program. It can help you with that heavy lifting that everybody hates: creating your documentation. It can help you create your cybersecurity policies, analyze your cybersecurity policies, help you create your incident response plan and all of that, because that's at least your first draft, right? It'll help give you your first draft. And then kind of the edge cases are, it can help you with your cybersecurity training, your cybersecurity awareness and incident response training, like tabletops.
And I'll just touch on it, but tabletops are a pain point for a lot of people, and generative AI can help you make your tabletops more approachable.
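Clint's point about keeping private data out of the cloud can be made concrete by running a model locally. A minimal sketch, assuming a local Ollama server is listening on its default port with a model already pulled; the endpoint, model name, and prompt are illustrative assumptions, not anything the speakers specified:

```python
import json
import urllib.request

# Endpoint and model name are assumptions; adjust for your local setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"

def build_payload(prompt: str) -> dict:
    """Build a request for a locally hosted model; no data leaves the machine."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize the risks of exposing an OT HMI to the internet."))
```

Because the request never leaves localhost, any sensitive documents pasted into the prompt stay on your own hardware.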
[00:05:14] Speaker B: So Clint, real quick there. Sorry, Aaron. When it comes to the data and utilizing the gen AI, how do we determine the accuracy, or make sure it's not biased in any one way, or that an organization that's starting that journey isn't violating its own policies in a sense? Because that's where we see a lot of startups or new companies that are trying to jump on this technology, but they don't understand the risks that may be there with that cut and paste or whatever.
[00:05:44] Speaker C: Right? Yeah. So, number one, first and foremost, the key rule is to always keep the human in the loop. Making sure, you know, trust but verify, or don't trust and verify, right? So keep the human in the loop and validate that data. It also goes back to the training process. Make sure that you are feeding the training and the fine-tuning process, and training and fine-tuning are different things, but fine-tuning, as most people are going to do, or providing that retrieval-augmented generation (RAG) library of documents for it to use. So providing the necessary data for fine-tuning and in your RAG system is going to help maintain that accuracy. So feeding it the proper documents, keeping the human in the loop. And now you can use an agentic process where it double-checks itself. There's a process where you feed it the information, but before it gives the answer back to you, it feeds that data to another agent and says, hey, fact-check this for me. And then it'll do things like go out to the Internet for fact-checking, or it'll fact-check against the documents that you put in the document database. These are the typical techniques for minimizing hallucinations, if you will.
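The agentic double-check Clint describes, where one model's draft is handed to a second agent for fact-checking before the user sees it, can be sketched as a small orchestration loop. Here `generate` and `fact_check` are hypothetical stand-ins for calls to your (ideally local) model; the deterministic toy stubs at the bottom exist only to make the pattern runnable:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Review:
    approved: bool
    notes: str

def double_checked_answer(
    question: str,
    generate: Callable[[str], str],
    fact_check: Callable[[str, str], Review],
    max_rounds: int = 2,
) -> Tuple[str, Review]:
    """Draft an answer, then pass it to a second 'agent' for verification
    before returning it. If the checker rejects it, regenerate with the
    checker's notes appended, up to max_rounds attempts. Anything still
    flagged should go to a person: the human stays in the loop."""
    prompt = question
    answer = generate(prompt)
    review = fact_check(question, answer)
    for _ in range(max_rounds - 1):
        if review.approved:
            break
        # Feed the checker's objections back into the next draft.
        prompt = f"{question}\n\nReviewer notes: {review.notes}"
        answer = generate(prompt)
        review = fact_check(question, answer)
    return answer, review

# Toy stubs standing in for real model calls:
def toy_generate(prompt: str) -> str:
    # Pretend the model only grounds its answer once it sees reviewer feedback.
    return "VERIFIED: sourced answer" if "Reviewer notes" in prompt else "unsourced draft"

def toy_check(question: str, answer: str) -> Review:
    ok = answer.startswith("VERIFIED")
    return Review(approved=ok, notes="" if ok else "cite the source documents")
```

In production, the same loop might wrap a RAG query plus a web-search fact-checker; the structure, not the stubs, is the point.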
[00:06:57] Speaker B: Excellent.
[00:06:58] Speaker A: Yeah, I mean, we've all heard the stories of, you know, lawyers that are using it, and then it comes back and it has all these citations, and then you actually look up the cases that it cited and they don't exist. It just created them. I was actually listening to a podcast yesterday where they were talking about voice and AI voice, but ultimately, underneath it, basically they had two AIs talking to each other and they were making things up. They were having conflict and resolving conflict, but a lot of the stuff was just made up.
[00:07:30] Speaker C: Right?
[00:07:31] Speaker A: They were calling call centers, and they would ask for like a credit card number or your Social Security number, and it was just making numbers up, like 9998-765432-15555. Right. And the customer service person was like, that's not a card, that's not a real number. And it had no idea, because it was just throwing out numbers. It didn't have any of that fact-checking there. It sounded real, it gave numbers because it knew it had to give a number, but it didn't actually have a Discover card or a MasterCard or whatever. It wasn't an actual account number.
[00:08:05] Speaker C: Yeah. And due to the way that generative AI works, that's its nature. If it doesn't have the hard data, it's going to make up data, because it just predicts what that data should be.
[00:08:18] Speaker A: Sure.
[00:08:18] Speaker C: And that's why, you know, you shouldn't be using AI in any official capacity if you don't understand how to use it and apply those techniques to minimize hallucinations. Minimizing, and for the most part almost completely eliminating, hallucinations is possible, but you have to know what you're doing before you use it.
[00:08:40] Speaker A: Yeah. And that goes back to the conversation, like you said, we were having in the green room, where I really struggle to see how much you can use AI beyond advising on, hey, you're missing this, you're missing that. Because Mike and I have walked into a hundred of these places, and they don't have any idea, from a data perspective, what assets they have, which assets are critical, how their environment looks, not at the level that you would need to be able to give that. So you could feed stuff into AI, and they wouldn't know whether the answers that came out were true or not. Not even the person, right? So if the person can't tell if it's true, how can we expect the AI to understand if it's true? There is no check and balance. So the very first thing you've got to do is understand what is true. And then I can start using AI to enhance that, to get me to some of those more advanced use cases.
[00:09:31] Speaker B: Yeah. Even to your point, sorry, Clint, where you said data analysis is AI's superpower, right? It's intelligent, but it will still always need some level of human interaction and judgment to validate. Because without that, you're just asking for challenges and risks.
[00:09:49] Speaker C: And that's the key point, right? You don't have generative AI do someone's job for them. You use generative AI to help someone do their job better, which is why keeping the human in the loop is necessary. It's an AI companion, right? It's an AI assistant, not an AI employee.
[00:10:11] Speaker A: So, so the example I always use is that, you know, when I, when I was in my freshman year of college, I had to buy a calculator, a scientific calculator. TI82, I think it was, right? This is back in the 90s, so it tells you how long ago that was. But in, in those engineering classes I took university physics and calculus and those kinds of things, they would let us use these calculators, right? And those calculators were being used.
I still had to know how to do the math. With the calculator, I could put in the formula and I could get the answer on the test. So I'd have the question, and I could put the formula into my calculator, assuming I knew which formula to use, and I would get the answer. But the test was not just, did you get the answer? It was, how did you get to the answer? You had to show your work, right? So I can't just put it in the calculator and expect it to do it for me. It was an assist, it was a tool. It's no different than a cordless drill. You know, I can use a manual saw or I can use an electric-powered saw. It's just a tool. I can do it either way. I just have to know what it is. It's not going to build it for me. I still have to do it.
[00:11:16] Speaker B: I'm curious, Clint, as the resident AI expert on this panel and across the industry as a whole. We're talking about security practitioners and organizations leveraging it. How do you see the bad actors out there using it to make what they're doing more efficient? Do you see AI enhancing their capabilities?
[00:11:38] Speaker C: So there are already tons of articles out there where people are talking about not only proofs of concept, but also actual use cases and, I guess, reports where they are using generative AI to create malware, to automate the process of creating malware. There are models out there that are specialized: they'll take these base foundational open-source models, fine-tune them to uncensor them, and then provide them with specific cybersecurity and even malicious red team, malicious offensive documentation, training data and techniques, and build these models to specifically help with the malware-creation process. Right. And because the bad guys don't necessarily care about how a bad decision affects my production, they are using it to help automate more and make decisions, and to, for the most part, create autonomous command-and-control servers and things like that. There are already cases of them doing this. And if you really think about it, it's for the most part anything that we're doing on the professional side, so helping automate red team testing, OSINT, that sort of thing. The bad guys are doing the same thing, but they're taking it to the next level.
[00:13:12] Speaker A: And they don't have the restrictions that we do because of fiduciary duty and laws. They're already breaking the law, so they don't care that they're doing that. Right. So if you take the latest or the known vulnerabilities of these systems, if you know a system and you know which vulnerabilities it has, you can very easily pipe that into even just generative AI: hey, how would I attack this? What are the known vulnerabilities, and what is the best way to attack this and pivot from there? It's going to help you, right? Even if you don't know anything about those systems, it's going to help you because of the general knowledge that's on the Internet. And I can feed it, you know, documentation, I can feed it all the vulnerabilities and alerts, all of that. The NVD, that database is good for us to know what we need to fix, but it's also good for the bad actors to say, hey, I know you have this vulnerability and you haven't patched it. Yeah.
[00:14:06] Speaker B: Well, I'm curious how these SOCs out there are going to be able to build their detection engineering rules to be able to identify attacks and alerts and stuff generated by AI activities versus a human. Right. We know what to look for. We have detection engineering rules built for the normal processes, tools, tactics and techniques. I'm curious how that's going to change when they're leveraging AI. Is it going to cover their tracks, make them more efficient, keep them under the radar where they're not getting the alerts?
It's interesting to find out where that's going to go.
[00:14:43] Speaker C: Well, just remember that, number one, on the blue team side, we're already using things like machine learning to help with the process in the SOC. But here's the thing, and on the blue team this is both the scary part and the good part, right? The good news, bad news is: this is the least advanced and the worst that the technology is ever going to be.
[00:15:09] Speaker A: Correct?
[00:15:09] Speaker B: Yeah.
[00:15:09] Speaker C: You know, it's only going to get better and more advanced. So I'm not trying to throw FUD out there.
[00:15:13] Speaker A: And quickly.
Yeah, and quickly. That's the other thing that I think is important. So I was talking with some folks around, you know, Tesla, or not Tesla, but Elon Musk's team and many others. It's honestly a race for electricity, because electricity is the thing that is currently holding it back. The more power they can throw at it, the faster it's going to grow. Right?
[00:15:40] Speaker C: Yeah. I forget which company it was, I think maybe OpenAI or so, but they're buying and reactivating Three Mile Island, right? A nuclear plant to power the AI.
[00:15:53] Speaker A: Right.
[00:15:54] Speaker C: I mean that's how you get Terminators. You don't want Terminators.
[00:15:58] Speaker A: And, and they're, they're, they're actually looking to build data centers around old decommissioned power plants that they can, they can retool and spin up because again, power is the problem. Right. So throw away the battery technology and anything else they need. Like they need power and you know, nuclear power is obviously the cleanest and most efficient but you know, they're looking at all, all sources of power. Solar is not going to do it. So they need, you know, gas fired turbines and they need coal fired power plants and they need, you know, nuclear to, to get these things going. And it's a race. Right. Because China is building coal firepower plants left and right and they don't care. Again, just like the bad actors. Not necessarily saying that China is a bad actor, knock on wood. But you know, they don't care about the emission stuff that we do in the U.S. right. So they're, they're just trying to win the race and that they, they pull all the restrictions out so that they're, they're building them federally. Where, for instance, in Texas. It's a deregulated environment and our power utilities are not able to build the power because they, how do they, how do they fund it? Right. It's not, it's not a government entity.
[00:17:05] Speaker C: So I heard somewhere that people make good batteries.
[00:17:08] Speaker A: Yeah.
[00:17:11] Speaker C: Going back.
[00:17:12] Speaker A: The Matrix.
[00:17:12] Speaker C: Yeah, yeah, exactly. Yeah, yeah.
[00:17:14] Speaker A: So, so I've doubled down quite a bit on, on the AI topic. Let's Shift gears a little bit, Mike, and kind of in an area that, that you and I've been in a lot. And that's, you know, we, we see a lot of needs in, in the space and I see more and more regulation coming down.
Need to justify and validate and actually provide funding to do the things the bare minimum. Right. So we need critical infrastructure. So NERC SIP has been huge in the space, but we're seeing more with TSA pipeline and you know, 62443 and a lot of these things that are coming out that are really pushing, you know, requirements down to these, these industries because we see, hey, you've got to do something saying that, you know, I'm going to disconnect it from the network or put in a firewall and that's all I need to do to have a security system as we know it's not enough. So talk to me a little bit about the regulation environment as well as the difference between compliance and security and kind of how folks can kind of kick that off in whatever regulated or even non regulated environment they may be in.
[00:18:17] Speaker B: Yeah, no, that's a great question and it's a great topic. Especially like you mentioned when we look at power utility generation, transmission, substation for the last whatever, 15 years since we've been working together in different capacities. NERC SIP actually when it was the 1200 series, right. When it was optional and then it became NERC SIP and there's so all these iterations of it, which is one of the few in critical infrastructure that have that level of regulation right now, like you mentioned, oil and gas, water doesn't have really any. And you look at yesterday or a couple of days ago, they had the water breach. Maybe on the IT side, we don't know much about it yet.
Compliance, as a practitioner, I always looked at this way: I always want security, right? Compliance is the vehicle that you can use to get security put in place. As you know, everybody's fighting for dollars, for budget, and this and that. So how do we get the money we need to put in security? A lot of folks use compliance. But as a practitioner, compliance doesn't mean you're secure, right? I was reading a thread yesterday on LinkedIn, and it was asking, why are we using cybersecurity frameworks if we're still having all these issues? The challenge is how people are using compliance. It should be the floor, right? Yes, we have to be compliant, we need to make sure of that, but that shouldn't be your high watermark. You have to meet it, and we understand there are issues with resources and budgets and this and that, but you can't use that control as your ceiling. It should be your floor, and you should always build best practices, right, across all industries. The challenge is, where do we get the money from? Of course, in the NERC CIP world there's regulated and unregulated; sometimes you can do rate cases and you can get money back. So you see a lot of those companies in those regions doing more, right? But at the end of the day, we have to do better as an industry, as practitioners, even the government, to make sure that we're getting funding, grants, whatever, to help organizations build their programs, even if it's just a starting point. You mentioned ISA/IEC 62443 for when we're looking at industrial control, not NERC CIP, right? And there's NIST 800-53. They're all very similar in nature; there's a lot of commonality across them. For organizations that have to meet NERC CIP, maybe PCI, there are lots of ways, economies of scale, to try to leverage it across the enterprise as a whole. But unfortunately, we still work very silo-based, and it ends up costing the organizations more money.
So I think we'll always have that debate. Compliance is needed because it becomes the vehicle to get the budget and the funding, but it has to be really pushed down at the enterprise level, either by the government...
[00:21:13] Speaker A: Of course.
[00:21:13] Speaker B: With NERC sip, we got ferc, we got NERC and then we got the regional entities. There's a big ecosystem there that has taken a lot of time to build to get to where we are and it's still changing. I mean just coming out now, I think it was NIRC 15, where now we're finally looking at east West. Right. We're not just looking. It's always been north, south. Right. It's always been in and out of the firewall. But what about that insider or that person? Never really. Were we talking about east West? Well, that's lateral movement, that's privilege escalation. Right.
So it's going to be interesting how that's applied to the industry, when we're looking at power, for highs and mediums, and eventually it's going to come down to the lows, and it's going to be a new whirlwind.
Exciting to see. But yeah, that's my thought there. When we talk about compliance versus security, they work together. But just because you're compliant doesn't mean you're secure. You have to do more.
[00:22:08] Speaker A: Yeah.
[00:22:08] Speaker B: At least that's my thoughts.
[00:22:09] Speaker A: Agreed. And it's one of those things where, you know, especially in NERC CIP, compliance is 100% compliance. Right. The thing I would always say is, 99.99999999% compliant is not compliant. You are non-compliant because you are not at 100. You are either compliant or you are not; there is no gray area in there. Right. So you are getting fined because you are out of compliance, or you're at least having a recordable or whatever. Right. Whereas, to your point, I can be compliant and still have issues. Right. So when you look at a lot of these large utilities, maybe 40 or 50 or 70% of their critical assets are part of the compliance program, and there's the other 30% that's not. That doesn't mean those 30% are not important; that just means they don't meet the criteria to be part of the compliance program. Right. So we see Colonial Pipeline, for instance, right? Going back to that attack not so long ago: that was not an OT system, but it impacted OT. I get so tired of hearing these arguments from pundits and social media people, like, oh, well, that wasn't even an OT attack. Who cares?
[00:23:19] Speaker B: You didn't do then you didn't do a business impact assessment to determine what you bring that IT system down is going to impact that correct system. Right? So they go and it goes back to resilience, right. Even tabletops going back around that you really have to understand the system as a whole, which is, I know, like with cyber informed engineering and really understanding every component, the dependency upstream as well as downstream because becomes so much critical because for a long time we were talking about the convergence of IT and ot really they should never converge. But it is always going to need data from OT for maintenance, for scheduling, for outages, Right. You're always going to need that. There is a secure way of doing that, but it becomes a challenge because once you open up those holes now you have dependencies, right? Understanding those dependencies and if I bring this down, what's it going to impact? So it's not something you're going to do overnight. It's planning, right? It's strategy, it's understanding the systems, working with your partners really to be able to come together and understand what resilience really means.
[00:24:32] Speaker C: You know, that brings up a good point. You mentioned Cyber-Informed Engineering, which is kind of another buzzword flavor of the decade. But, you know, when it comes to the OT-IT bridge kind of thing, and the way that people are assessing that risk or those hazards, right, I think it needs to be flipped. When you're talking about OT, and this may be twisting the terminology a little bit, but instead of having cyber-informed engineering, I think you need to have engineering-informed cyber. Right? Because, like you mentioned before, if it impacts OT, it impacts OT; it doesn't matter if it was targeting OT or not. And understanding how a cyber vector can affect the OT is important. And a lot of cyber folks don't understand that, whether it's IT or OT cyber. The only way you're truly going to understand it is to understand the processes, the operations, and you need to engage with the engineers, with the operators on that side. And when it comes to OT cyber, and this is kind of a soapbox of mine, the whole process should start on the operations side and work its way outward, not the other way around.
I typically see IT and cyber have this mentality of, hey, OT folks, we need to protect you from yourselves. And that's not it. The fact of the matter is that operators and engineers have been protecting against impacts to the process, to the environment, to production for a very long time, and they're good at it. Cyber is just a new hazard in the whole process hazards analysis.
[00:26:24] Speaker A: Correct.
[00:26:25] Speaker B: I mean, for the longest, yeah. I'm sorry, go on, Aaron.
[00:26:27] Speaker A: No, I was just gonna say I agree 100%. And that's really what the Cyber Informed Engineering out of Idaho National Labs is really trying to say is you have to include. Because it can't be me as a cybersecurity professional designing a cyber system to push down to the plant. You know, and I've been doing this way longer than it was called cyber informed Eng engineering or anything like that. I was working with the asset owners, I was working with the systems administrators and the systems owners and the control engineers and those people and saying, hey, this is your system. We, you need remote access. I, we need to make sure that that's secure. So instead of just putting this on the Internet, let's do this in a secure manner. Right. But it's not me, you know, my security solutions are not the thing that drive the thing. Right. So. And I saw this from a, again, being a former asset owner and a power utility here in Texas, I saw a lot of things that came down from it it had the budget, they had, you know, all these, these policies and procedures and they tried to cram them down our throat because I worked at the plan in the business, right. But what. And I had this conversation a lot, even all the way up to, you know, the CISO of our company. And I said, look, guys, you have to realize you're the tail.
You don't wag the dog. Like, you have more money than me, but if my stuff goes down, our company doesn't exist. If your stuff goes down, it's a bad day, right? You've got executives upset or whatever. But if my stuff goes down, the government is calling us, because we've taken, you know, 30,000 megawatts of generation off the grid, and we have a big, big, big problem. Right? You need to weigh those things separately. And to your point, Clint, cyber is just a risk that they have to design into the system. And that's the piece: up until now, those engineers don't have the skill set, nor should they have to have the skill set, to truly understand what that risk is. So we need to be working together. We're on the same team. It should not be IT against OT; it should not be IT against the plants and the business and the manufacturing and all this kind of stuff. We should be sitting at the table together. Because, at the same time, OT needs IT as well, to provide secure remote access, to provide firewall support. You shouldn't expect your control engineers to be managing a firewall, managing virtual environments, managing the network environments and the routing and all that kind of stuff. Let them focus on what they do. But the only way that's going to happen is if you build trust between those organizations. Because I'm not going to give anything to IT if I think they're going to break my system. So a lot of that distrust is because IT tries to cram things down OT's throat. And OT says, if you're going to do that, get out of my house, I'll do it in-house. I'd rather do it poorly than allow you in my house, because you break stuff, and I'm the one that gets called at 3 o'clock in the morning when the thing goes down, while you're having mimosas, you know, six hours away, and don't even know what happened. Right.
[00:29:28] Speaker C: Yeah. That's key: communication. Right. Whenever you have a culture of working collaboratively between OT and IT, and they communicate and actually take the time to say, hey, teach me what you do, teach me about your process, whether it's IT or OT, I want to understand, that is how you build that trust, right? I mean, 18 years of marriage and I'm not divorced yet, you know, has taught me that communication is key. And OT/IT, you know, is a relationship, right? And in any relationship, communication, understanding, compromise, meeting in the middle is key.
[00:30:11] Speaker B: Yeah, no, I would say even when we were talking, when you mentioned Idaho National Labs, Aaron, even their consequence driven, right. It goes back to, it goes back to. It's similar to business impact assessments. It's understanding the consequences. And the only way you could truly understand those consequences is getting everybody together. Right. And walking through it. And that even includes the manufacturer. I remember when I was at a large utility, worked there for a number of years, we used to do factory acceptance testing, right. We would have it ot us, the distributor, everybody at the fat, right, really walking through the process and then we would do it again at the site. So site acceptance testing right under now, not enough now. It takes time, it takes dollars to do that, but not enough entities out there to do it. Because at the end of the day, we're always looking at security by design or I think the new thing is security by default. Unfortunately, I don't know if we'll ever get there. Security still seems like it's a bolt on at some point in the future. It will be brought in early in the conversation, but the earlier you bring it, the better chances you have to build a program that you can really work with, be more proactive rather than always reacting. Because if we're always reacting, we're losing.
[00:31:28] Speaker A: Well, I mean, I 100% agree, and we've made progress. Because I remember, I don't know, back in 2010 or something, I was again an asset owner at a power utility, and we were doing control system upgrades across our fleet. And I think we did, I don't know, 15 that year. So I spent a lot of time in, I won't say the cities, but other places, doing factory acceptance tests with the control vendors, right? And as we were spec'ing out those systems, and when I say we, I was part of the team, it wasn't me doing it, but I was invited in. Not by choice. They didn't have a choice. It was like NERC CIP was coming out, we had to have it secure. The vendor actually put in front of the plant manager the secure and the insecure option for their control system. Right? And as soon as I saw that, I'm like, are you freaking kidding me? Like, you actually have that? That's the title of your proposal: the secure option is this and the insecure option is this one. Which one do you want? And of course the plant manager wanted the insecure. Why? Because it was $100,000 cheaper, or whatever that number was. And I said, okay, this is not on the table, you cannot do this one, so throw it out. And don't ever show anybody at this company an insecure option again. Like, why would you even have that name? Of course now nobody would do that. But back then it was acceptable. For most people it was, you know, no Active Directory, no network, no firewall, none of that. It was just, you know, raw.
[00:32:56] Speaker B: It was about the service level agreement. How can we make sure we meet that SLA? It wasn't about making sure we can control or secure the environment.
[00:33:04] Speaker A: Right, Absolutely.
[00:33:06] Speaker C: You know, that really kind of brings up an interesting pivot in this conversation.
The convenience versus security factor. Especially since we're talking about this in the spirit of cybersecurity awareness.
There was an interesting thing that happened to me this past weekend, sitting on the couch. And, you know, I'm gonna go ahead and come out here: I switched from Android to iPhone. I'm coming out of the closet here. So I'm getting everything set up, and, you know, when you switch over, I use a password manager and everything, but you've got to switch everything over, getting all my social media accounts and my Netflix and everything set up, and I forget which one it was. My wife's sitting there next to me, and I start screaming at my phone, because I'm like, are you kidding me? We're no longer in the era of dual factor and multifactor. I literally had to go through five different stage gates of security.
Oh, and it wasn't even anything important. It was Steam. I'm screaming at my computer saying, are you kidding me? It's a dang video game platform, and I have to go through more stage gates of security than I do for my Microsoft account. What gives? And then I realized right then, and I'm a cybersecurity professional, I'm like, this is why I hate cybersecurity.
And if I'm a cybersecurity professional and that's my reaction, you're getting 10 times that reaction from the average user. Right? And that is why cybersecurity fails in many cases: because we're over-engineering the cybersecurity in many places. And anytime security severely overrides convenience, there's a compromise. But when it's so hard to use, you're either going to not use that service, or you're going to find ways to circumvent the service. Right?
[00:35:13] Speaker A: Yeah.
[00:35:13] Speaker C: And so I think my mantra is when it comes to cybersecurity awareness, everybody's already aware. Right. So the new meaning of cybersecurity awareness is not how do you make people aware of cyber threats, it is all about how do you find a balance between convenience and security, how do you get people willingly engaged in the cybersecurity equation? And you know, I'll just leave that as a conversation.
[00:35:42] Speaker B: Well, engagement is always the challenge. And I know it's always been: the end user is the weakest link. But because we're talking about cybersecurity, realistically the end user should be part of your solution, the first line of defense, the last line of defense. But you do have to make it easy, because it always is about the user experience. So it doesn't matter if you're doing multifactor authentication or you're doing this, the user experience has to be acceptable, but we can't risk security at the same time. Right? So it's, how do we engage them? If you force it down their throat, they're not going to be happy, they're going to find ways to circumvent it, and then none of us win. Right? So I always look at the end user, the business, whatever it is: we have to engage them, because they are part of our solution, and without them we're not going to win.
[00:36:29] Speaker A: Absolutely. And you said something right there that's super important, and I've seen it in OT all the time. Availability. Right? If you make it so complex. Why do we not have hundred-character passwords? Because nobody can remember them. Right? And I can't tell you how many power plants and manufacturing facilities I walk into and the password is printed or written out on a sticky note sitting right by the operator workstation. Right? People will find workarounds. If you make it so complex, they will find a way around it. They'll bypass your firewall, they'll bypass your security, they'll do whatever. Because when it's Saturday at 3:00 in the morning and the system is down and they've got the plant manager screaming behind their head and the CEO screaming at them, they're about to lose their job. They don't care about your cybersecurity stuff. They just need to get the system working. Right? Availability trumps everything. When there's no electricity and the power plant just tripped and you've got to get it back online, the last thing on your mind is thinking about the cybersecurity. Is this safe? Like, safety from a human life perspective, yes, that's always there. But we've got to start thinking about, hey, we can't make it so complex that they're going to just bypass it. Because they will, to your point. Right? They will find a way around it, or they'll kick you out and say, I'm not doing that, I'm going to accept this risk, and you can take that someplace else, because I don't have time for it.
[00:37:50] Speaker B: Yeah, I remember I was at a conference a few years ago, and the guy that was talking was talking about the blast radius if something happens. What is that blast radius? And if it becomes so complicated to try to restore, you're doing something wrong.
[00:38:03] Speaker A: Right, Right.
[00:38:04] Speaker B: I think you mentioned earlier, Clint, I mean, keep it simple. There are ways to implement cybersecurity that are not overly complex, or, you used over-engineering. Right? It's how to find that balance, because there are good ways, and it's right-sized for the function, for the business. It's not one size fits all. And I think that's also the challenge with compliance sometimes. Right? It's not always one size fits all. This business's risk is different than this one's appetite because of what they do. And it's how you implement it. It goes back to: you can't implement every control at once and think you're going to be successful.
[00:38:43] Speaker C: Right.
[00:38:44] Speaker B: You implement your controls, you mature those controls, then you add on additional controls as needed, based on your attack surface and the different things of the industry you're in, the benchmarks. Right? We always say continuous monitoring, continuous this. It's a continuous process. Right? We can never put our guard down, because the bad actors don't. For them it's all about money, and they want your data. So we are always working. And I mean, it's a challenge for anybody out there. But if you overcomplicate it, it's also going to be a larger challenge to try to be successful, or if you have to restore because something happened. It could be an attack, it could be just a misconfiguration or an accident. But if it's so complicated to restore, you've lost.
[00:39:34] Speaker C: Yeah, yeah. And I think, going back to the conversation, how do you manage that over-engineering, the compromise of accessibility and availability versus security? To go back to my experience this past weekend with the switch from Android, to be fair, I did find out that my password manager helped, because I use a platform-agnostic password manager. And I'm not going to tell everybody what that is, because that's a security issue.
So I use that. And number one, using a password manager, I don't know any of my passwords, and yes, I do have 30-plus-character passwords, because of the convenience factor. So if you engineer convenience into security, flipping the script, then you have security. Right? So, using a password manager that doesn't autofill in the insecure traditional sense, but with the click of a button can fill passwords into your browser and things like that. That's what I was using. I would not have been able to get everything switched over effectively.
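As a quick aside on the 30-plus-character passwords Clint mentions: a minimal sketch of how a manager-style tool might generate one, using Python's `secrets` module. The function name and character policy here are illustrative assumptions, not any particular product's behavior.

```python
import secrets
import string


def generate_password(length: int = 32) -> str:
    """Generate a random password from the full printable set.

    Illustrative only: re-draws until every character class
    (lower, upper, digit, punctuation) is represented.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

The point of the `secrets` module (rather than `random`) is that it draws from the operating system's cryptographically secure source, which is what you want for credentials.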
[00:40:47] Speaker A: Sure.
[00:40:48] Speaker C: Onto my phone, that is, had I not had a password manager to be able to manage that. So, making the password managers easy to use. Right? Making your third-party authenticator, whether you use Microsoft Authenticator or Google or whatever, making it easy to do the security: password managers, authenticators with one-click-button authentication. Because, you know, Google has a fantastic one. Right? Is this me? Yes. And yes, there are ways to circumvent that, but when you make security easy, then you flip that script, like I said before. And that's the problem again, going back to the examples of the weekend and everything. My wife yells at me because she's trying to get into Netflix and I have this super long password, and she gripes at me: why do you have this? How do you remember that? I said, I don't remember it, I have it in a password manager. And then, going back to the user mentality, she refuses to use the password manager, because it's new technology for her to learn. So it's this constant cat-and-mouse chase of: we just made it more convenient for you, user. And the user says, but it's new technology I have to learn, and that's not convenient. Right?
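For context on the authenticator apps mentioned here: the six-digit codes they display are standard time-based one-time passwords (TOTP, RFC 6238). A minimal sketch of the generic algorithm, assuming a base32-encoded shared secret; this is the open standard, not any vendor's specific implementation.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)  # 64-bit time-step counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret plus the current 30-second window, the app works offline, which is part of why it is such a low-friction second factor.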
[00:42:02] Speaker B: Yep.
[00:42:03] Speaker A: Yeah. And so I love that, and 100%, I've got my family on a password manager. We have one, and I've got the family plan, so my kids have their own logins and my wife has one, so they can create their own logins, and they all know how to do it. And it's funny because, you know, they're sharing it, or they're doing it at school, and the other kids are looking at them like they're weird. Like, why do you have that, what is that? They don't understand it, but, you know, they're like, oh, my dad's in cybersecurity. And they're like, oh, okay. But driving to those conversations, I'm going to lead it back to those tabletops. And why I want to lead it to those tabletops is because I think that's kind of a great place where they can be used, and most of the time they're not being used properly, I don't believe. Because most places that do a tabletop do it once a year. And the cyber executives and, you know, the CISO and the C-suite folks go through that exercise. They have some of the facilities people, whatever, but they don't have all of the right people involved. And part of that is because it's hard to do. They're paying, you know, third-party companies to come in, and they can't get everybody in the same room. So, Clint, I know that you, with ThreatGEN, have a product, and this isn't to pitch that, but more so around what we can do using AI and things like that to make those conversations better.
And the outcome of those things can be that IT and OT and plant managers and executives all better understand, and their perspectives are put into the process, so that when you come out, you come out with a more accurate understanding of my risks, the mitigations that I need to do, the concerns that I have, the places where I've got gaps, the places where I'm strong. All of that type of stuff can come in, because I have everybody at the table. So, all that lead-up to say: let's dig into tabletops.
[00:44:07] Speaker C: Yeah. So tabletops are one of those things where it's the age-old adage: practice makes perfect, or perfect practice makes perfect. Right?
But it's a measurement tool, and it is muscle memory. The problem, the symptom, is that we're doing annual tabletops. You can't really get much out of an annual tabletop other than checking a box, going back to compliance. Right? But if you want to get the true value out of a tabletop, you need to do them more frequently. You're not going to get as fit as Aaron there by doing one ruck.
You've got to do it regularly, right? You've got to exercise regularly, or it doesn't get into your muscle memory. And what's the old saying? Everybody has a plan until you get punched in the mouth. But if that plan is ingrained into your reaction, your muscle memory, then you're going to be fine when it hits the fan, and you're going to know what to do. So why do we only do it once a year? Aside from apathy and just complacency, it's because tabletops have traditionally been hard. Right? I mean, it takes a lot of time to plan one effectively. And, you know, we're creatures of comfort, and we don't do things that are difficult very easily. Right? And so this is where generative AI does come in. Because it's a superhuman, right? It analyzes things like a human, and it augments the human capabilities to write documents, to create things, to come up with stories. Right? Using generative AI, you can take all that heavy lifting off of the resources that it takes to plan the tabletop exercise. You can use generative AI to create your scenarios, realistic, effective scenarios.
In any tabletop in the past that I've been involved in the traditional way, you never have enough artifacts and collateral. Right? You're always going to have somebody that says, well, what do those logs look like? Or, that's not how our system works. You have those problems. So, using generative AI, you can either preemptively create a ton of artifacts and collateral to be able to show your team, your audience, or you can create it on the fly. That's one of the things that I've created, which is this dynamic system that will create the scenario and facilitate the scenario and create injects and run it. And when somebody says, hey, what do the logs look like? It will create very detailed logs. It'll create this cohesive, congruent story. And that's what generative AI does. Right? And so all of a sudden, now, when we don't have to spend tons of time and resources to create the scenarios and run the tabletop, get everybody in the same room, we can do them more effectively. Right? We can run them twice a year, we can run them quarterly. And now you have a measurement tool that says, well, we've improved, we've improved, we've improved. Okay, we put some changes in place, let's run it again. Or how about this: you've got a new threat that comes out, and you're unsure whether or not you're prepared for that threat. Do we have the time to create a traditional tabletop, to put that threat in there, write the scenario, do all this? With generative AI, you can simply take the details of that threat, feed it to generative AI, and say, let's go. And now all of a sudden you are running a tabletop instantly on that threat. So, you know, this should probably be my closing thought, but I'm gonna say it now: generative AI is the Jerry Maguire of technology. You complete me.
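The workflow Clint describes, feeding threat details to a generative model and saying "let's go," can be sketched as a simple prompt builder. This is an illustrative sketch of the pattern only, not ThreatGEN's actual product; the function name, fields, and wording are all assumptions.

```python
def build_tabletop_prompt(threat, sector, participants, inject_count=5):
    """Assemble a generative-AI prompt for a tabletop exercise scenario.

    The resulting string would be sent to whatever LLM the exercise
    platform uses; all wording here is illustrative.
    """
    roles = ", ".join(participants)
    return (
        f"You are facilitating a cybersecurity tabletop exercise for a "
        f"{sector} organization.\n"
        f"Base the scenario on this threat: {threat}\n"
        f"Participants: {roles}\n"
        f"Produce {inject_count} sequential injects. For each inject, include:\n"
        "- a timeline entry advancing a cohesive, congruent story\n"
        "- realistic supporting artifacts (log excerpts, alerts, emails)\n"
        "- discussion questions aimed at each participant role\n"
        "If asked what the logs look like, generate detailed sample logs "
        "consistent with the story so far."
    )
```

The design point is that the expensive part of a traditional tabletop, authoring scenarios, injects, and artifacts, collapses into parameters you can swap per audience or per new threat.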
[00:48:06] Speaker A: That's right, yeah.
[00:48:08] Speaker B: You know what I love about it? I think with the traditional tabletop, the logistical piece of getting everybody involved becomes a challenge. But you can leverage the automation of it and then, like you said, Clint, build new injects, bring in other groups that are a lot of times forgotten: facilities, building management. Right? And then you can take that previous one and put in injects that are specific to them, so that now they can get value almost on the fly. Right? So it really is quick to turn around. And now those different business areas are, once again, we use that word, engaged. Because now it's specific to their environment, their function, and then it helps them build a plan on recovery, resilience, business continuity. Right? Whatever it may be.
[00:49:00] Speaker A: So, yeah, absolutely. If you take it a step further, the use cases are open-ended, in that you can expand it. So I can run it with this group, and then I can take the results and feed them back into the system and send it out to my systems engineers and let them run through it. And then their input is adjusting the model. And then I send it to my IT people, and then I send it to my OT people, and then I come back together and say, hey, after we fed all these things through, we made some assumptions in the first round. Our systems engineers or our plant managers or whatever group, hey, they brought up some things that we didn't think about. Let's go back through it again. And it's like, to Clint's point, it's like a muscle. Right? So if I do this and I learn, oh, well, I should have initiated my incident response plan two steps before, when I go through that same scenario again, I'm like, ah, I remember last time I waited too long. Let's go ahead and initiate. Right? It's the same reason why we do fire drills in buildings, right? We don't want the first time that somebody's having to evacuate for a fire to be when there's an actual fire. We want them to understand: where's the muster point? I don't use the elevators. There's a floor warden. What do I do? Where do I evacuate to? I go two floors down. Like, if I'm in a building in New York, I'm going two floors down, or whatever that procedure is. The tabletops allow us to do similar things, and do them repetitively, without this huge overhead and expense of having to bring everybody into the same room and have a third party to hold it. It just really opens up the capability and allows it to be used like a tool.
[00:50:37] Speaker C: You know, this panel here is the perfect illustration of tabletops. You have Aaron and Mike: the regular practice, the regular exercising, versus the annual tabletop.
Who do you want to be?
[00:51:00] Speaker A: Awesome.
All right. So I always kind of wrap these things up with a question I ask everyone, and it's the next five to ten years. What's one thing that you see that's concerning, and maybe one thing that you're excited about coming up over the horizon in cyber? So, Mike, I'll go with you first. What's one thing that you're kind of excited for, and maybe one thing you're concerned about, that we need to make sure that we, you know, head off at the pass?
[00:51:26] Speaker B: Actually, I think it's the conversation we had today. AI. Excited about its capabilities, but concerned about its capabilities.
And, like Clint was saying, we're at its infancy. As you mentioned, as more power, more electricity becomes available, that brain is just going to get larger and larger and larger, and that will bring a lot of value, but at the same time it will bring a lot of risk that, as practitioners, we always have to be thinking about. So I'm glad I went first, because that was the easy answer.
[00:51:57] Speaker A: It's a double edged soap.
Go ahead, Clint.
[00:52:00] Speaker B: Yeah.
[00:52:00] Speaker C: And so let's just go with the assumption I'm not allowed to say the same thing. Right. Because you knew that's what I was gonna say.
[00:52:07] Speaker A: Exactly.
[00:52:08] Speaker C: All right. So the thing that I see potential in, I think, is that slowly but surely we are making progress in cyber. Right? I mean, especially from the OT perspective. Ten, twenty years ago there was resistance, and now there is, dare I say, awareness and concern and willing participation. Right? So I am seeing progress in the willingness to address cybersecurity. Okay. And I think, on the flip side, what I'm concerned about is budgets.
Because without budget, you're not going to hire the right personnel, you're not going to retain personnel and staff, you're not going to get what you need. You're going to continue to scrape the bottom of the barrel to find whatever tools you can, to do the best you can with what you have. Right? And so I think budgets have always been a concern. And while I'm seeing some progress from the C level and everything like that, I'm seeing far too often that you don't have enough buy-in from the C level to allocate the necessary budget. People aren't seeing security as a business process, they're seeing it as insurance. And I don't like that. So that's my concern.
[00:53:46] Speaker A: Yeah. Kind of adding on to that, I see a lack of understanding and acceptance of risk. And part of that is because the executives are accepting risk because somebody told them, yeah, we're okay with this, or whatever, and they're not truly understanding the risk. Because I think if they really understood it, they wouldn't accept that risk. Right? So a lot of risks are accepted, whether they're pushing it off to cyber insurance or whether they're saying, oh, well, that's not a problem, or whatever. I think a lot of that is because they are misinformed, or not accurately informed, maybe. And I don't think it's malicious. I think it's that false sense of security, in that they think they're better off than they are. They're really strong on their IT side. They've got really good firewalls, they've got processes and procedures. But you look at the Target attack, you look at a lot of these things: all it takes is, you know, a chain is only as strong as its weakest link. If I've got one device that's connected to the Internet and bypasses all of this great security stuff, it didn't matter, right? I've got all of this capability, but I just bypassed it. Because you made the security solution too hard, some engineer at the plant just plugged it directly into the Internet. Right? You know, I've done assessments where the cable modem that's providing tertiary or backup communications is plugged directly into a network. And it's got the Wi-Fi built into it. And I could bypass all of your firewalls and all the enterprise stuff that you've got by connecting to the cable modem router, and I'm directly in. Right? And it's not uncommon. We have systems that are old, we have systems that are Windows XP, we have a lot of this stuff.
And that doesn't mean you've got to rip them out, but you truly need to understand those risks and make sure you're mitigating those risks, not just accepting the risks blindly. Because, honestly, most people that are accepting the risk probably don't even have the authority to accept the risk. And it's not accurately getting bubbled up enough to translate that risk and communicate what the true risk is, so that the people that actually have the authority, the CISO and the C-suite executives, are able to make the right informed decision and accept and/or mitigate those risks accordingly.
[00:55:53] Speaker B: I think that's a topic we have to have another conversation on, Aaron, because, I mean, you can go in so many areas, it might be a future one. Because even when you look at reputational risk, we always used to say, hey, if you're breached and you're a public company, your stock's going to go down. But how many times has the stock gone down, people buy the dip, and the next thing you know, six months later, their stock is through the roof, higher than ever? Right. So it's understanding the risks and the impact, and how you have that conversation with the C level.
[00:56:25] Speaker A: The C Suite, Yeah, absolutely. Well, awesome, gents, I really appreciate the time today. Some great conversations.
Clint and I didn't fight over AI or anything like that.
[00:56:36] Speaker B: I was the mediator fighting anymore.
[00:56:38] Speaker C: I let AI fight my battles now.
[00:56:42] Speaker A: That that podcast I listened to, really, they basically trained their voice and they had it connected to a phone line. So they would actually have it call customer service for them. So I can absolutely see that being something I want in my future. If you need to disconnect your cable or you need to whatever, just have your AI be your assistant and take care of that stuff. Stuff right now. You know, luckily I have a wife and she does that stuff for me. But it'd be really cool if I had an AI that I could just say, hey, go do this stuff. Negotiate my contract for my cell phone and don't give up until they give me a discount. Right.
[00:57:15] Speaker C: Talk to my boss about my performance review for me.
[00:57:18] Speaker A: Exactly.
Go on job applications. There's all sorts of endless opportunities that people will use AI for that really have nothing to do with cybersecurity. They're just fun to talk about.
[00:57:31] Speaker B: So 100%.
[00:57:33] Speaker A: All right, gentlemen, I appreciate your time.
[00:57:35] Speaker B: Awesome. Be good.
[00:57:36] Speaker C: Thanks, guys.
[00:57:37] Speaker A: Thanks for joining us on Protect it all, where we explore the crossroads of IT and OT cybersecurity.
Remember to subscribe wherever you get your podcasts to stay ahead in this ever evolving field. Until next time.