Episode Transcript
[00:00:00] Speaker A: You're listening to Protect It All, where Aaron Crow expands the conversation beyond just OT, delving into the interconnected worlds of IT and OT cybersecurity.
Get ready for essential strategies and insights.
Here's your host, Aaron Crow.
Hey, Joseph, welcome to the show, man. I know I said this before, but it looks like you're a mad professor there with your background and everything, and with some of the conversations we have, it actually fits. So I'm excited for this episode. So why don't you introduce yourself, tell us who you are and a little bit about yourself.
[00:00:35] Speaker B: Yeah, absolutely. My name is Joseph Perry, and for a billion dollars I will ransom back the moon.
I work for Morgan Franklin Cyber, the cyber fusion center specifically, where I manage incident response, threat intelligence, and purple teaming activities. I also, on the company's behalf, do a lot of things in this very nebulous and complex world we call AI. My career started out in the US Navy and the National Security Agency, a little over a decade ago now, about 12, 13 years ago. I started out as a very technical engineer, a programmer first and foremost: C, C++, even x86 assembly kind of programming. Over my time at the NSA, I got really interested in emerging technology and specifically the security implications of emerging technology. This was ten years ago, so emerging technology then would be things like bitcoin and, the last time we were all talking about AI, big data, stuff like that. That was my area of interest at the time. I left the NSA about six years ago. Since then, I've done a lot of consulting, a lot of pen testing, and I've taught a lot of courses online for programming and the like. Most of my time is spent doing either incident response or some sort of security simulation, whether that's red teaming, purple teaming, whatever the case may be.
[00:01:41] Speaker A: Awesome. Yeah, that's exciting. I mean, obviously this industry and cybersecurity is such a big topic now. Back in the day, the NSA and the three-letter organizations were all considering those things, but the general population wasn't always necessarily up to speed, especially in the OT world.
Infrastructure and all that kind of stuff. You know, red teaming, a lot of that stuff, maybe we heard about it, but it was from the outside. Not every individual person in a company was thinking about cybersecurity. The good thing, I think, is that more and more people now understand that cybersecurity has to do with them. It's not just somebody else's problem. I need to maintain my own security, whether it's at work or at home and all of those things in between. So it's amazing how far it's come in a short amount of time.
[00:02:26] Speaker B: Absolutely. Even looking at 2017, 2018, when I started doing consulting and got out of the government, I spent a lot of my time convincing companies that they might be the target of a cyber attack. It wasn't even on the level of, these are the tools you need to employ, or these are the security steps you need to take. It was on the level of, yes, you might be targeted. And the rise of ransomware, obviously, has really helped put aside any of those uncertainties or concerns. But you're absolutely right that it's relatively recent that the very idea of cybersecurity as a core function has really caught on.
[00:02:56] Speaker A: Yeah, you know, I've had a lot of those conversations, too, and they're like, well, why would anybody want to attack me? Like, we're nobody. We're just this small XYZ company in the middle of nowhere. Why would anybody want to attack me? I'm not critical infrastructure. I'm not, you know, I'm not CIA. Like, I'm none of those things. Like, I sell milk or whatever the thing is that they do.
But to your point, it's like people don't care. They're going after money. It's opportunity. If I can do something and I can take an action on it, then I'm going to do it. Unfortunately, that's the space that we're in. It's like you leave your car unlocked with a $100 bill on the dash; somebody might walk by and grab it. Right? Because it's there.
[00:03:37] Speaker B: Absolutely. It's something I like to liken it to: people always talk about how most sharks don't actually like the taste of humans, but that makes me worried, because the shark has to find out it doesn't like the taste.
[00:03:47] Speaker A: Right.
[00:03:47] Speaker B: So, just because you may be right, you may not actually have anything useful to the attacker, the attacker may well compromise you, hit you with ransomware, and find out that you don't have enough to pay them, in which case you just go out of business. The ransomware doesn't not happen as a result of that. You just suffer the worst possible consequences. And we see that. Now that we've started seeing that in the news, we've started seeing it being accurately covered. And it's really unfortunate that schools and healthcare organizations have gone out of business as a result of this. I definitely never want to say that's a good thing or anything like that, but the one silver lining is, now, when someone says, well, we just make pencils, why would anyone ever attack us? We can say, well, here's the last pencil company that said that; here's what happened to them. Which isn't a great place to be, but it is useful to have something genuinely demonstrable to take to someone. I got into cybersecurity because every computer I've ever touched has broken to pieces, and I figured I might as well weaponize that. A lot of it's just because I have a suffix in my name, but every database I've ever been entered into has just errored out and failed. So, yeah, I am sure that somehow I broke something logging into this, and it's totally my fault.
[00:04:44] Speaker A: That's awesome. I mean, I actually want to use that in the podcast, because honestly, it's important to understand. Same thing with me. I didn't start out driving toward getting into cybersecurity. You know, I graduated high school in '96. I'm old. But I was very computer savvy. I had a computer from a very young age, a Tandy or a Trash-80, an IBM or the RadioShack version. Right? The one with the tape player, where as the program loaded you'd fast forward and hit play. Yeah, that Mac over there is one of my first computers. It actually still works. But yeah, I was just the technology guy. It just came naturally to me. It worked really well. I never really did much programming. It was more around the infrastructure side and kind of connecting it all together and using it.
I took our high school library from the old-school card catalog, index cards, to a computerized barcode system while I was there. My senior year, I worked in the library and converted us all over, and I'm the one that set it all up, all the users and all that kind of stuff. And then that kind of transitioned. I went into college as an electrical engineer, and then I was working, and the Internet and the technology boom were really happening, and there was so much going on. Through that, I was working in infrastructure and networking and systems integration, all that kind of stuff, the configuration piece. Again, we didn't call it cybersecurity, but I was a domain and Exchange admin for AT&T and these different places. So I was doing security, but we didn't really call it cybersecurity. You were just making sure that Bob in accounting didn't have access to the non-accounting environments, and vice versa, and all that kind of stuff. And then that just grew and grew and grew, and then it became this whole other thing, with online and hackers and all this different type of stuff, especially from a public-facing perspective. But a lot of that time I was working in manufacturing and critical infrastructure, and all that stuff was just air-gapped. It was security by obscurity. It was these one-off systems, and they weren't the same things. But as we started bringing commercial off-the-shelf stuff into these OT environments, we started having the same problems that they were having in the IT space. We unintentionally brought those things into the OT world and were like, oh, crap. We need to fix this. This is a problem.
I remember doing a control system, and the vendor actually gave us a proposal. This was when I was an asset owner. He gave us a proposal, and it was the secure version or the insecure version; those were the proposal titles. Do you want the secure version or the insecure version? And the insecure version was obviously like $100,000 cheaper, and the plant wanted the insecure version because it was cheaper. And I'm like, you can't choose the insecure version. That's not an option.
[00:07:37] Speaker B: Yeah, absolutely. It's one of the things I joke about a lot. Because I came up as a programmer, I have a unique insight into how exploits become possible and how vulnerabilities happen. And pretty consistently, a vulnerability doesn't happen because of some new, incredibly advanced technique that no one's ever heard of. We get a vulnerability because somebody refuses to change the tech stack they've been using for 30 years, and God help you, you're not going to make them change their library either.
It's the joke that there are still people out there trying to upgrade from Python 2 to Python 3. But as technology gets adopted into a new area, that area starts on the back foot and has to speedrun learning all of the hard lessons that we learned the first time. When IT moved into the OT world, OT had to speedrun learning all the problems that we had already learned in IT. It's kind of the same thing as when we actually built cybersecurity up as an industry: it went through all the same problems IT went through, where nobody took it seriously, and nobody who was involved in it, who understood it technically, was able to talk to business leaders. All of those same growing pains happen. Technological adoption is powerful and useful, and it's sort of inevitable in most industries, but it also carries with it those same problems it had the first time it got adopted, wherever it was first innovated.
[00:08:47] Speaker A: Well, and the irony is, and I know some of the things I want to get into with you are AI and IR and threat and all those different categories, but the irony is, specifically in a lot of these areas, both IT and OT, and I think it's maybe a little more extreme on the OT side: you've got folks with 40-year-old technology. The stuff that was installed 40 years ago hasn't been patched, hasn't been changed. It's exactly the way it was when it was installed 40 years ago. All the way up to people talking about, how do I use AI to enhance my manufacturing line? Right? And everything in between. So you've got this vast difference, from cutting-edge, bleeding-edge capabilities, looking to enhance and update and all that type of stuff, to people that are just continuing on: if it ain't broke, don't fix it. They're running their old pickup truck, it's got a million miles, and they're not going to get a new carburetor or a fuel-injected engine. Right? It's that far a difference between them. And as cybersecurity professionals in this vast world, and obviously hackers as well, we have to be able to look at and protect all of it. We can't just expect everybody in the world, OT environments, IT environments, to upgrade to the latest and greatest across their entire organization, because it'd be super expensive, hard, all of those things.
So we have to be able to protect where we're at. And the answer is not always, especially in OT, it's not always just patch. It's not always just upgrade the control system or the windows machine or the network switch or whatever the thing is that's old and archaic. Sometimes it's like, okay, I can't do that, so how else can I protect this thing?
[00:10:25] Speaker B: What mitigating controls can I put in place? Yeah. And I think that point you hit on there is such a critical one, and it gets into what we'll talk about with AI. I have clients I go to where I'm talking to them about using LAPS and removing local admin privileges for most of their users. And then I have clients where, if I brought that to them, they'd be like, oh, you're not a serious security professional. You're talking to me about security measures from 15, 20 years ago. Why are you bringing this up? I need you talking to me about Cribl, and how I can minimize my Splunk input, and how I can reduce costs and improve my analytics and my dashboards. That's what I'm here to talk to you about. But if I go back to that first client and say, here's the Cribl implementation that I think makes sense for you, they're going to be lost. They're going to start screaming, because they're not even in the cloud yet, let alone needing to worry about data input and managing the volume.
Not only do you need to be able to change that context within your own mind and understand that these are the solutions best suited to this problem state, but also, within the industry, we have this constant push where there's a chunk of the industry, and I love them, they're very near and dear to my heart, who are just interested in the most advanced research, who are just doing the hardest red teaming, and that's great stuff. But when I talk to those folks, I'm like, I can't take that to any of my clients. They're not going to understand a word. I love this. I love your results. I'm so glad you did this. But I can barely explain it to my junior analysts, let alone a CISO who's going to have to make purchasing decisions based on it. So much of our challenge is changing context, in sort of this matrix sense: not only how mature the organization is, but also what kind of needs they have. Are their needs extremely technical? Are their needs extremely procedural? Are they in the camp of, we don't really have that many data assets that we need this complex attack chain for; we really need our people to stop paying out fake invoices, and stuff like that. So it's just all of these different ideas.
I think of them mentally as tags: these different things you need to associate with a given client or a given environment that say, this is the way I have to approach this; these are the problems I need to solve for them. And it's just a wild constellation of potential problems that you might get from space to space.
[00:12:16] Speaker A: Yeah, absolutely. And you talked about something really important there, and it ties right into the AI conversation, talking about Cribl and how much data I'm sending in and adjusting. And I think that's the bigger piece: you talk to these CISOs, and they hear these buzzwords on LinkedIn or in the newspaper, or their vendors are telling them about it. It's like every product you have has an AI label on it, right? I've got AI. What does it do? I don't know, but I've got AI in it. Is it actually something I need? Is it enhancing something? Not really sure.
But as we look through this, it's no different than any other product. I actually did a podcast and a post the other day around an OT product report that Forrester put out, and, you know, there was the top-right quadrant and all the different things, but it was such a generic view of it. You had networking products, firewalls, compliance tools, and they were rating them against each other. And it's not really a fair rating. It's like comparing an 18-wheeler with an apple. Yeah, the 18-wheeler is better at hauling equipment, but it's not going to taste good.
[00:13:29] Speaker B: I mean, bite into it, you're going to hurt yourself.
[00:13:31] Speaker A: They're just not the same product. They're not even the same category. Like, yes, they're both in the OT industry, but they're not. They're not really a fair comparison.
[00:13:38] Speaker B: Different kinds of problems from different spaces. Yeah, absolutely.
[00:13:41] Speaker A: Yeah. So, AI I see as a huge one. So let's dive into that. Where do you see AI having an impact in cybersecurity? And I know there are a lot of different ways, and obviously we can't hit them all, but what are some of the bigger ones you see being a benefit to folks, no matter where they are in that tech stack? Whether they're the folks who haven't removed local admin yet and are still logging in as admin, or people that are more advanced down the road in their cyber journey?
[00:14:07] Speaker B: Yeah, that's a great question.
Without getting into the whole technical history of AI, which I love talking about and would happily do for hours and hours, one fact really needs to be known for the purposes of this conversation: when I'm talking about AI, I mostly mean generative AI, things that look like or behave like ChatGPT. So in that context, one thing that's really important to know is that this kind of AI was mostly created for language-based tasks, specifically translation, but in general for things that are conversational, that sound like a person talking to you. That's important to note as we start looking at how AI can impact cybersecurity, because there are a lot of problems in cybersecurity where we hear AI and think, aha, this is going to solve that problem. We can hand it our data lake, and it's going to tell us every security problem. That's not really something this kind of AI is suited for. In fact, that's something big data and data science have been doing for 20 years; that's where data lakes came from, to solve exactly that problem. So there are a lot of problems in cybersecurity that people want AI to be a solution for, and they really just want it to be a solution because that would make them a lot of money. That being said, there are places where it is a really useful solution. I was alluding to that when I made the point that these models are really good at conversational tasks. One of the places I've already seen it implemented is in various SIEM and SOAR products, where they're using it as basically the final analytic layer before an alert gets to the analyst. If I'm an analyst looking at a SIEM, I'm looking at dozens, maybe hundreds of alerts a day. But I'm also not seeing thousands more alerts that are below a certain threshold, that don't actually meet incident qualifications, whatever the case may be.
There are thousands and thousands of data points coming into this tool, and I'm not necessarily going to want to read through all of them. What AI tools are really, really effective at is taking all of that information that's already structured and formatted. It's not unstructured, not a big blob of data somewhere; it's very clearly structured, metadata-typed, and sorted data. The AI can look at all of that and use what's called retrieval-augmented generation, RAG, to basically use only the context of those documents to give you its result. That means it's going to give you a very human, very conversational analysis of a very limited data space. And that's great, because it doesn't remove the possibility of the AI being wrong, but it dramatically reduces the risk of it. It also means that what it's producing an analysis of is something you can just look at yourself. If you're like, this doesn't make sense to me, you can look at the alerts and see if there's a mistake somewhere in there. So it's a very tightly focused use. It's just taking a big pile of alerts and putting them into a paragraph that a human being can read. But that's what the analyst would have had to spend an hour or more doing: manually reading all of those alerts and putting them into a paragraph. That's the biggest impact I've seen for it. And you see that same basic use spread across all forms of cyber. Threat intelligence has a similar thing, where reports are now being generated based on the data being put in; threat hunt reports are generated the same way; purple team, red team, all of that. A lot of it comes down to: we have the data, now let's put it out in a formatted form for users. For a lot of folks, when they first hear that, it's kind of a disappointing use case for AI. They're like, oh, so it's a secretary.
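The alert-summarization pattern described here can be sketched in a few lines. Everything below is illustrative, not any vendor's implementation: the alert fields are invented, the keyword-overlap scoring is a stand-in for the vector retrieval a real RAG pipeline would use, and the call to an actual language model is left out entirely; the sketch only shows the retrieval step and the context-constrained prompt that would be handed to the model.

```python
# Minimal sketch of the RAG pattern: retrieve only the structured
# alerts relevant to a question, then constrain the model's answer
# to that retrieved context.

def retrieve(alerts, query, k=3):
    """Rank alerts by naive keyword overlap with the query (a stand-in
    for the vector search a production RAG pipeline would use)."""
    terms = set(query.lower().split())

    def score(alert):
        text = " ".join(str(v) for v in alert.values()).lower()
        return sum(1 for t in terms if t in text)

    return sorted(alerts, key=score, reverse=True)[:k]

def build_prompt(context_alerts, query):
    """Build a prompt instructing the model to answer ONLY from the
    retrieved alerts -- the constraint that reduces hallucination risk."""
    context = "\n".join(
        f"- [{a['severity']}] {a['rule']}: {a['detail']}" for a in context_alerts
    )
    return (
        "Using ONLY the alerts below, write a short analyst summary.\n"
        f"Alerts:\n{context}\n\nQuestion: {query}"
    )

# Toy alert data in the "clearly structured, metadata-typed" shape described.
alerts = [
    {"severity": "high", "rule": "brute-force", "detail": "500 failed logins on vpn-gw1"},
    {"severity": "low", "rule": "port-scan", "detail": "scan from 10.0.0.5"},
    {"severity": "high", "rule": "brute-force", "detail": "lockouts on admin accounts"},
]
top = retrieve(alerts, "failed logins brute-force", k=2)
prompt = build_prompt(top, "Is there an active brute-force campaign?")
```

Because the prompt contains only the retrieved alerts, an analyst who doubts the summary can check exactly those alerts by hand, which is the verifiability property the conversation highlights.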
And my response is like, well, yes, now you have someone writing your report, you have a secretary. That's such an expensive thing and hard to get. I would fight tooth and nail to get a secretary in my current role. It would save me hundreds of hours a month.
So, yeah, AI is just a secretary in that case. But that's a really powerful use if you're using it effectively. A slightly more technical use case that I've started to see, and I'm not convinced it's going to work long term yet, but I'm also not convinced it won't, I just haven't been convinced in either direction: a lot of organizations, the MITREs of the world, or folks associated with the MITREs of the world, who have really large data sets, MITRE ATT&CK, MITRE D3FEND, and so on, use AI to basically take a given attack or a given event and map it across those frameworks to quickly give you information about what next steps to take and what process to follow. And that gets a bit closer to what a lot of people imagine AI being used for, which is automating that whole operational process from start to finish.
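The framework-mapping idea can be illustrated with a toy lookup. The technique IDs T1110 (Brute Force) and T1059 (Command and Scripting Interpreter) are real MITRE ATT&CK identifiers, but the pattern table and the suggested next steps below are invented for illustration; a production system would draw on the full ATT&CK knowledge base, with an AI layer handling events that don't match any pattern exactly.

```python
# Toy sketch of mapping an observed event to a MITRE ATT&CK technique
# and a suggested next step. The patterns and steps are illustrative.

ATTACK_MAP = {
    "repeated failed logins": ("T1110", "Brute Force"),
    "powershell spawned by office app": ("T1059", "Command and Scripting Interpreter"),
}

NEXT_STEPS = {
    "T1110": "Lock affected accounts and review authentication logs.",
    "T1059": "Isolate the host and capture the process tree.",
}

def map_event(description):
    """Return (technique_id, technique_name, suggested_step), or None
    when no pattern matches -- the gap an AI layer would try to fill."""
    key = description.lower()
    for pattern, (tid, name) in ATTACK_MAP.items():
        if pattern in key:
            return tid, name, NEXT_STEPS[tid]
    return None

result = map_event("Repeated failed logins against the VPN gateway")
```

The `None` branch is the interesting part: static tables only cover events someone anticipated, which is exactly where the episode suggests an AI layer might, or might not, earn its keep.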
There are a lot of questions yet to be answered about that. I mentioned the report creation earlier: if AI makes a mistake in generating its report, I as an analyst read that report and say, that doesn't look right based on what I found, and I can go check that data. If this AI is following the MITRE process, following the kill chain, making changes to my environment, cutting the attack route, mitigating, doing all of this, a mistake is much more potentially catastrophic and requires much more human intervention. So I think the question still exists: will there be so much human intervention required that it's not really worth the expense of implementing AI? Or will we find a balance where, okay, we go from a team of ten people monitoring this to one person who monitors it, so it's worth the investment? I'm not at all convinced that the second case is going to happen yet, but there are a lot of very smart people working on it, and I'm willing to give them the benefit of the doubt. Those are probably the two biggest general use cases I see for AI. Beyond that, when you get into the very, oh, AI is going to hack your environment with a brand-new zero-day: that's nonsense. It comes from a fundamental misunderstanding of how AI works. Anybody with enough money to do that can just tell the NSA to hack you. They're already a government; they already have a spy agency working for them. They don't need to build an AI.
[00:19:15] Speaker A: Well, and I think people get confused around some of those topics, like a lot of the quantum computing and all that. That really is not AI. That's a difference in the processor, in how quickly and fast it works and how many records it can do at a time and all that kind of stuff, versus AI. I think many of us have this vision, whether it's from Terminator or whatever, that you're going to have this thing that can out-crunch us and is smarter, et cetera. But our current version of AI is not that. Now, the next version: if you go from 3 to 3.5 to 4, obviously there were giant leaps in all of those, and 5 is potentially exponentially better than where it is now. And sure, maybe at some point we'll get to that place. But you've even read books and articles around, even on the defense side, using AI to just make things faster. Right? The whole point is to narrow down, but I still want a human making sure that, hey, this is the right decision. And I'm hesitant, take it out of defense, put it into a power utility, put it into critical infrastructure: am I willing to just give the business, the automation, the computer the right to make all decisions without human intervention, ever? The whole point of OT is automating. We are automating, and we're building: if this happens, then do this. If this happens, then do this. But through all those things, an operator is watching screens, and they're seeing those indications come up, and they're able to hear, look, feel, taste, smell all the things going on around them and take action accordingly. Right? They know their terrain. They understand what good looks like. They understand when to take action, what to do in these scenarios. You go back to your IR doc or whatever, your response plan: how do I operate this thing?
These are my normal conditions, and I know what to do. I don't see a time anywhere in the near future where we're going to just remove the human entirely. Now, to your point, maybe I'm going to reduce it from needing ten to needing two. Can I do that? I can absolutely see that. But I don't see anything in the near-term future where we say, oh, we're just going to completely automate that thing and then we're done. We'll walk off, go do the next one, and AI will take care of it just like a human would. Yeah, we're not there yet.
[00:21:47] Speaker B: Well, one of my favorite quotes in the whole world comes from Marvin Minsky, in 1967. Marvin Minsky said, I am convinced that within a decade we will have substantially solved the problem of creating human-like intelligence. In other words, what he was saying was, we will have Terminator-level AI no later than 1975. Sure, he was a little off.
So I don't think anybody who makes the claim today of, oh, we're X years away, we're Y years away, is any less wrong. And quantum computing is a great parallel here, because quantum computing and AI, and, in a slightly different but related way, things like the metaverse or bitcoin, all have a similar problem at the heart of them. And I love AI. I have been obsessed with AI for basically my entire professional career, so this isn't just me saying, oh, AI is stupid, we shouldn't do it. But they have a fundamental problem at the core, which is that they require such a massive investment and such massive public adoption to make sense that they really have to reshape the world; otherwise they're not worth talking about. So every time someone who sells AI is talking to you about AI, they're always going to be pitching you the future of what AI will be one day, and the fact that we're not there yet is kind of incidental to the current conversation. The fact that, oh, one day you'll be able to give ChatGPT 5 total control of your environment. It's like, okay, but today I'm on ChatGPT 4, right? And if I try to do that, it's going to sell my company for a peanut and promise that it's a legally binding deal, right? What are you talking about? So much of what we have to do as security professionals, and this is why I love talking about AI and hype in security, is not to be true believers, but also not to be obsessive skeptics. It's not to say AI is stupid and useless, because I just talked about two use cases where a SOC that uses AI to do that first-pass analysis is going to work hilariously faster than a SOC that doesn't. I've seen it in our own practice. I've seen it with my own analysts; I've seen it in my own work. Having access to those summaries is so much faster.
Even in the times that I've had to say, that's not right, and go back and check it, that's one thing I had to go back and fix, as opposed to all of the little bits and pieces that I would have had to arrange and get just so. It's so much faster.
So the thing that I love about AI, and the thing that we as security professionals have to do, is pick out the parts of it that are actually useful and figure out, through all the hype, through all the magical thinking and the mythical language about what AI is going to do, and the dark AI god that's going to turn us all into paperclips, what the reality of it is. Mostly, what it's good at is taking very technical data and turning it into conversational data. And that's very useful. That's a very useful thing for it to do, but it's a bit different from solving cold fusion and taking over the world. That's a very different problem set. To your point about ChatGPT 5: it might be worlds better than GPT-4, and it very likely is. One of the interesting things about this kind of technology is that, basically, the more money you dump into it, the better it gets, in a pretty predictable way, which is not true of most technology. If you dumped $10 million into making a car, there's not really any reason to think you're going to get a better car than if you dumped $5 million into making it. Ask the Air Force about their jets.
That was mean.
But with AI, there is kind of that promise of, if you dump more money into it, this model will get more powerful. Now, we don't know how long that's going to be true. And that's one of the things that's really interesting, because we've basically seen it with each form of AI in the past. The first version of AI is what we now think of as the Internet: the idea of web pages and distributed, metadata-tagged information that's accessible from across the world. And then the definition of AI matured into human-level reason, human-level thinking; there's never really been one accepted definition of AI. Then in the eighties and nineties, it got dialed back to basically what we now think of as Excel: the ability to transform data at large scales and turn it into useful human knowledge. That's AI in the eighties and nineties. Then in the two thousands, AI was big data: the ability to draw conclusions, to make market decisions, based on massive unstructured data. And that, by the way, when we talk about the most advanced AI, is still the most advanced AI. The stuff that folks like Goldman Sachs are using for fraud detection is hilariously more powerful and advanced than anything we're using to give us a recipe in the form of Shakespeare. It's so much more effective and technologically impressive to me. That being said, what I think is really likely is that in the next few years, we're going to get to the end of this current version of AI's ability to keep getting better. They're going to dump another billion dollars into it, and it's only going to get a tenth of a billion dollars better, and that's not going to be worth it for the investors. But it always leaves the seed of the useful thing, and it always leads to the next version of AI technology. The paper that popularized ChatGPT, that made this current version of AI popular, came out in 2016. It's not a very, very old paper.
There were a lot of other papers coming out at the same time, trying to solve those same problems, that may spark the next AI revolution. And so we as security professionals have to look at this current AI revolution, this current hype cycle, and say, okay, what can we take out of it? What's going to be useful? What is a risk that we need to avoid? What is the thing that we need to mitigate, manage, and control? What is a third-party risk, et cetera? But also, what lessons can we take from this, so that the next AI hype cycle we see, we come into it more prepared, and we don't make bad investments or bad decisions? That's one of those, in my opinion, really cool things about the cyclical nature of AI and security: if you pay attention, you get to learn from the cycles and do better on the next one. So that's my soapbox about AI and the past and future thereof.
[00:26:59] Speaker A: I love it. So I also want to dive into what you just said there. I love the term, and again, it aligns with a lot of things I'm talking about this week as well. And it's that hype cycle. And that may have a negative connotation to some folks, and it's not intended that way. Right. It's about the things that we're seeing in the zeitgeist, in the current, you know, cybersecurity tools or processes or capabilities or whatever, obviously, AI being one of those. What are those things that you're seeing today in the current hype cycle that are really impacting us, whether positively or negatively or neutral, even overblown, underblown, whatever it may be?
[00:27:40] Speaker B: Yeah, that's a great question. So I think one of the really interesting things about hype cycles in this context, and the reason why we want to talk about it, is that hype cycles have been a known and studied thing in AI for a very long time, for decades and decades now. And a hype cycle really only refers to AI's relationship with the public, not actually to the growth of the technology itself. Basically, the idea here is the public catches wind of some new innovation in AI, like ChatGPT. Which actually, before ChatGPT, it was deepfakes that caught everyone's.
[00:28:07] Speaker A: Attention, and now they're just like, oh, my God, this thing is crazy. Which it's been around. Right?
[00:28:11] Speaker B: Right, exactly. And so everybody gets really hyped up, and we start hearing the predictions of, oh, the global economy is going to grow by 5% because of this, which is a wild and goofy prediction to make about any technology except for harvesting wheat. But anyway, then we move into the phase where it doesn't manifest those predictions, it doesn't give us those things, and everybody says, oh, this technology was actually always stupid and useless, we don't need it, we're going to throw it all away. And then we move into the stasis period before the next hype cycle. In this current hype cycle, we're just getting into that third stage now, where a lot of folks are starting to say, well, we're spending all this money on AI, where are all the promises? Why isn't it materializing? Why aren't we getting anything out of this? But folks are kind of missing that we have actually gotten a ton of stuff out of it, like a huge amount. For example, one of my favorite things about this is medicine and the COVID pandemic. I don't want to say AI was the only thing that helped solve the pandemic, but one of the biggest influences in the vaccine being developed as fast as it was, was AI: basically a machine learning model, similar but not identical to the way ChatGPT is designed, working with all of the different pharmaceutical research organizations across the world that were doing research into COVID, pooling their research in one of the greatest acts of humanitarian medical knowledge-sharing in human history. The idea is that in medicine, in finance, in cybersecurity, we are seeing these really complex analytic tasks being made, if not trivial, then far more approachable. I could not develop a drug with the help of AI; it doesn't matter how good the AI is. I'm an idiot when it comes to drugs.
I don't know anything about chemistry. But somebody who has a bachelor's degree today now has a much more effective ability to contribute to their team than they would five or ten years ago, when they had to manually look up each thing and go through this whole process. And that's not using ChatGPT; that's using tools that are built on similar designs, with these data sets they're trained on.
So we're finding that AI that's not necessarily identical to ChatGPT, but is very similar in concept and design (neural nets trained on curated data sets, generating new information as a result of those data sets), is showing up in all of these really niche areas. In really niche fields where we already have deep domain knowledge, where we already have this corpus of information for the AI to be trained on, that's where it's really, really impactful.
Most of those places have already been using big data or data science in that direction for a while, so to them it feels more like one step up rather than the dramatic revolution that it actually is. But under the hood, it's a really significant change. So that's the biggest area. The other area that I think is worth calling out is that AI is really good for criminals and fraudsters. Like, it's so, so easy to do a fraud right now. I don't want to advise anyone to do it, but I have seen more fraudulent emails, more fraudulent letters, more convincing frauds in the last 18 months than in my entire career before that. And I saw a lot of fraud before that in my career; it was not a rare thing. But it's such a powerful tool, and not just for overt fraud, but even for the kind of shady two-person basement consultancy that we see a lot of in cybersecurity. It's very easy for them to all of a sudden have a professional or semi-professional-looking presence and statement of work and all of this that can give a client a really, really positive impression of what they can do. And then the client actually works with them and finds out that it's a bargain-basement consultancy. So I don't want to say that that's all AI is good for, but it is really, really good for fraud and criminality.
[00:31:42] Speaker A: Yeah, I've seen some of that talked about. Really diving into it, though: the emails you got from the Nigerian prince, the language was bad. If you really paid attention, it was obvious it wasn't right. But now those same scammers can take that copy, put it into ChatGPT, and say, this is designed for a United States audience. They can get specific: I want to focus on people in Texas, so I want to talk like a Texan would talk, or a Californian, or whatever. So that language barrier, and noticing the.
[00:32:19] Speaker B: Style of a 60 year old from Houston.
[00:32:20] Speaker A: Exactly. So it's harder to notice the differences in language. And it's very easy to take a tool like this, copy and paste, and get out something that looks a lot better and is easier to assume is legitimate, especially for somebody that doesn't do this for a living. You and I will probably notice it a lot more than others who don't do this every day.
[00:32:49] Speaker B: Yeah, we see hundreds of examples a year.
[00:32:51] Speaker A: Exactly. We get them all the time. We've seen really good ones and we've seen really poor ones, everything in between, but we're also trained to look for those things. Whereas my grandmother, who is in her nineties, is just assuming it's real. My wife has even gotten the text: oh, you want a Walmart gift card for $100? Just click on this link and sign up. Those things are getting better and better. And ChatGPT is definitely a tool, just like anything, and I think that's the good point to really focus on here: it is a tool. It's like a hammer or a pry bar or whatever. You can't just expect the tool to do all the work. You still have to wield it, you still have to do something with it, you have to instruct it and guide it, and you're going to get different results depending on how good you are with that tool. I can take the same tools as a 30-year expert craftsman who builds cabinets, and I'm not going to get the same quality. I can buy all of his tools and materials and everything, and I'm not going to build the same quality of cabinets that he does, because I haven't been doing it for the past 40 years. I can't expect to take that tool and get the same results that he does.
[00:34:00] Speaker B: Absolutely. And that alludes to the thing that's important to understand here about the way scammers use generative AI. ChatGPT is probably the biggest, but they're also using things like Midjourney to create images, and using video generation. There is actually one GitHub project that's responsible for 95% of all deepfakes, and I can't remember the name of the project off the top of my head right now, which is for the best, because I don't want to advertise for them.
[00:34:24] Speaker A: Exactly.
[00:34:26] Speaker B: But yeah, the thing you need to know about attackers is they're not just attacking you; they are sending out millions of messages, or at the very least tens of thousands. And it used to be that wasn't that big a deal, because they still had to have somebody on their side writing those emails. So yeah, they could A/B test against 10,000 people, but they were looking at a click rate of 0.001% versus 0.0015%; it's really just not a big deal. Whereas now that they have ChatGPT, it can give them hundreds of example lures right out of the gate. Well, now I can A/B test, now I can do really good audience testing across all of those different examples and figure out, oh, this one has a click rate of 5%, thousands of times better than anything we've ever seen before. Fraud in the last 50 years has developed this incredibly powerful industrialization to it, but it always lacked inputs, because there wasn't really anybody willing to sell their good exploits to those folks, and there wasn't really anybody willing to do the hard copywriting for them. All the stuff that we see in the more advanced groups, like the Russian threat actors who are really, really advanced, who have assets in country, who will write emails and do things for them, a lot of these scam fronts, a lot of these really large industrialized fraudulent organizations, didn't have access to that before. They were kind of considered the lower rung, the lower class of criminals. And now that doesn't matter as much; now they have access to that input. And so that's the way in which, as you call it, it's a tool. And you're exactly right: the most powerful tools are force multipliers, and that's all this is, a force multiplication tool. Now, that's not to say it's not a tool that we need to consider carefully. Take the Hong Kong case.
Last year, somebody sent $25 million to a fraudster as a result of a video call where the fraudster used deepfakes to create an entire room full of people who were like, we need this right now, we've got to get it done, it's a secret, it's such a big deal. If you're a finance person who's never really dealt with this kind of fraud before, and you get on a conference call with ten people, of course you're going to believe them. Of course you think that's real. So that's where fraudsters are now: they get to take this industrial engine they've been building for 50 years, and all of a sudden they have really, really high-quality gasoline to put into it. It used to be they were working with basically just mud and whatever little bit of fumes they could get out of that mud; that's all they could get out of their engine. But now they have what that engine was designed to run on, and that's a very, very scary thing.
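The click-rate economics described here can be sketched in a few lines of Python. Everything below is hypothetical, the variant names and numbers are invented, but it shows why cheap, mass-produced copy suddenly makes audience testing worth an attacker's time:

```python
# Hypothetical sketch: why cheap variant generation changes phishing economics.
# Variant names and all numbers below are invented for illustration.

def click_rate(clicks: int, sends: int) -> float:
    """Fraction of recipients who clicked a lure."""
    return clicks / sends if sends else 0.0

# Hand-written era: one or two templates, with a tiny gap in performance.
hand_written = {"template_a": (10, 1_000_000), "template_b": (15, 1_000_000)}

# Generative era: hundreds of tailored variants, each tested on a small batch;
# the winner then gets promoted to the mass send.
generated = {f"variant_{i}": (i % 50, 1_000) for i in range(200)}

def best_variant(results: dict[str, tuple[int, int]]) -> tuple[str, float]:
    """Pick the variant with the highest observed click rate."""
    name = max(results, key=lambda k: click_rate(*results[k]))
    return name, click_rate(*results[name])

name, rate = best_variant(generated)
print(name, f"{rate:.1%}")  # the winning lure, e.g. a 4.9% click rate here
print(f"{click_rate(*hand_written['template_b']):.4%}")  # versus 0.0015%
```

With one writer producing two templates, the measurable gap between them is noise; with two hundred machine-written variants, the best performer stands out immediately, which is exactly the force multiplication being described.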
[00:36:48] Speaker A: So how does all of this adjust and change and shift in the incident response world as these new emerging technologies come in? We have all these things and capabilities driving how we have to respond, and we don't want to be too far behind the eight ball, but obviously we are. Incident response implies right of boom: the thing has happened and now we're responding to it.
[00:37:12] Speaker B: Yeah, I think the biggest change is this. It's always been true that in incident response there's going to be some social engineering element to the attack, whether it's a phishing link that got clicked, an actual phone call that happened, or an email that went back and forth. In almost every incident I've worked across my entire career, and it's been way too many incidents, there's been some element we would classify as social engineering.
Now, that's not really an innovative or exciting thing to say. If you work in IR or red teaming, you already know social engineering has been key to the process for 15 or more years. The big thing we're seeing now, however, is that between ransomware and AI, social engineering is often the whole incident. It used to be the attacker would get into your environment and then need privilege escalation, need to establish persistence, need to move through the environment, need to get to all of these different systems. Now, in a lot of the cases I see, the attacker got a click from ten people in the organization, figured out which of those ten people had access to reasonably important data, and hit whatever systems they could reach with ransomware. There's very, very little process inside the environment. When I started in incident response, we would talk about dwell times of 140-plus days, where the attacker really carefully mapped out the environment, understood everything, got to know everything. Attackers now are much more: figure out what part of your infrastructure is critical, find a way to touch that part of your infrastructure, shut that thing down, move on. Data exfiltration still absolutely happens; there's still that theft and that extra ransom. But more and more, those attacks are getting shorter in their cycle, and they're getting much more tightly focused on just getting that initial access through social engineering. That's not to say there's no technical aspect to it, but I've definitely seen a lot fewer really technical incidents in the last few years.
I would also say, from the incident response perspective, one of the biggest differences is this: most of the time, the attacker you're investigating is obviously trying to avoid getting caught, trying to avoid the security tools, trying to act in whatever way is going to be most stealthy. Well, I should say that used to be the case, because the attacker had something to protect: they had access to this environment and they didn't want to burn that access. Now, in this age of industrialized processes, where they have AI doing most of the first-pass work for them, where they've got a scanning tool finding the low-hanging fruit and they're deploying their scripts automatically, never touching anything manually, a lot of that has gone away. So a lot of attacks are getting noisier and clumsier. We're seeing a lot more of those script kiddies who before would be able to hit like five companies a month now hitting 100 or 500 companies a month. And as a result, the average attack is getting a lot louder. That's a good thing from the incident response perspective, in that it's a lot easier for me to find and be like, oh yeah, this person's using a ten-year-old exploit, you should have patched that. It's also very easy to catch using a modern SIEM. So we're seeing a lot of those low-hanging fruit that used to escape notice now being hit, because the ocean, so to speak, is drying up as everybody targets every possible target on an industrial scale.
[00:40:10] Speaker A: To put it in sales terms, it's almost like the cost of acquisition has dropped to a point where they're willing to spray and pray more often. Right? So is it also that they're less worried about maintaining that access, because they're more confident that if you clicked on it once, I can probably get you to click again? I'll just change the attack and get access another way, maybe not through the same person, but through somebody else in your organization, even if you find me today.
[00:40:37] Speaker B: There is definitely quite a bit of that. And I think one of the things that's really helpful in understanding this is that it's not so much that old attackers have changed their process as that new attackers use a process that didn't used to work, and now that process works. So it's exactly what you're talking about. The sort of attacker we used to be worried about is somebody who's really focused, really subtle, really attentive; they get one point of access and they don't want to get caught, because if that gets burned, their whole attack chain might get burned. Whereas now, when we're dealing with somebody who's willing to be loud, they'll send the same phishing email to 15 or 20 people in the organization, and if they get a click-through on all of them, they'll start exploiting all of those people at the same time; they'll start hitting all of those at the same time. So it is absolutely what you're seeing. I don't think it's necessarily that AI has caused a change in practice so much as AI has enabled what we in this field would consider very bad practice.
[00:41:23] Speaker A: Sure. Yeah. And again, that tool is leveling up people that don't have that skillset. Script kiddies is a great example. Why don't you explain what that is? Some may know and some may not.
[00:41:36] Speaker B: Yeah, that's a great question. This is kind of a parallel. Using marketing and sales as the example: if you work in marketing or sales, you're probably aware of this huge influx of people who have absolutely no idea what they're doing, but have decided that because they have access to ChatGPT and it can spit out their sales script for them, now they're a salesperson. The same thing's happening to us. So script kiddies are folks who we would think of as being not just not very good at what they do, but all they're capable of doing is running a script, in other words, using a tool that someone else created for them. That term got popularized with the Low Orbit Ion Cannon incident of like 2005 or something like that, a very long time ago, where a DDoS tool, a distributed denial of service tool, was published that leaked the user's email address. It was a cracked version, and anybody who used that version of the tool had their email essentially sent straight to the FBI, and hundreds of people got arrested as a result. It was a huge takedown. And that kind of cemented this contempt in the community, because the cracked version of the tool wasn't introduced by law enforcement; it was a hacker who did it to screw with people.
The folks who just use tools without understanding them have often been held in contempt by the industry because they just get their hands on a tool and just spray and pray it, which is exactly what we're seeing with AI.
[00:42:53] Speaker A: Well, and now we even see ChatGPT able to take those existing tools. Maybe I don't have the programming chops to really create something myself, but I can say, hey, adjust this to fit this environment, a certain vulnerability or whatever. So again, I don't necessarily have the capability to do that in code myself, but I can use tools like ChatGPT to help me. And we've seen a lot of that happening too, where ChatGPT is just helping them take a tool and modify it to fit a certain scenario.
[00:43:32] Speaker B: Yeah. And you talked earlier about how ChatGPT is just a tool. That is kind of what's happening here: people are getting really, really good at using ChatGPT in and of itself, just using this tool, and so now they're able to adapt it to lots of scenarios. One of those scenarios is this kind of attacking. I, for example, will use ChatGPT when I'm researching a new topic. I don't use ChatGPT to give me the answers; I use it to develop my reading list, to develop my study guide, to develop my syllabi, to help quiz me, stuff like that. I use it for processes like that to help me develop knowledge really, really fast. Attackers do essentially the exact same thing to develop new exploits or to get access to your environment. So if I'm a script kiddie and I see, oh, you've got a login page, well, I remember reading a blog that said SQL injection attacks work against login pages. So I'm going to go to ChatGPT and say, please give me a SQL injection attack to use against this URL. Probably it'll tell me no, it's not allowed to do that, and I'll need to massage my request a little bit. But if I'm really good at using ChatGPT, that's not a struggle at all.
And so, even though all I know is that's a login page, and I remember reading somewhere that SQL injection is a good way to defeat logins, ChatGPT can do all of the other work for me, because I'm good at using this tool. Again, I'm not really a high-quality attacker: as a security professional with more than a blog-level understanding of security, I'm probably not going to hit a login page with a SQL injection right out of the gate, because I'm aware that stopped being useful against most organizations like ten years ago. But as I said before, all those low-hanging-fruit targets that escaped notice before are now seeing attacks that would previously have been disregarded as non-viable.
But yeah, it's that script kiddie, that sense of it's not about knowing everything. It's not even necessarily about being good at everything. It's just figuring out how to use one tool to solve as many problems as you possibly can.
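The login-page attack described here can be made concrete with a small, self-contained sketch. The table, usernames, and passwords below are invented; the point is the difference between pasting user input into the SQL text and passing it as a parameter:

```python
# Minimal sketch of the classic login-page SQL injection, and the fix.
# Uses an in-memory SQLite table; all names and credentials are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(user: str, pw: str) -> bool:
    # BAD: attacker-controlled input is pasted straight into the SQL text.
    q = f"SELECT 1 FROM users WHERE username = '{user}' AND password = '{pw}'"
    return conn.execute(q).fetchone() is not None

def login_safe(user: str, pw: str) -> bool:
    # GOOD: placeholders keep the input as data, never as SQL syntax.
    q = "SELECT 1 FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (user, pw)).fetchone() is not None

# The classic bypass: comment out the password check entirely.
payload = "alice' --"
print(login_vulnerable(payload, "wrong"))  # True: authentication bypassed
print(login_safe(payload, "wrong"))        # False: treated as a literal name
```

In the vulnerable version, the payload turns the query into `SELECT 1 FROM users WHERE username = 'alice' --...`, with everything after `--` treated as a comment; the parameterized version never gives the input a chance to become syntax, which is why this attack stopped working against any reasonably maintained application years ago.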
[00:45:20] Speaker A: Right.
Well, let's dive a little bit into threat intelligence and what we're seeing in the space. It's not that the space has changed all that much. We've always had nation-state actors; we've always had bad actors doing it for financial purposes, or because they're a disgruntled employee, or they don't like the company, or they're social activists, the hacktivist-type folks. The same categories have always existed. I think the tools have adjusted, and maybe given some of the others more access or success, more attempts at bat, swinging more often, because they have access to this. So dive in a little bit around the threat intelligence that we're seeing in the space.
[00:46:09] Speaker B: Yeah, absolutely. So in general, when we're talking about threat intelligence, there are a lot of approaches to it, and I always try to take the approach of: for a given organization, this is the thing that's most likely to be your problem in the next X amount of time, whether that's 30 days, 60 days, 90 days, one year, whatever the case may be. And it used to be that for most organizations, their biggest fear was just getting in the news, just being on the front page. Bad press, right? Not just bad press, any press.
[00:46:36] Speaker A: Sure.
[00:46:37] Speaker B: Because that's what's going to draw a lot of eyes to you. That's when most attackers are going to take their first pass at you with a scan to see what they can hit. And so what you needed to be prepared for was those moments of notoriety; you needed more attention, more preparation around those. That's not really as true anymore, because of what I've been talking about with this industrialization, this kind of grinding process brought on by a lot of these script-kiddie-level actors. The thing that makes you most likely to be hit now is just being under a certain threshold of ease of access. So you're absolutely right: the categorization, the process of discovery in threat intelligence hasn't really changed. But the way I prioritize, the way a lot of my peers in the field prioritize which intelligence we need to approach first or care about first, has changed.
We're still spending time on APTs. I'm still paying attention; I have clients that have been specifically targeted by Russian actors because of sanctions, stuff like that. I'm still attentive to that, and I still keep track of it. But a lot more time now is spent on: okay, this is the exploit that's being hit by 60% of threat actors in this space, or whatever the case may be, even though it's not necessarily one that leads to the entire environment; it only gives access to this one bit of data. We've found that attackers aren't necessarily concerned with that. They'll go in through this one open door, hit the low-hanging fruit, and bet that they can get something out of the deal.
A lot of what we're doing now is really shifting away from an attacker model of: let's deep-dive this attacker and know everything there is to know about them. Because even the APTs, even the really advanced ransomware gangs, break up and reform and break up and reform all the time, and so it's not easy to keep track. So a lot more focus is now given to the technological side: the CVE tracking, the vulnerability tracking, the attack surface management, stuff like that. That's where threat intelligence is really tightly focused now. In terms of concrete findings and things I find really interesting in threat intelligence right now, my favorite fact about threat intel is just that Russian hackers really don't know what to do, and haven't known what to do for like two years now.
I've worked with a lot of companies doing ransomware negotiation and stuff like that, and ransomware negotiators went through a really bad period a year or so ago, because nobody was doing ransomware. Nobody in or around Russia really knew whether they were going to get called up to war, or whether they were going to get blown up, or what was going to happen with their whole lives, so they just weren't hacking. In terms of interesting things happening in the world of threat intelligence right now, we're starting to see Russian threat actors reassert themselves. But it's lawless in a way that it wasn't before. We used to see groups like Maze that would say, we don't attack hospitals, or whatever. Nobody has those kinds of rules anymore; there's none of that honor-among-thieves mentality. It's really chaotic. Threat actors are taking work from one another.
I won't give too much away here, because this is something negotiators are actively using in a lot of different ways, but one technique that is viable now that was not viable, say, ten years ago, is playing threat actors against one another and getting access to information, data, or systems, simply because, as I mentioned, groups break up and reform over and over. Finding disgruntled hackers who are willing to sell out their comrades is a lot easier now that they're not part of one really big hierarchy that eventually leads up to the FSB.
So it's a really fun place to be, especially if you're a ransomware negotiator right now.
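The prioritization shift described above, ranking findings by how often attackers actually exploit them rather than by raw severity alone, might look something like the toy scoring sketch below. The CVE labels, CVSS scores, and prevalence figures are all made up; a real team would feed this from its own telemetry and from feeds such as CISA's KEV catalog or EPSS:

```python
# Toy sketch: rank findings by observed exploitation, not severity alone.
# All identifiers and numbers below are invented for illustration.

vulns = [
    # (label, cvss_severity, share_of_observed_attacks, internet_facing)
    ("CVE-A", 9.8, 0.02, False),
    ("CVE-B", 7.5, 0.60, True),   # moderate severity, but the crowd favorite
    ("CVE-C", 8.1, 0.05, True),
]

def priority(cvss: float, prevalence: float, exposed: bool) -> float:
    """Exploitation prevalence dominates; severity only breaks ties."""
    return prevalence * 10 + cvss / 10 + (2.0 if exposed else 0.0)

ranked = sorted(vulns, key=lambda v: priority(v[1], v[2], v[3]), reverse=True)
for label, cvss, prev, exposed in ranked:
    print(label, round(priority(cvss, prev, exposed), 2))
# CVE-B comes out on top despite having the lowest CVSS score
```

The weighting here is arbitrary; the design point is simply that the vulnerability 60% of actors are actually hitting outranks a higher-severity flaw almost nobody is using, which mirrors the shift away from deep attacker profiling toward vulnerability and attack-surface tracking.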
[00:49:53] Speaker A: Yeah, that's interesting to see. Obviously, all these world events happen, and we don't always understand the downstream effects, the butterfly effect: something happened over there, and what impact is it going to have on us here? So, for entities out there, you can't expect to know all of this, any more than I can go out and fix my car. I'm a shade-tree mechanic; I used to work on cars and things like that. But my car today has more computers in it than anything, and they're not the computers that you and I work on. It's a different thing. It doesn't mean I couldn't figure it out, but if something's happening there, I'm not the expert, so I'm going to call an expert.
[00:50:32] Speaker B: Right.
[00:50:32] Speaker A: So I think we're seeing more and more of that today, with all the focus on CISOs now, with their personal responsibility for impacts to their organization, and going to jail. It's crazy.
And unfortunately, I think it's going to force change, and I think it's probably a good thing. Not that CISOs are going to jail, but there's been a long time of CISOs that were C-suite really in name only. They didn't have the authority that a normal C-suite executive would have. They're not in the board meetings, they're not directly reporting to the board; they're sometimes two and three levels below the rest of the C-suite. So they're being held responsible for something, yet they don't have the authority to take action. They don't have the budget. They're having to get those things pushed through others: Bob told them no, and now I'm held responsible because he wouldn't approve the budget. So all of that is kind of flushing out and forcing organizations to put the responsibility and the authority into the right place, because if something happens, we're all held responsible for it.
We can't choose the insecure option anymore because it's cheaper.
[00:51:54] Speaker B: The SEC has made it really clear they're not willing to play ball on that anymore either. They have become much more strict about whether it's reporting material incidents. They've given very strict timelines on that, whether it's yearly disclosures, discussing the board's experience with cybersecurity, whether it's whenever a new board member joins, they need to detail their experience and knowledge of cybersecurity. It is very clearly a matter that is getting scrutiny right now, and not just scrutiny in the sense of, oh, you need to be secure or you're going to get in trouble. That's not what happened. The scissors who are drawing legal attention, I'm not going to say whether or not they deserve to draw legal attention. I'm not a legal scholar. It's not my area. But the facts of those cases are legally interesting facts and are things that I would prefer a judge handle than you or me, schmucks off the street.
[00:52:36] Speaker A: Correct.
[00:52:36] Speaker B: And so there is definitely an element where, whether it's the legal process, whether it's auditing, whatever the case is, we're seeing what I might call very serious people taking an interest in cybersecurity. Before, we'd go to the board and say, hey, you need to care about cyber, because if you don't, something might happen. Now they're saying, yeah, we need to care about cyber, because if we don't, the SEC is going to come after us. It's a very different perspective, for sure.
[00:52:58] Speaker A: Yeah, it's the club, you know, the stick instead of the carrot. And we see this, and switching over real quick to OT cybersecurity, it's the reason why critical infrastructure like power utilities is further down the road than the other critical infrastructure sectors. And part of that is from the compliance requirement to do so. Right. They don't have a choice; it's not optional. They have to do it, or there are huge fines, which we've seen implemented. So it's not something you can say, well, we're not going to worry about that. I've seen $10 million fines given out. I mean, they're huge. So of course we're going to do it.
[00:53:38] Speaker B: Just the last note on that, because critical infrastructure is a perfect example of that area where AI, cybersecurity, a lot of these tech fields, they come from a kind of Silicon Valley, move-fast-and-break-things mentality. You can promise it in marketing and then eventually build it in prod, and it's not a big deal. In critical infrastructure, if something goes down, something goes down.
[00:53:56] Speaker A: Correct.
[00:53:57] Speaker B: That's a real fact about the world that has changed. And so it's got a much smaller tolerance for that.
[00:54:01] Speaker A: Well, and that's why the lead time on emerging technologies and all of this, we're ten to 20 years behind some of the things in IT. Like, we don't use cloud. We just recently, in the last ten years, started really adopting virtualization. All of these technologies have been in the IT space for decades, and we're just now getting comfortable with them in some of these critical infrastructure environments.
For the right reasons. From the outside in, it's like, wow, you still have Windows XP running? Yes.
It's not perfect, but it runs, and replacing it is harder than it sounds. It would cause more problems than it solves to just replace that thing that you think is bad. So I'm gonna, and we talked about this earlier, right, I'm gonna put other mitigating controls around it instead of just replacing it.
[00:54:43] Speaker B: And that's exactly it. It's something that a lot of computer folks, and I, struggled with a lot in my earlier career. I don't want to talk too much trash, but construction clients were the ones that I fought with by far the hardest, because I would say, well, you can't give all your users local admin. You can't do this with your users. And they're like, okay, if we don't do that, tens of millions of dollars are lost on this building project, because this process has to change and this thing has to change and we have to go to this thing. And it forced me to realize, like, oh, I can't just come in here as this on-high security genius, do all of these things, and now you're secure. They're like, well, we would do those if we could, but we have contracts that say that box is going to stay a Windows XP box forever. We have agreements that say, for safety reasons, we won't make changes in this environment. And that can seem genuinely ludicrous coming from the outside in. But once you've been in that space and you've moved into the OT, into the critical infrastructure space, you start to understand that it's just a completely different value set and a completely different set of problems that need to be solved.
[00:55:39] Speaker A: Absolutely, absolutely. So we've talked about a lot today. Looking at the next five to ten years, what's maybe one thing that you're excited about coming up over the horizon, and maybe one thing that you're a little concerned about that you see coming up over the horizon?
[00:55:53] Speaker B: Yeah, I would say the thing I'm most excited about is probably the thing that has most people the most afraid, which is what we were just talking about, which is the fact that cybersecurity is starting to tighten down and get a lot more serious. I think in the next five to ten years, it's going to get a lot harder to break into this field. I think most of the boot camps are going to evaporate. I think a handful of the online training programs will still exist. I think some of the certifying bodies will still exist.
And I don't think it's going to get smaller because it's less necessary. I think exactly the opposite. I think it's going to get smaller because it's more necessary. There's more scrutiny, there's more attention. You have to really be able to perform. And these shops that are charging $100,000 for a Nessus scan, they're not going to be able to stand up, because people are going to start asking them hard questions. I'm excited to see our industry start to really move toward how we view the electrical industry, electricians or plumbers, more of those skilled trades.
There is a very rigorous understanding of what a good electrician or a bad electrician looks like. There isn't really yet an accepted definition of a good SOC analyst versus a bad SOC analyst, but I think that's what we're moving towards. We're getting closer and closer every day. That's the thing I'm excited about. What I'm nervous about is, we, and by we I mostly mean cybersecurity leaders and business leaders, we've solved almost all of the easy problems, and we've also solved most of the hard problems, which means now we have this terrible habit of inventing new problems to solve. We keep coming up with, like, oh, this is going to be your new, I don't want to name a specific technology because somebody will get really mad at me, but this is your new seventeen-word-long, cloud-based technology that maybe one in 100 technical employees will understand, and at most one in 50,000 people in the world understand.
[00:57:39] Speaker A: Right.
[00:57:39] Speaker B: And that's not a good place to be as an industry. You're always going to need that niche of specialization, that niche of deep experience.
But if the technology you're selling is not something that you can explain to a normal human person off the street, it's probably not solving a real problem. Because encryption, one of the hardest things in cybersecurity, is trivial to explain: sometimes we need to keep data secure. That's pretty hard, so we have people who do it for a living. Done, solved. Whether it's RSA, whether it's Black Hat, whether it's DEF CON, I walk around these vendor floors and I ask people, so what does your technology do? And three paragraphs later, I have no idea what their technology does. And I don't want to be a jerk. I'm really good at this. I know a lot about this industry, and I can't begin to parse what that person is saying.
[00:58:23] Speaker A: Right.
[00:58:24] Speaker B: So, yeah, I think that's the thing I'm most nervous about, is we have a really bad habit of just inventing goofy problems to solve, and I think that comes because of the thing that I'm excited about, which is we're moving into a much more mature state.
[00:58:34] Speaker A: Yeah. And I see that, too. Part of it's because it's the gold rush and there's a lot of money being dumped into cybersecurity.
[00:58:41] Speaker B: Shovel salesmen out there. Yeah.
[00:58:42] Speaker A: You see a lot of people, you know, they've got a widget, they've got a, you know, as-seen-on-TV, they've got a whatever, right. And they sell it, and you put it in there, and there's limited to no value. Right.
But they already got their money and they're gone. Right. So there is a lot of that.
I think there's a lot of big companies buying smaller companies. There's a lot of commoditization in the product space here. I think we're going to see that narrowing in people as well as products as we start doing this. That Forrester thing I was talking about before, having things like that, and I'm not trying to beat up on Forrester, I think it's a great thing to have. I want to have some understanding of these things, but we need to be really specific in what we're looking at, so that I know that I need a firewall, but I also need an asset management tool, and I also need something that does change detection, and I also need this other thing. There is no silver bullet in any of these things. I'm gonna have to have an ecosystem of capabilities, whether it be product-driven or people-driven. It's people, process, and technology. Right. I need all those things. We're not there yet where we can just put in a tool and then not have to hire people or have all the process in the back end; it won't just do it all for me. Right. We're just not quite there yet.
[00:59:56] Speaker B: Yeah. You know, in 2010, all we were talking about was how to get cybersecurity taken seriously and looked at as a serious problem. In 2015, all anybody was talking about was, how do we standardize this? How do we have frameworks? How do we all agree on the language for it? In 2020, all anybody wanted to talk about was, what's the future of the technology? How do we go from here? How do we get the next big thing? And where we're at now, and I think where we're going to be for the next few years, is just, okay, what are our actual goals for this program, and how do we measure them? Not just in the sense of what we were doing ten years ago, how do we measure success or failure in a basic way, but how do we measure progress toward a specific, kind of nebulous security goal? How do we know whether or not a given technology should be adopted? The answer to that, the trite answer, is just domain expertise. You have to have the right folks, whether in house or consultants, who have the right deep-level knowledge. But there's also a certain level of the institutional mentality, because it's really easy to have somebody come in and tell you, like, the metaverse is the future. You've got to move your company into the metaverse or you're going to get hacked. And that's obviously nonsense. But why is that nonsense, when somebody saying the same thing about AI isn't nonsense? There is a difference between those two things, but you as an organization need to have the right kind of rational skepticism to be able to dig in and understand the difference between the two.
[01:01:05] Speaker A: Yeah, absolutely. Awesome, man. I really appreciate you being here today. Anything you want people to know, come find you, all that kind of stuff. This is your call to action.
[01:01:15] Speaker B: Yeah, absolutely. Well, as I mentioned at the beginning, I work with Morgan Franklin Cyber, specifically in the cyber fusion center. You can find us doing all sorts of stuff, but especially you can get ahold of me at joseph.perry@morganfranklin.com. I think our next big event that we're doing is gonna be Black Hat and DEF CON. I think we've got our cyber lounge there.
I can't remember the name of the place that it's at.
[01:01:36] Speaker A: It's the House of Blues. House of Blues.
[01:01:39] Speaker B: So much more prepared than me. Yes, we're going to be doing an event at the House of Blues. It's going to be awesome. We did an event in San Francisco during the RSA convention, and it was awesome. We're really excited to see folks and have folks out. We'll probably be talking a lot about AI there, and I'll probably be saying a lot of things that make people mad at me, because everybody at the last one loved AI and didn't like what I had to say about it. But yeah, it's going to be a great time. We love the conversation, we love meeting folks. And of course, on a more businessy level, if you have a cybersecurity problem, whether that's an active incident, whether you're looking for a SOC provider, whether you need somebody to help you understand AI, please do get ahold of me. That's what I'm here to help with.
[01:02:14] Speaker A: Awesome. Yeah. Thank you. Thank you so much, Joseph. Definitely excited about it, other than the heat and all the things in Vegas during that time. But yeah, Black Hat and DEF CON is always a lot of fun, getting in and diving in with all the users as well as the other vendors in the space, looking at all the emerging technology, and sometimes making them upset with us because we're like, yeah, I don't get it. I feel like in Big, you know, when Tom Hanks raises his hand, like, I don't get it.
[01:02:41] Speaker B: Yes. I feel that so often.
[01:02:44] Speaker A: Exactly.
[01:02:45] Speaker B: All right, well, thank you very much, Aaron. It's been an absolute pleasure to talk to you.
[01:02:48] Speaker A: Thanks, sir. Thanks for joining us on protect it all, where we explore the crossroads of it and ot cybersecurity.
Remember to subscribe wherever you get your podcasts to stay ahead in this ever evolving field. Until next time.