Episode Transcript
[00:00:00] Speaker A: When you're investigating unusual activity, it doesn't really work like that. You're acting more like an intelligence officer, where you're having to go, okay, well, this is my starting point. And depending on the nature of the alert, it could be a really strong starting point, or it could be really vague, like something's unusual, and then you have to pull the string.
[00:00:15] Speaker B: You're listening to Protect It All, where Aaron Crow expands the conversation beyond just OT, delving into the interconnected worlds of IT and OT cybersecurity.
Get ready for essential strategies and insights.
Here's your host, Aaron Crow.
Awesome. Hey, thank you for joining me on the Protect It All podcast today. I am excited to have this conversation. You and I met at S4 a few weeks ago. I know your team and a lot of the folks that you work with, so I'm really excited to have this conversation. So why don't you introduce yourself, tell us about you and your unique journey to get to where you are in this OT cybersecurity space.
[00:00:56] Speaker A: Yeah. Good to see you again, Aaron. It's been a busy couple of weeks since we saw each other, but yeah, I'm Oakley Cox. I am the director of product for Darktrace OT at Darktrace. So as you've probably already picked up, I am British. I'm joining from the United Kingdom.
And yeah, I've been at Darktrace now for coming up to eight years, but prior to that I had nothing to do with cybersecurity, nothing to do with OT. My background is actually in chemistry. I did a doctorate at the University of Oxford, and that's where I first got introduced to machine learning and big data. And that's what led me, through a windy route within Darktrace, to where I am today.
[00:01:40] Speaker B: Man, that's awesome. We've talked a little bit before. Obviously we talked in person and we talked before this started recording. But that's such a unique path to get here, and many of us have different unique paths. My background, I've done everything from real estate and sales and things like that, and I think all of those skill sets have made me better at what I do now, whether it's talking on this podcast or selling stuff or even the real estate side, remodeling houses and all that kind of stuff. And even though it has absolutely nothing to do with OT cyber, it helps me ask questions and deal with ambiguity, and there's all sorts of things like that. So how did you go from, you know, chemistry to AI and cybersecurity in OT of all things?
[00:02:23] Speaker A: Yeah. Well, I guess it's worth talking a little bit about the background. Right, the chemistry part. I was a strange person. I really enjoyed chemistry. I enjoyed learning it. I know a lot of people got put off by it at some point during high school or university, but yeah, I just loved learning. I kept doing it. I did it for my undergraduate and then was lucky enough to be offered a PhD position. But then this is the point where I started to realize it wasn't my thing because now I was having to do more stuff at the actual lab bench, the actual doing. So I like learning about it, but I didn't like doing it, which is a real limitation when you're spending all day, every day doing it.
But at the time this was the emergence of what was then called big data, and of machine learning. So my PhD research was interfacing between these two fields: how can we make my job as a lab chemist easier by leveraging ML and big data? And so when I finished that and realized, you know, I didn't want to do chemistry, I was looking for other avenues to apply the more computational aspects of my research.
[00:03:27] Speaker B: So, first of all, I was one of those people. I was really good at math, I loved physics, never liked chemistry. I don't know why. I could do physics all day long, and calculus and all that kind of stuff. I love those things. Chemistry to me just did not click at all. I don't know if I was too stupid, I don't know what the problem was, but it did not click with me. So I am definitely one of those use cases. But talk about those connections, from chemistry to the models and all that kind of stuff. How are those things aligned and connected when they seem completely unrelated to each other?
[00:04:02] Speaker A: Yeah. So I think fundamentally the maths is the same, or similar, right? It's how do you take either structured or unstructured data and make sense of it to make your life easier. I've never seen AI as a replacement, so to speak, but more as an augmentation, a way to make my job, or the jobs of people like me, easier. So when I was looking around for what other things I could do, I came across what at the time was a relatively small UK startup called Darktrace. I'd never thought about cybersecurity before. But they were applying AI, as they called it even back then in 2017, to the problem of cybersecurity. And so I thought, that sounds like a great opportunity, so I went for it and was lucky enough to get it. I went straight into the cyber analyst team. They don't hire based on knowledge, so they don't look for certifications and degrees in cybersecurity. They hire based on skill sets. They look for a lot of people with those kinds of science backgrounds, but also language backgrounds, even things like international affairs, international relations. You get this real diversity of skills, and then they teach the knowledge once you're in.
[00:05:14] Speaker B: Yeah, that's so forward thinking if you think about it, right? And it's really easy to see this knowledge gap and skills gap that we have in cybersecurity. We have all these positions that folks say we need, but you look at the job requirements on those pages and it's like, well, it's an entry level position but you have to have five years of experience. That doesn't make sense, right? So to your point, I've built a lot of teams and I looked for the underlying skill sets that I needed instead of a certification or years of experience or a title. I was taking people from IT, working in desktop support or networking in a law firm, things like that, and bringing them into my OT space, even though they had zero experience. They'd never been to a plant, they didn't know what OT was. But I was able to take those underlying skill sets that they had and teach them the OT side. They had the underlying troubleshooting and networking and those other skill sets that I needed, and that built a really strong and powerful team. But it's amazing to me that more companies don't do that, especially when they're needing these positions. And unfortunately, I think part of that is probably because HR is the one writing the job reqs and they don't really, truly understand what the need is. But as asset owners, as leadership, as management, we need to work to better write those job reqs and make sure that we really understand what we're looking for, and know that there aren't very many people like me or like you out there. Going in expecting that you can find one in the ether on LinkedIn who (a) is looking for a job, (b) has the skill set you're looking for, and (c) is willing to take the job at the pay range or the location and all the things that you're talking about, you've just narrowed the potential opportunities down to almost zero. And then you wonder why you can't find anybody, and then you hire somebody that doesn't have any of those skill sets but has a certification, and then they don't do a good job anyway. I'll get off my soapbox now.
[00:07:12] Speaker A: Completely agree. I mean, the other plus side is it made it such a fun team to be a part of, right? We were all in the same boat together. The onboarding, the first three to six months, is really intense, but we'd all come from, you know, highly competitive universities, we'd gone through school doing that, and we were coming into this new environment where we all love learning together, all learning from each other. And then by the time you're one, two, three years down the line, you suddenly become the expert, but it was only yesterday that you were in their shoes. So you get this kind of camaraderie. You still need the experts sprinkled around. In our case, we were lucky enough to have a few former GCHQ and MI5 types, and the equivalent for the US team. And I'm sure you did the same with OT experts when building your team. So it still needs that sprinkle of real expert, real depth of knowledge, but below that you create this team that has a lot of camaraderie and fast learning.
[00:08:06] Speaker B: I think 100%. Yeah. So what are some of the biggest things, if you kind of go back a little bit to the beginning versus even now, what are some of the things that you guys were trying to attack? Like you said, in 2017 nobody's really talking about AI, especially through a cybersecurity lens, especially through an OT lens. We're still scared of AI, much less in an OT space. So what kind of things were you guys trying to solve for back then?
[00:08:32] Speaker A: Yeah, so at the time we were a relatively new company but had, you know, a few hundred to a couple of thousand customers, and the underlying technology was this anomaly-based detection. So doing real-time threat detection based on mathematical anomalies. And then as the security analyst team we were writing reports for our customers. So one of the first projects the team I was in was involved with was actually: how can we now apply a different form of AI and ML to solve that repetitive problem of writing a report? How do we investigate an incident such that we can do the hypothesis testing a human analyst would, automate the repetitive part of that, and then create a full security incident write-up? And so that was something we did in the first two, three years. We call it Cyber AI Analyst. We launched it in 2019 as part of the product, and it allows that kind of correlation of disparate alerts across different devices, but it goes beyond just correlation. There's also the investigation piece: what has this device done before? Does this unusual activity correspond to some kind of security incident? And then piecing that all together. So that was very much that augmentation. It wasn't replacing the security analyst. It was making my job easier, because now I didn't have to do the boring investigations. I could focus on what I enjoyed, which is the interesting in-depth investigations.
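To make that correlation-plus-investigation idea a little more concrete, here is a deliberately small Python sketch. It is not Darktrace's Cyber AI Analyst; the device names, the 30-minute grouping window, and the "seen before" history are all invented for illustration. It only shows the shape of the idea: grouping disparate alerts into a candidate incident and asking whether each device has done this before.

```python
# Hypothetical sketch only -- not Darktrace's Cyber AI Analyst. Device names, the
# 30-minute window, and the "seen before" history are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    device: str
    action: str            # e.g. "new_ssh_connection", "unusual_file_write"
    timestamp: datetime

# Toy history of (device, action) pairs this environment has produced before.
history = {("hmi-01", "new_ssh_connection")}

def is_novel(alert: Alert) -> bool:
    """Has this device ever been seen doing this action before?"""
    return (alert.device, alert.action) not in history

def correlate(alerts: list[Alert], window: timedelta = timedelta(minutes=30)) -> list[list[Alert]]:
    """Group alerts that occur close together in time into candidate incidents."""
    incidents: list[list[Alert]] = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        if incidents and alert.timestamp - incidents[-1][-1].timestamp <= window:
            incidents[-1].append(alert)    # extend the current incident
        else:
            incidents.append([alert])      # start a new incident
    return incidents

alerts = [
    Alert("eng-ws-02", "new_ssh_connection", datetime(2024, 1, 1, 9, 0)),
    Alert("plc-gw-07", "unusual_file_write", datetime(2024, 1, 1, 9, 10)),
    Alert("hmi-01",    "new_ssh_connection", datetime(2024, 1, 1, 15, 0)),
]

for incident in correlate(alerts):
    novel = [a for a in incident if is_novel(a)]
    print(f"incident: {len(incident)} alert(s), {len(novel)} never seen before on that device")
```

The real value described in the conversation is in automating this repetitive grouping and history-checking so the human analyst starts from a drafted incident rather than from raw alerts.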
[00:09:50] Speaker B: Sure. Well, and that's the key that I think most people need to understand, because you hear all this: the sky is falling, AI is going to replace all of your jobs, we're all going to be unemployed. Maybe at some point that is the case. I see it more like what you just said: I see AI as a tool, just like a screwdriver or a hammer. AI in and of itself can't just replace a carpenter, right? But a carpenter can build better stuff if he has better tools. A carpenter 300 years ago, using all manual tools, yes, they could create amazingly beautiful, gorgeous things. We've seen examples of that, the Sistine Chapel and all of these things, whether it's carpentry or a painting or sculptures or all this kind of stuff. They did amazing things with their hands. But how much better and more accurate can they be when they're using these different tools? Look at 3D printing, look at power tools and all of the things that we have today.
It's amazing how much faster, better, more accurately and efficiently we can do these things. And you can have one master and all of these, you know, lower-skilled people that can deliver these amazing things because of the toolkits they have. And that's the way I see AI. All the vendors are wanting to bolt AI onto their thing. And to your point, how can I make it more efficient? How can I make some of those reports? How can I take some of the responsibility off people? And usually it's the lower-level things that I start with. That's the entry into AI. I want to start with: how do I make this report, even if I know it's going to be a draft and I'm going to have to have a human manipulate it and make it look better? How do I at least take raw data and put it into a draft report? That saves me six hours of somebody copying and pasting from 12 different sources into this one file. That's an easy thing that AI can do. It doesn't take much intelligence. It's just copy, paste, put it in here, and so on. You can very easily do that. So knowing that, what are some of the more advanced things that you guys are looking to do, either in the current state or future-casting, especially in OT cyber, where everybody's so terrified of, you know, the AI robots crashing my system or taking over and me not being able to control it, and all that kind of stuff that folks are fearful of, from maybe watching too many movies or whatever the reason may be?
[00:12:07] Speaker A: Yeah, I think one of the more recent interesting things we've added, particularly from an OT perspective, is in incident response preparedness and training, you know, the readiness and recovery. There's a lot of hype around generative AI, your ChatGPT, the chatbot-type applications, that I don't think has a good application in cybersecurity, or particularly in OT cybersecurity, because you need that kind of trust and reliability, the ability to scrutinize the data at a really high level. So we were thinking of different ways of applying that type of thing, and one of the things we came up with was tabletop exercises. So taking your real data, combining that with templates of what an attack looks like, and then being able to play out a simulation. You don't perform the attack, you simulate the attack within the UI: this is what an attack would look like in real time based on your devices, your data. And then that allows you to drill your teams, right? Which is useful for cybersecurity in general, but in critical infrastructure, where you're getting requirements that you have to run tabletops every six months, the ability to do it in a safe and secure way and remove that paperwork part of a traditional tabletop, we've seen it be a really popular use case that people in OT seem to love.
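As a rough illustration of the simulated-tabletop idea described here, the sketch below maps a generic attack template onto a site's asset inventory to produce a drill timeline. It is purely hypothetical: the asset names, roles, and stages are made up and do not reflect the actual product feature. The point is that the exercise is rendered against real device names without anything ever being executed on the network.

```python
# Purely hypothetical sketch of a simulated tabletop: a generic attack template is
# rendered against the site's real asset names. Nothing is executed on the network;
# asset names, roles, and stages are all invented and do not reflect the product.
import random

assets = {
    "it_workstation": ["corp-ws-14", "corp-ws-22"],
    "engineering_ws": ["eng-ws-02"],
    "hmi":            ["hmi-01", "hmi-03"],
    "plc":            ["plc-07"],
}

# Attack stages expressed against asset *roles*, not specific devices.
template = [
    ("Initial access on the IT side",   "it_workstation"),
    ("Pivot into the OT network",       "engineering_ws"),
    ("Operator interface tampering",    "hmi"),
    ("Controller manipulation attempt", "plc"),
]

def simulate(template, assets, seed=0):
    """Produce a drill narrative by substituting real device names into each stage."""
    rng = random.Random(seed)
    return [(stage, rng.choice(assets[role])) for stage, role in template]

for minute, (stage, device) in enumerate(simulate(template, assets), start=1):
    print(f"T+{minute:02d}m  {stage} -> {device}")
```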
[00:13:28] Speaker B: Yeah, I mean, you know, I've had this conversation with a lot of folks, and the traditional tabletop is where you do it with the executives once a year, you hire a Big Four consulting firm to come in, a lot of pomp and circumstance around it. But what's the value add? Are you including the right people? Are you having the right conversation? Are you just checking a box? And unfortunately, I feel that a lot of folks are just checking a box, whereas I see tabletops as more like training, because it's just like anything: the more times I go through it, the better. Why do we do fire drills in high rises, right? We go through it because when the alarm goes off for real and there's smoke, you don't want people to go, I don't know what to do. Where do I go? Where's the fire escape? Should I take the elevator? Should I jump out the window? It shouldn't be the first time that we've thought through this, right? We practice, and we go back to what we've done and what we've practiced. It's like with martial arts or with shooting or with anything: if you've trained something over and over and over again, it just becomes natural, right? So look at tabletops in that way: I've done this before, I've gone through this scenario, I know what my first step is. Saving seconds of time can be exponentially impactful in an incident. How quickly can I make sure it's not spreading? How quickly can I shut it down and isolate it, whatever the scenario is for the attack? But if it's the first time somebody's seen it, or they don't even recognize it until weeks or months later, those are all things that we know, especially in OT, because when you're looking at an OT environment, most of the time it's operators, it's engineers, it's chemists, right? There are chemistry folks at power plants, and they're not thinking cybersecurity. They're thinking about the chemistry in the water: are there bad things in there, and how do I purify it and get it to the levels that I'm looking for? When the chemistry is off, they're not necessarily even considering, is it really off, or is the analyzer, the device, incorrect for some reason? And even if they are, they're thinking it's physically out of calibration. They're not necessarily thinking somebody's tampering with it, or there's a software glitch, whether it's a bad actor or anything. So all of these things are so powerful for training our people, and tools like AI are huge in that. Let's take a step back real quick. Why don't you tell us, what is Darktrace? What is your focus in the marketplace in OT? Because what I love about your story, and I'm talking a lot, I'm sorry, I'm just very passionate about this if you can't tell, is that Darktrace is very unique in the space. There are not very many vendors in the OT cyber space that started in OT. Most of the vendors started as an IT company and they kind of shifted, or they added a capability, or they adjusted and kind of form-fit into OT. Whereas you guys started out in OT. That was the intention when you started up, to attack and look at OT cybersecurity. So that's a unique use case.
So give us the elevator pitch, the kind of what you guys do, what your focus is, and what makes you guys different, from a product perspective at least.
[00:16:39] Speaker A: Yeah. So when the founders were developing this technology back in 2013, 2014, they were taking unstructured network data to train what they called a self-learning AI, right? To understand what is normal, a pattern of life, for that network. And it didn't really matter where that network traffic came from. And so the first place they went was those operational systems, where the network traffic is still unstructured but it's more predictable. And so the first customer Darktrace had, as you've already said, was an OT customer: Drax Power Station in the north of England, which at the time was a big coal power station; they now burn wood pellets.
And so, you know, the intention was to learn what is normal for that network and then perform anomaly-based detection. But there's a really key difference from what a lot of vendors are doing out there, which is baselining, where they will take, you know, a two-week baseline to understand what is normal, then switch it to detect mode, and then all of their detections from then on are referenced from the baseline. Now, it's massive to have a baseline, right? That's a huge upgrade for a huge number of organizations. But firstly, you're baking in bad, because you are assuming that your two-week or four-week training period is good. And secondly, even OT networks are not static, right? They are constantly evolving and changing. So you are then having to spend all of that detection time fighting false positives, re-baselining it, saying accept this, put it in. And then you've got all the risks of human error of saying yes or no. Is a security analyst who's actually having to do this all day, every day, going to thoroughly check that that SSH connection is expected or not, for example? So this approach was different: it was real time. That's the self-learning aspect, right? It evolves over time.
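A toy way to see the difference between a frozen baseline and a model that keeps learning is sketched below. The traffic labels and numbers are invented and the "rarity" formula is a deliberate simplification, not the vendor's algorithm; it just shows why a fixed training window alerts forever on anything it never saw, while a continuously updated estimate lets a legitimate new behaviour become less surprising as it recurs, with no manual re-baselining step.

```python
# Simplified illustration with made-up traffic labels -- not the vendor's algorithm.
from collections import Counter

training = ["modbus_poll"] * 500                           # two weeks of routine polling
later = ["modbus_poll"] * 100 + ["ssh_to_historian"] * 3   # a new but legitimate SSH flow appears

# Approach 1: frozen baseline -- anything absent from the training window alerts forever.
baseline = set(training)
baseline_alerts = [event for event in later if event not in baseline]

# Approach 2: continuously updated rarity -- the first occurrence is highly unusual,
# but each legitimate recurrence makes the same event less surprising, and no
# analyst has to go in and re-baseline anything.
counts = Counter(training)
ssh_scores = []
for event in later:
    rarity = 1.0 / (1 + counts[event])      # crude score: drops as the same event recurs
    if event == "ssh_to_historian":
        ssh_scores.append(round(rarity, 2))
    counts[event] += 1                      # the model keeps learning

print("frozen baseline alerts on SSH:", len(baseline_alerts))   # 3 -- alerts every single time
print("updating model, SSH rarity over time:", ssh_scores)      # [1.0, 0.5, 0.33] -- fades
```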
Now, since then, you know, we've applied it across all sorts of parts of digital data. So we do it across cloud, email, identity, things like Office 365, and of course OT. So now we kind of have a broad platform, but that strengthens it, it unifies all of it. So things like IT/OT convergence is a massive use case for us, because as OT moves to the cloud, we can treat those different digital realms but keep them in a single pane of glass from a security perspective, right?
[00:19:06] Speaker B: Yeah, and I can't tell you how many people... I've been around this long enough that, you know, 20 years ago everything was 4-to-20 milliamp. It was hard-wired, they weren't using IP. As things started changing over, they didn't want to use firewalls or routers or things like that. They wanted isolated, air-gapped networks, and everything was proprietary. And then they started bringing in commercial off-the-shelf equipment with switches and firewalls and Windows servers. And then they resisted virtualization, and of course now everything uses virtualization and VMware and all the things. And cloud is another one of those, cloud and AI obviously. Cloud is another one that's like, oh, we'll never do OT in the cloud. People are already doing OT in the cloud right now. A power plant is probably not going to put their control room in the cloud, but cloud is such an interesting term, because a cloud doesn't have to mean that I'm going to Amazon. It can mean I have a cloud in my data center, right? It can mean a lot of things. So cloud is a generic term, but it's really just, you know, running in some place, and it's a little bit different than physical iron servers, where this is a dedicated email server on a physical box, that's my file server, and this is my network-attached storage device. That's really the bigger difference of those things, and how do I use that processing and GPUs and all that kind of stuff. And as we get into AI, we see more and more of that as well. And I think it's a matter of making sure that we're thinking through it. And what I love about all that you've just shared is you guys were forward thinking, bringing in people, looking at problems differently. Because a lot of times we get a lot of the, well, we could never do that, we've done this for 40 years and we've never done that. And there's a resistance to change; people inherently hate change. You've probably read the book Who Moved My Cheese? It's funny, it's a little bitty book, but it's very powerful. If you haven't read Who Moved My Cheese, I highly recommend it. It takes 20 minutes to read, but it's very powerful for understanding people's perspective and why people are resistant to change in all things. It has nothing to do with cybersecurity and everything to do with mindset and how people look at changing things. But this goes to the ultimate reasoning behind the resistance to putting in things that have AI or cloud or anything like that in our OT spaces, even if maybe they provide better security and reliability and availability than the physical ones that we have today.
[00:21:37] Speaker A: Yeah, no, I agree. And I think, as a security vendor as well, it's not for us to decide; the risk profile is based on the customer. Like you say, there are going to be nuclear power plants that are never going to go to the cloud. But there are plenty of OT organizations, starting in manufacturing and even things like renewable energy, that are going to increasingly put workloads in the cloud, or private cloud, or whatever it may look like. And so for us as a vendor, that's where we need to be ahead of the curve, to make sure we can manage that transition as best as possible. Manage their risks, that risk management piece: if the business is going to introduce this from an efficiency-saving perspective, can we also understand what risk that's going to introduce and therefore mitigate against it? And yeah, I think that's an exciting place to be, where you're constantly thinking ahead, looking ahead. It's good.
[00:22:30] Speaker B: What are some of the exciting use cases that you've seen come across? It doesn't have to be OT or IT, but just, you know, some of the things that y'all are excited about, where you've been able to use it in a way that hasn't been done before, that kind of thing.
[00:22:46] Speaker A: Well, this is actually a slightly older story, but it's what got me into the OT space in the first place once I was at Darktrace. We had an attack, an APT, that we detected, but it was specifically targeting critical infrastructure. That's what I find fascinating about OT, right? The risk profile is so completely different. The impacts are so completely different. It is impacting people's lives in a way that the vast majority of IT does not necessarily. You know, your email server going down for half a day is a break from work, whereas you can't say the same about the electric grid. So yeah, it was an incident in 2018, around the time that Triton, or Trisis, whatever you want to call it, was going on. And there was actually a very similar attack called Shamoon, which also targeted oil and gas. The difference with Shamoon was that it didn't specifically target the control systems; it was more of an IT attack, but it used a lot of similar techniques. And we had a customer that was hit by Shamoon, and we were able to see every step. Now, at the time it was detection only, we weren't doing any response, so the attacker was able to progress. So it was really interesting seeing this never-seen-before attack being undertaken by what was clearly an advanced persistent threat, some kind of nation state or nation-state-aligned actor. And there's that kind of realization moment of how powerful the adversaries are, but also how powerful our solution was to help overcome that challenge. We could actually pick out the most sophisticated threat actors. So we've got a blog from 2018, an old blog, that talks through that. But since then, that's carried on. We continue finding that kind of thing: zero-days, insider threats, the kind of things that traditionally get around those static rules and tools, because there is no rule or playbook for how you detect Volt Typhoon, or how you detect that rogue insider who is using their legitimate access in order to steal data. Yeah, that's what gets me excited: solving those really difficult challenges.
[00:24:56] Speaker B: Well, and that's the biggest use case for tools like this, right? We've all been around long enough, most of my listeners probably as well. Even if you're new, you've used antivirus, and what is antivirus? It is a blacklist: these are known bad things, and if you see any of these things happen, file types, whatever, block it. But as we know, we get new signatures and definitions for those antiviruses weekly, daily, sometimes hourly. And so if somebody releases a new attack that is not in the blacklist and uses it before the antivirus provider finds out about it, adds it to their signature list, and sends it to you, then it's going to get by. It doesn't matter, right? And by very definition, a zero-day is something that somebody didn't know existed before it happened. It's the very first time something like that happens. And those are the types of things that tools like this can see. Now, obviously a human, if they were super capable and understood everything about the network and was looking at everything, then yeah, sure, maybe they could find it. But how long would it take them to find it? And especially in OT, we don't have dedicated resources that are looking at glass 24/7 at this particular facility, right? So it's unlikely that an analyst on their own would be able to find that without tools to highlight and surface it: hey, this looks different, there's something about this. I don't know that it's a bad thing necessarily, but it's different enough. And to your point, things do change in an OT environment, but things don't change as frequently or as often as they do in an IT environment. So I say this a lot: Sun Tzu, Art of War, use my strengths as weaknesses and my weaknesses as strengths. Because I'm not patching, I'm not going to the Internet, I'm not doing a lot of these things all the time, when something changes, it should be pretty obvious. It sticks out like a sore thumb; it's a flashing light. Why did that change? Is that expected? Especially as I do this more and more and I'm getting more and more data, especially about a particular environment and facility. I've had examples and case studies where, you know, a vendor brings in an application whitelisting product, right? They're locking down workstations: these are the only things that are allowed to run. And they go through that learning process you talked about: we ran it, we let it run for six weeks, right? And then we lock it down. But (a) what am I locking down? Was there something really bad that I just said is good? I don't go to that level of detail to know that. But even worse, in the experiences that we've had, there's a process that runs once a year during a certain scenario, and it didn't happen during that learning window. So when they go to kick that process off, because it wasn't approved in the whitelist, it didn't work. And now the plant doesn't work, because that process didn't kick off and something failed. And now everybody's all up in arms about it, because I broke something, instead of, you know, truly understanding the full process. And that gets back down to nobody really understanding enough of the process to be able to do that type of thing. Which is why we need tools like AI that can kind of learn on the fly and grow.
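The two failure modes described here, a signature blacklist that cannot know about a zero-day, and an allowlist learned during a window that never saw a legitimate once-a-year process, can be shown in a few lines. Everything below is invented (the hashes, process names, and the six-week window) and heavily simplified; it is only meant to make the gap visible.

```python
# Invented hashes, process names, and learning window; heavily simplified on purpose.
known_bad_hashes = {"a1b2c3", "d4e5f6"}              # yesterday's signature update

def blacklist_check(file_hash: str) -> str:
    # A brand-new payload has a hash nobody has catalogued yet, so it sails through.
    return "blocked" if file_hash in known_bad_hashes else "allowed"

print(blacklist_check("d4e5f6"))     # known malware -> blocked
print(blacklist_check("ffee99"))     # zero-day      -> allowed (the gap)

# Allowlist "learned" from a six-week observation window.
seen_during_learning = {"hmi.exe", "historian.exe", "scada_svc.exe"}

def allowlist_check(process: str) -> str:
    return "allowed" if process in seen_during_learning else "blocked"

# The annual turbine-test routine never ran during those six weeks, so the one day
# it is actually needed, it gets blocked and something downstream fails.
print(allowlist_check("annual_turbine_test.exe"))    # -> blocked
```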
So yeah, that's super exciting, and the future is coming with that. And the flip side of that is we know that the nation-state actors are using it. We know they're using these tools against us. So if we're not going to use them, then they're just going to exponentially outpace us and we'll never keep up, because they can throw things at us with AI that we can't respond to individually as humans.
[00:28:18] Speaker A: I really like your point that, in theory, a human could do this job, right? The beauty of this method of detection is that regardless of what brand-new idea the attacker comes up with, they have to do something unusual. And you're right: if you know your network inside out, and your network is 10 devices so you can know it inside out, you could detect unusual events. That's perfectly reasonable. The point of what we're trying to do is, well, how can we augment that? How can we actually help you find the unusual much, much quicker by kind of understanding that unstructured data? And so you get these mathematically unusual events. And then I frequently get questions: well, what if the attacker just learns what's normal and then blends in with it? Well, firstly, if they do, fair play to the attacker. But secondly, if you think about the whole kill chain of an attack, not just stealing the data at the end or the initial compromise, they have to go through multiple steps to achieve their goal. There is no way they could blend in completely normally to achieve that. You talk about man-in-the-middle attacks where they're just listening: well, they've got to exfiltrate the data somewhere, and that's going to look unusual. So even these low-and-slow attacks have an unusual element to them. And that's what we're there to find and shine a light on and contextualize as quickly as possible for the security teams and for the engineers.
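One very reduced way to picture the "they have to do something unusual somewhere" argument: even a quiet listener eventually has to move data out, and that transfer can be compared against the device's own history. The numbers below are made up, and a single threshold like this is far cruder than what any real product does; it also shows why a lone rule struggles with low-and-slow exfiltration and why many weak signals get combined instead.

```python
# Made-up numbers; one threshold like this is far cruder than a real product.
import statistics

# Daily outbound bytes for one engineering workstation over the last two weeks.
normal_outbound = [2.1e6, 1.8e6, 2.4e6, 2.0e6, 1.9e6, 2.2e6, 2.3e6,
                   1.7e6, 2.0e6, 2.1e6, 1.9e6, 2.2e6, 2.0e6, 1.8e6]

def is_unusual(todays_bytes: float, history: list[float], k: float = 4.0) -> bool:
    """Flag a day whose outbound volume sits far outside the device's own history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return todays_bytes > mean + k * stdev

print(is_unusual(2.1e6, normal_outbound))   # ordinary day            -> False
print(is_unusual(9.5e7, normal_outbound))   # bulk exfiltration       -> True
print(is_unusual(2.5e6, normal_outbound))   # "low and slow" trickle  -> False
# The last case is the hard one: slow exfiltration needs several weak signals
# (new destination, new protocol, odd timing) combined, not a single volume rule.
```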
[00:29:35] Speaker B: Well, you know, that example you just gave, it's no different than if I'm going to infiltrate something and I dress up as a janitor and I've got a mop and I start mopping the floors and all that kind of thing. I look the part. I've got the badge, I'm wearing the right uniform, I'm actually cleaning the floors. All of that's normal, until I start doing something else. If I'm just going to clean floors, then I've just got a job. I'm not a bad actor, right? I'm just doing what they're paying me to do. The point is, I'm going to do that until I want to do something else: until I want to steal something, until I want to break something, whatever that thing is. And that's the difference, right? At some point they have to do something different, because they want a different outcome. They want to steal data, they want to turn off a machine, they want to ramp it up, they want to damage it, whatever their goal is. Obviously every attacker or nation state or whatever is different, but at some point they have to deviate from normal, because normal is not, you know, ramping the turbine up until it breaks. That's not normal.
[00:30:32] Speaker A: Yeah. The other thing I wanted to pick up on is your point about the once-a-year maintenance cycle, right? Those infrequent events that still do recur, because I get asked about that a lot as well. It's usually framed through the false positive question.
And so a couple of years ago, I think it's three years ago now, myself and three of my colleagues published a research paper, which we can link with the podcast, in which we explored the comparison of that baselining approach you've just described and this self-learning AI approach that we use, which is difficult to define because it's multiple algorithms, but we package it as self-learning AI. We used three different scenarios. One was the obvious one: if there's something already bad, how do you detect it? And we detect pre-existing bad using our clustering. So essentially each device is clustered into peer groups, and if a device in a peer group is doing something out of the ordinary for that peer group, then it triggers an alert. That's how you find pre-existing infections, because of this peer grouping. Scenario number two was the situation you described, where something happens infrequently, so it looks new the first time you see it, but if it happens again three months later it looks less unusual, six months later even less. And so the baselining approach suffers, as you've described, but the self-learning AI might give an alert the first couple of times, which might potentially be described as a false positive, but it doesn't require manual interference. And I've completely forgotten the third example, but it's there in the paper. So that gives you a reason to go read it.
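For readers who want the flavour of scenario one, here is a toy peer-grouping sketch: a device is compared against devices that behave like it, and the odd one out is surfaced. The device names and connection rates are invented, and a median comparison like this is far cruder than the multi-algorithm clustering in the paper. Scenario two, the infrequent but recurring event, behaves like the decaying-rarity sketch shown earlier: the score fades with each legitimate recurrence instead of requiring a re-baseline.

```python
# Toy version of scenario one (peer grouping) with invented devices and rates; the
# paper's actual clustering is far richer than a median comparison like this.
import statistics

# Connections per hour for a peer group of HMIs; hmi-04 is already compromised.
peer_group = {"hmi-01": 12, "hmi-02": 14, "hmi-03": 11, "hmi-04": 95}

typical = statistics.median(peer_group.values())
for device, rate in peer_group.items():
    if rate > 3 * typical:       # behaving very unlike its peers
        print(f"{device}: out of line with its peer group ({rate}/hr vs typical ~{typical:.0f}/hr)")
```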
[00:32:22] Speaker B: No, that's awesome. And that's the point, right? To look at these things from different perspectives. One of the buzzwords today, and again, I'm not trying to get political at all, but one of the buzzwords that we hear in everything that's going on with politics and everything else is diversity, right? And diversity is so important. And I'm not talking about where you went to school or what religion you are or how you look or anything about your personal anything. It's really about how you look at a problem. And you and I are going to look at problems differently because of my experiences, because of some inherent assumptions I make based on my knowledge and things that have happened in the past, and you're going to have different ones. It doesn't make mine right or wrong. It just means that when you and I are teamed together and we're looking at that problem, you may see something that I just take as normal, because I've seen it a hundred times and I assume it's always good, and you look at it as, should we assume that's always good? And that's where we get this differing of ideas and perspectives, and that's where the difference comes, right? And our attackers are coming from those perspectives as well. So that diversity of thought and perspective is where AI can really help us, because it doesn't have any of those inherent assumptions or anything like that. It's really just looking at the facts. It's assuming nothing other than what we tell it to, so that's where we've got to be careful in what we tell it. It's just like the baselining, we keep going back to that, but it's why blacklists are kind of a thing of the past, and baselining is troublesome, because I don't know that I didn't baseline something that was bad, or that I forgot to baseline something that was good. And it'll always be that way. And there's really no way to fix that other than to have other things that are not doing it in that older way, right?
[00:34:16] Speaker A: I love the phrase diversity of thought. I mean, it touches back to what we spoke about at the beginning with building teams, right, and having people with different backgrounds in terms of their education and the way they solve problems. One of the challenges we sometimes find is with customers with very rigid security teams that have built up their particular playbooks. Often SOC teams are built around the idea of a playbook where it's very binary: is it good, is it bad? And then they kind of work their way through the flowchart. When you're investigating unusual activity, it doesn't really work like that. You're acting more like an intelligence officer, where you're having to go, okay, well, this is my starting point.
And depending on the nature of the alert, it could be a really strong starting point, or it could be really vague, like something's unusual, and then you have to pull the string. And that's where, firstly, you have to edit your playbooks, because you're now becoming more investigative. But with diversity of thought, people investigate that in different ways. They have a different perspective on what that unusual activity may represent. And then you build up, you know, this opportunity to create those wider-breadth teams and remove yourself from that silo of, I need someone with a cybersecurity background and training.
[00:35:28] Speaker B: I've never heard it spoken about like that, but that's a very good point. Take the word analyst: if you think about it from an intelligence-gathering perspective, whether you're MI5 or CIA or whatever, they're really looking at data, and there isn't a known answer. Most of the time it's very gray. Is this good, is this bad? You're pulling that string, and it takes a lot of time and a lot of data and a lot of sources and a lot of effort to figure out if it's good or bad, right? And the same thing, to your point: I think a lot of SOCs and a lot of cyber policies try to make it so rigid, in that, hey, we know all of these things are bad and all of these things are good and there's no gray. We've got to be able to put that in and say, what happens if you're going through a workflow and I get to a place where it's not either yes or no? Sometimes it's going to be yes, no, or question mark, and that should be an avenue that we allow them to pursue. Otherwise you're going to end up categorizing things inherently as good or bad, which may not be either of those things, or at least you don't know yet if they're good or bad. And if you force people into that binary yes or no, good or bad, block or don't block, allow or deny, like a firewall, not everything is that black and white. Now, maybe in an IT world they lean towards block, you know, inherently deny all, and they're probably more aggressive on that side, and the OT side probably leans more to the opposite side of allow it through, because I need it to work, and if I block things, I break stuff. And that's where I think AI and tools like what you guys do are so powerful, especially in these OT spaces. Because I can't just block; I can't just kick an XP machine out of my environment. I can't just stop a process mid-run because I think maybe it's failing or it's not right, because that can have downstream impacts: physically hurting people, damaging equipment, bringing down the grid. And those things mean my response from an analyst perspective in an OT world is usually pick up the phone and call Bob, the operator at the plant, as opposed to kick it off the network, block it, send in the troops, all that type of stuff. It's a completely different response plan, and we need to allow that flexibility in our processes and procedures. And this is where I home in so many times when we're talking about these things: there is no silver bullet in any product. I was a product owner, I was a CTO at a product company, I've been a consultant for a long time, so I sell services and all that kind of stuff. But it's people, process, and technology. If your runbooks aren't good enough, it doesn't matter how great your tools are. They're going to fall on their face because you broke the process and you didn't train your people, all those things. It takes the whole thing to really make it work and be efficient.
[00:38:22] Speaker A: Completely agree. You did touch on one of those topics within OT security that really divides the room, right: the response part. How and when do you take action? You're completely right, the risk profiles can be different. You would rather wait and confirm than deny and then cause an explosion. But that's something we take into consideration as part of the solution. Darktrace OT has what we call autonomous response, which is a terrible name if you're an OT engineer, but the point is, when you find something unusual, you specifically block the unusual.
Now, you can do that autonomously, but the vast majority of our customers in OT who use it are using it in human confirmation mode. So they are able to specifically see: this is normal, and this is the unusual, the connection or connections that are unusual.
Can I block those? How do I block those? This is what the AI is recommending. And then they can go out and do whatever investigation is required before they confirm that and block it. Because ultimately, even when you do your investigation and find out what it is, it's not always that easy to do your response, right? So being able to be targeted, to allow that machine to carry on doing its normal activity, means you're really restricting any unwanted downstream effects. I think we're still working out where this sits in terms of the risk profile, and again, it's going to vary by organization, by customer. But I think having some form of autonomous response in your arsenal is really important in OT security.
We just need to get to grips with that as end users and then work out where it sits in our risk profile. We all have fire extinguishers; it doesn't mean we go around spraying fire extinguishers everywhere. Exactly.
It needs to be there. Now let's work out what, how, and when we use it.
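A stripped-down sketch of the "block only the unusual, keep the normal" idea in human-confirmation mode is below. The device names and the learned peer set are hypothetical, and a real deployment would enforce the block at the network layer rather than in a Python object; the sketch only shows the propose-then-confirm flow described above, where normal traffic is never touched.

```python
# Hypothetical device names and learned peer set; a real deployment enforces the
# block at the network layer, not in a Python object. Sketch of propose-then-confirm.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    normal_peers: set              # learned "pattern of life": who it usually talks to
    pending: list = field(default_factory=list)

    def propose_block(self, peer: str) -> str:
        """Recommend blocking only connections that fall outside the learned pattern."""
        if peer in self.normal_peers:
            return f"{self.name} -> {peer}: normal, leave it alone"
        self.pending.append(peer)              # queued for a human to confirm
        return f"{self.name} -> {peer}: UNUSUAL, block proposed (awaiting confirmation)"

    def confirm(self, peer: str) -> str:
        """Operator approves the targeted block; normal traffic is untouched."""
        self.pending.remove(peer)
        return f"enforce: drop only {self.name} -> {peer}; all normal traffic unaffected"

hmi = Device("hmi-01", normal_peers={"plc-07", "historian-01"})
print(hmi.propose_block("plc-07"))         # normal traffic keeps flowing
print(hmi.propose_block("203.0.113.50"))   # odd external connection -> proposal only
print(hmi.confirm("203.0.113.50"))         # human confirms, then the block is applied
```

Flipping from human-confirmation mode to fully autonomous mode would simply mean calling the confirm step immediately after the proposal, which is the risk-profile decision discussed above.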
[00:40:17] Speaker B: Well, one of the other common problems I see in OT is that in IT everything is standardized, right? I've got these standard servers, they're usually very similar versions and patch levels, and I know what applications can be installed and what devices they can talk to. I'm very mature in knowing my email server doesn't need to talk to my file server, my database server doesn't need to talk to my whatever, right? All of those things are very well documented, and it's pretty much the same at every IT organization. There are some exceptions, but for the back office stuff we always know Exchange and Active Directory and all the things that go into these spaces, right? I don't want to put my Exchange server or my file server directly on the Internet, all that type of stuff. In an OT world, though, it's so different. Even within one company, if I'm at company A and they've got 10 manufacturing facilities around the world, each one of those manufacturing facilities is different. Even if they were built on the same plan at the same time and they had the same hardware when they started, they're all different now, right? So that adds another level of complexity when I'm looking at what's good and bad, because every facility is different. And even within a facility: I look at a power generation facility, and they may have multiple units. So I've got generating unit one, generating unit two, generating unit three. Between one, two, and three there are going to be differences of equipment, of software, of hardware, all sorts of differences. And yes, there are people on site that understand those differences, but a SOC analyst at corporate is not going to know that unit one is different than unit two in these exact ways, because they've got 48 sites or 300 sites or 3,000 sites. It's impossible for one person to know all of those things, right? And that's where tools like this can really level us up and give us better understanding. Because at the end of the day, and I know we say this a lot, a bad actor only has to be right once. We have to protect this stuff all the time. Some people don't align with that or don't agree with that, but that's the way I look at this: we have to be smarter and more efficient with our tools and what we're doing and where we're focusing our efforts. That doesn't mean we don't need firewalls or all these other things. Absolutely, we need segmentation, we need firewalls, zero trust, all that kind of stuff. But how do we start monitoring and looking at these things when we know there's so much diversity in my architecture and my tools and all that kind of stuff? And AI is a huge level-up for our teams to be able to have that efficiency.
[00:42:57] Speaker A: Yeah, 100%. But I do also recognize the hesitancy with AI adoption, right? It's this new buzzword, it's become a marketing term. There's a lot of, well, what actually is it? How are you actually applying it? I think that's a really important part of the puzzle: the AI transparency part. We've been in this space for 10 years, and we spent eight years shouting AI and nobody listening to us because they thought it was too futuristic, and then two years shouting it and nobody listening to us because everyone else is saying it, right? But it has allowed us to peel back the bonnet a little bit, so to speak. Whereas before people weren't interested in looking under the bonnet, now people are. They want to go, okay, well, let's learn a little bit more about your algorithms. They don't necessarily have to be maths boffins, but just give a little bit of an understanding as to how you're applying this. And, you know, we recently published a new white paper called The AI Arsenal, which talks a little bit about how AI is being applied across cybersecurity in general, and then more specifically the algorithms, the Bayesian statistics, that underpin the self-learning AI, just to give people that information, that transparency, right? So that they can build trust with the tool. It's not, well, we need to adopt it because that's the direction things are going. It's, let's adopt it in the right way and understand how it's actually helping my use case. One of the big strengths we see is the fact that our AI can be applied on-prem, so it can sit on a physical server and learn only from that organization's data. That's what makes it so applicable in OT. There's a lot of AI out there where that can't be done, right? It has to go to the cloud, it has to be compared with third-party data. So just little differences like that. Not all AI is the same. You can apply it in different ways, and that's really important to understand as you adopt it for the different...
[00:44:40] Speaker B: Challenges you've got. And that's huge. So first of all, let's just say for our American people: when he said bonnet, he means hood, like under the hood.
[00:44:49] Speaker A: Thanks for the translation, but that's so true.
[00:44:51] Speaker B: Right. And again, going back to it, a lot of my time has been spent in power utility, and a power utility is not going to allow data to go to the cloud. So a lot of these tools that are coming out, and this goes back to the original thing I said, where we've got IT tools that are trying to get into the OT space, they don't always really understand where and when they can be used. Some organizations, large manufacturing facilities, things like that, may not have a regulatory reason to avoid it, and they're more open to it. But a power utility is never going to be able to do that, because of the regulatory restrictions around it. So being able to have it on-prem, only my data, all those types of things, you're opening the door. It's harder as an asset owner to manage that type of stuff, but there's real benefit and value in those things. Because I don't want my data going to the cloud, for obvious reasons: I don't want it mixing with anybody else's, I don't want to lose access to it. All those things are very well defined and understood, but that doesn't mean it's not still valuable. There are ways to do it, like you said, on-prem. Yes, you've got to buy more hardware, and when you're dealing with AI, it's different than your Exchange server, right? It's going to be a little bit more expensive because you need different resources and capabilities than an Exchange server. But that doesn't mean you can't do it. And again, you can also do it in the cloud; you can have your own cloud. There are all sorts of ways that you can look at this, but it's about looking at these problems and considering them. So, all this to say: in the next five to ten years, what are you excited about coming up over the horizon in this space, and maybe something that's a little concerning?
[00:46:29] Speaker A: Great question.
Well, I'm excited about how AI can transform cybersecurity in general. I think OT is a massive part of that. OT has been kept so siloed and separate for so long, and I don't think that can last forever. We have to respect its differences; it's never going to be treated exactly the same.
All of these challenges that we face in OT cybersecurity have been the same for, well, at least as long as I've been in it, which is seven, eight years, but speaking to other people, it's been 10, 15 years. It's the same problems. How do we manage identities? How do we manage the conversation between IT and OT teams? How do we ensure availability? How do we detect stuff? How do we make sure we patch? All of these things are the same old problems. I am an AI optimist in the sense that I believe the right AI for the right challenge can completely transform cybersecurity, because security should be seen as a business enabler, not just a business function. And at the moment, when you see the number of incidents that are happening and the knock-on impacts they have, that's not happening; it's not fit for purpose for too many organizations. The second part of your question was, what am I worried about?
[00:47:40] Speaker B: Yeah.
[00:47:42] Speaker A: And I guess it would be the flip side: could the attackers stop that transformation from happening? Because they have the ability to adopt much quicker. They can afford to make mistakes as they learn and evolve and adapt, and therefore potentially adopt it quicker. Ultimately that's just a logical extension of what's already been happening; attackers like to stay ahead of the curve. So I'm not a doomsayer about it, but ultimately that's the biggest challenge: the threat landscape. How is that going to evolve, at a pace and a scale that we haven't seen up to this date?
[00:48:16] Speaker B: Yeah, I mean, ultimately they have no restrictions, their gloves are off. They can test, and if it fails, they go to the next victim and they try it again, and they improve and they get better, and they go to the next place and try it again. There's no limitation on them trying and failing. They don't care.
[00:48:32] Speaker A: Yeah, the accountability is really hard, right? They can fail, and half the time you don't even know who they are. And even if you do know who they are, there's not a lot you can do about it. So yeah, they can really afford to make some big mess-ups and get away with it and just learn from it, which helps them get better quicker.
[00:48:50] Speaker B: Right. Yeah, I mean, we all know that we learn more from failure than we do from success. So with every one of their failures, they're getting better, they're getting faster, they're getting stronger, they're getting smarter. And they're not going to stop; it's, oh well, they caught me here, or this didn't work because of this, okay, let's adjust that and do it better, and they're using these tools to do it faster. So yeah, for sure. So what's the call to action? How do people find out more about AI? Obviously we'll link to all of the white papers, et cetera, that you mentioned, and any others that you want. But how can people learn more about OT and AI in the space and what you guys are doing?
[00:49:26] Speaker A: So yeah, we've recently released a whole package of different documents, education videos, and white papers around AI in cybersecurity. We don't split it by coverage area, so it covers cloud, IT, and OT altogether, because as I've already said, that's how we envisage the future: everything needs to be treated in a similar way. Not exactly the same way, but a similar way. So yeah, check it out at darktrace.com; you can find all of the resources there. The AI Arsenal is the white paper that I mentioned earlier. But also check out our blogs for examples of these threat finds, these APTs and zero-days that we detect in our customer environments. And then reach out if you want to learn more.
[00:50:09] Speaker B: Specifically you could talk about what's under the bonnet.
[00:50:12] Speaker A: Under the bonnet. I love it. Under the hood. Sounds like a little Red Riding Hood.
[00:50:18] Speaker B: Well, awesome, sir. Hey, it was great to have the conversation. It was great to meet you in person. My entire intent with this podcast is to grow, to have people with differing perspectives challenge the status quo, right? How do we get better, faster, stronger as a community, and how do we learn and look at things from different perspectives? Which is why I love having diverse opinions and ideas and perspectives on the podcast to talk about things. I'm not always going to 100% agree or align with everything, but that's the point, right? It's to have conversations and dialogue, and we can still go have a beer even if you and I don't agree on something, and that's okay. But obviously on this we didn't really disagree on many things, because I think the problems are there, and the future is going to be using AI and cloud and all those things to our benefit. It's just a matter of how and when. How do we push the button, how do we accelerate and do those things? Because to your point, the bad actors are already using it and they're already trying it. We've got to start using it too. We've got to be smart and not avoid tools just because we're scared of them. Obviously we need to be careful, because I don't want to bring down my power plant, but I also just can't say, well, I'm never going to do that, because that's just going to end up hurting me and biting me in the backside.
[00:51:24] Speaker A: So, yeah, completely agree. Thanks, Aaron. It's been an absolute pleasure. Really enjoyed the conversation.
[00:51:29] Speaker B: Yeah, man, I appreciate your time, and thank you for spending it with me. Continue doing what you guys do, and I look forward to seeing you again at another conference coming up soon.
[00:51:38] Speaker A: Yeah, I will do. Thanks, Aaron.
[00:51:40] Speaker B: All right, man, thank you.
[00:51:41] Speaker A: Bye Bye.
[00:51:42] Speaker B: Thanks for joining us on Protect it all, where we explore the crossroads of IT and OT cybersecurity.
Remember to subscribe wherever you get your podcasts to stay ahead in this ever evolving field. Until next time.