How AI Became the Ultimate Cybersecurity Blind Spot: Understanding the Microsoft 365 Copilot Vulnerability

Episode 65 July 07, 2025 00:17:55
How AI Became the Ultimate Cybersecurity Blind Spot: Understanding the Microsoft 365 Copilot Vulnerability
PrOTect It All
How AI Became the Ultimate Cybersecurity Blind Spot: Understanding the Microsoft 365 Copilot Vulnerability

Jul 07 2025 | 00:17:55

/

Hosted By

Aaron Crow

Show Notes

In this episode, host Aaron Crow dives deep into the fast-evolving world of AI automation and its impact on cybersecurity. Aaron breaks down practical, real-world ways security professionals can leverage AI to streamline their workflows without breaking data loss prevention policies or putting proprietary information at risk. 

From drafting reports and playbooks to automating repetitive tasks and managing vulnerability data, Aaron offers actionable advice for using both public AI tools like ChatGPT and more advanced private AI models. He also addresses common fears CISOs and business leaders have about unsanctioned AI use in the workplace and shares tips for staying safe and compliant while taking advantage of AI’s efficiencies. 

Whether you’re in a large enterprise or a lean team with limited resources, you’ll come away with a fresh perspective on how to use AI responsibly to work smarter and protect your organization. Plus, Aaron invites listeners to share their own creative AI use cases and lessons learned. Let’s jump in and explore how to protect it all as AI advances.

Key Moments : 

01:20 AI's Rising Role in Media

03:22 Guidelines for Using AI Safely

07:06 "AI Integration and Automation Strategies"

10:03 Automating Windows Management Tasks

14:29 Exploring AI for Personal Tasks

Connect With Aaron Crow:

 

Learn more about PrOTect IT All:

 

To be a guest or suggest a guest/episode, please email us at [email protected]

 

Please leave us a review on Apple/Spotify Podcasts:

Apple   - https://podcasts.apple.com/us/podcast/protect-it-all/id1727211124

Spotify - https://open.spotify.com/show/1Vvi0euj3rE8xObK0yvYi4

View Full Transcript

Episode Transcript

[00:00:00] Speaker A: Just like with anything, when we bring new things and new capabilities, new stuff into the enterprise, you are also bringing in vulnerabilities and potential risks. [00:00:11] Speaker B: You're listening to Protect it all where Aaron Crowe expands the conversation beyond just ot delving into the interconnected worlds of IT and ot cybersecurity. Get ready for essential strategies and insights. Here's your host, Aaron Crowe. [00:00:30] Speaker A: AI is going to hack everything. That's what's starting out right. So welcome to another episode of Protect It All. I'm your host Aaron Crowe, where we dive into all things cyber security. It ot a little bit of everything in between. Most recently we've seen a release of a CVE vulnerability when your AI assistant kind of turns into a potential backdoor for attackers. So what happens? So the recent CVE focuses on Microsoft. Big surprise. It's not an attack against Microsoft. It's just that they're probably the most adopted where it's integrated into your systems the most. Just because everybody uses the Microsoft 365 platform and people are a little concerned for good reason in allowing their, their systems to be integrated with third party. So chat, GPT and Claude and Grok and all these others. Right. But co pilot, living in your entity, in your enterprise and your system makes you feel a little bit more comfortable and for the right reasons. It is, it has less attack paths. Right, but what is the cve? Let's talk about it and kind of all the details around it. So this eco leak, a zero click vulnerability, it really exposed a critical flaw in the Microsoft Office 365 copilot implementation. And you know, it's not just about a lot of the vulnerabilities that we see, especially coming across email in the past have been an attachment or a link or something like that. Like you have to click on something. Right. This is different. An attacker could actually send a benign looking email with embedded malicious stuff in the background. And then when you ask if you have like a copilot routine or it's automatically summarizing your inbox when it processes that, that infected email behind the scenes using its retrieval augmented generation pipeline thing that Microsoft uses, it takes sensitive data from your email, from your files and chats and embeds and images or links, and then quietly X fills them using micro approved URLs like Teams and SharePoint. No clicks, no alerts, no chance to stop it. So you know, this is really an AI blind spot. I don't know. Blind spot may be a little harsh. Blind spot implies that nobody could see this coming. I Think this is something that people have warned about but to the point it is a blind spot because there are supposedly protections against these things. Microsoft safeguards were bypassed copilot their jailbreak filters. They failed to detect, you know, the distinguished prompts, the image links and markdown formatted evaded the, the traditional link, you know, redaction of you know, a short link or bitly or, or something like that that it's going to look and find. Right. And then the internal external data were blended together so breaking the fundamental principle of, you know, trusting the boundaries. So you know, it's just a technical miss, it's a design failure and how we secure and this is going to be more and more prevalent as we start using AI. This is not a, we shouldn't use AI. Right. This is going back to, and I talk about it a lot here, you know, coming out of Idaho National Labs, you know, cyber informed engineering. We need to be designing all of our systems using cyber as, as, as part of the requirements in designing these systems. And I'm not saying that Microsoft didn't and I'm not saying that you know, if you've implemented these things that you didn't. But it's not always top of mind when we're implementing these capabilities. So it's very easy to look at the, the desire for the outcome. I want to be able to enable my people to be faster to not have to read through hundreds or thousands of emails. I mean my inbox is just full of emails I'll rarely ever get to. And Google does a good job of summarizing that stuff even without, you know, AI. Same thing with Microsoft. But when you add that AI thing you can really find the needles in the haystack of these are the things that you should look at. Again, not saying that we shouldn't use those. This is just a good example of first, first attack that really is intentional focusing on how do I get around traditional phishing software. Traditional things that would be detected by you know, antivirus and endpoint protection and DLP device data loss prevent and all of those types of tools that are there looking for very specific attack vectors and when it didn't trigger any of those. So it was able to slither around all of those and use AI. This tool, this benefit against itself and against all of your other protections that you have in that space. So it's, it's really what, what can be exposed? What is the fallout of this? Some people will swing the pendulum all the way. This is why we can't do AI. We're not doing anything and connect critical systems or any, any of our internal systems. And that may not be the wrong answer. Depending on your industry that you're in, you may not be able to do this like in OT many times we are not choosing to do AI automated systems like this because of things like this, like the unknown of the unknowns. I don't know what, I don't know. So I, I'm not comfortable enough. It's not, it's so, it's so immature. I'm not going to connect it to critical stuff because I just don't, I don't feel comfortable that I know what the outcome is and what the risks are. So until I can really understand what those risks are, I'm not to, you know, deploy that in a super critical system. Right. You know, what, what was exposed in this type of thing. This copilot has access in theory to everything. Now obviously your configuration could be different but you know, it has access to your email, it has messages, access to your team's messages, it has access to SharePoint and your OneDrive. You know, internal context for each individual user, who they are, what their role is, what their title is, you know, who their boss is, who their subordinates are, you know, where they're physically loc. Like it's a lot of enterprise data that it should, we don't want to get out. And there's, there's other implications that could come from this data being released and, and, and you know, having access to this. Just imagine everything in SharePoint, everything in OneDrive, everything in your email, having access. You know, if it's the intern, maybe it's not as big of a deal, but if it's the CEO, the founder, the cto, you know, VP of sales, like there's lots of people that have, you know, critical information that they don't want or should not have released. Not even counting if you're environment that has any regulatory, if there's any PCI information in there. If you're in a, you know, regulatory entity like a NERC SIP or something like that, somehow that data got out from a system that it shouldn't have. That can be a big impact beyond just the fact that they can, you know, bad actors and attackers can then use that data to pivot because they know more information and maybe they have an attack path now that they didn't know about before. So what did Microsoft do? What did they, what did they not do? You know, to their credit, Microsoft move really fast. They, they hardened the prompt injection detection. They tightened the, you know, user, you know, the isolation of the user context and they implemented, you know, stricter content security policies. They claim no known in the wild exploitation, which is good, but the fact that it's never, the fact that it ever happened is a red flag that every AI integrated enterprise tool we need to be considering like what is the absolute worst thing that can happen. Like we need to be having those conversations and I'm sure people have, but I know we just switched over to you know, Microsoft 365 and Copilot in this, you know, kind of new branding that we've gone through and we're, we're asking a lot of those questions and, and you know, from your CRM to your you know, third party tools, whether it's, you know, HubSpot or Salesforce or you know, how to, how do all those things together because honestly that's the goal for us all businesses is to work more efficiently. So if I can have a product that can integrate a lot of those things, I can automate things, I can, you know, sprinkling in some AI to make me make decisions faster, bubble up, summarize all those are great things to do and again I'm not suggesting that we don't do them. I'm just suggesting this is an example of why we have to do it correctly and why we have to be careful what we enable. We can't just, you know, admin default, all that, that can be a problem. So what does this mean for the rest of us? Right? Echo Leak is a wake up call. Here's what you know, leaders and, and CISOs like we what you guys should be focused on to do now, right? So audit your AI access what can copilot see, you know, what access do they have? Does it have to run against your internal systems? If you do have, you know, obviously PCI or other type things that it doesn't need access to or that could be causing a problem. Obviously you need to limit that. Right? It's, it's, it's just, just right permissions just like everything like we used to do, you know, when we had file servers and we had folders and we you know, delegated authority down to you know, a specific team and we had you know, finance different than engineering different than sales and you know, it's, it's, it's simple implementations of security posture that we've been doing for decades. We seem to make sure when we're deploying AI or any of these automated type tools that we're considering those simple policies and procedures that we've been doing for years and make sure that we have that and then audit it. Right. Make sure that Bob doesn't have admin rights and can't adjust that and then expand that beyond what it should be, you know, monitor for, you know, prompt injection vectors. There's all sorts of tools that can monitor your AI side, but the SOC team needs to, you know, really understand what happened and what to look for. Now that we know this, there's a, there's a patch out for it. Make sure that we are training our SOC and our tools to look for those types of things. You know, already mentioned it, but you know, to double down on it, make sure that you have separate, trusted and untrusted data in AI pipelines. Like, there should be no blending of internal external context inputs. Like, you really need to, you know, segment those spaces and make sure you don't have data overlap. You're not getting things that, you know, Aaron has access to. Because I'm authenticated now I have access to public information and private information and then my AI, because it's basically, I gave it permissions now it can bridge those two gaps. Now it has access to everything. Right. So in the NERC SIP world, the way, the way that you look at, you know, segmentation and the way that it's implemented, it's a single shared cyber asset that can impact more than 1500 megawatts. Right? So that's, that's power utility talk. But really what it's saying is, is there is simple, like if I configure this device, and even though this device in and of itself, and maybe it's just an hmi, but if this thing can have connections to both sides, to two sides, that can impact something bigger, I, I want to be really careful that I have to lock it down or I need to segment it. I should have two different devices, right? And that gets a little bit more complex and a little difficult. But this is the problem that we have when we bridge those environments, when we're bridging public and private, when we're bridging, we're, we're plugging AI. Or again, it's not just AI, it's not the big, big bad AI thing. It's really just all of these tools as we're bringing them on. We need to be careful with what access and what data access it has. This is just a way that it bypassed a lot of the protections that we were saying. Yeah, you can have access, but I'm going to do DLP and I'm going to do all those things. But it found a way through those things. So just having access didn't matter that had those other tool tools that potentially blocked them. But we, this is a way that it found a way around, right? So AI isn't just an assistant. It, it, it's a, it's a, it's really a new attack surface. Just like with anything when we bring new things and new capabilities, news, new stuff into the, the enterprise, you are also bringing in vulnerabilities and potential risks. Every time we, we implement new technology it is also going to bring in new risk. And in the beginning we don't always know what those things are. We don't under truly understand what that, what that you know, risk environment looks like and what risks we're really bringing in. Until we start seeing some things like this pick off and then we know try to how to lock down what is best practices. There aren't great best practices in the space because we are truly bleeding edge in a lot of these spaces and we're all trying to figure it out. So again just look at what you have. Make sure you're considering what to do in the, in the spaces and what's the right thing. Make sure you're having the conversation. You're. You have somebody that is challenging every configuration, every decision and making sure you're looking at it from that what is the worst thing that can happen and how do we protect against that? Right? You know it really true, you know this, this echo leak really proves that your own AI may actually be the way in. You know, don't want to talk about Terminator Judgment Day. But you know that that's what it is, right? AI is able to think faster and think differently than a human and find different ways in. Now obviously human is the one that sent this thing but AI is, is a way to work around problems that we put in because we don't necessarily think well a person didn't do that and couldn't do that. Right? So let's not wait for the next one. Let's make sure that we are again auditing our environments. We're reviewing our co pilot deployments or any whatever your AI is if you on. This isn't just a co pilot problem folks. This is going to be a problem for any AI. So if you have your own in house LLMs on Olama, running on your own servers, this can still be a problem. This is still an attack path that can come in. So you need to make sure that you're looking at all of these things, even if you're using local, even if you're using one off, custom in house, all those types of things, right. Audit those environments. Make sure that you're looking and understanding what those environments have access to, who has access, what environments, what data they can see. Make sure that you have. And you're looking at this type of attack and this type of, you know, exfil of data, which is really what it was. It was getting data out of these environments in a play in a way that we weren't. The environment wasn't. You would not have been able to detect with traditional detection capabilities. Right. And then really educate your teams beyond just how to configure it, but also what the risks are, making sure that they're not connecting two things that don't need to be. You know, if you look in the, in the government world, that's why you have two different computers sometimes where this one is on a, on a secure network and this one's on an insecure network or a public network. And you don't want physically those things there. So there's no way that I can get data from this machine to that machine. So I have two different. So if you have those environments and you want AI in both of them, you may have to set up different AI systems to, to access and give you the capabilities you're looking for, because one in the middle can bridge both of those things and that can become a problem. You know, the attackers aren't asking for permissions. They're not, you know, they don't follow the handbook. They didn't do the cyber security training. They, they don't care if you, you know, send them a bad email or give them a horrible performance review. They're, they're, they're trying to get in any way they can. So they don't need your permission. They're going to try to get in in any and every way that you can or they can. So anyways, just wanted to really dive into this one. This one's a new one that came out. Figured it would be a good one to jump on and talk through. Love to hear anybody that's seen anything like this, thought about it, have you solved these types of things? Please make sure that you like and subscribe to this. Leave some comments. Definitely listen and love all those. Love, love hearing from, from everyone that listens and, you know, always looking for guests to come on and talk about interesting and cool things like this. So if you've attacked this, or better yet, if you've approached this and found a way that you're implementing it in a better, more secure way. Love to have you on the podcast to talk about it. AI is definitely at everybody's on everybody's mind in all spaces from IT to OT enterprise and everything in between. So it'd be really good to dive into this topic. AI seems to be a very conversation that I'm having here lately and and for good reason. It is really the next thing that's coming and it's coming as fast faster than I think we're even ready for. So anyways thanks thanks for time everybody. Thanks for listening until next week. You know like I said just reach out, get on the podcast, send me suggestions for folks topics etc. So thanks a lot. [00:17:31] Speaker B: Thanks for joining us on Protect it all where we explore the crossroads of IT and OT cyber security. Remember to subscribe wherever you get your podcasts to stay ahead in this ever evolving field. Until next time.

Other Episodes

Episode 45

February 10, 2025 01:12:29
Episode Cover

From Navy to Consulting - Dan Ricci's Unique Perspective on Bridging Security Gaps

In this episode, host Aaron Crowe speaks to Dan Ricci, founder of the ICS Advisory Project, to delve into OT cybersecurity. Dan brings a...

Listen

Episode 47

March 03, 2025 00:48:48
Episode Cover

The Intersection of AI, OT, and Cybersecurity with Sulaiman Alhasawi

In this episode, host Aaron Crow is joined by Sulaiman Alhasawi, a cybersecurity expert based in Kuwait. Sulaiman shares his journey into OT security,...

Listen

Episode 62

June 16, 2025 00:54:21
Episode Cover

Inside OT Penetration Testing: Red Teaming, Risks, and Real-World Lessons for Critical Infrastructure with Justin Searle

In this episode, host Aaron Crow sits down with OT security expert Justin Searle, Director of ICS Security at InGuardians, for a deep dive...

Listen