Platform Capabilities: Secure Microsoft Copilot with Lightbeam

Discover how to enhance security with Microsoft Copilot in our latest video, "Secure Microsoft Copilot with Lightbeam." Watch as we take you through how Lightbeam helps with AI governance by connecting Copilot activity to user roles, highlighting access mapping, and live session

Transcript

What would a tech presentation be today without talking about
Copilot or AI in some way, shape, or form?
As employees ask Copilot to analyze spreadsheets
or summarize product roadmaps, security teams lose sight
of which sensitive files appear in the AI's output.
Traditionally, entitlement tools only look at ACLs, not at
what Copilot actually surfaces, which leaves blind spots
that increase oversharing risk
and delay remediation of the problem.
Our latest release brings discovery
and governance to Copilot activity.
It captures user prompts, responses,
and the referenced documents,
then tags sensitive attributes, checks entitlements,
and routes alerts into existing access governance
and privacy operations workflows.
This ensures that security teams gain user-level insight
into how much sensitive data appears in Copilot conversations,
gives them the ability to filter by time range and/or data type,
and lets them act instantly from the same console
they're already operating in.
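The flow just described, capture the prompt, response, and referenced documents, tag sensitive attributes, check entitlements, and raise an alert, can be sketched roughly as follows. All names, tags, and structures here are illustrative, not Lightbeam's actual API:

```python
# Illustrative sketch of the capture -> tag -> check -> alert flow.
# Keywords, tags, and function names are hypothetical examples only.
SENSITIVE_KEYWORDS = {"salary": "HR", "credit card": "PCI", "ssn": "PII"}

def tag_sensitive(text):
    """Return the set of sensitive-data tags found in a piece of text."""
    found = set()
    lowered = text.lower()
    for keyword, tag in SENSITIVE_KEYWORDS.items():
        if keyword in lowered:
            found.add(tag)
    return found

def process_copilot_event(user, prompt, response, referenced_docs, entitlements):
    """Capture a Copilot interaction, tag it, and decide whether to alert."""
    tags = tag_sensitive(prompt) | tag_sensitive(response)
    for doc in referenced_docs:
        tags |= tag_sensitive(doc["content"])
    # Alert when the interaction surfaced data the user is not entitled to see.
    unauthorized = tags - entitlements.get(user, set())
    return {"user": user, "tags": tags, "alert": bool(unauthorized)}

event = process_copilot_event(
    user="jdoe",
    prompt="Summarize the salary spreadsheet",
    response="Here are the totals...",
    referenced_docs=[{"name": "comp.csv", "content": "employee, salary"}],
    entitlements={"jdoe": set()},  # jdoe holds no HR entitlement
)
```

Here `event["alert"]` comes back true because HR-tagged content surfaced to a user with no HR entitlement.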
So with this you get user-level exposure insights:
identify which employees share the most
sensitive data in Copilot,
and prioritize those investigations.
You can trigger alerts when regulated data appears
and revoke permissions, disable Entra ID accounts,
or archive files, all without leaving Lightbeam.
You can bring in Copilot analytics with permissions context,
so teams know what the AI showed
and whether users should retain access to that data.
We'll talk about the mocked-up screenshot here on
the right in just a second.
We apply the standard Microsoft Information Protection
labels to ensure that Microsoft has all the labels necessary
to understand what's going on,
which allows it to classify, auto-quarantine,
or require review when uploads of finance models
or intellectual property documents surface
inside of Copilot.
And of course, we have an end-to-end audit
trail along the way.
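As a rough illustration of the labeling policy just described, a detected content type maps to a label and an action. The label names, content-type keys, and actions below are invented for the example, not actual MIP label identifiers:

```python
# Hypothetical mapping from detected content type to a MIP-style label and
# the policy action taken; all names here are illustrative only.
LABEL_POLICY = {
    "finance_model": ("Confidential/Finance", "quarantine"),
    "intellectual_property": ("Confidential/IP", "require_review"),
    "general": ("General", "allow"),
}

def apply_label(content_type):
    """Return (label, action) for a content type, defaulting to allow."""
    return LABEL_POLICY.get(content_type, LABEL_POLICY["general"])

label, action = apply_label("finance_model")
```

A finance model upload would carry the confidential label and be quarantined, while unrecognized content falls through to the default.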
In the screenshot I've put together, let's say
I've managed to find a CSV,
and that CSV contains two columns: one with employee numbers
and the other with their salaries.
If you work for a company
with more than a couple dozen employees,
you probably won't be able to directly correlate
employee numbers with salaries.
You won't know who is who.
But if I wanted to figure out whose salary is what,
and I know that AI is excellent at this type of work,
I might just paste the CSV of salaries
and employee numbers into Copilot and prompt it:
"Could you map the employee numbers to their employee names?"
When that fires off, Copilot will go
and scan all sorts
of things it has access to.
And given that it's connected to SharePoint
and all sorts of other things on the backend,
it could well surface things
that I shouldn't be able to see.
Just the tie between the employee numbers
and the employee names would be enough for me
to extrapolate sensitive data beyond that.
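The oversharing risk in this example is really just a join: if Copilot can reach a second document that maps employee numbers to names, the two files together de-anonymize the salary data. A toy version of that join, with entirely made-up data, looks like this:

```python
import csv
import io

# File the user pasted in: employee numbers and salaries (anonymous alone).
salaries_csv = "emp_no,salary\n1001,90000\n1002,120000\n"
# File Copilot can reach via SharePoint: employee numbers and names.
directory_csv = "emp_no,name\n1001,Alice\n1002,Bob\n"

def rows(text):
    """Parse a CSV string into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

names = {r["emp_no"]: r["name"] for r in rows(directory_csv)}
# Joining the two sources reveals each person's salary by name.
joined = {names[r["emp_no"]]: int(r["salary"]) for r in rows(salaries_csv)}
```

`joined` now maps names directly to salaries, which is exactly the correlation neither file exposed on its own.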
So what would happen on
the backend is that Lightbeam would see the prompt,
it would see the salary file,
and it would automatically classify it,
likely as an HR document,
and apply the MIP label.
And if my user is not actually entitled to see that,
then I'm not going to get a response
that answers my question.
More than that, it would raise an alert and log it inside
the Lightbeam platform for further review.
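In outline, that backend decision, suppress the response and raise an alert whenever the user lacks the entitlement matching the document's label, might look like this. The logic and names are purely illustrative:

```python
def enforce(user_entitlements, doc_label):
    """Hypothetical enforcement check: block and alert when the user lacks
    the entitlement matching the document's classification label."""
    if doc_label in user_entitlements:
        return {"respond": True, "alert": False}
    # User is not entitled: suppress the answer, log an alert for review.
    return {"respond": False, "alert": True}

# A user with only general entitlements asks about an HR-labeled file.
decision = enforce(user_entitlements={"General"}, doc_label="HR")
```

In this sketch the unauthorized request gets no answer and generates an alert, while an HR-entitled user would get a normal response.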
We can also run any of our standard automations
or playbooks on these alerts.
So if I find that a user does this kind of thing often,
we could automatically revoke their access
to Copilot, or revoke access
to their account entirely, while we go through
and investigate what the solution to the problem should be.
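A playbook like that, automatically revoking Copilot access once a user trips enough alerts, could be sketched as follows. The threshold and action name are invented for illustration:

```python
from collections import Counter

ALERT_THRESHOLD = 3  # hypothetical: three alerts trigger the playbook

alert_counts = Counter()

def record_alert(user):
    """Count an alert against a user; return the playbook action, if any."""
    alert_counts[user] += 1
    if alert_counts[user] >= ALERT_THRESHOLD:
        return "revoke_copilot_access"
    return None

# The same user triggers three alerts in a row.
actions = [record_alert("jdoe") for _ in range(3)]
```

The first two alerts are just logged; the third crosses the threshold and returns the revocation action.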
But without this, companies are finding
that Copilot surfaces data
that employees shouldn't necessarily have access to,
but they do, because
that data relies on role-based access control
and ACLs that haven't been kept up to date.
So we want to make sure that, based on the context
of the situation and the content of the file
(say this wasn't a CSV of salaries but a CSV
of customer data, including credit card numbers),
it would get tagged differently
and flagged differently, right?
So what we want to do is ensure that anything
uploaded into Copilot, along with the queries, prompts,
and responses, is logged, audited,
and tagged appropriately, so that security teams
can take action on the event to ensure
it doesn't happen again if it's inappropriate,
or approve it and move on.

Related Posts

What Is DSPM? | Data Security Posture Management Explained

What Is Data Governance? Explained in Under 2 Minutes | Benefits, Challenges & Automation

How to Automate Data Subject Requests (DSRs) with Lightbeam | DSR Compliance Made Easy