Microsoft sets new benchmark in AI data security with Purview upgrades



At the Microsoft Ignite conference today, the software giant unveiled new data security and compliance capabilities in Microsoft Purview aimed at protecting information used in generative AI systems like Copilot.

The new features will allow Copilot users on Microsoft 365 to control what data the AI assistant can access, automatically classify sensitive data in responses, and institute compliance controls around LLM usage.

Herain Oberoi, general manager of Microsoft data security, compliance, and privacy, and Rudra Mitra, corporate vice president at Microsoft, spoke with VentureBeat ahead of the announcement about Microsoft's approach.

“Data is effectively the foundation on which AI is built. AI is only as good as the data that goes in. And so it turns out, it’s an extremely important part of it,” Oberoi said, highlighting the critical role data plays in AI applications.


“With Purview, now, if you connect those two dots, Microsoft is looking to secure the future of AI or secure the future of the data with AI. And I think that’s just such a responsible approach,” Mitra told VentureBeat.


Visibility into Copilot risks and usage

A new AI hub in Purview will give administrators visibility into Copilot usage across the organization. They can see which employees are interacting with the AI and assess associated risks.

Sensitive data will also be blocked from being input into Copilot based on user risk profiles. And output from the AI will inherit protective labels from source data.

“It’s not just visibility across Microsoft’s Copilots, we think the complete picture is what’s important for the customer here,” Mitra said.


Compliance policies extended to Copilot

On the compliance side, Purview’s auditing, retention and communication monitoring will now extend to Copilot interactions.

But this is just the beginning: Microsoft plans to expand Purview's protections beyond Copilot to AI systems built in house and to third-party consumer apps like ChatGPT.

With AI poised for greater adoption, Microsoft is positioning itself at the forefront of responsible and ethical data use in enterprise AI systems. Robust data governance will be key to ensuring privacy and preventing misuse in this next frontier of technology.

However, truly responsible AI will require buy-in across the entire tech industry. Competitors like Google, Amazon, and IBM will need to make data ethics a priority as well. If users can't trust AI, it will never reach its full potential.

The path forward is clear: enterprises want both cutting-edge innovation and cast-iron data protection. Whichever company makes trust job one will lead us into the AI-powered future.
