April 11, 2025
Data privacy comes with a cost. Security techniques can protect sensitive user data, such as customer addresses, from attackers who may attempt to extract it from AI models, but these techniques often make those models less accurate.
MIT researchers recently developed a framework, based on a new privacy metric called PAC Privacy, that could maintain the performance of an AI model while ensuring that sensitive data, such as medical images or financial records, remains safe from attackers. Now they have taken this work a step further, making their technique more computationally efficient, improving the tradeoff between accuracy and privacy, and creating a formal template that can privatize virtually any algorithm without needing access to that algorithm's inner workings.
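A minimal sketch of that black-box idea, under loose assumptions: run the algorithm repeatedly on random subsamples of the data, measure how much each output coordinate fluctuates, and add Gaussian noise scaled to that fluctuation, so stable coordinates receive little noise and unstable ones receive more. The `pac_privatize` helper below is hypothetical, and the subsampling scheme, trial count, and noise constants are illustrative choices; the actual PAC Privacy framework derives the noise magnitude from a formal privacy guarantee rather than the ad hoc scaling shown here.

```python
import numpy as np

def pac_privatize(algorithm, data, n_trials=200, subsample_frac=0.5,
                  noise_scale=1.0, rng=None):
    """Black-box privatization sketch in the spirit of PAC Privacy.

    Runs `algorithm` on many random subsamples of `data` (a NumPy array
    of records), measures how much each output coordinate varies, and
    adds Gaussian noise scaled to that per-coordinate instability.
    Illustrative only: the real framework derives the noise magnitude
    from a formal privacy bound, not from these constants.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    k = max(1, int(subsample_frac * n))

    # Probe how unstable the algorithm's output is under resampling,
    # treating the algorithm as an opaque black box.
    outputs = np.array([
        np.asarray(algorithm(data[rng.choice(n, size=k, replace=False)]))
        for _ in range(n_trials)
    ])

    # Per-coordinate spread: stable coordinates get little noise,
    # unstable ones get more (anisotropic noise).
    sigma = outputs.std(axis=0)

    # Release the output on the full data plus calibrated Gaussian noise.
    true_output = np.asarray(algorithm(data))
    return true_output + noise_scale * sigma * rng.standard_normal(true_output.shape)
```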
The team used their new version of PAC Privacy to privatize several classic algorithms for data analysis and machine-learning tasks.
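As a toy stand-in for those classic algorithms, the sketch above could wrap something as simple as a per-column sample mean; the dataset and statistic here are invented for illustration, not drawn from the researchers' experiments.

```python
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))  # synthetic toy dataset

# Privatize a classic statistic: the per-column mean.
private_mean = pac_privatize(lambda d: d.mean(axis=0), data, rng=rng)
print(private_mean)  # close to [5, 5, 5], with calibrated noise added
```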
Read the complete article at MIT News.
Explore
Security Scheme Could Protect Sensitive Data during Cloud Computation
Adam Zewe | MIT News
MIT researchers crafted a new approach that could allow anyone to run operations on encrypted data without decrypting it first.
To Keep Hardware Safe, Cut Out the Code’s Clues
Alex Shipps | MIT CSAIL
A new “Oreo” method from MIT CSAIL researchers removes the footprints that reveal where code is stored before a hacker can see them.