March 19, 2025
A hospital that wants to use a cloud computing service to run artificial-intelligence analysis on sensitive patient records needs a guarantee that those data will remain private during computation. Homomorphic encryption is a special type of security scheme that can provide this assurance.
The technique encrypts data in a way that allows anyone to perform computations on it without decrypting it, so the party doing the computation learns nothing about the underlying patient records. However, there are only a few known ways to achieve homomorphic encryption, and they are so computationally intensive that deploying them in the real world is often infeasible.
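The construction described in this article is not shown here, but the core idea of computing on encrypted data can be illustrated with a classic, much simpler scheme: textbook Paillier encryption, which is additively homomorphic (multiplying two ciphertexts yields an encryption of the sum of their plaintexts). The sketch below is a toy illustration with insecure key sizes, not the researchers' scheme or a production implementation.

```python
# Toy additively homomorphic encryption (textbook Paillier, g = n + 1).
# Illustrative only: tiny primes, no padding, not secure, and NOT the
# MIT construction described above.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p, q):
    """Build a toy Paillier key pair from two distinct primes."""
    n = p * q
    lam = lcm(p - 1, q - 1)        # Carmichael-style private exponent
    mu = pow(lam, -1, n)           # valid inverse because g = n + 1
    return (n,), (n, lam, mu)      # (public key), (private key)

def encrypt(pk, m):
    """Enc(m) = (1 + n)^m * r^n mod n^2 for a random r coprime to n."""
    (n,) = pk
    n2 = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    n, lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_ciphertexts(pk, c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    (n,) = pk
    return (c1 * c2) % (n * n)

pk, sk = keygen(17, 19)              # toy primes; real keys are ~2048 bits
c1, c2 = encrypt(pk, 42), encrypt(pk, 101)
c_sum = add_ciphertexts(pk, c1, c2)  # computed without ever decrypting
assert decrypt(sk, c_sum) == 143     # only the key holder recovers 42 + 101
```

Paillier supports only additions on ciphertexts; a "somewhat homomorphic" scheme like the one the MIT researchers construct additionally supports a limited number of multiplications, and a fully homomorphic scheme supports arbitrary computation.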
MIT researchers have developed a new theoretical approach to building homomorphic encryption schemes that is simple and relies on computationally lightweight cryptographic tools. Their technique combines two tools so that they become more powerful than either would be on its own. The researchers leverage this combination to construct a "somewhat homomorphic" encryption scheme: one that enables users to perform a limited number of operations on encrypted data without decrypting it, as opposed to fully homomorphic encryption, which supports arbitrary computations.
Read the complete article at MIT News.