
Security protocol leverages quantum mechanics to protect data from attackers during cloud-based computing

Optical implementation. Credit: arXiv (2024). DOI: 10.48550/arxiv.2408.05629

Deep learning models are used in a variety of fields, from medical diagnostics to financial forecasting, but they are computationally intensive and require the use of powerful cloud-based servers.

Reliance on cloud computing poses significant security risks, especially in sectors such as healthcare, where privacy concerns may make hospitals hesitant to use AI tools to analyze sensitive patient data.

To tackle this pressing problem, MIT researchers have developed a security protocol that leverages the quantum properties of light to ensure that data sent to and from cloud servers during deep learning calculations remains safe.

The protocol exploits the fundamental principles of quantum mechanics by encoding data into the laser light used in fiber optic communication systems, making it impossible for an attacker to copy or intercept the information without detection.

Moreover, the technique ensures security without compromising the accuracy of the deep learning model: in tests, the researchers demonstrated that the protocol maintained 96% accuracy while providing robust security guarantees.

“Deep learning models like GPT-4 have unprecedented capabilities but require enormous computational resources.

“Our protocol allows users to take advantage of these powerful models without compromising the privacy of their data or the uniqueness of the models themselves,” said Kfir Sulimany, a postdoctoral researcher at the MIT Research Laboratory of Electronics (RLE) and lead author of a paper about the security protocol posted to the arXiv preprint server.

In addition to Sulimany, other contributors to the paper include Sri Krishna Vadlamani, a postdoctoral researcher at MIT; Ryan Hamerly, a former postdoctoral researcher now at NTT Research; Prahlad Iyengar, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and senior author Dirk Englund, EECS professor and principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE.

The research was recently presented at the Annual Conference on Quantum Cryptography (Qcrypt 2024).

Security in Deep Learning: A Two-Way Street

The cloud-based computing scenario the researchers focused on involves two parties: a client with sensitive data, such as medical images, and a central server that controls deep learning models.

The client wants to use deep learning models to make predictions, such as whether a patient has cancer based on medical images, without revealing any information about the patient.

In this scenario, sensitive data must be transmitted to generate predictions, but patient data must remain secure during the process.

At the same time, the server doesn’t want to reveal any part of a proprietary model that a company like OpenAI has spent years and millions of dollars building.

“Both the parties want to hide something,” Vadlamani added.

In digital computing, it is easy for a malicious actor to copy data sent by a server or client.

Quantum information, on the other hand, cannot be perfectly copied, and the researchers exploit this property, known as the no-cloning theorem, in their security protocol.

In the researchers’ protocol, a server uses laser light to encode the weights of a deep neural network into a light field.

A neural network is a deep learning model composed of layers of interconnected nodes (neurons) that perform calculations on data. Weights are the components of the model that perform mathematical operations on each input, one layer at a time. The output of one layer is fed into the next layer, and the final layer produces a prediction.
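The layer-by-layer computation described above can be illustrated with a minimal, purely classical sketch (the layer sizes, activation choice, and random weights here are illustrative, not from the paper):

```python
import numpy as np

def forward(x, layers):
    """Toy fully connected forward pass: each layer's weights
    transform the input, and the output feeds the next layer."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b                    # weights operate on the input
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)       # ReLU between hidden layers
    return x                             # final layer's output is the prediction

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),   # 3 inputs -> 4 hidden
          (rng.standard_normal((2, 4)), np.zeros(2))]   # 4 hidden -> 2 outputs
pred = forward(np.array([1.0, 0.5, -0.2]), layers)
```

In the researchers' setting, the matrices `W` would be the proprietary weights the server wants to protect, and `x` the client's private data.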

The server sends the network weights to the client, and the client performs operations to obtain results based on the private data, which remains secret from the server.

At the same time, the protocol restricts the client to measuring only a single result, and the quantum nature of the light prevents the client from copying the weights.

When the client feeds the first result into the next layer, the protocol is designed to cancel the first layer, so the client cannot learn anything else about the model.

“Instead of measuring all the light coming from the server, the client measures only the light needed to run the deep neural network and sends the results to the next layer. The client then sends the remaining light back to the server for a security check,” Sulimany explains.

Due to the no-cloning theorem, the client unavoidably introduces small errors into the model when measuring its result. When the server receives the residual light from the client, it can measure these errors to determine whether any information has been leaked. Importantly, this residual light is proven not to leak any client data.
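The round structure of the protocol can be mocked up classically. Nothing quantum is simulated here: the measurement disturbance is stood in for by additive noise, and the function names, noise level, and check threshold are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_layer(weights, data, noise=1e-3):
    """One protocol round, classically mocked up. The client applies the
    layer weights to its private data and returns a 'residual' copy of the
    weights to the server. In the real protocol the residual is light, and
    the disturbance comes from quantum measurement, not injected noise."""
    result = weights @ data  # client measures only the result it needs
    # stand-in for measurement disturbance on the encoded weights
    residual = weights + rng.normal(0.0, noise, weights.shape)
    return result, residual

W = rng.standard_normal((4, 3))     # server's model weights (one layer)
x = rng.standard_normal(3)          # client's private data
y, residual = run_layer(W, x)

# Server-side check: anomalously large deviations in the residual
# would signal that the client extracted extra information.
deviation = np.abs(residual - W).max()
assert deviation < 0.01             # honest client: disturbance stays small
```

The key asymmetry is that an honest client's disturbance stays near the measurement-noise floor, while a client that tries to extract more than one result would perturb the residual detectably.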

Practical Protocol

Modern communications equipment must support huge bandwidth over long distances, so it typically relies on optical fibers to transmit information. This equipment already incorporates optical lasers, allowing the researchers to encode data for security protocols into light without the need for special hardware.

When the researchers tested this approach, they found that the deep neural network was able to achieve 96% accuracy while ensuring server and client security.

The small amount of information about the model that a client leaks when performing an operation is less than 10% of the information an adversary needs to recover the hidden information. Conversely, a malicious server can obtain only about 1% of the information it needs to steal the client’s data.

“It ensures that it’s secure in both directions, from client to server and from server to client,” Sulimany said.

“A few years ago, when we developed a demonstration of distributed machine learning inference between the MIT main campus and MIT Lincoln Laboratory, we realized that there was an entirely new way to provide physical layer security, building on years of quantum cryptography research that was also being demonstrated in that testbed,” Englund said.

“However, there were many deep theoretical challenges that needed to be overcome to see whether the prospect of privacy-guaranteed distributed machine learning was feasible. This was not possible until Kfir joined our team, as he uniquely understood the experimental and theoretical elements to develop a unified framework on which this research could be based.”

Going forward, the researchers hope to explore how their protocol can be applied to a technique called federated learning, in which multiple parties use data to train a central deep-learning model. It could also be used for quantum computing rather than the classical computing studied in this work, which could offer benefits in both accuracy and security.

“This research combines two disciplines that don’t usually intersect, specifically deep learning and quantum key distribution, in a clever and interesting way. The latter technique adds an extra layer of security to the former while also allowing for realistic implementation.”

“This could be interesting for privacy protection in distributed architectures. I’m excited to see how the protocol behaves under the imperfections of the experiments and then see it realized in practice,” said Eleni Diamanti, CNRS research director at the Sorbonne University in Paris, who was not involved in the research.

Further information: Kfir Sulimany et al. “Quantum-secure multiparty deep learning” arXiv (2024). DOI: 10.48550/arxiv.2408.05629

Journal information: arXiv

Courtesy of Massachusetts Institute of Technology

This story is reprinted with permission from MIT News (web.mit.edu/newsoffice/), a popular site covering news about MIT research, innovation and education.

Citation: Security protocol leverages quantum mechanics to protect data from attackers during cloud-based computing (September 26, 2024) Retrieved September 26, 2024 from https://phys.org/news/2024-09-protocol-leverages-quantum-mechanics-shield.html

