Silenzio: Secure Non-Interactive Outsourced MLP Training
Abstract
Outsourcing ML training to cloud providers is a compelling option for resource-constrained clients, but it bears inherent privacy risks, especially for highly sensitive training data. We introduce Silenzio, the first fully non-interactive outsourcing scheme for the training of multi-layer perceptrons (MLPs) that achieves 128-bit security using fully homomorphic encryption (FHE). Unlike traditional MPC-based protocols, which require interactive communication between the client and server(s) or non-collusion assumptions among multiple servers, Silenzio enables a fire-and-forget paradigm without such assumptions: the client encrypts the training data once, and the cloud server performs the training without any further interaction. Silenzio operates over low-bitwidth integers, never exceeding 8 bits, to mitigate the computational overhead of FHE. Our approach features a novel low-bitwidth matrix multiplication that leverages input-dependent residue number systems (RNS) and a Karatsuba-inspired multiplication routine, ensuring that no intermediate FHE-processed value overflows 8 bits. Building on an RNS-to-MRNS (mixed-radix number system) conversion, we propose an efficient block-scaling mechanism that approximately shifts encrypted tensor values to the user-specified most significant bits. To instantiate the backpropagation of the error, Silenzio introduces a low-bitwidth, TFHE-friendly gradient computation for the cross-entropy loss. We implement Silenzio with the state-of-the-art Concrete library and evaluate it on standard MLP training tasks with respect to runtime and model performance, achieving classification accuracy comparable to MLPs trained with standard 32-bit floating-point PyTorch. Our open-source implementation represents a significant advance in privacy-preserving ML, providing a new baseline for secure and non-interactive outsourced MLP training.
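To make the central idea concrete, the following plain-integer sketch illustrates how an RNS keeps every multiplication small: the matrix product is computed independently modulo several small pairwise-coprime moduli and recombined with the Chinese Remainder Theorem. The moduli, shapes, and eager-reduction strategy below are illustrative assumptions, not the paper's exact routine; Silenzio additionally chooses the moduli input-dependently and applies a Karatsuba-inspired routine so that every FHE-processed intermediate stays within 8 bits.

```python
import numpy as np
from math import prod

def matmul_mod(A, B, m):
    """A @ B over Z_m with eager reduction: for m <= 16, every residue
    product is < m**2 <= 256, i.e., it fits into 8 bits."""
    n, _ = A.shape
    _, p = B.shape
    Am, Bm = A % m, B % m
    C = np.zeros((n, p), dtype=np.int64)
    for i in range(n):
        for j in range(p):
            acc = 0
            for t in range(Am.shape[1]):
                acc = (acc + Am[i, t] * Bm[t, j]) % m  # intermediate < m**2
            C[i, j] = acc
    return C

def crt_reconstruct(residues, moduli):
    """Recombine per-modulus results via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = np.zeros_like(residues[0])
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

moduli = (13, 15, 16)                 # pairwise coprime, each <= 16; M = 3120
rng = np.random.default_rng(0)
A = rng.integers(0, 8, size=(4, 5))   # small non-negative inputs so the
B = rng.integers(0, 8, size=(5, 3))   # true product stays below M
C = crt_reconstruct([matmul_mod(A, B, m) for m in moduli], moduli)
assert np.array_equal(C, A @ B)
```

The sketch assumes non-negative entries; signed values would need an offset or a signed residue representation.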
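The Karatsuba trick referenced above, in its classical form, splits each operand into two limbs and replaces four limb multiplications with three. The 4-bit-limb illustration below is our own; how Silenzio bounds the bitwidth of the cross term inside its RNS channels is not shown here.

```python
def karatsuba8(x, y, base=16):
    """Multiply two 8-bit values using three 4-bit limb multiplications
    instead of four (classical Karatsuba; illustrative only)."""
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    hi = xh * yh                             # <= 15 * 15 = 225, fits in 8 bits
    lo = xl * yl                             # <= 225, fits in 8 bits
    cross = (xh + xl) * (yh + yl) - hi - lo  # note: limb sums can exceed 4 bits
    return hi * base**2 + cross * base + lo

assert karatsuba8(173, 229) == 173 * 229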
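As a hypothetical plaintext analogue of the block-scaling mechanism, one can right-shift a tensor so that its largest magnitude occupies a user-specified number of most significant bits; the paper performs the corresponding operation on encrypted values via the RNS-to-MRNS conversion, which this sketch does not model.

```python
import numpy as np

def block_scale(x, target_bits):
    """Plaintext analogue of block scaling (hypothetical): right-shift the
    tensor so its largest magnitude fits into `target_bits` bits, discarding
    low-order bits and tracking the shift as a public scale factor."""
    top = int(np.abs(x).max())
    shift = max(top.bit_length() - target_bits, 0)
    return x >> shift, shift

x = np.array([912, -400, 57, 3], dtype=np.int64)
scaled, shift = block_scale(x, target_bits=8)
# 912 needs 10 bits, so shift = 2 and scaled = [228, -100, 14, 0]
```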
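Finally, for reference, the quantity that Silenzio's low-bitwidth gradient computation has to approximate is the textbook softmax cross-entropy gradient with respect to the logits, dL/dz = softmax(z) - y. The floating-point version below is the standard formula; the paper's 8-bit, TFHE-friendly approximation of it is not reproduced here.

```python
import numpy as np

def cross_entropy_grad(logits, y_onehot):
    """Textbook gradient of softmax cross-entropy w.r.t. the logits.
    Silenzio replaces this float computation with a low-bitwidth,
    TFHE-friendly approximation."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return p - y_onehot
```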