This is a branch of machine learning in which one trains a model or performs inference without exposing sensitive information.
Such information can leak when data is transmitted to an untrusted computing server, or when the model itself reveals the structure of its training data (Ateniese et al. 2013; Song, Ristenpart, and Shmatikov 2017).
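To see how a model alone can expose its training data, consider a toy sketch (not the attacks described in the cited papers): a 1-nearest-neighbor classifier, whose fitted "parameters" are literally the stored training records, so whoever receives the model also receives the data verbatim. All names here (`train`, `predict`) are illustrative.

```python
import numpy as np

def train(X, y):
    # The fitted "model" is just the stored training set -- nothing is abstracted away.
    return {"X": np.asarray(X, dtype=float), "y": np.asarray(y)}

def predict(model, x):
    # Classify by the label of the closest stored training point.
    dists = np.linalg.norm(model["X"] - np.asarray(x, dtype=float), axis=1)
    return model["y"][int(np.argmin(dists))]

model = train([[0.0, 0.0], [1.0, 1.0]], [0, 1])
print(predict(model, [0.9, 1.1]))  # -> 1

# Anyone holding the model can read back every training record exactly:
print(model["X"].tolist())  # -> [[0.0, 0.0], [1.0, 1.0]]
```

Real models leak less directly, but the cited work shows that even learned parameters can encode meaningful properties of, or verbatim records from, the training set.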
- Ateniese, Giuseppe, Giovanni Felici, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, and Domenico Vitali. June 19, 2013. "Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers". arXiv:1306.4447 [Cs, Stat]. http://arxiv.org/abs/1306.4447.
- Song, Congzheng, Thomas Ristenpart, and Vitaly Shmatikov. September 22, 2017. "Machine Learning Models That Remember Too Much". arXiv:1709.07886 [Cs]. http://arxiv.org/abs/1709.07886.