Privacy-Preserving Artificial Intelligence Techniques in Biomedicine
Abstract
Artificial intelligence (AI) has been successfully applied in numerous scientific domains including biomedicine and healthcare. Here, it has led to several breakthroughs ranging from clinical decision support systems and image analysis to whole genome sequencing. However, training an AI model on sensitive data also raises concerns about the privacy of individual participants. Adversarial AI models, for example, can abuse even summary statistics of a study to determine the presence or absence of an individual in a given dataset. This has resulted in increasing restrictions on access to biomedical data, which in turn is detrimental to collaborative research and impedes scientific progress. Hence, there has been an explosive growth in efforts to harness the power of AI for learning from sensitive data while protecting patients’ privacy. This paper provides a structured overview of recent advances in privacy-preserving AI techniques in biomedicine. It places the most important state-of-the-art approaches within a unified taxonomy and discusses their strengths, limitations, and open problems.
Introduction
AI strives to emulate human intelligence and to develop intelligent algorithms that undertake complicated tasks. For many complex tasks, AI already surpasses humans in terms of accuracy, speed and cost. Recently, the rapid adoption of AI and its subfields, specifically machine learning and deep learning, has led to substantial progress in applications such as autonomous driving [Schwarting_2018], text translation [gehring2017convolutional] and voice assistance [xiong2018microsoft]. At the same time, AI is becoming essential in biomedicine, where it has increasingly captured the attention of researchers. In particular, the rise of big data in healthcare makes it necessary to develop techniques that help scientists to gain understanding from it [HolzingerKieseWeipplTjoa:2018:trends].
Success stories such as acquiring the compressed representation of drug-like molecules [gomez2018automatic], modeling the hierarchical structure and function of a cell [ma2018using], and translating magnetic resonance images to computed tomography [nie2018medical] using deep learning models illustrate the remarkable performance of these AI approaches. AI has not only achieved remarkable success in analyzing biomedical data [hosny2018artificial, beam2018big, yu2018artificial, michael2018visible, chen2018rise, wainberg2018deep, min2017deep, litjens2017survey, shen2017deep, jiang2017artificial, libbrecht2015machine], but has also surpassed humans in applications such as sepsis prediction [nemati2018interpretable], malignancy detection on mammography [teare2017malignancy], and mitosis detection in breast cancer [veta2015assessment].
Despite these AI-fueled advancements, important privacy concerns have been raised regarding the individuals who contribute to the training datasets. While taking care of the confidentiality and privacy of sensitive biological data is crucial, several studies showed that AI techniques often do not maintain data privacy [shokri2017membership, papernot2018sok, zhang2016understanding]. In general, attacks known as membership inference can be used to infer an individual’s membership by querying over the dataset [shringarpure2015privacy] or the trained model [shokri2017membership], or by having access to certain statistics about the dataset [homer2008resolving, harmanci2018analysis, wang2009learning]. Homer et al. [homer2008resolving] showed that under some assumptions, adversaries can use the genomic statistics published as the results of genome-wide association studies (GWAS) to find out if an individual was a part of the study. Another example of this kind of attack was demonstrated on Genomics Beacons [beacon, shringarpure2015privacy], in which an adversary (an attacker who attempts to invade data privacy) could identify the presence of an individual in the dataset by simply querying the presence of a particular allele. Moreover, the attacker could identify the relatives of those individuals and obtain sensitive disease information [wang2009learning]. Besides targeting the training dataset, an adversary may attack a fully trained AI model to extract individual-level membership by training an adversarial inference model that learns the behaviour of the target model [shokri2017membership].
As a result of the aforementioned studies, health research centers such as the National Institutes of Health (NIH) as well as hospitals have restricted access to the pseudonymized data [zerhouni2008protecting, erlich2014routes, naveed2015privacy]. Furthermore, data privacy laws such as those enforced by the Health Insurance Portability and Accountability Act (HIPAA), and the Family Educational Rights and Privacy Act (FERPA) in the US as well as the EU General Data Protection Regulation (GDPR) restrict the use of sensitive data [GDPR, cohen2020towards]. Consequently, everyone who needs access to these datasets has to go through a difficult approval process, which significantly impedes collaborative research. Therefore, both industry and academia urgently need to apply privacypreserving techniques to respect individual privacy and comply with these laws.
This paper provides a systematic overview of various recently proposed privacy-preserving AI techniques, which facilitate collaboration between health research institutes while ensuring data privacy. Several efforts exist to tackle the privacy concerns in the biomedical domain, some of which have been examined in a couple of surveys [aziz2019privacy, xu2019federated, Kaissis_2020]. Aziz et al. [aziz2019privacy] investigated previous studies which employed differential privacy and cryptographic techniques for human genomic data. Kaissis et al. [Kaissis_2020] briefly reviewed federated learning, differential privacy, and cryptographic techniques applied in medical imaging. Xu et al. [xu2019federated] surveyed general solutions to challenges in federated learning, including communication efficiency, optimization, and privacy, and discussed possible applications of federated learning, including a few examples in healthcare. Our review differs from previous works in several respects. Compared to [aziz2019privacy] and [Kaissis_2020], this paper covers a broader set of privacy-preserving techniques, including federated learning and hybrid approaches, as well as a wider range of problems, such as privacy-preserving medical image segmentation and electronic health record classification. In contrast to [xu2019federated], which only surveyed federated learning and hybrid approaches, this paper also discusses cryptographic techniques and differential privacy approaches and their applications in healthcare. Moreover, it covers a wider range of studies which employed four different privacy-preserving techniques for healthcare applications and compares the approaches using different criteria such as privacy, accuracy, and efficiency.
The approaches presented in this review are divided into four categories: cryptographic techniques, differential privacy, federated learning, and hybrid approaches. First, we describe how cryptographic techniques, in particular homomorphic encryption (HE) and secure multi-party computation (SMPC), ensure the secrecy of sensitive data by carrying out computations on encrypted biological data. Next, we illustrate the differential privacy approach and its capability of quantifying individuals’ privacy in published summary statistics of, for instance, GWAS data and deep learning models trained on clinical data. Then, we elaborate on federated learning, which allows health institutes to train AI models locally and to share only selected model parameters, rather than sensitive data, with a coordinator, who aggregates them and builds a global model. Following that, we discuss hybrid approaches which enhance data privacy by combining multiple privacy-preserving techniques. We elaborate on the strengths and drawbacks of each approach as well as its applications in biomedicine and healthcare. Next, we compare the approaches from different perspectives such as computational and communication efficiency, accuracy, and privacy. Afterwards, we discuss the most realistic approaches from a practical viewpoint and provide a list of open problems and challenges to the adoption of these techniques in real-world healthcare applications.
Cryptographic Techniques
In the healthcare domain and GWAS in particular, cryptographic techniques have been used to collaboratively compute result statistics while preserving data privacy [cho2018secure, bonte2018towards, jagadeesh2019keeping, kim2015private, lauter2014private, lu2015privacy, zhang2015foresee, kamm2013new, constable2015privacy, zhang2015secure, mohassel2017secureml]. These cryptographic approaches are based on HE [gentry2009fully] or SMPC [cramer2015secure]. HE enables the computation of addition and multiplication over encrypted data.
HE-based approaches share three steps (Figure 1):

1. Participants (e.g. hospitals or medical centers) encrypt their private data and send the encrypted data to a computing party.
2. The computing party calculates the statistics over the encrypted data and shares the statistics (which are encrypted) with the participants.
3. The participants access the results by decrypting them.
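The computation on ciphertexts in step 2 can be sketched with a toy Paillier cryptosystem, an additively homomorphic scheme. This is an illustrative sketch only, not taken from the surveyed works: the hard-coded primes are far too small to be secure, and all function names are our own.

```python
import math
import random

def keygen(p=1000003, q=1000033):
    # toy primes for illustration; real deployments use >= 2048-bit moduli
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # modular inverse (Python 3.8+)
    return n, (lam, mu, n)

def encrypt(n, m):
    # Paillier encryption: c = (1 + n)^m * r^n mod n^2
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

def add_encrypted(n, c1, c2):
    # multiplying ciphertexts adds the underlying plaintexts
    return c1 * c2 % (n * n)

pk, sk = keygen()
total = add_encrypted(pk, encrypt(pk, 20), encrypt(pk, 22))
print(decrypt(sk, total))  # 42, computed without decrypting the inputs
```

A computing party holding only the public modulus n can thus sum encrypted statistics from several participants, while only the participants holding the secret key can read the result.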
In SMPC, there are multiple participants as well as a couple of computing parties which perform computations on secret shares from the participants. Given N participants and M computing parties, SMPC-based approaches follow three steps (Figure 2):

1. Each participant sends a separate and different secret share to each of the M computing parties.
2. Each computing party computes intermediate results on the secret shares received from the participants and shares these intermediate results with the other computing parties.
3. Each computing party aggregates the intermediate results from all computing parties, including itself, to calculate the final (global) results. In the end, the final results computed by all computing parties are identical and can be shared with the participants.
To clarify the concepts of secret sharing [shamir1979secretshare] and multi-party computation, consider a scenario [jagadeesh2019smpcscenario] with two participants P1 and P2 and two computing parties C1 and C2. P1 and P2 possess the private data x1 and x2, respectively. The aim is to compute x1 + x2, where neither P1 nor P2 reveals its data to the computing parties. To this end, P1 and P2 generate random numbers r1 and r2, respectively; P1 reveals x1 − r1 to C1 and r1 to C2; likewise, P2 shares x2 − r2 with C1 and r2 with C2; x1 − r1, r1, x2 − r2, and r2 are secret shares. C1 computes (x1 − r1) + (x2 − r2) and sends it to C2, and C2 calculates r1 + r2 and reveals it to C1. Both C1 and C2 sum the result they computed and the result each obtained from the other computing party. The sum is in fact x1 + x2, which can be shared with P1 and P2.
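This additive secret-sharing protocol can be sketched in a few lines of Python; the modulus, function names, and values are illustrative (production SMPC frameworks use authenticated secret sharing over finite fields):

```python
import random

MOD = 2**61 - 1  # large prime; an individual share is uniform and reveals nothing

def make_shares(x):
    # split a private value into two additive secret shares
    r = random.randrange(MOD)
    return (x - r) % MOD, r

def secure_sum(x1, x2):
    a1, b1 = make_shares(x1)  # a1 goes to computing party C1, b1 to C2
    a2, b2 = make_shares(x2)
    partial_c1 = (a1 + a2) % MOD  # computed by C1 from its shares
    partial_c2 = (b1 + b2) % MOD  # computed by C2 from its shares
    # exchanging the partial sums reconstructs x1 + x2 and nothing more
    return (partial_c1 + partial_c2) % MOD

print(secure_sum(40, 2))  # 42
```

Working modulo a large prime makes each share look like a uniformly random number, which is why a single non-colluding computing party learns nothing about the inputs.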
| Authors | Year | Privacy Technique | Model | Application |
| --- | --- | --- | --- | --- |
| Kim et al. [kim2015private] | 2015 | HE | chi-square statistics, minor allele frequency, Hamming distance, edit distance | genetic associations |
| Lu et al. [lu2015privacy] | 2015 | HE | chi-square statistics | genetic associations |
| Lauter et al. [lauter2014private] | 2014 | HE | D' and r^2 measures, Pearson goodness-of-fit, expectation maximization, Cochran-Armitage | genetic associations |
| Kim et al. [kim2018secure] | 2018 | HE | logistic regression | medical decision making |
| Morshed et al. [morshed2018parallel] | 2018 | HE | linear regression | medical decision making |
| Kamm et al. [kamm2013new] | 2013 | SMPC | chi-square statistics | genetic associations |
| Constable et al. [constable2015privacy], Zhang et al. [zhang2015secure] | 2015 | SMPC | chi-square statistics | genetic associations |
| Shi et al. [shi2016smpclogistic] | 2016 | SMPC | logistic regression | genetic associations |
| Bloom [bloom2019smpclinear] | 2019 | SMPC | linear regression | genetic associations |
| Cho et al. [cho2018secure] | 2018 | SMPC | quality control, population stratification correction | genetic associations |
It is worth mentioning that to preserve data privacy, the computing parties C1 and C2 must be non-colluding. That is, C1 must not send x1 − r1 and x2 − r2 to C2, and C2 must not share r1 and r2 with C1. Otherwise, the computing parties can compute x1 = (x1 − r1) + r1 and x2 = (x2 − r2) + r2, revealing the participants’ data. In general, in an SMPC with M computing parties, data privacy is protected as long as at most M − 1 computing parties collude with each other. The larger M, the stronger the privacy but the higher the communication overhead and processing time.
Several studies use HE to develop secure, privacy-aware algorithms for healthcare data. Kim et al. [kim2015private] and Lu et al. [lu2015privacy] implemented a secure chi-square test for GWAS data using HE. Lauter et al. [lauter2014private] developed privacy-preserving versions of common statistical tests in GWAS, such as the Pearson goodness-of-fit test, tests for linkage disequilibrium, and the Cochran-Armitage trend test. Kim et al. [kim2018secure] and Morshed et al. [morshed2018parallel] presented a secure logistic regression algorithm for GWAS and a secure linear regression algorithm for healthcare data, respectively, based on HE.
Other studies mainly capitalized on SMPC to implement different privacy-preserving algorithms applicable to healthcare data. Zhang et al. [zhang2015secure], Constable et al. [constable2015privacy], and Kamm et al. [kamm2013new] developed secure chi-square tests based on SMPC for GWAS data. Shi et al. [shi2016smpclogistic] developed a secure logistic regression algorithm using SMPC. Bloom [bloom2019smpclinear] implemented a secure linear regression algorithm based on SMPC for GWAS data. Cho et al. [cho2018secure] introduced an SMPC-based framework to facilitate quality control and population stratification correction for large-scale GWAS and showed that their framework is scalable to one million individuals and half a million single nucleotide polymorphisms (SNPs).
Despite the promises of privacy-preserving algorithms leveraging cryptographic techniques (Table 1), the road to the wide adoption of these algorithms in the biomedicine and healthcare community is long [berger2019cryptlimit]. The major limitations of HE are its small set of supported operations and its computational overhead [chialva2018helimit]. HE supports only addition and multiplication, and as a result, developing complex AI models with non-linear operations, such as deep neural networks (DNNs), using HE is very challenging. Moreover, HE incurs remarkable computational overhead since it performs operations on encrypted data. The main constraints of SMPC are computational overhead and network bottlenecks [alexandru2020smpclimits]. Similar to HE, SMPC suffers from high overhead, which comes from operating on secret shares from a large number of participants or a large amount of data. Additionally, SMPC consumes high network bandwidth because participants need to send a large number of secret shares to the computing parties, which, in turn, send intermediate results to the other parties. Unlike HE, SMPC is flexible in terms of operations. On the other hand, HE is more communication-efficient than SMPC. Neither HE- nor SMPC-based algorithms are scalable due to their computational overhead, which hinders their adoption for large-scale biomedical and healthcare data [berger2019cryptlimit].
Differential Privacy
One of the state-of-the-art concepts for eliminating and quantifying the chance of information leakage that has gained considerable attention in recent years is differential privacy [su2016differentially, abadi2016deep, phan2016differential, beaulieu2019privacy, ren2018textsf, cormode2018marginal, johnson2013privacy]. Differential privacy [dwork2016calibrating, dwork2006our, nissim2017differential] is a mathematical model that encapsulates the idea of injecting enough randomness or noise into sensitive data so that even a strong adversary with arbitrary auxiliary information about the data will still be uncertain in identifying any of the individuals in the dataset. Its primary goal is to camouflage the contribution of every single individual by inserting uncertainty into the learning process. It has become a standard in data protection and has been effectively deployed by Google [erlingsson2014rappor] and Apple [thakurta2017learning] as well as agencies such as the United States Census Bureau. Furthermore, it has drawn the attention of researchers in privacy-sensitive fields such as biomedicine and healthcare [johnson2013privacy, beaulieu2018privacy, fienberg2011privacy, uhlerop2013privacy, yu2014scalable, han2018differential, tramer2015differential, vu2009differential, yu2014differentially, honkela2018efficient, han2019differential, simmons2016realizing, simmons2016enabling, wang2014differentially, wan2017controlling, al2017aftermath].
Differential privacy ensures that the model we train does not overfit the sensitive data of a particular user. In particular, a model trained on a dataset containing information of a specific individual should be statistically indistinguishable from a model trained without the individual (Figure 3). As an example, assume that a patient would like to give consent to his/her doctor to include his/her personal health record in a medical dataset to study the association between age and cardiovascular disease. Differential privacy provides a mathematical guarantee which captures the privacy risk associated with the patient’s participation in the study and describes to what extent the analyst or a potential adversary can learn about that particular individual in the dataset.
More formally, a randomized algorithm M (an algorithm that has randomness in its logic and whose output can vary even on a fixed input) is (ε, δ)-differentially private if for all subsets S of possible outputs and for all adjacent datasets D and D′ that differ in at most one record the following inequality holds:

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ

Here, ε and δ are privacy loss parameters, where lower values imply stronger privacy guarantees. δ is an exceedingly small value (e.g. 10^−5) indicating the probability of an uncontrolled breach, where the algorithm produces a specific output only in the presence of a specific individual and not otherwise. ε represents the worst-case privacy breach in the absence of any such rare breach. If you assume δ = 0, you have a pure ε-differentially private algorithm, while if you consider δ > 0 to approximate the case in which pure differential privacy is broken, you have an approximate (ε, δ)-differentially private algorithm.
Two important properties of differential privacy are composability [kairouz2017composition] and resilience to post-processing. Composability means that combining multiple differentially private algorithms yields another differentially private algorithm. More precisely, if you combine k (ε, δ)-differentially private algorithms, the composed algorithm is at least (kε, kδ)-differentially private. Differential privacy also assures resistance to post-processing: passing the output of an (ε, δ)-differentially private algorithm to any arbitrary randomized algorithm will still uphold the (ε, δ)-differential privacy guarantee.
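As a minimal illustration of these definitions, the classic Laplace mechanism releases a count with pure ε-differential privacy. The dataset, variable names, and parameter values below are hypothetical and not taken from any of the surveyed works:

```python
import random

def laplace_noise(scale):
    # the difference of two exponentials is Laplace(0, scale) distributed
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(dataset, predicate, epsilon):
    # a counting query has sensitivity 1: adding or removing one record
    # changes the count by at most 1, so Laplace noise with scale
    # 1/epsilon yields an epsilon-differentially private answer
    true_count = sum(1 for row in dataset if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [52, 67, 71, 44, 80, 63]  # hypothetical patient ages
noisy_count = dp_count(ages, lambda a: a > 60, epsilon=0.5)
```

A smaller ε means a larger noise scale and thus stronger privacy but a less accurate released count, which is exactly the privacy-utility trade-off discussed below.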
The community efforts to ensure the privacy of sensitive biomedical data using differential privacy can be grouped into four categories according to the problem they address (Table 2):

1. Approaches to query genomics databases [johnson2013privacy, wan2017controlling, al2017aftermath].
2. Statistical and AI modeling techniques in biomedicine [honkela2018efficient, vu2009differential, yu2014differentially, han2019differential, simmons2016realizing, simmons2016enabling].
3. Data release, i.e., releasing summary statistics such as p-values and contingency tables [fienberg2011privacy, uhlerop2013privacy, yu2014scalable, wang2014differentially].
4. Training privacy-preserving generative models [abay2018privacy, beaulieu2019privacy, jordon2018pate].
Studies in the first category proposed solutions to reduce the privacy risks of genomics databases such as GWAS databases and the genomics Beacon service [fiume2019federated]. The Beacon Network [beacon] is an online web service developed by the Global Alliance for Genomics and Health (GA4GH) through which users can query the data provided by owners or research institutes, ask about the presence of a genetic variant in the database, and receive a YES/NO response. Studies have shown that an attacker can detect membership in the Beacon or GWAS by querying these databases multiple times and asking different questions [shringarpure2015privacy, raisaro2018protecting, hardt2012simple]. In a recent work, Aziz et al. [al2017aftermath] proposed two lightweight algorithms to make the Beacon’s responses inaccurate by controlling a bias variable. These algorithms decide when to answer a query correctly or incorrectly according to specific conditions on the bias variable, so that it becomes harder for the attacker to succeed. In another work, Johnson et al. [johnson2013privacy] developed a differentially private query-answering framework. With this framework, analysts can explore GWAS data without any prior knowledge of the number and location of SNPs in the DNA sequence. The analysts can retrieve statistical properties such as the correlation between SNPs and get an almost accurate answer while the GWAS dataset is protected against privacy risks.
Some of the efforts in the second category addressed the privacy concerns in GWAS data analysis by introducing differentially private logistic regression to identify associations between SNPs and diseases [han2019differential] or associations among multiple SNPs [yu2014differentially]. Honkela et al. [honkela2018efficient] improved drug sensitivity prediction by effectively employing differential privacy for Bayesian linear regression. Moreover, Simmons et al. [simmons2016enabling] presented a differentially private EIGENSTRAT (PrivSTRAT) [price2006principal] and linear mixed model (PrivLMM) [yang2014advantages] to correct for population stratification. In another work, Simmons et al. [simmons2016realizing] tackled the problem of finding significant SNPs by modeling it as an optimization problem. Solving this problem provides a differentially private estimate of the neighbor distance for all SNPs, such that high-scoring SNPs can be found.
| Authors | Year | Model | Application |
| --- | --- | --- | --- |
| Aziz et al. [al2017aftermath] | 2017 | eliminating random positions, biased random response | querying genomics databases |
| Johnson et al. [johnson2013privacy] | 2013 | differentially private query-answering framework | querying genomics databases |
| Han et al. [han2019differential], Yu et al. [yu2014differentially] | 2019, 2014 | logistic regression | genetic associations |
| Honkela et al. [honkela2018efficient] | 2018 | Bayesian linear regression | drug sensitivity prediction |
| Simmons et al. [simmons2016enabling] | 2016 | EIGENSTRAT, linear mixed model | genetic associations |
| Simmons et al. [simmons2016realizing] | 2016 | nearest-neighbor optimization | genetic associations |
| Fienberg et al. [fienberg2011privacy], Uhlerop et al. [uhlerop2013privacy], Yu et al. [yu2014scalable], Wang et al. [wang2014differentially] | 2011, 2013, 2014, 2014 | perturbed summary statistics (p-values, contingency tables) | genetic associations |
| Abay et al. [abay2018privacy] | 2018 | deep autoencoder | generating artificial medical data |
| Beaulieu et al. [beaulieu2019privacy] | 2019 | GAN | simulating SPRINT trial |
| Jordon et al. [jordon2018pate] | 2018 | GAN | generating artificial medical data |
The third category focused on releasing summary statistics such as p-values, contingency tables, and minor allele frequencies in a differentially private fashion. The common approach in these works is to add Laplace noise to the true value of the statistics, so that sharing the perturbed statistics preserves the privacy of individuals. The works vary in the sensitivity of their algorithms (that is, the maximum change in the output of an algorithm caused by the presence or absence of a single data point) and hence require different amounts of injected noise [fienberg2011privacy, uhlerop2013privacy, wang2014differentially].
The fourth category proposed novel privacy-protecting methods to generate synthetic healthcare data leveraging differentially private generative models (Figure 4). Deep generative models, such as generative adversarial networks (GANs), can be trained on sensitive biomedical data to capture its properties and generate artificial data with characteristics similar to the original data.
Abay et al. [abay2018privacy] presented a differentially private deep generative model, DP-SYN, a generative autoencoder that splits the input data into multiple partitions, then learns and simulates the representation of each partition while maintaining the privacy of the input data. They assessed the performance of DP-SYN on sensitive datasets of breast cancer and diabetes. Beaulieu et al. [beaulieu2019privacy] trained an auxiliary classifier GAN (AC-GAN) in a differentially private manner to simulate the participants of the SPRINT trial (Systolic Blood Pressure Intervention Trial), so that the clinical data can be shared while respecting participants’ privacy. In another approach, Jordon et al. [jordon2018pate] introduced a differentially private GAN, PATE-GAN, and evaluated the quality of its synthetic data on the Meta-Analysis Global Group in Chronic Heart Failure (MAGGIC) and the United Network for Organ Sharing (UNOS) datasets.
Despite the aforementioned achievements in adopting differential privacy in the field, several challenges remain to be addressed. Although differential privacy involves less network communication, memory usage, and time complexity than cryptographic techniques, it still struggles to give highly accurate results within a reasonable privacy budget, namely the intended ε and δ, on large-scale datasets such as genomics datasets [wang2017genome, aziz2019privacy]. In more detail, since genomics datasets are huge, the sensitivity of the algorithms applied to these datasets is large. Hence, the amount of distortion required for anonymization increases significantly, sometimes to the extent that the results are no longer meaningful [KiesebergEtAl:2014:protecting]. Therefore, to make differential privacy more practical in the field, balancing the trade-off between privacy and utility demands more attention than it has received [vu2009differential, han2018differential, tramer2015differential, wang2014differentially].
Federated Learning
Federated learning [mcmahan2016federated] is a type of distributed learning where multiple clients (e.g. hospitals) collaboratively learn a model under the coordination of a central server while preserving the privacy of their data [MalleEtAl:2017:FederatedLearning, kairouz2019advancesfederated]. Instead of sharing its private data with the server or the other clients, each client extracts knowledge (that is, model parameters) from its data and transfers it to the server for aggregation (Figure 5).
Federated learning is an iterative process in which each iteration consists of the following steps [kairouz2019advancesfederated]:

1. The server chooses a set of clients to participate in the current iteration of the model.
2. The selected clients obtain the current model from the server.
3. Each selected client computes local parameters using the current model and its private data (e.g., runs the gradient descent algorithm initialized by the current model on its local data to obtain the local gradient updates).
4. The server collects the local parameters from the selected clients and aggregates them to update the current model.
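The four steps above can be sketched for a one-parameter linear model trained with federated-averaging-style weighted aggregation. All data, function names, and hyperparameters below are illustrative:

```python
def local_update(w, data, lr=0.01):
    # step 3: one gradient-descent step for the linear model y ~ w * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad, len(data)

def federated_round(global_w, clients):
    # steps 2-3: every selected client trains locally on its private data
    updates = [local_update(global_w, data) for data in clients]
    # step 4: the server averages the weights, weighted by sample count
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# toy data generated from y = 3x, split across two "hospitals"
clients = [[(x, 3 * x) for x in (1, 2, 3)],
           [(x, 3 * x) for x in (4, 5)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # 3.0
```

Only the scalar weight and the sample count ever leave a client; the raw (x, y) pairs stay local, which is the core privacy argument of federated learning.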
The data of the clients can be considered as a table, where rows represent samples (e.g., individuals) and columns represent features or labels (e.g., age, blood pressure, case vs. control). We refer to the set of samples, features, and labels of the data as sample space, feature space, and label space, respectively. Federated learning can be categorized into three types based on the distribution characteristics of the clients’ data:

1. Horizontal (sample-based) federated learning [yang2019federatedconcepts]: Data from different clients shares a similar feature space but is very different in sample space. As an example, consider two hospitals in two different cities which collected similar information such as age or sex. In this case, the feature spaces are similar; but because the people who participated in the hospitals’ data collections are from different cities, their intersection is most probably very small, and the sample spaces are hence very different.
2. Vertical (feature-based) federated learning [yang2019federatedconcepts]: Clients’ data is similar in sample space but very different in feature space. For example, two hospitals with different expertise in the same city might collect different information (different feature space) from almost the same people (similar sample space).
3. Hybrid federated learning: Both feature space and sample space are different in the data from the clients. For example, consider a medical center with expertise in brain image analysis located in New York and a research center with expertise in protein research based in Berlin. Their data is completely different (image vs. protein data) and disjoint groups of individuals participated in the data collection of each center.
To illustrate the concept of federated learning, consider a scenario with two hospitals A and B. A and B possess lists LA and LB, containing the ages of their cancer patients, respectively. A simple federated mean algorithm to compute the average age of cancer patients in both hospitals, without revealing the real values of LA and LB, works as follows (for the sake of brevity, we assume that both hospitals are selected in the first step and that the current global model parameters in the second step are zero; see the federated learning steps):

1. Hospital A computes the average age (mA) and the number of its cancer patients (nA). Hospital B does the same, resulting in mB and nB. Here, LA and LB are private data, while mA, nA, mB, and nB are the parameters extracted from the private data.
2. The server obtains the values of the local model parameters from the hospitals and computes the global mean as follows:

   m = (nA · mA + nB · mB) / (nA + nB)
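This federated mean computation can be sketched in a few lines; the ages and variable names are hypothetical:

```python
def federated_mean(local_stats):
    # server-side aggregation: each hospital shares only
    # (local mean, local count), never the raw ages
    total = sum(n for _, n in local_stats)
    return sum(m * n for m, n in local_stats) / total

ages_a = [50, 60, 70]  # private to hospital A
ages_b = [45, 85]      # private to hospital B
stats = [(sum(ages_a) / len(ages_a), len(ages_a)),   # (mA, nA)
         (sum(ages_b) / len(ages_b), len(ages_b))]   # (mB, nB)
print(federated_mean(stats))  # 62.0, the mean over all five patients
```

The weighted aggregation reproduces the exact mean over the pooled data, even though the server never sees an individual patient's age.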
Two well-known concepts in machine learning are also related to federated learning: transfer learning [pan2009transferlearning] and multi-task learning [caruana1997multitask, zhang2017mtlsurvey]. In transfer learning, there are a source task and a destination task, and the aim is to transfer knowledge from the source to the destination task. As an example of federated transfer learning, suppose that hospital A has a DNN model trained on its rich dataset of medical images (source task). Hospital B wants to train a DNN model on a dataset containing brain images (a special kind of medical image) of its cancer patients (destination task), but the dataset does not have enough samples. Hospital B can take advantage of hospital A’s DNN model by incorporating some parts of the source model into its own DNN model (knowledge transfer) instead of training the model from scratch on its dataset [HolzingerHaibeJurisica:2019:ImagingIntegration].
In multi-task learning, there are multiple tasks and the goal is to exchange knowledge among the tasks to improve the performance (accuracy) of all tasks. As an example of federated multi-task learning, assume hospitals A and B again, where hospital A’s task is to train a DNN model on its cancer image dataset and hospital B’s task is to train a logistic regression model on a dataset including the age, sex, and genetic variants of its cancer patients. Here, both the DNN and logistic regression models are trained concurrently (and iteratively), and the knowledge (weights) from both models is exchanged in each iteration to improve (tune the weights of) both models.
| Authors | Year | Model | Application |
| --- | --- | --- | --- |
| Sheller et al. [sheller2018multi] | 2018 | DNN | medical image segmentation |
| Chang et al. [chang2018distributed], Balachandar et al. [balachandar2020accounting] | 2018, 2020 | DNN | medical image classification |
| Nasirigerdeh et al. [Nasirigerdeh2020splink] | 2020 | linear regression, chi-square, logistic regression | GWAS |
| | | logistic regression | GWAS |
| Brisimi et al. [brisimi2018federated] | 2018 | support vector machine | classifying electronic health records |
| Huang et al. [huang2018loadaboost] | 2018 | adaptive boosting ensemble | classifying medical data |
| Liu et al. [liu2018fadl] | 2018 | autonomous deep learning | classifying medical data |
| Chen et al. [chen2019fedhealth] | 2019 | transfer learning | training wearable healthcare devices |
A crucial consideration in both transfer and multi-task learning is task relatedness. Employing unrelated tasks can lead to negative knowledge transfer and deteriorate the performance of the model(s). To learn more about transfer/multi-task learning, interested readers are referred to [pan2009transferlearning, smith2017fedmtl, caruana1997multitask, zhang2017mtlsurvey]. Moreover, federated transfer/multi-task learning can be a horizontal or hybrid federated learning approach. In the example given for federated transfer learning, if the shape of the images in the source and destination tasks (the feature space) is the same, it is a horizontal approach; otherwise, it is a hybrid federated learning approach, similar to the example given for federated multi-task learning.
The emerging demand for federated learning has given rise to a wealth of open-source frameworks, both for simulation [TFF, ryffel2018generic] and for production [FATE, PaddleFL]. Additionally, there are AI platforms whose goal is to apply federated learning in real-world healthcare settings [FC, ClaraFL]. In the following, we survey works on federated AI techniques in biomedicine and healthcare (Table 3). Recent studies have mainly focused on horizontal federated learning; only a few vertical federated learning and federated transfer/multi-task learning algorithms are applicable to healthcare and biomedical data.
A number of studies provided solutions for the lack of sufficient data due to the privacy challenges in the medical imaging domain [sheller2018multi, vepakomma2018split, vepakomma2019reducing, poirot2019split, balachandar2020accounting, chang2018distributed]. For instance, Sheller et al. developed a supervised DNN in a federated way for semantic segmentation of brain gliomas from magnetic resonance imaging scans [sheller2018multi]. Chang et al. [chang2018distributed] simulated a distributed DNN in which multiple participants collaboratively update the model weights using training heuristics such as single weight transfer and cyclical weight transfer (CWT). They evaluated this distributed model on image classification tasks using medical image datasets such as mammography and retinal fundus image collections, which were evenly distributed among the participants. Balachandar et al. [balachandar2020accounting] optimized CWT for cases where the datasets are unevenly distributed across participants. They assessed their optimization methods on simulated diabetic retinopathy detection and chest radiograph classification.
Federated linear/logistic regression and chi-square tests have been developed for sensitive biological data that is vertically or horizontally distributed [Nasirigerdeh2020splink, wu2012g, wang2013expectation, li2016vertical]. Grid binary logistic regression (GLORE) [wu2012g] and expectation propagation logistic regression (EXPLORER) [wang2013expectation] are horizontal federated learning approaches designed for clinical data. Unlike GLORE, EXPLORER supports asynchronous communication and online learning, so that the system can continue collaborating when a participant is absent or communication is interrupted. Li et al. presented VERTIGO [li2016vertical], a vertical grid logistic regression algorithm designed for vertically distributed biological datasets such as breast cancer genome and myocardial infarction data. Nasirigerdeh et al. [Nasirigerdeh2020splink] developed sPLINK, a horizontally federated tool set for GWAS that supports the chi-square test, linear regression, and logistic regression. Notably, the federated results from sPLINK on distributed datasets are the same as those from aggregated analysis conducted with PLINK [purcell2007plink]. Moreover, the authors showed that sPLINK is robust against heterogeneous (imbalanced) data distributions across clients and does not lose accuracy in such scenarios.
Moreover, there are studies that combine federated learning with other traditional AI modeling techniques such as ensemble learning, support vector machines (SVMs), and principal component analysis (PCA) [brisimi2018federated, huang2018loadaboost, liu2018fadl, chen2019fedhealth, silva2019federated]. Brisimi et al. [brisimi2018federated] presented a federated soft-margin support vector machine (sSVM) for distributed electronic health records. Huang et al. [huang2018loadaboost] introduced LoAdaBoost, a federated adaptive boosting method for learning from medical data such as intensive care unit data from distinct hospitals [pollard2018eicu], while Liu et al. [liu2018fadl] trained a federated autonomous deep learner to the same end. There have also been attempts at incorporating federated learning into multi-task learning and transfer learning in general [smith2017federated, corinzia2019variational, liu2018secure]. However, to the best of our knowledge, FedHealth [chen2019fedhealth] is the only federated transfer learning framework specifically designed for healthcare applications. It enables users to train personalized models for their wearable healthcare devices by aggregating data from different organizations without compromising privacy.
One of the major challenges for adopting federated learning in large-scale healthcare applications is the significant network communication overhead, especially for complex AI models such as DNNs, which contain millions of model parameters and require thousands of iterations to converge. A rich body of literature, known as communication-efficient federated learning, exists to tackle this challenge. These approaches fall into three categories: gradient quantization [gupta2015gradquant], gradient sparsification [aji2017gradsparse], and performing multiple local updates on the clients per global model update [mcmahan2016communication].
Authors | Year | Privacy Technique | Model | Application
Li et al. [li2019privacy] | 2019 | FL+DP | DNN | medical image segmentation
Li et al. [li2020multi] | 2020 | FL+DP | domain adaptation | medical image pattern recognition
Choudhury et al. [choudhury2019differential] | 2019 | FL+DP | perceptron neural network, support vector machine, logistic regression | classifying electronic health records
Constable et al. [constable2015privacy] | 2015 | FL+SMPC | statistical analysis | genetic associations
Lee et al. [lee2018privacy] | 2018 | FL+HE | context-specific hashing | learning patient similarity
Kim et al. [kim2019secure] | 2019 | FL+DP+HE | logistic regression | classifying medical data
The main idea behind gradient quantization is to use fewer bytes per model parameter (gradient), e.g., 2 bytes instead of 8. In gradient sparsification, only a fraction of the parameters is exchanged between the server and the clients instead of all of them, saving network bandwidth. In the last category of communication-efficient approaches, the clients update their local parameters multiple times before sending them to the server, reducing the total number of iterations and, as a result, the total network bandwidth usage.
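As an illustration (not taken from the surveyed papers), the following Python sketch applies both tricks to a small made-up gradient vector:

```python
# Illustrative sketch of two communication-saving tricks: quantization and
# top-k sparsification, on a plain list standing in for a gradient vector.
import struct

grad = [0.8, -0.01, 0.0, 2.5, -1.7, 0.003]

# 1) Quantization: store each value in 2 bytes (IEEE half precision,
#    struct format 'e') instead of 8 bytes (double precision, 'd').
quantized = [struct.unpack('e', struct.pack('e', g))[0] for g in grad]
bytes_full, bytes_quant = 8 * len(grad), 2 * len(grad)

# 2) Sparsification: transmit only the k largest-magnitude entries as
#    (index, value) pairs; the server treats all other entries as zero.
k = 2
top_k = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
sparse_update = [(i, grad[i]) for i in sorted(top_k)]
print(sparse_update)  # [(3, 2.5), (4, -1.7)]
```

Quantization trades a small rounding error per parameter for a 4x bandwidth reduction here, while sparsification keeps exact values but drops most coordinates; in practice the dropped residuals are often accumulated locally and sent in later rounds.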
There is a trade-off between communication efficiency and model convergence (accuracy): communication-efficient approaches reduce the network overhead but might jeopardize model convergence. Consequently, they should be employed only as long as the accuracy of the model remains acceptable. Interested readers are referred to the relevant publications [mcmahan2016communication, tang2020communication] for detailed descriptions.
Another challenge in federated learning is the possible accuracy loss from the aggregation process if the data distribution across the clients is not independent and identically distributed (IID). More specifically, federated learning can deal with non-IID data while preserving model accuracy if the learning model is simple, such as ordinary least squares (OLS) linear regression (sPLINK [Nasirigerdeh2020splink]). However, when it comes to learning complex models such as DNNs, the global model might not converge on non-IID data across the clients. Zhao et al. [zhao2018noniid1] showed that simple averaging of the model parameters in the server significantly diminishes the accuracy of a convolutional neural network model in highly skewed non-IID settings. To solve this problem, they train a warm-up model on an IID dataset and share this model, as well as a portion of the dataset, with all clients. Each client uses its local data and the shared dataset to train the local model, and simple averaging is employed in the server to aggregate the model parameters. Developing aggregation strategies that are robust in non-IID scenarios is still an open and interesting problem in federated learning.
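For reference, the simple (FedAvg-style) weighted averaging discussed above can be sketched as follows; the client parameters and dataset sizes are made up for illustration:

```python
# Minimal sketch of server-side aggregation in federated learning:
# a weighted average of per-client parameter vectors by sample count.
def federated_average(client_params, client_sizes):
    """Return the sample-count-weighted average of the clients' parameters."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[j] * n for p, n in zip(client_params, client_sizes)) / total
            for j in range(dim)]

# Three clients with different (hypothetical) dataset sizes.
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
global_params = federated_average(params, sizes)
print(global_params)  # [4.0, 5.0]
```

Note how the client holding 60 samples dominates the average; under skewed non-IID data this weighting is exactly what can pull the global model away from the minority clients' optima.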
Finally, federated learning is based on the assumption that the centralized server is honest and not compromised, which is not necessarily the case in real applications. To relax this assumption, differential privacy or cryptographic techniques can be leveraged in federated learning, which is covered in the next section. For further reading on future directions of federated learning in general, we refer the reader to comprehensive surveys [li2019federated, kairouz2019advancesfederated, rieke2020future].
Hybrid Privacy-preserving Techniques
The hybrid techniques combine federated learning with the other paradigms (cryptographic techniques and differential privacy) to enhance privacy or provide privacy guarantees (Table 4). Federated learning preserves privacy to some extent because it does not require the health institutes to share patients' data with the central server. However, if the coordinator is compromised, the model parameters that participants share with the server might be abused to reveal the underlying private data [melis2019exploiting]. To address this issue, the participants can leverage differential privacy and add noise to the model parameters before sending them to the server (FL+DP) [geyer2017differentially, li2019privacy, li2020multi, truex2019hybrid], or they can employ HE (FL+HE) or SMPC (FL+SMPC) to share the parameters with the server securely [lee2018privacy, constable2015privacy].
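A minimal sketch of the FL+DP idea follows, assuming a Gaussian-noise mechanism with illustrative clipping and noise parameters; a real deployment would calibrate the noise scale to a formal (epsilon, delta) privacy budget:

```python
# Hedged sketch of client-side FL+DP: clip the local parameter update to a
# norm bound, then perturb it with Gaussian noise before sending it to the
# server. All constants here are illustrative.
import math
import random

random.seed(42)

def privatize_update(update, clip_norm=1.0, sigma=0.5):
    """Clip an update to L2 norm <= clip_norm, then add Gaussian noise."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    return [c + random.gauss(0, sigma * clip_norm) for c in clipped]

local_update = [0.9, -2.4, 0.3]        # hypothetical local model update
noisy = privatize_update(local_update)
# The server only ever sees `noisy`, never `local_update`.
```

Clipping bounds each client's influence (the sensitivity), which is what lets the added noise translate into a quantifiable privacy guarantee.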
Criterion | HE | SMPC | DP | FL | FL+DP | FL+HE | FL+SMPC
Accuracy | 2 | 6 | 1 | 5 | 3 | 4 | 5
Computational efficiency | 1 | 2 | 6 | 6 | 5 | 3 | 4
Network communication efficiency | 5 | 4 | 6 | 3 | 3 | 2 | 1
Privacy of exchanged traffic | 4 | 3 | NA | 1 | 2 | 4 | 3
Exchanging low-sensitivity traffic | ✗ | ✗ | NA | ✓ | ✓ | ✓ | ✓
Privacy guarantee | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗
In the biomedical field, several hybrid approaches have been presented recently. Li et al. [li2019privacy] presented a federated deep learning framework for magnetic resonance brain image segmentation in which the client side provides differential privacy guarantees on selecting and sharing the local gradient weights with the server for imbalanced data. A recent study [li2020multi] extracted neural patterns from brain functional magnetic resonance images by developing a privacy-preserving pipeline that analyzes image data of patients with different psychiatric disorders using federated domain adaptation methods. Choudhury et al. [choudhury2019differential] developed a federated differential privacy mechanism for gradient-based classification on electronic health records. There are also studies that combine federated learning with cryptographic techniques. For instance, Constable et al. [constable2015privacy] implemented a privacy-protecting structure for federated statistical analysis, such as statistics on GWAS, maintaining privacy using SMPC. In a slightly different approach, Lee et al. [lee2018privacy] presented a privacy-preserving platform for learning patient similarity across multiple hospitals using a context-specific hashing approach that employs homomorphic encryption to limit the privacy leakage. Moreover, Kim et al. [kim2019secure] presented a privacy-preserving federated logistic regression algorithm for horizontally distributed diabetes and intensive care unit datasets. In this approach, privacy is ensured by making the aggregated weights differentially private and encrypting the local weights using homomorphic encryption.
Incorporating HE, SMPC, or differential privacy into federated learning enhances privacy but also combines the limitations of the approaches. FL+HE puts much more computational overhead on the server, since it requires performing the aggregation on the clients' encrypted model parameters. The network communication overhead is exacerbated in FL+SMPC, because clients need to securely share the model parameters with multiple computing parties instead of one. FL+DP might result in inaccurate models because of the noise added to the model parameters in the clients.
Comparison
We compare the privacy-preserving techniques (HE, SMPC, differential privacy, federated learning, and the hybrid approaches) using various performance and privacy criteria: computational/communication efficiency, accuracy, privacy guarantee, exchange of sensitive traffic over the network, and privacy of the exchanged traffic (Table 5 and Figure 6). We employ a generic ranking (lowest = 1 to highest = 6) [aziz2019privacy] for all comparison criteria except privacy guarantee and exchange of sensitive traffic, which are binary. This comparison assumes that a complex model (e.g., a DNN with a huge number of parameters) is applied to large sensitive biomedical datasets distributed across dozens of clients in an IID configuration, and that there are a few computing parties in SMPC (a practical configuration).
Computational efficiency indicates the extra computational overhead an approach incurs to preserve privacy. According to Table 5 and Figure 6, differential privacy and federated learning are the best from this perspective: the noise injection procedure in differential privacy is not computationally expensive, and federated learning follows the paradigm of bringing computation to data, distributing the computational overhead among the clients. HE and SMPC are based on the opposite paradigm of moving data to computation. In HE, encrypting the clients' entire private data and carrying out computations on the encrypted data in the computing party cause a huge amount of overhead. In SMPC, a couple of computing parties process the secret shares from dozens of clients, incurring considerable computational overhead. Among the hybrid approaches, FL+DP has the best computational efficiency, given the low overhead of its two components, whereas FL+HE has the highest overhead because the aggregation process on encrypted parameters is computationally expensive.
Network communication efficiency indicates how efficiently an approach utilizes the network bandwidth: the less data traffic exchanged over the network, the more communication-efficient the approach. Federated learning is the least efficient approach in this respect, since exchanging a large number of model parameter values between the clients and the server generates a huge amount of network traffic. Notice that the network bandwidth usage of federated learning is independent of the clients' data, because federated learning does not move data to computation, but depends on the model complexity (i.e., the number of model parameters). The next approach in this regard is SMPC, where not only does each participant send a large volume of traffic (almost as big as its data) to each computing party, but the computing parties also exchange (possibly large) intermediate results with each other over the network. The network overhead of homomorphic encryption comes from sharing the clients' encrypted data (as big as the data itself) with the computing party, which is small compared to the network traffic generated by federated learning and SMPC. The best approach is differential privacy, with no network overhead. Accordingly, FL+DP and FL+SMPC are the best and worst among the hybrid approaches, respectively, from a communication-efficiency viewpoint.
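A back-of-the-envelope calculation illustrates why federated learning's network overhead scales with model complexity and training length rather than data size; all numbers below are hypothetical:

```python
# Hypothetical traffic estimate for federated training of a DNN.
n_params = 25_000_000        # e.g. a mid-sized DNN
bytes_per_param = 4          # 32-bit floats
n_clients = 20
n_rounds = 1000

# Each round, every client uploads its parameters and downloads the new
# global model (hence the factor of 2).
traffic_bytes = n_rounds * n_clients * 2 * n_params * bytes_per_param
traffic_tb = traffic_bytes / 1e12
print(round(traffic_tb, 1))  # 4.0 (terabytes of total traffic)
```

Even with these modest assumptions the total traffic reaches terabytes, which is why quantization, sparsification, and fewer communication rounds matter so much in practice.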
The accuracy of the model under a privacy-preserving approach is a crucial factor in deciding whether to adopt it. SMPC and federated learning are the most accurate approaches, incurring little or no accuracy loss in the final model. Next is homomorphic encryption, whose accuracy loss is due to approximating non-linear operations with additions and multiplications (e.g., least-squares approximation [kim2018secure]). The worst approach is differential privacy, where the added noise can considerably affect the model accuracy. Among the hybrid approaches, FL+SMPC is the best and FL+DP the worst, reflecting the accuracy of SMPC and differential privacy, respectively.
The remaining comparison measures are privacy-related. The traffic transferred from the clients (participants) to the server (computing parties) is highly sensitive if it carries the clients' private data; the less sensitive the exchanged traffic, the more robust the approach is from a privacy perspective. HE and SMPC send the encrypted and anonymized forms of the clients' private data to the server, respectively, while federated learning and the hybrid approaches share only the model parameters with the server. In HE, if the server has the key to decrypt the traffic from the clients, the clients' entire private data is revealed. The same holds if the computing parties in SMPC collude with each other. This might or might not be the case for the other approaches (e.g., federated learning), depending on the exchanged model parameters and whether they can be abused to infer the underlying private data.
Privacy of the exchanged traffic indicates how well the traffic is kept private from the server. In HE/SMPC, the data is encrypted/anonymized first and then shared with the server, which is reasonable since it is the clients' private data. In federated learning, the traffic (model parameters) is shared directly with the server, under the assumption that it does not reveal any details about individual samples in the data. The hybrid approaches aim to hide the real values of the model parameters from the server to minimize the possibility of inference attacks based on them; FL+HE is the best among the hybrid approaches from this viewpoint.
Privacy guarantee is a metric that quantifies the degree to which the privacy of the clients' data can be preserved. Differential privacy and the corresponding hybrid approach (FL+DP) are the only approaches providing a privacy guarantee, whereas all other approaches protect privacy only under certain assumptions: HE assumes that the server does not have the decryption key; SMPC assumes that the computing parties do not collude with each other; and federated learning assumes that the model parameters do not reveal any details about individual samples in the clients' data.
Discussion and open problems
From a practical point of view, homomorphic encryption and SMPC, which follow the paradigm of "move data to computation", do not scale as the number of clients or the data size per client becomes large, because they put the computational burden on a single or a few computing parties. Federated learning, on the other hand, distributes the computation across the clients (aggregation on the server is not computationally heavy), but the communication overhead between the server and the clients is the major challenge to its scalability. The hybrid approaches inherit this issue, and it is exacerbated in FL+SMPC. Combining homomorphic encryption with federated learning (FL+HE) adds another obstacle (computational overhead) to the scalability of federated learning. There is a growing body of literature on communication-efficient approaches to federated learning, which we already discussed. These approaches can dramatically improve the scalability of federated learning and make it suitable for large-scale applications, including those in biomedicine.
Given that federated learning is the most realistic approach from a scalability viewpoint, it can be used as a standalone approach as long as inferring the clients' data from the model parameters is practically impossible. Otherwise, it should be combined with differential privacy to avoid possible inference attacks and exposure of the clients' private data and to provide a privacy guarantee. The accuracy of the model will be satisfactory in federated learning but might deteriorate in FL+DP; a realistic trade-off needs to be struck depending on the application of interest.
Moreover, differential privacy can have many practical applications in biomedicine as a standalone approach. It works very well for low-sensitivity queries on biomedical databases, such as counting queries (e.g., the number of patients with a specific disease) and their generalizations (e.g., histograms), since the presence or absence of an individual changes the query's response by at most one. It can also be employed to release summary statistics such as p-values in a differentially private manner while keeping the accuracy acceptable. A promising novel research direction is to incorporate differential privacy into deep generative models to generate synthetic biomedical data.
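As a sketch of such a counting query, the following Python snippet releases a patient count with Laplace noise of scale 1/ε; the count and the privacy budget are illustrative:

```python
# Minimal sketch of a differentially private counting query. The sensitivity
# is 1 because adding or removing one patient changes the count by at most
# one, so Laplace noise with scale 1/epsilon suffices.
import math
import random

random.seed(7)

def dp_count(true_count, epsilon):
    """Release a count perturbed with Laplace(0, sensitivity/epsilon) noise."""
    scale = 1.0 / epsilon                       # sensitivity = 1 for counts
    u = random.random() - 0.5                   # uniform in (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

true_count = 132                                # hypothetical patient count
noisy_count = dp_count(true_count, epsilon=1.0)
# Larger epsilon (weaker privacy) keeps the answer closer to the true count.
```

Because the noise scale is fixed at 1/ε regardless of the database size, the relative error shrinks as counts grow, which is why counting queries and histograms tolerate differential privacy so well.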
Future studies can investigate how to reach a compromise between scalability, privacy, and accuracy in real-world settings. The communication overhead of federated learning remains an open and interesting problem: although state-of-the-art approaches considerably reduce the network overhead, they adversely affect the accuracy of the model. Hence, novel approaches are required that make federated learning communication-efficient while preserving accuracy, which is of great importance in biomedical applications.
Adopting federated learning in non-IID settings, where biomedical datasets across different hospitals and medical centers are heterogeneous, is another important challenge. Typical aggregation procedures such as simple averaging do not work well in these settings, yielding inaccurate models, so new aggregation procedures are required. Moreover, current communication-efficient approaches, which were developed for IID settings, might not be applicable to heterogeneous scenarios. Consequently, new techniques are needed to reduce the network overhead in these settings while keeping the model accuracy satisfactory.
Combining differential privacy with federated learning to enhance privacy and to provide a privacy guarantee is still a challenging issue in the field. It becomes even more challenging for healthcare applications, where accuracy of the model is of crucial importance. Moreover, the concept of privacy guarantee in differential privacy has been defined for local settings. In distributed scenarios, a dataset might be employed multiple times to train different models with various privacy budgets. Therefore, a new formulation of privacy guarantee should be proposed for distributed settings.
Conclusion
The advent of AI in biomedicine has brought about indispensable progress in the field and is expected to result in even more impressive advances in the future [sanders2019artificial]. For AI techniques to succeed, big biomedical and healthcare data needs to be available and accessible. However, the more AI models are trained on sensitive biological data, the more pressing privacy concerns become, which, in turn, necessitates strategies for shielding the data [berger2019emerging]. Hence, privacy-enhancing techniques are crucial to allow AI to benefit from sensitive biological data.
Cryptographic techniques, differential privacy, and federated learning can be considered the prime strategies for protecting personal data privacy. Broadly, these emerging techniques are based on securing sensitive data, perturbing it, or not moving it off site. In particular, cryptographic techniques securely share the data with a single computing party (HE) or multiple computing parties (SMPC), differential privacy adds noise to sensitive data and quantifies the privacy loss accordingly, and federated learning enables collaborative learning under the orchestration of a centralized server without moving the private data outside local environments.
All of these techniques have their own strengths and limitations. HE and SMPC are more communication-efficient than federated learning, but they are computationally expensive since they move data to computation and put the computational burden on a server or a few computing parties. Federated learning, on the other hand, distributes computation across the clients but suffers from high network communication overhead. Differential privacy is efficient from both a computational and a communication perspective, but it introduces accuracy loss by adding noise to data or model parameters. Hybrid approaches have been studied to combine the advantages and overcome the limitations of the individual techniques. We argued that federated learning, as a standalone approach or in combination with differential privacy, is the most realistic approach for adoption in healthcare applications, and we discussed the open problems and challenges in this regard, including the balance of communication efficiency and model accuracy in non-IID settings and the need for a new notion of privacy guarantee for distributed biomedical datasets.
Incorporating privacy into the analysis of biomedical and healthcare data is still an open challenge, yet preliminary accomplishments are promising to bring practical privacy even closer to realworld healthcare settings. Future research should investigate how to make a tradeoff between scalability, privacy, and accuracy in real healthcare settings.
Acknowledgement
This work has received funding from the European Union’s Horizon2020 research and innovation program under grant agreement nr. 826078. The work of JB, HS and TK was also supported by H2020 project REPOTRIA (nr. 777111). ML, TK, and JB have further been supported by BMBF projects SYS_Care (01ZX1908A) and SYMBOD (01ZX1910D). JB’s contribution was also supported by his VILLUM Young Investigator grant (nr. 13154). This paper reflects only the author’s view and the Commission is not responsible for any use that may be made of the information it contains.