
A Brief Overview of the Federated Learning Technique



Artificial Intelligence (AI) and Machine Learning (ML) algorithms are being implemented in several business applications, from making predictions to customer-service chatbots and decision-making. Lately, with the introduction of ChatGPT, OpenAI's conversational AI system that can engage in natural-language conversations with users, people all over the globe are awestruck by the potential of AI and ML. The banking, financial services, and insurance (BFSI) sector contributes significantly to the current ML market, with life sciences and healthcare showing rapid growth.


AI models, however, often require large amounts of training data to be effective. This can be an impediment for businesses that deal with sensitive customer data, as they may be reluctant to share that data with third parties.


Federated learning (FL) is a cutting-edge ML technique that enables organizations to train AI models on decentralized data, without the need to centralize or share that data. This lets businesses use AI without sacrificing data privacy or risking a breach of personal information. Federated learning was introduced in 2016 by Brendan McMahan and his colleagues at Google. FL trains machine learning models on multiple local datasets without any exchange of raw data, allowing companies to build a shared global model without storing training data in a central location. The locally trained models are combined, and a single federated, enhanced global model is then supplied back to the devices; the averaging sketch below shows the combining step.
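To make that combining step concrete, here is a minimal sketch of the weighted averaging at the heart of the Federated Averaging (FedAvg) algorithm. It is written in plain Python/NumPy for illustration; the function name federated_average and the representation of models as NumPy arrays are assumptions of this sketch, not any particular framework's API.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine locally trained models into one global model (FedAvg-style).

    client_weights: one NumPy array of model parameters per device.
    client_sizes:   number of local training examples on each device,
                    used to weight that device's contribution.
    """
    total = sum(client_sizes)
    # Weighted sum: devices with more local data influence the result more.
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Three devices report only their locally trained parameters, never raw data.
updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [100, 300, 600]
print(federated_average(updates, sizes))  # the enhanced global model
```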


FL also helps keep privacy policies and access-control privileges intact, since robust classifiers can be built with FL without requiring the disclosure of the underlying information.

Federated learning enables machine learning to draw on datasets stored at different locations, so several organizations can work in partnership on model advancement without distributing confidential information. Over multiple training rounds, the shared model is exposed to a much broader range of data than any single internal entity could provide, and it is trained across several domains through repeated iterations. The server is responsible for managing the training procedure, which consists of the following essential steps (sketched in code after the list):

1. Implementing the training algorithm.

2. Assembling the learning results from all devices.

3. Updating the global model.

4. Notifying devices of the improved global model and preparing for the next training session.
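A minimal sketch of that server loop follows, reusing the federated_average function from the earlier snippet. The Device class and its methods are hypothetical stand-ins invented for this illustration; production deployments rely on frameworks such as TensorFlow Federated or Flower, and on real on-device training rather than the mock step below.

```python
import numpy as np

class Device:
    """Hypothetical client: its raw data never leaves the device."""
    def __init__(self, local_data):
        self.local_data = local_data   # private, on-device records
        self.model = None

    def train_locally(self, global_weights):
        # Stand-in for real local training: one mock step that nudges the
        # weights toward the mean of this device's private data.
        updated = global_weights + 0.1 * (self.local_data.mean(axis=0) - global_weights)
        return updated, len(self.local_data)

    def receive(self, new_global):
        self.model = new_global        # step 4: device stores the new model

def run_training_round(global_weights, devices):
    """One federated round covering the four steps above."""
    # Steps 1-2: run the training algorithm on each device and assemble
    # the learning results (updated weights plus local dataset size only).
    results = [d.train_locally(global_weights) for d in devices]
    weights = [w for w, _ in results]
    sizes = [n for _, n in results]

    # Step 3: update the global model by aggregating the local results.
    new_global = federated_average(weights, sizes)

    # Step 4: notify devices so they can prepare for the next session.
    for d in devices:
        d.receive(new_global)
    return new_global
```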


The benefits of federated learning over traditional machine learning approaches are:

  • Data security: A central data pool is not required, as the training dataset is kept on the devices.

  • Data diversity: Network unavailability on edge devices can prevent companies from merging datasets from different sources. Federated learning facilitates access to heterogeneous data even when data sources can communicate only during certain times (see the device-sampling sketch after this list).

  • Real-time continual learning: Models are continually improved using client data, so there is no need to aggregate data centrally for continual learning.

  • Hardware efficiency: Less complex hardware suffices, because federated learning models do not need one powerful central server to analyze all the data.

  • Privacy preservation: The challenge with traditional machine learning is that users' data is aggregated in a central location for training, which may be against the privacy laws of certain countries and makes the data more vulnerable to breaches. Federated learning overcomes these challenges by enabling continual learning on end-user devices while ensuring that end-user data never leaves those devices.

  • Accessibility: Federated learning brings the benefits of ML to smaller domains where sufficient training data is not available to build a standalone ML model.
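The data-diversity point above depends on tolerating intermittent connectivity. Here is a minimal sketch of per-round device sampling, assuming a hypothetical is_online() check on each device object; real deployments apply more elaborate selection criteria (charging state, idle time, network type).

```python
import random

def select_devices_for_round(devices, fraction=0.3, rng=random):
    """Pick this round's participants from the devices currently reachable.

    Offline devices simply skip the round and contribute their
    heterogeneous local data whenever they next come online.
    """
    online = [d for d in devices if d.is_online()]  # hypothetical check
    if not online:
        return []                                   # retry the round later
    k = max(1, int(fraction * len(online)))
    return rng.sample(online, k)
```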

How Can Federated Learning Assist in Preserving Privacy?


As rightly said, “Federated Learning brings the code to the data, instead of the data to the code and addresses the fundamental problems of privacy, ownership, and locality of data”.

  • An essential feature of FL is the protection of user privacy. FL promotes privacy by default by lessening the footprint of user data in the network (on a central server). By decentralizing data from the central server to end devices, FL preserves user privacy and brings the benefits of AI to domains with sensitive, heterogeneous data.

  • The technique came to light mainly for two reasons: (1) sufficient data cannot reside centrally on the server side (as it does in traditional machine learning) because of direct access restrictions on such data; and (2) data privacy is protected by training on local data at the edge devices, i.e., the clients, instead of sending sensitive data to the server, with asynchronous network communication coordinating the exchange of updates.

  • Computational power is shared among the interested parties, instead of relying on a centralized server, through iterative local model training on end devices.

  • The principle of data minimization includes the objective to collect only the data needed for the specific computation (focused collection), to limit access to data at all stages, to process individuals’ data as early as possible (early aggregation), and to discard both collected and processed data as soon as possible (minimal retention).

  • The principle of data anonymization captures the objective that the final released output of the computation does not reveal anything unique to an individual. The goal is that data contributed by any individual user has only a small influence on the final aggregate output (the clipping-and-noise sketch after this list shows one common way to enforce this).

  • By design, FL structurally embodies data minimization. Critically, data collection and aggregation are inseparable in the federated approach — purpose-specific transformations of client data are collected for immediate aggregation, with analysts having no access to per-client messages.
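To make the minimization and anonymization principles concrete, here is a minimal sketch, in plain NumPy rather than any particular framework's API, of the two standard ingredients behind differentially private federated averaging: clipping each client's update and adding noise to the aggregate. The function name and parameter values are illustrative assumptions.

```python
import numpy as np

def private_aggregate(client_updates, clip_norm=1.0, noise_std=0.1, rng=None):
    """Aggregate client updates so no single user dominates the output.

    1. Clip each update's L2 norm, bounding any individual's influence.
    2. Add calibrated Gaussian noise to the sum, masking exactly what
       any one client contributed.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, noise_std, clipped[0].shape)
    return noisy_sum / len(client_updates)
```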

The federated learning technique solves the privacy concerns of sensitive data in ML environments to quite an extent. At the same time, however, sharing model parameters over an increased number of training iterations and communication rounds exposes the federated environment to a new set of risks: it opens new avenues for attackers to probe for vulnerabilities, manipulate the ML model's output, or gain access to sensitive user data.

A few challenges while using the federated learning technique:

i. Heterogeneity between local datasets: each node's data may be biased with respect to the general population, and the sizes of the datasets may vary significantly.

ii. Temporal heterogeneity: each local dataset's distribution may vary over time.

iii. Interoperability of each node's dataset with the rest is a prerequisite.

iv. Each node's dataset may require regular curation.

v. Hiding the training data from the server may allow attackers to inject backdoors into the global model.

vi. The lack of access to global training data makes it harder to identify unwanted biases entering the training, such as biases based on age or gender.

vii. Partial or total loss of model updates due to node failures can affect the global model.

When a lack of data hinders users from training suitable models, FL can integrate the models of various user groups and update the federated model without revealing the original data. It has been one of the fastest-growing fields in ML in recent years, as its decentralized approach and built-in security and privacy features promise to abide by emerging user data protection laws.
