Abstract
With the emergence of Large Language Model (LLM) based Generative AI systems and their ability to store, analyze, and interpret large amounts of data, protecting user privacy is essential. A Federated Learning (FL) approach, which trains models on edge devices rather than on a central server as in traditional Machine Learning (ML), offers several benefits for such systems. In this work, FL is thoroughly contrasted with traditional centralized ML (CML), and the Advanced Encryption Standard (AES) is compared with Homomorphic Encryption (HE) as cryptographic algorithms for securing data privacy. These concepts are linked through the potential use of AES and HE during the gradient transmission phase of ML/FL model training. The need to balance computational overhead against security in these approaches is highlighted, together with current data model properties, risks, algorithmic biases, and vulnerabilities in Generative AI systems.