Abstract
Federated learning (FL) is a powerful framework that enables multiple distributed clients to collaboratively train a model without transferring their data to a central server. However, FL does not inherently guarantee the level of privacy that clients often require. In our review of recent studies on privacy-enhancing techniques in FL, we found that frequency estimation (FE) methods remain underexplored. To address this gap, we developed and integrated FE techniques on the client side, and further examined the effects of incorporating an adaptive range and a shuffled model. We also analyzed the impact of varying hyperparameters on privacy preservation. Our results provide clear guidance on the algorithms and configurations that are most effective for enhancing privacy in FL, particularly when using long short-term memory (LSTM) architectures.