Domain generalization (DG) has attracted considerable attention in image recognition; its goal is to train a general model that performs well on unseen domains.
Recently, federated learning (FL), an emerging machine learning paradigm that trains a global model from multiple decentralized clients without compromising data privacy, has brought new challenges and opportunities to DG.
In the FL scenario, many existing state-of-the-art (SOTA) DG methods become ineffective because they require the centralization of data from different domains during training.
In this paper, we propose a novel domain generalization method for image recognition under federated learning through cross-client style transfer (CCST) without exchanging data samples.
Our CCST method makes the data distributions across source clients more uniform and lets each local model learn to fit the image styles of all clients, thereby avoiding divergent local model biases.
We propose two types of style (single-image style and overall domain style), each with a corresponding sharing mechanism, to be chosen according to the scenario. Our style representation is exceptionally lightweight and can hardly be used to reconstruct the original data. The level of style diversity is flexibly controlled by a hyper-parameter.
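As a concrete illustration of why such a style representation is lightweight, a domain's overall style can be summarized by channel-wise mean and standard deviation of encoder feature maps and imposed on another client's features via adaptive instance normalization (AdaIN). This is a minimal NumPy sketch under that assumption, not the paper's exact implementation; the function names are illustrative.

```python
import numpy as np

def adain(content_feat, style_mean, style_std, eps=1e-5):
    """Impose per-channel style statistics on a content feature map.

    content_feat: array of shape (C, H, W) from a feature encoder.
    style_mean, style_std: arrays of shape (C,) shared by another
    client; only 2*C scalars per style, hence very lightweight.
    """
    # Normalize the content features channel-wise.
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mean) / (c_std + eps)
    # Re-scale with the received style statistics.
    return (normalized * style_std.reshape(-1, 1, 1)
            + style_mean.reshape(-1, 1, 1))

def domain_style(feature_batch):
    """Aggregate an 'overall domain style' (hypothetical helper):
    average the per-image channel statistics over a client's data.

    feature_batch: array of shape (N, C, H, W).
    """
    means = feature_batch.mean(axis=(2, 3))  # (N, C)
    stds = feature_batch.std(axis=(2, 3))    # (N, C)
    return means.mean(axis=0), stds.mean(axis=0)
```

Because a client shares only 2C numbers per style rather than images, the original dataset cannot realistically be reconstructed from the exchanged representation.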
Our method outperforms recent SOTA DG methods on two DG benchmarks (PACS, Office-Home) and a large-scale medical image dataset (Camelyon17) in the FL setting. Moreover, our method is orthogonal to many classic DG methods and yields additive performance gains when combined with them.