Abstract: Mainstream federated learning (FL) methods require gradient exchange and rest on the ideal assumption of independent and identically distributed (IID) data, which brings additional communication overhead, privacy leakage, and data inefficiency. To address this, a new FL framework called MAFML (model-agnostic federated mutual learning) is proposed. MAFML shares only a small amount of low-dimensional information across participants (for example, the soft labels output by a neural network in an image classification task) to achieve "mutual learning and mutual teaching" among participants. Moreover, MAFML does not need a shared global model; users can customize their own private models without restrictions on model structure or parameters. At the same time, MAFML uses a general approach to avoid gradient interference, so that each participant's model generalizes well to other domains without degrading performance on its own domain data. Experiments on multiple cross-domain datasets show that MAFML provides a promising solution for businesses in an alliance facing the "competition and cooperation" dilemma.
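
To illustrate the kind of soft-label exchange the abstract describes, the following is a minimal sketch, not the authors' implementation. It assumes a PyTorch image classifier, a distillation-style loss combining cross-entropy with a KL term toward peer predictions, and that participants exchange only softened class probabilities on a batch; the function name mutual_learning_step and the parameters T (temperature) and alpha (loss weight) are illustrative assumptions.

import torch
import torch.nn.functional as F

def mutual_learning_step(model, optimizer, x, y, peer_soft_labels,
                         T=3.0, alpha=0.5):
    """One local update: cross-entropy on private labels plus a KL term
    pulling the model toward averaged peer soft labels.

    peer_soft_labels is a (batch, num_classes) probability tensor received
    from other participants; only this low-dimensional tensor is shared,
    so no gradients or model parameters ever leave the participant.
    """
    optimizer.zero_grad()
    logits = model(x)
    # Fit the participant's own domain data.
    ce = F.cross_entropy(logits, y)
    # Softened log-probabilities for distillation-style mutual learning.
    log_p = F.log_softmax(logits / T, dim=1)
    # KL divergence toward peer predictions; T*T rescales gradients as in
    # standard knowledge distillation.
    kl = F.kl_div(log_p, peer_soft_labels, reduction="batchmean") * T * T
    loss = (1 - alpha) * ce + alpha * kl
    loss.backward()  # gradients stay local to this participant
    optimizer.step()
    return loss.item()

Because each participant computes its own loss and update locally, the models need not share a structure: only the (batch, num_classes) soft-label tensors must agree in shape, which is what makes the scheme model-agnostic.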