Research

My research interests lie in optimization and machine learning over distributed or decentralized networks. Distributed or decentralized learning, in which a set of devices (workers) collaborates to train a machine learning model, is a promising solution in the following scenarios:
  • Accelerating large-scale machine learning through parallel computation in data centers.
  • Exploiting the potential value of large-volume, heterogeneous, and privacy-sensitive data located on geographically distributed devices, as in federated learning (FL) and multi-agent reinforcement learning (MARL).
My recent work focuses on robust distributed optimization algorithms, one of the central topics in distributed networks. Despite their well-known advantages, the distributed nature of these networks makes them vulnerable to worker misbehavior, especially in FL scenarios. Such misbehavior, including malicious misleading, data poisoning, and backdoor injection, can be abstracted into the so-called Byzantine attack model, in which some workers (attackers) send arbitrary malicious messages to others. To meet this robustness requirement, I dedicate myself to designing and analyzing Byzantine-resilient optimization algorithms.
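As a minimal illustration of why Byzantine resilience matters (a generic sketch, not an algorithm from my papers), the following compares naive gradient averaging with coordinate-wise median aggregation, a classical robust rule, when one worker sends an arbitrary malicious message:

```python
import numpy as np

def mean_aggregate(grads):
    # Naive averaging: a single Byzantine gradient can shift the result arbitrarily.
    return np.mean(grads, axis=0)

def median_aggregate(grads):
    # Coordinate-wise median: a classical Byzantine-resilient aggregation rule,
    # robust as long as fewer than half of the workers are attackers.
    return np.median(grads, axis=0)

# Nine honest workers report gradients near the true value [1, 1];
# one Byzantine attacker sends an arbitrary malicious message.
rng = np.random.default_rng(0)
honest = 1.0 + 0.1 * rng.standard_normal((9, 2))
byzantine = np.array([[1e6, -1e6]])
grads = np.vstack([honest, byzantine])

print(mean_aggregate(grads))    # pulled far away from [1, 1] by the attacker
print(median_aggregate(grads))  # stays close to [1, 1]
```

With averaging, the single attacker moves the aggregate on the order of 1e5 per coordinate; the median remains near the honest consensus.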

Journals

Conferences

Preprints