EECS Colloquium: Toward secure and efficient distributed learning — Minghong Fang
About the event
Abstract: Federated learning is a distributed machine learning approach that enables multiple clients (e.g., smartphones, IoT devices, and edge devices) to collaboratively learn a model with the help of a server, without sharing their raw local data. Because of its promise of protecting private or proprietary user data, and in light of emerging privacy regulations such as GDPR, federated learning has attracted significant attention from both academia and industry. However, due to its distributed nature, federated learning is vulnerable to malicious clients. In this talk, we will discuss local model poisoning attacks on federated learning, in which malicious clients send carefully crafted local models or model updates to the server to corrupt the global model. We will also discuss our work on building federated learning methods that are secure against a bounded number of malicious clients.
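To make the setting concrete, the sketch below simulates one federated learning round in which one of five clients is malicious. The coordinate-wise median aggregator shown here is a well-known robust aggregation rule used only as an illustration; it is an assumption for this example, not necessarily the specific defense presented in the talk, and all function names and toy data are hypothetical.

```python
import numpy as np

def honest_update(global_model, local_grad, lr=0.1):
    """One local gradient step; stands in for a client's real local training."""
    return global_model - lr * local_grad

def mean_aggregate(updates):
    """FedAvg-style mean: a single malicious client can skew the result."""
    return np.mean(np.stack(updates), axis=0)

def median_aggregate(updates):
    """Coordinate-wise median: tolerates a bounded number of malicious clients."""
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(0)
global_model = np.zeros(3)

# Four honest clients send small, similar local model updates ...
honest = [honest_update(global_model, rng.normal(0.0, 1.0, 3)) for _ in range(4)]
# ... while one malicious client sends a carefully crafted, far-off model.
poisoned = np.full(3, 1e3)

naive = mean_aggregate(honest + [poisoned])    # dragged toward the attacker
robust = median_aggregate(honest + [poisoned]) # stays near the honest models
```

Because the median of five values per coordinate is always one of the middle (honest) values when only one client lies, the poisoned model is ignored, whereas the plain mean is pulled far from the honest consensus.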
Biography: Minghong Fang is a Ph.D. candidate in the Department of Electrical and Computer Engineering at The Ohio State University, advised by Prof. Jia (Kevin) Liu. His research interests span machine learning, security, and privacy, with a recent focus on their intersection. He is also interested in distributed optimization for learning and networking. His research has been published in top-tier security, machine learning, and networking venues, such as USENIX Security, NDSS, ICLR, The Web Conference (WWW), and MobiHoc.