Academic Lecture: Recent Advances in Representation Learning and Federated Learning


Posted: 2023-11-15

Time: November 25, 2023, 9:00–12:00

Venue: Tencent Meeting (ID: 983-609-201)

Talk 1: New Representation Learning Algorithms: Graph and Beyond

Speaker Bio:

Yanfu Zhang received the BS degree from the University of Science and Technology of China in 2012, MS degrees from the University of Chinese Academy of Sciences and the University of Rochester in 2015 and 2017, respectively, and the PhD degree from the University of Pittsburgh in 2023. He is currently an assistant professor in the Department of Computer Science at William and Mary. His research interests span graph neural networks, efficient and robust representation learning, and fairness-aware machine learning, with applications to medical images, multi-omics, and other data mining and machine learning problems. His work has been published in top-tier conferences and prestigious journals such as KDD, ICML, NeurIPS, ICCV, ECCV, WebConf (WWW), ICDM, IPMI, MICCAI, Nucleic Acids Research, and PNAS Nexus. He has served as a program committee member for KDD, MICCAI, ICCV, and other venues.

Abstract:

A central question when designing an AI system for the real world is: how do we learn representations of data that make it easier to extract useful information when building classifiers or other predictors? Satisfactory answers exist for certain data types, such as images, text, and audio. However, many kinds of data are gathered as graphs, including social networks, protein interaction networks, and brain connectomes. Despite the emergence of powerful graph neural network techniques, researchers have yet to reach a consensus on how best to learn embeddings from graphs, owing to their high irregularity, complexity, and sparsity. My research focuses on addressing these critical challenges in graph representation learning.

In this talk, I will first present my recent results on two key problems in graph representation learning: i) a significant limitation of the widely used graph convolutional network, namely over-smoothed embeddings as the network grows deeper; and ii) scalability to big data, which self-supervised pretraining facilitates but at the cost of attention to local structure. I will then expand the horizon beyond individual nodes and discuss how broader structures interact with algorithm design: i) some applications call for graph-level rather than node-level embeddings; and ii) a data distribution implies a graph structure even when one is not explicitly given. Beyond the efficacy of representation learning, I have also designed fairness-aware machine learning algorithms that tackle bias in model training and data processing. I have applied these new representation learning methods to various real-world applications, such as early diagnosis of brain diseases, drug repositioning, and prediction on social media networks.
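To make the first problem concrete, here is a minimal NumPy sketch of over-smoothing (an illustration for this announcement, not the speaker's method): repeatedly applying the symmetric-normalized propagation step of a linear GCN collapses node embeddings toward degree-scaled copies of a single vector, so nodes become hard to distinguish. The toy graph and feature dimensions are assumptions.

```python
# Illustrative sketch of GCN over-smoothing (not the speaker's method).
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes, two triangles joined by one edge.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(len(A))
d = A_hat.sum(axis=1)
P = A_hat / np.sqrt(np.outer(d, d))

H = rng.normal(size=(6, 4))   # random initial node features
for layer in range(1, 33):
    H = P @ H                 # propagation step of a (linear) GCN layer
    if layer in (1, 4, 16, 32):
        # Pairwise distances shrink with depth as embeddings collapse
        # toward a rank-one, degree-determined pattern: over-smoothing.
        dists = np.linalg.norm(H[:, None] - H[None, :], axis=-1)
        print(f"layers={layer:2d}  mean pairwise dist={dists.mean():.4f}")
```

Real GCNs interleave learned weights and nonlinearities, but the same propagation operator dominates deep-layer behavior, which is why over-smoothing persists in practice.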

Talk 2: Optimization for Federated Learning

Speaker Bio:

Xidong Wu received the BS degree from Beihang University in 2018 and the MS degree from the University of California, Berkeley in 2020. He is currently pursuing a PhD at the University of Pittsburgh. His research interests span federated learning, natural language processing, model compression, and optimization. His work has been published in top-tier conferences such as NeurIPS, KDD, ICML, AAAI, ICDM, and IJCAI.

Abstract:

Federated learning (FL) has gained popularity, but most existing work focuses on the standard stochastic minimization problem. Recently, algorithms for non-minimization optimization in FL have been proposed to solve more complicated machine learning tasks. However, existing FL algorithms have not matched the complexity achieved by single-machine algorithms.
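For context, here is a minimal sketch of the standard federated minimization baseline (FedAvg-style local SGD with server averaging) that the talk contrasts against; the data, model, and hyperparameters below are illustrative assumptions, not material from the talk.

```python
# FedAvg-style local SGD on a toy least-squares problem (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, steps=5):
    """Run a few local SGD steps on the objective ||Xw - y||^2 / n."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Synthetic data split across 4 clients.
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(3)
for rnd in range(20):                  # communication rounds
    local_models = [local_sgd(w.copy(), X, y) for X, y in clients]
    w = np.mean(local_models, axis=0)  # server averages client models
print("recovered w:", np.round(w, 3))
```

The server never sees raw data; it only averages parameters each round. FL complexity analyses typically count both these communication rounds and the per-client samples, which is the gap with single-machine methods that the talk addresses.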

In this talk, I will first introduce our work on minimax optimization in federated learning, where we study a class of federated nonconvex minimax optimization problems. We propose two FL algorithms (FedSGDA+ and FedSGDA-M) that improve on existing complexity results for the most common classes of minimax problems. Extensive experimental results on fair classification and AUROC maximization demonstrate the efficiency of our algorithms. I will then introduce our work on federated conditional stochastic optimization, which considers nonconvex conditional stochastic optimization in FL and proposes the first federated conditional stochastic optimization algorithm (FCSG), built on a conditional stochastic gradient estimator, together with a momentum-based variant (FCSG-M). To match the lower-bound complexity of the single-machine setting, we design an accelerated algorithm (Acc-FCSG-M) via variance reduction that achieves the best-known sample and communication complexity. Unlike existing optimization analyses for meta-learning in FL, federated conditional stochastic optimization accounts for the sampling of tasks. Extensive experimental results on various tasks validate the efficiency of these algorithms.
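To make the minimax problem class concrete, below is a generic local gradient descent-ascent skeleton for federated problems of the form min_x max_y f(x, y). It is a sketch on a toy strongly-convex-strongly-concave objective, not an implementation of FedSGDA+, FedSGDA-M, or the FCSG family; all names, step sizes, and the objective are assumptions.

```python
# Generic federated local descent-ascent sketch (not FedSGDA+/FedSGDA-M).
import numpy as np

rng = np.random.default_rng(1)

def grads(x, y, A):
    """Gradients of f(x, y) = 0.5||x||^2 + x^T A y - 0.5||y||^2."""
    return x + A @ y, A.T @ x - y     # (df/dx, df/dy)

# Each client holds its own coupling matrix A_i (its local data).
As = [rng.normal(size=(3, 3)) for _ in range(4)]

x, y = np.ones(3), np.ones(3)
eta = 0.05
for rnd in range(100):                # communication rounds
    xs, ys = [], []
    for A in As:
        xi, yi = x.copy(), y.copy()
        for _ in range(5):            # local descent-ascent steps
            gx, gy = grads(xi, yi, A)
            xi = xi - eta * gx        # descent on the min variable x
            yi = yi + eta * gy        # ascent on the max variable y
        xs.append(xi)
        ys.append(yi)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)  # server averaging
# The saddle point of this toy objective is (0, 0).
print("||x|| =", np.linalg.norm(x), " ||y|| =", np.linalg.norm(y))
```

Each round runs a few local descent-ascent steps per client and then averages both iterates at the server, mirroring how federated minimax algorithms trade local computation for communication; the algorithms in the talk additionally use momentum and variance reduction to tighten complexity.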