Feng Liu (he/him) -- Assistant Professor at The University of Melbourne


Research Focus

As machine learning systems are increasingly deployed in the real world, their reliability and safety become crucial. To cope with the risks these systems may bring, we investigate trustworthy machine learning. At the current stage, we mainly concentrate on statistical hypothesis testing and trustworthy machine learning algorithms: the former provides fundamental tools for constructing trustworthy machine learning systems, while the latter addresses concrete risks of existing machine learning algorithms.


Statistical Hypothesis Testing

In statistical hypothesis testing, we aim to check whether the data at hand sufficiently support a particular hypothesis. For example, we may test whether two datasets are drawn from the same distribution (two-sample testing), whether data are drawn from a given distribution (goodness-of-fit testing), or whether two datasets are independent (independence testing).

Test statistics, and testing methods themselves, are widely used across machine learning, for example in transfer learning, generative modeling, and causal discovery. This research line aims to discover more powerful statistics and testing methods for the field. Specifically, we focus on deep-kernel-based hypothesis testing methods in the following areas (a minimal two-sample test is sketched after the list) [ICML 2020] [ICML 2021] [NeurIPS 2021] [ICML 2022].

  • Two-sample Testing: Testing if two datasets are drawn from the same distribution.

  • Goodness-of-fit Testing: Testing if data are drawn from a given distribution.

  • Independence Testing: Testing if two datasets are independent.
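
To make the two-sample setting concrete, below is a minimal sketch of a kernel two-sample test based on the maximum mean discrepancy (MMD) with a fixed Gaussian kernel and a permutation test. The deep-kernel methods in the papers above additionally learn the kernel (e.g., via a neural feature extractor), which this simplified baseline omits; all function names, the bandwidth, and the permutation count are illustrative assumptions.

    import numpy as np

    def gaussian_kernel(A, B, bandwidth):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2 * bandwidth ** 2))

    def mmd2_unbiased(X, Y, bandwidth):
        # Unbiased estimate of the squared MMD between samples X and Y.
        m, n = len(X), len(Y)
        Kxx = gaussian_kernel(X, X, bandwidth)
        Kyy = gaussian_kernel(Y, Y, bandwidth)
        Kxy = gaussian_kernel(X, Y, bandwidth)
        term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
        term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
        return term_x + term_y - 2 * Kxy.mean()

    def two_sample_test(X, Y, bandwidth=1.0, n_perm=500, seed=0):
        # Permutation test: under H0 (same distribution), pooling and
        # re-splitting the data should give MMD values comparable to
        # the observed one.
        rng = np.random.default_rng(seed)
        observed = mmd2_unbiased(X, Y, bandwidth)
        pooled = np.vstack([X, Y])
        m, count = len(X), 0
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))
            count += mmd2_unbiased(pooled[idx[:m]], pooled[idx[m:]], bandwidth) >= observed
        return observed, (count + 1) / (n_perm + 1)  # statistic, p-value

    # Example: the two samples differ in mean, so H0 should be rejected.
    X = np.random.default_rng(1).normal(0.0, 1.0, size=(100, 2))
    Y = np.random.default_rng(2).normal(0.5, 1.0, size=(100, 2))
    stat, p = two_sample_test(X, Y)
    print(f"MMD^2 = {stat:.4f}, p-value = {p:.3f}")

Rejecting H0 when the p-value falls below the chosen significance level (e.g., 0.05) indicates that the two samples are unlikely to come from the same distribution.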


Close Collaborators: Dr. Wenkai Xu@Oxford, Dr. Danica J. Sutherland@UBC, and Prof. Arthur Gretton@UCL-Gatsby


Trustworthy Machine Learning Algorithms

At the current stage, we mainly focus on four specific tasks to make machine learning algorithms more reliable and safe.

Defending against Adversarial Attacks

Deep neural networks are susceptible to adversarial examples, which are generated by adding malicious perturbations to natural inputs. The perturbations are imperceptible to human eyes, yet the resulting examples can fool deep models into making wrong predictions with high confidence. Thus, to make deep neural networks more reliable, we focus on the following two topics (a minimal sketch follows the list) [ICML 2021] [NeurIPS 2021] [ICML 2022] [ICML 2022] [ICML 2023].

  • Detecting adversarial attacks (i.e., adversarial attack detection).

  • Training a robust model against future adversarial attacks (i.e., adversarial training).
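
As a concrete illustration, below is a minimal sketch of the classic FGSM attack (Goodfellow et al., 2015) and an adversarial training step built on it. This is a standard baseline, not the specific attacks or defenses studied in the papers above; the model, optimizer, pixel range [0, 1], and epsilon value are assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # Fast Gradient Sign Method: one gradient-sign step that increases the
        # loss, keeping the perturbation within an L_inf ball of radius epsilon.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
        # Adversarial training: generate adversarial examples on the fly and
        # minimize the loss on them, so the model learns to resist the attack.
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Stronger attacks (e.g., multi-step PGD) replace the single FGSM step with an iterative inner maximization, but the overall min-max structure of adversarial training stays the same.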


Close Collaborators: Dr. Jingfeng Zhang@RIKEN-AIP, Dr. Bo Han@HKBU, Dr. Tongliang Liu@USYD, Dr. Gang Niu@RIKEN-AIP, and Prof. Masashi Sugiyama@UTokyo

Being Aware of Out-of-distribution/Open-set Data

The success of supervised learning rests on an implicit assumption that training and test data share the same distribution (in particular, the same label set), i.e., the in-distribution (ID) assumption. In many real-world scenarios, however, test data violate this assumption and contain out-of-distribution (OOD) data whose labels lie outside the ID label set. If a well-trained ID classifier assigns OOD data to ID classes, serious accidents may occur once the classifier is deployed in the real world. To mitigate the risk of OOD data, we focus on the following topics (a minimal detection sketch follows the list) [ICML 2021] [NeurIPS 2022a (Outstanding Paper)] [NeurIPS 2022b (Spotlight)] [ICLR 2023] [ICML 2023].

  • Detecting out-of-distribution data.

  • Training a robust model in the open world (e.g., open-set learning, out-of-distribution generalization).
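
For illustration, below is a minimal sketch of the maximum softmax probability (MSP) baseline for OOD detection (Hendrycks & Gimpel, 2017); the papers above study stronger detectors and training schemes that this simplified example does not reproduce. The threshold value is an assumption and would normally be calibrated on held-out ID data.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def msp_score(model, x):
        # Maximum softmax probability: confident predictions are treated as
        # in-distribution; low confidence suggests an OOD input.
        probs = F.softmax(model(x), dim=1)
        return probs.max(dim=1).values  # higher = more likely ID

    @torch.no_grad()
    def detect_ood(model, x, threshold=0.9):
        # Flag inputs whose MSP score falls below a threshold, typically chosen
        # on held-out ID data (e.g., to keep a 95% true-positive rate on ID).
        return msp_score(model, x) < threshold  # True = flagged as OOD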


Close Collaborators: Dr. Zhen Fang@UTS, Dr. Bo Han@HKBU, and Dr. Tongliang Liu@USYD

Learning/Inference under Distribution Shift (a.k.a. Transfer Learning)

Test data are rarely perfect (e.g., only a few examples may be available) and may follow a distribution different from that of the training data. To complete tasks on such imperfect test data (the target domain), we leverage knowledge from label-rich domains (source domains) or pre-trained models (source models) to perform classification/clustering in an unlabeled target domain, where the two domains are different but related. Specifically, we focus on the following topics (a minimal alignment sketch follows the list) [NeurIPS 2019] [IJCAI 2020] [AAAI 2021] [NeurIPS 2021 (Spotlight)] [ICLR 2022 (Spotlight)] [ICML 2023].

  • Domain adaptation with imperfect data.

  • Hypothesis adaptation/transfer.

  • Novel class discovery.
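
As one concrete instance of learning under distribution shift, below is a minimal sketch of unsupervised domain adaptation with a CORAL-style feature-alignment penalty (Sun & Saenko, 2016): labeled source data drive the classification loss while unlabeled target features are aligned to the source features. This is a classic baseline, not the specific methods from the papers above; the feature extractor, classifier, and trade-off weight lam are assumptions.

    import torch
    import torch.nn.functional as F

    def coral_loss(source_feat, target_feat):
        # Match second-order statistics (covariances) of source and target
        # features, a common surrogate for reducing distribution shift.
        def covariance(f):
            f = f - f.mean(dim=0, keepdim=True)
            return (f.t() @ f) / (f.size(0) - 1)
        d = source_feat.size(1)
        diff = covariance(source_feat) - covariance(target_feat)
        return (diff ** 2).sum() / (4 * d * d)

    def da_training_step(feature_extractor, classifier, optimizer, xs, ys, xt, lam=1.0):
        # Labeled source classification loss + unsupervised alignment penalty
        # computed on unlabeled target features.
        optimizer.zero_grad()
        fs, ft = feature_extractor(xs), feature_extractor(xt)
        loss = F.cross_entropy(classifier(fs), ys) + lam * coral_loss(fs, ft)
        loss.backward()
        optimizer.step()
        return loss.item()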


Close Collaborators: Dr. Zhun Zhong@UniTN, Dr. Zhen Fang@UTS, Dr. Bo Han@HKBU, Dr. Tongliang Liu@USYD, Dr. Gang Niu@RIKEN-AIP, and Prof. Masashi Sugiyama@UTokyo

Protecting Data Privacy

With the development of machine learning (ML) algorithms, deep neural networks (DNNs) are increasingly adopted in privacy-sensitive applications such as facial recognition, medical diagnosis, and intelligent virtual assistants. Since training DNNs in these applications can involve sensitive and proprietary datasets, there are serious concerns about privacy leakage. To protect the privacy of individuals whose personal information is used during training, enterprises typically release only well-trained DNNs through ML-as-a-service platforms: users can download pre-trained models (e.g., PyTorch Hub) or query a model via programming or user interfaces (e.g., Amazon Rekognition), referred to as white-box access and black-box access, respectively. However, a pre-trained model can still be exploited to reconstruct the original training data. To prevent such data leakage from pre-trained models, we focus on the following topics (a minimal inversion sketch follows the list) [KDD 2022].

  • Evaluation of model-inversion risks.

  • Defending against model-inversion attacks.
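
As an illustration of the risk being evaluated, below is a minimal sketch of white-box, gradient-based model inversion in the spirit of Fredrikson et al. (2015): starting from noise, an input is optimized so that the released classifier assigns it to a chosen class with high confidence, which can expose class-representative features of the private training data. This is a generic baseline for risk evaluation, not the KDD 2022 method; the input shape, step count, and learning rate are assumptions.

    import torch

    def invert_class(model, target_class, input_shape=(1, 3, 32, 32), steps=200, lr=0.1):
        # White-box model inversion: optimize the input (not the model) so that
        # the frozen classifier gives the target class a high score.
        x = torch.rand(input_shape, requires_grad=True)
        optimizer = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            logits = model(x.clamp(0, 1))
            loss = -logits[:, target_class].mean()  # maximize target-class logit
            loss.backward()
            optimizer.step()
        return x.clamp(0, 1).detach()

Defenses aim to make the recovered input carry as little information about individual training examples as possible while preserving the model's predictive utility.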


Close Collaborators: Dr. Jingfeng Zhang@RIKEN-AIP, Dr. Bo Han@HKBU, and Dr. Tongliang Liu@USYD