Feng Liu (he/him) -- Assistant Professor at The University of Melbourne


Home


Feng Liu

Feng Liu, Ph.D.

Assistant Professor in Statistics (Data Science),
School of Mathematics and Statistics, The University of Melbourne

Visiting Scientist @ Imperfect Information Learning Team,
RIKEN Center for Advanced Intelligence Project (RIKEN-AIP)

Visiting Fellow @ DeSI Lab,
Australian Artificial Intelligence Institute, UTS

Address: Room 108, Old Geology Building (South Wing),
Building #156, Monash Road, Parkville VIC 3052, Australia.
E-mail: fengliu.ml [at] gmail.com or feng.liu1 [at] unimelb.edu.au
Phone: +61 3 9035 3645
[Google Scholar] [Github] [Group Website]


Opportunities

  • Long-term Recruitment: I am always looking for self-motivated PhD students (two per semester), MPhil students, Research Assistants, and Visiting Researchers. Please see this page for recruitment information, and check this page for information about the school. I am also happy to host remote research trainees. In our group, you can collaborate with many excellent researchers on frontier machine learning research topics. Check this page for more information.

  • Project-based Recruitment:

    • [Postdoc] We are looking for one Postdoctoral Research Fellow for our ARC DP project: Advanced Machine Learning with Bilevel Optimization. You will work at UTS with A/Prof Guangquan Zhang and at UoM with me (as a visiting scholar).

    • [Postdoc] We are looking for one Postdoctoral Research Fellow for our NSF-CSIRO project: Graph Representation Learning for Fair Teaming in Crisis Response.

    • [PhD] We are looking for one PhD student for our ARC DP project: Advanced Machine Learning with Bilevel Optimization. You will work with me as a UoM PhD student. Please see this page for recruitment information.

    • [3 x RAs] We are looking for three Research Assistants for our NSF-CSIRO project: Graph Representation Learning for Fair Teaming in Crisis Response.


Biography

    I am a machine learning researcher whose interests lie in hypothesis testing and trustworthy machine learning. I am currently an Assistant Professor in Statistics (Data Science) at the School of Mathematics and Statistics, The University of Melbourne, Australia, where I am one of the co-directors of the Trustworthy Machine Learning and Reasoning (TMLR) Lab (see this page for details). In addition, I am a Visiting Scientist at RIKEN-AIP, Japan, and a Visiting Fellow at the DeSI Lab, Australian Artificial Intelligence Institute, University of Technology Sydney. I was the recipient of an Australian Laureate postdoctoral fellowship. I received my Ph.D. in computer science from the University of Technology Sydney in 2020, advised by Dist. Prof. Jie Lu and Prof. Guangquan Zhang. I was a research intern at RIKEN-AIP, working on a robust domain adaptation project with Prof. Masashi Sugiyama, Dr. Gang Niu, and Dr. Bo Han, and I visited the Gatsby Computational Neuroscience Unit at UCL, where I worked on a hypothesis testing project with Prof. Arthur Gretton, Dr. Danica J. Sutherland, and Dr. Wenkai Xu.

    I have received the NeurIPS Outstanding Paper Award (2022), the NeurIPS Outstanding Reviewer Award (2021), the ICLR Outstanding Reviewer Award (2021), the UTS-FEIT HDR Research Excellence Award (2019), and the FUZZ-IEEE Best Student Paper Award (2019). My publications appear mainly in high-quality journals and conferences such as Nature Communications, IEEE-TPAMI, IEEE-TNNLS, IEEE-TFS, NeurIPS, ICML, ICLR, KDD, IJCAI, and AAAI. I have served as a senior program committee (SPC) member for IJCAI and ECAI, and as a program committee (PC) member for NeurIPS, ICML, ICLR, AISTATS, ACML, AAAI, and others. I also review for many academic journals, including JMLR, IEEE-TPAMI, IEEE-TNNLS, and IEEE-TFS.


Recent News

  • 03/14/2023: Our newly proposed journal, ACM Transactions on Probabilistic Machine Learning (ACM TOPML), has been officially approved! I will serve as an associate editor. Submissions are welcome!

  • 02/25/2023: One paper on explainable AI (XAI) and fairness is accepted by ECIS 2023. Congratulations to the team!

  • 02/20/2023: Grateful to have secured a project from the NSF-CSIRO Joint Program in Responsible and Ethical AI. [NSF Announcement][CSIRO Announcement]

  • 02/14/2023: One paper is accepted by TMLR. Congratulations to the team!

  • 01/21/2023: One paper is accepted by ICLR 2023. Congratulations to the team!

  • 12/02/2022: Will serve as a senior program committee member for IJCAI 2023.

  • 11/24/2022: Grateful to have secured an ARC Discovery Project. [Announcement]

  • 11/21/2022: Our paper received the NeurIPS Outstanding Paper Award. Congratulations to the team!

  • 11/17/2022: One paper is accepted by Nature Communications. Congratulations to the team!

  • 09/15/2022: Two papers are accepted by NeurIPS 2022. One is selected as Oral, and one is selected as Spotlight. Congratulations to the team!

  • 05/18/2022: One paper is accepted by KDD 2022. Congratulations to the team!

  • 05/14/2022: Two papers are accepted by ICML 2022. Congratulations to the team!

  • 01/24/2022: One paper is accepted by ICLR 2022. This paper is also selected as Spotlight. Congratulations to the team!


Research Interests

    My research interests lie in statistical hypothesis testing and trustworthy machine learning. Specifically, my current work centers on the following topics:
    Statistical Hypothesis Testing:
  • Two-sample Testing: Testing if two datasets are drawn from the same distribution.

  • Goodness-of-fit Testing: Testing if data are drawn from a given distribution.

  • Independence Testing: Testing if two random variables are independent, given paired samples.
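As a concrete (if deliberately simple) illustration of two-sample testing, the sketch below runs a permutation test with a difference-of-means statistic. This is a toy baseline, not one of the kernel-based tests studied in my work, and the function name and statistic are illustrative assumptions:

```python
import random

def two_sample_perm_test(x, y, num_perm=2000, seed=0):
    """Permutation two-sample test with |mean(x) - mean(y)| as the
    test statistic. Returns an estimated p-value for the null
    hypothesis that x and y are drawn from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n = len(x)
    count = 0
    for _ in range(num_perm):
        # Under the null, group labels are exchangeable, so we
        # randomly re-split the pooled sample and recompute the statistic.
        rng.shuffle(pooled)
        stat = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / len(y))
        if stat >= observed:
            count += 1
    # Add-one correction keeps the p-value estimate strictly positive.
    return (count + 1) / (num_perm + 1)
```

A small p-value indicates the two samples likely come from different distributions; more powerful statistics (e.g., kernel-based ones) replace the mean difference in practice.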

    Trustworthy Machine Learning:
  • Defending against Adversarial Attacks: Detecting adversarial attacks (i.e., adversarial attack detection); Training a robust model against future adversarial attacks (i.e., adversarial training).

  • Being Aware of Out-of-distribution Data: Detecting out-of-distribution data; Training a robust model in the open world (e.g., open-set learning, out-of-distribution generalization).

  • Learning/Inference under Distribution Shift (a.k.a., Transfer Learning): Leveraging the knowledge from domains with abundant labels (i.e., source domains)/pre-trained models (i.e., source models) to complete classification/clustering tasks in an unlabeled domain (i.e., target domain), where two domains are different but related.

  • Protecting Data Privacy: Training a model to ensure that the training data will not be obtained by inverting the model (i.e., defending against model-inversion attacks).
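For out-of-distribution awareness, one widely used baseline is the maximum softmax probability (MSP) score: a classifier's confidence tends to be lower on inputs far from its training distribution. The sketch below is illustrative only (the function names and threshold are assumptions, not a method from my papers):

```python
import math

def msp_score(logits):
    """Maximum softmax probability (MSP): a simple baseline score for
    out-of-distribution detection. Lower scores suggest the input may
    be out-of-distribution for the classifier producing the logits."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    return max(exps) / sum(exps)

def is_ood(logits, threshold=0.5):
    """Flag an input as OOD when the classifier's confidence falls
    below a threshold chosen on held-out in-distribution data."""
    return msp_score(logits) < threshold
```

In practice, the threshold is calibrated on in-distribution validation data (e.g., to fix a target false-positive rate), and stronger detectors use richer scores than MSP.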


Research Experience


Education

  • Ph.D. in Computer Science (November 2020)

  • Faculty of Engineering and Information Technology,
    University of Technology Sydney, Sydney, Australia.
    Supervised by Dist. Prof. Jie Lu and Prof. Guangquan Zhang

  • Master of Science (June 2015)

  • School of Mathematics and Statistics, Lanzhou University, Lanzhou, China
    Supervised by Prof. Jianzhou Wang
    Supervised by Prof. Jianzhou Wang

  • Bachelor of Science (June 2013)

  • School of Mathematics and Statistics, Lanzhou University, Lanzhou, China


Sponsors

Australian Research Council CSIRO NSF