Feng Liu (he/him) -- Assistant Professor at The University of Melbourne


Publications


Currently, I research trustworthy machine learning (mainly focusing on transfer learning and adversarial machine learning) and two-sample testing (a fundamental problem in machine learning and statistics). Previously (2013-2016), I researched time series prediction using neural networks. In the following lists, ✉ marks the corresponding author, and equal contribution is indicated where applicable.

[ Selected Conference Papers, Selected Journal Articles, Theses ]


Working Papers

  1. R. Gao, F. Liu, K. Zhou, G. Niu, B. Han and J. Cheng.
    Local Reweighting for Adversarial Training.
    [ arXiv ]


Selected Conference Papers

  1. X. Jiang, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, B. Han.
    Negative Label Guided OOD Detection with Pretrained Vision-Language Models.
    In International Conference on Learning Representations (ICLR 2024), to appear (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ]

  2. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ]

  3. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US (CORE A*).
    [ arXiv ] [ CODE ]

  4. H. Zheng, Q. Wang, Z. Fang, X. Xia, F. Liu, T. Liu, B. Han.
    Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US (CORE A*).
    [ arXiv ] [ CODE ]

  5. Q. Wang, Z. Fang, Y. Zhang, F. Liu, Y. Li, B. Han.
    Learning to Augment Distributions for Out-of-distribution Detection.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US (CORE A*).
    [ arXiv ] [ CODE ]

  6. K. Liu, F. Liu, H. Wang, N. Ma, J. Bu, B. Han.
    Partition Speeds Up Learning Implicit Neural Representations Based on Exponential-Increase Hypothesis.
    In International Conference on Computer Vision (ICCV 2023), Paris, France (CORE A*).
    [ arXiv ] [ CODE ]

  7. R. Dong, F. Liu, H. Chi, T. Liu, M. Gong, G. Niu, M. Sugiyama, B. Han.
    Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation.
    In International Conference on Machine Learning (ICML 2023), Hawaii, US (CORE A*).
    [ arXiv ] [ CODE ]

  8. S. Zhang, F. Liu, J. Yang, Y. Yang, C. Li, B. Han, M. Tan.
    Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score.
    In International Conference on Machine Learning (ICML 2023), Hawaii, US (CORE A*).
    [ arXiv ] [ CODE ]

  9. X. Jiang, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, B. Han.
    Detecting Out-of-distribution Data through In-distribution Class Prior.
    In International Conference on Machine Learning (ICML 2023), Hawaii, US (CORE A*).
    [ OpenReview ] [ CODE ]

  10. T. Cui, Y. Li, K. Chen, J. Bailey, F. Liu.
    Designing Fair AI Systems: Exploring the Interaction of Explainable AI and Task Objectivity on Users’ Fairness Perception.
    In Pacific Asia Conference on Information Systems (PACIS 2023), Nanchang, China.
    [ Link ] [ Social Science Theory ]

  11. Y. Song, T. Cui, F. Liu.
    Designing Fair AI Systems: How Explanation Specificity Influences Users' Perceived Fairness and Trusting Intentions.
    In European Conference on Information Systems (ECIS 2023), Kristiansand, Norway.
    [ Link ] [ Social Science Theory ] [ Best RiP Paper Runner-up Award (2nd) ]

  12. Q. Wang, J. Ye, F. Liu, Q. Dai, M. Kalander, T. Liu, J. Hao, B. Han.
    Out-of-distribution Detection with Implicit Outlier Transformation.
    In International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda (CORE A*).
    [ arXiv ] [ CODE ]

  13. Z. Fang, Y. Li, J. Lu, J. Dong, B. Han, F. Liu.
    Is Out-of-distribution Detection Learnable?
    In Advances in Neural Information Processing Systems (NeurIPS 2022), New Orleans, US (CORE A*).
    [ arXiv ] [ ML Theory ] [ Outstanding Paper Award ] (outstanding papers:acceptance:submissions=13:2672:10411)

  14. Q. Wang, F. Liu, Y. Zhang, J. Zhang, C. Gong, T. Liu, B. Han.
    Watermarking for Out-of-distribution Detection.
    In Advances in Neural Information Processing Systems (NeurIPS 2022), New Orleans, US (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ]

  15. X. Peng, F. Liu, J. Zhang, L. Lan, J. Ye, T. Liu, B. Han.
    Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
    In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), online (CORE A*).
    [ arXiv ] [ CODE ]

  16. R. Gao, J. Wang, K. Zhou, F. Liu, B. Xie, G. Niu, B. Han, J. Cheng.
    Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack.
    In International Conference on Machine Learning (ICML 2022), online (CORE A*).
    [ arXiv ] [ CODE ]

  17. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests.
    In International Conference on Machine Learning (ICML 2022), online (CORE A*).
    [ arXiv ] [ CODE ]

  18. H. Chi, F. Liu, B. Han, W. Yang, L. Lan, T. Liu, G. Niu, M. Zhou and M. Sugiyama.
    Meta Discovery: Learning to Discover Novel Classes given Very Limited Data.
    In International Conference on Learning Representations (ICLR 2022), online, 2022 (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ] (spotlights:acceptance:submissions=176:1095:3391)

  19. F. Liu, W. Xu, J. Lu, D. J. Sutherland.
    Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  20. H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W. Cheung and J. T. Kwok.
    TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ] (spotlights:acceptance:submissions=260:2372:9122)

  21. Q. Wang, F. Liu, B. Han, T. Liu, C. Gong, M. Zhou and M. Sugiyama.
    Probabilistic Margins for Instance Reweighting in Adversarial Training.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  22. R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu and M. Sugiyama.
    Maximum Mean Discrepancy is Aware of Adversarial Attacks.
    In International Conference on Machine Learning (ICML 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  23. Z. Fang, J. Lu, A. Liu, F. Liu, G. Zhang.
    Learning Bounds for Open-Set Learning.
    In International Conference on Machine Learning (ICML 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  24. L. Zhong, Z. Fang, F. Liu, B. Yuan, G. Zhang and J. Lu.
    How does the Combined Risk Affect the Performance of Unsupervised Domain Adaptation Approaches?
    In AAAI Conference on Artificial Intelligence (AAAI 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  25. F. Liu, W. Xu, J. Lu, G. Zhang, A. Gretton and D. J. Sutherland.
    Learning Deep Kernels for Non-Parametric Two-Sample Tests.
    In International Conference on Machine Learning (ICML 2020), online, 2020 (CORE A*).
    [ arXiv ] [ CODE ]

  26. Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang and J. Lu.
    Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation.
    In International Joint Conference on Artificial Intelligence (IJCAI 2020), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  27. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang and M. Sugiyama.
    Butterfly: A Panacea for All Difficulties in Wildly Unsupervised Domain Adaptation.
    In Learning Transferable Skills Workshop on Neural Information Processing Systems (NeurIPS 2019 Workshop), Vancouver, Canada, 2019 (CORE A*).
    [ PDF ] [ CODE ]

  28. F. Liu, G. Zhang and J. Lu.
    A Novel Fuzzy Neural Network for Unsupervised Domain Adaptation in Heterogeneous Scenarios.
    In IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2019), New Orleans, US, 2019 (CORE A).
    [ link ] [ Best Student Paper Award ]


Selected Journal Articles

  1. Z. Fang, Y. Li, F. Liu, B. Han, J. Lu.
    On the Learnability of Out-of-distribution Detection.
    Journal of Machine Learning Research, 2024 (ERA A*).
    [ arXiv ] [ ML Theory ]

  2. H. Chi, W. Yang, F. Liu, L. Lan, B. Han.
    Does Confusion Really Hurt Novel Class Discovery?
    International Journal of Computer Vision, 2024 (ERA A*).
    [ link ] [ CODE ]

  3. Z. Zhong, J. Hou, Z. Yao, L. Dong, F. Liu, J. Yue, T. Wu, J. Zheng, G. Ouyang, C. Yang, J. Song.
    Cancer-Finder: Domain generalization enables general cancer cell annotation in single-cell and spatial transcriptomics.
    Nature Communications, 2024.
    [ link ] [ CODE ]

  4. X. Feng, Z. Yu, H. Fang, H. Jiang, G. Yang, L. Chen, X. Zhou, B. Hu, C. Qin, G. Hu, G. Xing, B. Zhao, Y. Shi, J. Guo, F. Liu, B. Han, B. Zechmann, Y. He, and F. Liu.
    Plantorganelle Hunter: An Effective Deep Learning-based Method for Plant Organelle Phenotyping in Electron Microscopy.
    Nature Plants, 2023.
    [ link ] [ CODE ]

  5. X. Guo, F. Lin, J. Song, S. Di, L. Lin, Z. Zhong, Z. Wu, X. Wang, Y. Zhang, J. Li, H. Zhang, F. Liu, C. Yang, J. Song.
    Deep Transfer Learning Enables Lesion Tracing of Circulating Tumor Cells.
    Nature Communications, 2022.
    [ link ] [ CODE ]

  6. Z. Fang, J. Lu, F. Liu, G. Zhang.
    Semi-supervised Heterogeneous Domain Adaptation: Theory and Algorithms.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 (ERA A*).
    [ link ] [ CODE ]

  7. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang and M. Sugiyama.
    Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation.
    Preprint, 2021.
    [ arXiv ] [ CODE ]

  8. L. Zhong, Z. Fang, F. Liu, B. Yuan, G. Zhang and J. Lu.
    Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation.
    IEEE Transactions on Neural Networks and Learning Systems, 2021 (ERA A*).
    [ arXiv ]

  9. Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang and J. Lu.
    Learning from a Complementary-label Source Domain: Theory and Algorithms.
    IEEE Transactions on Neural Networks and Learning Systems, 2021 (ERA A*).
    [ arXiv ] [ CODE ]

  10. F. Liu, G. Zhang and J. Lu.
    Multi-source Heterogeneous Unsupervised Domain Adaptation via Fuzzy-relation Neural Networks.
    IEEE Transactions on Fuzzy Systems, 2020 (ERA A*).
    [ link ]

  11. F. Liu, G. Zhang and J. Lu.
    Heterogeneous Domain Adaptation: An Unsupervised Approach.
    IEEE Transactions on Neural Networks and Learning Systems, 2020 (ERA A*).
    [ arXiv ]

  12. S. Qin, H. Ding, Y. Wu and F. Liu.
    High-dimensional Sign-constrained Feature Selection and Grouping.
    Annals of the Institute of Statistical Mathematics, Oct., 2020 (ERA A).
    [ link ]

  13. Z. Fang, J. Lu, F. Liu, J. Xuan and G. Zhang.
    Open Set Domain Adaptation: Theoretical Bound and Algorithm.
    IEEE Transactions on Neural Networks and Learning Systems, 2020 (ERA A*).
    [ arXiv ] [ CODE ]

  14. F. Liu, J. Lu and G. Zhang.
    Unsupervised Heterogeneous Domain Adaptation via Shared Fuzzy Equivalence Relations.
    IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3555–3568, 2018 (ERA A*).
    [ link ] [ CODE ]

  15. H. Zuo, J. Lu, G. Zhang and F. Liu.
    Fuzzy Transfer Learning Using an Infinite Gaussian Mixture Model and Active Learning.
    IEEE Transactions on Fuzzy Systems, vol. 27, no. 2, pp. 291–303, 2018 (ERA A*).
    [ link ]


Theses

  1. Feng Liu.
    Towards Realistic Transfer Learning Methods: Theory and Algorithms.
    Doctoral Thesis, Australian Artificial Intelligence Institute, University of Technology Sydney, Australia, November 2020.

  2. Feng Liu.
    Time Series Interpolation and Prediction for the Electricity Market.
    Master's Thesis, School of Mathematics and Statistics, Lanzhou University, China, June 2015.