Feng Liu (he/him) -- Assistant Professor at The University of Melbourne


Publications


Currently, I research trustworthy machine learning (with a main focus on transfer learning and adversarial machine learning) and two-sample testing (a fundamental problem in machine learning and statistics). Previously (2013-2016), I researched time series prediction using neural networks. In the lists below, equal contributions are explicitly marked, and ✉ denotes the corresponding author.

[ Selected Conference Papers, Selected Journal Articles, Theses ]


Working Papers

  1. X. Tian, L. Peng, Z. Zhou, M. Gong, F. Liu.
    Revisit Non-parametric Two-sample Testing as a Semi-supervised Learning Problem.
    [ arXiv ]

  2. R. Gao, X. Wu, S. Luo, C. Han, F. Liu.
    'No' Matters: Out-of-Distribution Detection in Multimodality Long Dialogue.
    [ arXiv ] [ CODE ]

  3. R. Gao, F. Liu, K. Zhou, G. Niu, B. Han and J. Cheng.
    Local Reweighting for Adversarial Training.
    [ arXiv ]


Selected Conference Papers

  1. H. Tian, F. Liu, Z. Zhou, T. Liu, C. Zhang, B. Han.
    Mind the Gap Between Prototypes and Images in Cross-domain Finetuning.
    In Advances in Neural Information Processing Systems (NeurIPS 2024), to appear.
    [ arXiv ] [ CODE ]

  2. C. Cai, Z. Ye, L. Feng, J. Qi, F. Liu.
    Bayesian-Guided Label Mapping for Visual Reprogramming.
    In Advances in Neural Information Processing Systems (NeurIPS 2024), to appear.
    [ arXiv ] [ CODE ] [ Oral ] (61 orals among 4,037 accepted papers from 15,671 submissions)

  3. X. Peng, B. Han, F. Liu, T. Liu, M. Zhou.
    Pseudo-Private Data Guided Model Inversion Attacks.
    In Advances in Neural Information Processing Systems (NeurIPS 2024), to appear.
    [ OpenReview ] [ CODE ]

  4. H. Chi, H. Li, W. Yang, F. Liu, L. Lan, X. Ren, T. Liu, B. Han.
    Unveiling Causal Reasoning in Large Language Models: Reality or Mirage?
    In Advances in Neural Information Processing Systems (NeurIPS 2024), to appear.
    [ OpenReview ] [ CODE ]

  5. D. Hu, H. Fu, J. Guo, L. Peng, T. Chu, F. Liu, T. Liu, M. Gong.
    In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment.
    In Advances in Neural Information Processing Systems (NeurIPS 2024), to appear.
    [ arXiv ] [ CODE ]

  6. H. Tian, F. Liu, T. Liu, B. Du, Y. Cheung, B. Han.
    MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence.
    In International Conference on Machine Learning (ICML 2024), Vienna, Austria.
    [ arXiv ] [ CODE ]

  7. J. Zhang, F. Liu, D. Zhou, J. Zhang, T. Liu.
    Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial Training.
    In International Conference on Machine Learning (ICML 2024), Vienna, Austria.
    [ arXiv ] [ CODE ]

  8. C. Cai, Z. Ye, L. Feng, J. Qi, F. Liu.
    Sample-specific Masks for Visual Reprogramming-based Prompting.
    In International Conference on Machine Learning (ICML 2024), Vienna, Austria.
    [ arXiv ] [ CODE ] [ Spotlight ] (335 spotlights among 2,609 accepted papers from 9,473 submissions)

  9. J. Li, H. Li, S. M. Erfani, L. Feng, J. Bailey, F. Liu.
    Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models.
    In International Conference on Machine Learning (ICML 2024), Vienna, Austria.
    [ arXiv ] [ CODE ]

  10. J. Zhang, Q. Wei, F. Liu, L. Feng.
    Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data.
    In International Conference on Machine Learning (ICML 2024), Vienna, Austria.
    [ arXiv ] [ CODE ] [ Oral ] (144 orals among 2,609 accepted papers from 9,473 submissions)

  11. W. Wang, B. Huang, F. Liu, X. You, T. Liu, K. Zhang, M. Gong.
    Optimal Kernel Choice for Score Function-based Causal Discovery.
    In International Conference on Machine Learning (ICML 2024), Vienna, Austria.
    [ arXiv ] [ CODE ]

  12. X. Jiang, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, B. Han.
    Negative Label Guided OOD Detection with Pretrained Vision-Language Models.
    In International Conference on Learning Representations (ICLR 2024), Vienna, Austria.
    [ arXiv ] [ CODE ] [ Spotlight ]

  13. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US.
    [ arXiv ] [ CODE ] [ Spotlight ]

  14. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US.
    [ arXiv ] [ CODE ]

  15. H. Zheng, Q. Wang, Z. Fang, X. Xia, F. Liu, T. Liu, B. Han.
    Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US.
    [ arXiv ] [ CODE ]

  16. Q. Wang, Z. Fang, Y. Zhang, F. Liu, Y. Li, B. Han.
    Learning to Augment Distributions for Out-of-distribution Detection.
    In Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, US.
    [ arXiv ] [ CODE ]

  17. K. Liu, F. Liu, H. Wang, N. Ma, J. Bu, B. Han.
    Partition Speeds Up Learning Implicit Neural Representations Based on Exponential-Increase Hypothesis.
    In International Conference on Computer Vision (ICCV 2023), Paris, France.
    [ arXiv ] [ CODE ]

  18. R. Dong, F. Liu, H. Chi, T. Liu, M. Gong, G. Niu, M. Sugiyama, B. Han.
    Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation.
    In International Conference on Machine Learning (ICML 2023), Hawaii, US.
    [ arXiv ] [ CODE ]

  19. S. Zhang, F. Liu, J. Yang, Y. Yang, C. Li, B. Han, M. Tan.
    Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score.
    In International Conference on Machine Learning (ICML 2023), Hawaii, US.
    [ arXiv ] [ CODE ]

  20. X. Jiang, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, B. Han.
    Detecting Out-of-distribution Data through In-distribution Class Prior.
    In International Conference on Machine Learning (ICML 2023), Hawaii, US.
    [ OpenReview ] [ CODE ]

  21. T. Cui, Y. Li, K. Chen, J. Bailey, F. Liu.
    Designing Fair AI Systems: Exploring the Interaction of Explainable AI and Task Objectivity on Users’ Fairness Perception.
    In Pacific Asia Conference on Information Systems (PACIS 2023), Nanchang, China.
    [ Link ] [ Social Science Theory ]

  22. Y. Song, T. Cui, F. Liu.
    Designing Fair AI Systems: How Explanation Specificity Influences Users' Perceived Fairness and Trusting Intentions.
    In European Conference on Information Systems (ECIS 2023), Kristiansand, Norway.
    [ Link ] [ Social Science Theory ] [ Best RiP Paper Runner-up Award (2nd) ]

  23. Q. Wang, J. Ye, F. Liu, Q. Dai, M. Kalander, T. Liu, J. Hao, B. Han.
    Out-of-distribution Detection with Implicit Outlier Transformation.
    In International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda.
    [ arXiv ] [ CODE ]

  24. Z. Fang, Y. Li, J. Lu, J. Dong, B. Han, F. Liu.
    Is Out-of-distribution Detection Learnable?
    In Advances in Neural Information Processing Systems (NeurIPS 2022), New Orleans, US.
    [ arXiv ] [ ML Theory ] [ Outstanding Paper Award ] (13 outstanding papers among 2,672 accepted papers from 10,411 submissions)

  25. Q. Wang, F. Liu, Y. Zhang, J. Zhang, C. Gong, T. Liu, B. Han.
    Watermarking for Out-of-distribution Detection.
    In Advances in Neural Information Processing Systems (NeurIPS 2022), New Orleans, US.
    [ arXiv ] [ CODE ] [ Spotlight ]

  26. X. Peng, F. Liu, J. Zhang, L. Lan, J. Ye, T. Liu, B. Han.
    Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
    In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), online.
    [ arXiv ] [ CODE ]

  27. R. Gao, J. Wang, K. Zhou, F. Liu, B. Xie, G. Niu, B. Han, J. Cheng.
    Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack.
    In International Conference on Machine Learning (ICML 2022), online.
    [ arXiv ] [ CODE ]

  28. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests.
    In International Conference on Machine Learning (ICML 2022), online.
    [ arXiv ] [ CODE ]

  29. H. Chi, F. Liu, B. Han, W. Yang, L. Lan, T. Liu, G. Niu, M. Zhou and M. Sugiyama.
    Meta Discovery: Learning to Discover Novel Classes given Very Limited Data.
    In International Conference on Learning Representations (ICLR 2022), online, 2022.
    [ arXiv ] [ CODE ] [ Spotlight ] (176 spotlights among 1,095 accepted papers from 3,391 submissions)

  30. F. Liu, W. Xu, J. Lu, D. J. Sutherland.
    Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021.
    [ arXiv ] [ CODE ]

  31. H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W. Cheung and J. T. Kwok.
    TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021.
    [ arXiv ] [ CODE ] [ Spotlight ] (260 spotlights among 2,372 accepted papers from 9,122 submissions)

  32. Q. Wang, F. Liu, B. Han, T. Liu, C. Gong, M. Zhou and M. Sugiyama.
    Probabilistic Margins for Instance Reweighting in Adversarial Training.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021.
    [ arXiv ] [ CODE ]

  33. R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu and M. Sugiyama.
    Maximum Mean Discrepancy is Aware of Adversarial Attacks.
    In International Conference on Machine Learning (ICML 2021), online, 2021.
    [ arXiv ] [ CODE ]

  34. Z. Fang, J. Lu, A. Liu, F. Liu, G. Zhang.
    Learning Bounds for Open-Set Learning.
    In International Conference on Machine Learning (ICML 2021), online, 2021.
    [ arXiv ] [ CODE ]

  35. L. Zhong, Z. Fang, F. Liu, B. Yuan, G. Zhang and J. Lu.
    How does the Combined Risk Affect the Performance of Unsupervised Domain Adaptation Approaches?
    In AAAI Conference on Artificial Intelligence (AAAI 2021), online, 2021.
    [ arXiv ] [ CODE ]

  36. F. Liu, W. Xu, J. Lu, G. Zhang, A. Gretton and D. J. Sutherland.
    Learning Deep Kernels for Non-parametric Two-Sample Tests.
    In International Conference on Machine Learning (ICML 2020), online, 2020.
    [ arXiv ] [ CODE ]

  37. Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang and J. Lu.
    Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation.
    In International Joint Conference on Artificial Intelligence (IJCAI 2020), online, 2021.
    [ arXiv ] [ CODE ]

  38. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang and M. Sugiyama.
    Butterfly: A Panacea for All Difficulties in Wildly Unsupervised Domain Adaptation.
    In the Learning Transferable Skills Workshop at Neural Information Processing Systems (NeurIPS 2019 Workshop), Vancouver, Canada, 2019.
    [ PDF ] [ CODE ]

  39. F. Liu, G. Zhang and J. Lu.
    A Novel Fuzzy Neural Network for Unsupervised Domain Adaptation in Heterogeneous Scenarios.
    In IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2019), New Orleans, US, 2019.
    [ link ] [ Best Student Paper Award ]


Selected Journal Articles

  1. Z. Fang, Y. Li, F. Liu, B. Han, J. Lu.
    On the Learnability of Out-of-distribution Detection.
    Journal of Machine Learning Research, 2024.
    [ arXiv ] [ ML Theory ]

  2. H. Chi, W. Yang, F. Liu, L. Lan, T. Qin, B. Han.
    Does Confusion Really Hurt Novel Class Discovery?
    International Journal of Computer Vision, 2024.
    [ link ] [ CODE ]

  3. Z. Zhong, J. Hou, Z. Yao, L. Dong, F. Liu, J. Yue, T. Wu, J. Zheng, G. Ouyang, C. Yang, J. Song.
    Cancer-Finder: Domain generalization enables general cancer cell annotation in single-cell and spatial transcriptomics.
    Nature Communications, 2024.
    [ link ] [ CODE ]

  4. X. Feng, Z. Yu, H. Fang, H. Jiang, G. Yang, L. Chen, X. Zhou, B. Hu, C. Qin, G. Hu, G. Xing, B. Zhao, Y. Shi, J. Guo, F. Liu, B. Han, B. Zechmann, Y. He, and F. Liu.
    Plantorganelle Hunter: An Effective Deep Learning-based Method for Plant Organelle Phenotyping in Electron Microscopy.
    Nature Plants, 2023.
    [ link ] [ CODE ]

  5. X. Guo, F. Lin, J. Song, S. Di, L. Lin, Z. Zhong, Z. Wu, X. Wang, Y. Zhang, J. Li, H. Zhang, F. Liu, C. Yang, J. Song.
    Deep Transfer Learning Enables Lesion Tracing of Circulating Tumor Cells.
    Nature Communications, 2022.
    [ link ] [ CODE ]

  6. Z. Fang, J. Lu, F. Liu, G. Zhang.
    Semi-supervised Heterogeneous Domain Adaptation: Theory and Algorithms.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
    [ link ] [ CODE ]

  7. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang and M. Sugiyama.
    Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation.
    Preprint, 2021.
    [ arXiv ] [ CODE ]

  8. L. Zhong, Z. Fang, F. Liu, B. Yuan, G. Zhang and J. Lu.
    Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation.
    IEEE Transactions on Neural Networks and Learning Systems, 2021.
    [ arXiv ]

  9. Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang and J. Lu.
    Learning from a Complementary-label Source Domain: Theory and Algorithms.
    IEEE Transactions on Neural Networks and Learning Systems, 2021.
    [ arXiv ] [ CODE ]

  10. F. Liu, G. Zhang and J. Lu.
    Multi-source Heterogeneous Unsupervised Domain Adaptation via Fuzzy-relation Neural Networks.
    IEEE Transactions on Fuzzy Systems, 2020.
    [ link ]

  11. F. Liu, G. Zhang and J. Lu.
    Heterogeneous domain adaptation: An unsupervised approach.
    IEEE Transactions on Neural Networks and Learning Systems, 2020.
    [ arXiv ]

  12. S. Qin, H. Ding, Y. Wu and F. Liu.
    High-dimensional sign-constrained feature selection and grouping.
    Annals of the Institute of Statistical Mathematics, October 2020.
    [ link ]

  13. Z. Fang, J. Lu, F. Liu, J. Xuan and G. Zhang.
    Open set domain adaptation: Theoretical bound and algorithm.
    IEEE Transactions on Neural Networks and Learning Systems, 2020.
    [ arXiv ] [ CODE ]

  14. F. Liu, J. Lu and G. Zhang.
    Unsupervised heterogeneous domain adaptation via shared fuzzy equivalence relations.
    IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3555–3568, 2018.
    [ link ] [ CODE ]

  15. H. Zuo, J. Lu, G. Zhang and F. Liu.
    Fuzzy transfer learning using an infinite Gaussian mixture model and active learning.
    IEEE Transactions on Fuzzy Systems, vol. 27, no. 2, pp. 291–303, 2018.
    [ link ]


Theses

  1. Feng Liu.
    Towards Realistic Transfer Learning Methods: Theory and Algorithms.
    Doctoral Thesis, Australian Artificial Intelligence Institute, University of Technology Sydney, Australia, November 2020.

  2. Feng Liu.
    Time Series Interpolation and Prediction for the Electricity Market.
    Master's Thesis, School of Mathematics and Statistics, Lanzhou University, China, June 2015.