Feng Liu (Assistant Professor at The University of Melbourne)


Publications


Currently, I research trustworthy machine learning (with a focus on transfer learning and adversarial machine learning) and two-sample testing (a fundamental problem in machine learning and statistics). Previously (2013-2016), I researched time series prediction using neural networks. In the following, represents equal contribution, and * represents the corresponding author.

[ Selected Conference Papers, Selected Journal Articles, Theses ]


Working Papers

  1. R. Gao, F. Liu, K. Zhou, G. Niu, B. Han and J. Cheng.
    Local Reweighting for Adversarial Training.
    [ arXiv ]

  2. C. Zhou, F. Liu, C. Gong, T. Liu, B. Han and W. Cheung.
    KRADA: Known-region-aware Domain Alignment for Open World Semantic Segmentation.
    [ arXiv ]


Selected Conference Papers (including workshops)

  1. X. Peng, F. Liu, J. Zhang, L. Lan, J. Ye, T. Liu, B. Han.
    Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
    In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), to appear (CORE A*).
    [ arXiv ] [ CODE ]

  2. R. Gao, J. Wang, K. Zhou, F. Liu, B. Xie, G. Niu, B. Han, J. Cheng.
    Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack.
    In International Conference on Machine Learning (ICML 2022), to appear (CORE A*).
    [ arXiv ] [ CODE ]

  3. X. Xu, J. Zhang, F. Liu, M. Sugiyama, M. Kankanhalli.
    Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests.
    In International Conference on Machine Learning (ICML 2022), to appear (CORE A*).
    [ arXiv ] [ CODE ]

  4. H. Chi, F. Liu, B. Han, W. Yang, L. Lan, T. Liu, G. Niu, M. Zhou and M. Sugiyama.
    Meta Discovery: Learning to Discover Novel Classes given Very Limited Data.
    In International Conference on Learning Representations (ICLR 2022), to appear (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ] (spotlights:acceptances:submissions = 176:1095:3391)

  5. F. Liu, W. Xu, J. Lu, D. J. Sutherland.
    Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  6. H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W. Cheung and J. T. Kwok.
    TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ] [ Spotlight ] (spotlights:acceptances:submissions = 260:2372:9122)

  7. Q. Wang, F. Liu, B. Han, T. Liu, C. Gong, M. Zhou and M. Sugiyama.
    Probabilistic Margins for Instance Reweighting in Adversarial Training.
    In Advances in Neural Information Processing Systems (NeurIPS 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  8. R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu and M. Sugiyama.
    Maximum Mean Discrepancy is Aware of Adversarial Attacks.
    In International Conference on Machine Learning (ICML 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  9. Z. Fang, J. Lu, A. Liu, F. Liu, G. Zhang.
    Learning Bounds for Open-Set Learning.
    In International Conference on Machine Learning (ICML 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  10. L. Zhong, Z. Fang, F. Liu, B. Yuan, G. Zhang and J. Lu.
    How does the Combined Risk Affect the Performance of Unsupervised Domain Adaptation Approaches?
    In AAAI Conference on Artificial Intelligence (AAAI 2021), online, 2021 (CORE A*).
    [ arXiv ] [ CODE ]

  11. F. Liu, W. Xu, J. Lu, G. Zhang, A. Gretton and D. J. Sutherland.
    Learning Deep Kernels for Non-Parametric Two-Sample Tests.
    In International Conference on Machine Learning (ICML 2020), online, 2020 (CORE A*).
    [ arXiv ] [ CODE ]

  12. Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang and J. Lu.
    Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation.
    In International Joint Conference on Artificial Intelligence (IJCAI 2020), online, 2020 (CORE A*).
    [ arXiv ] [ CODE ]

  13. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang and M. Sugiyama.
    Butterfly: A Panacea for All Difficulties in Wildly Unsupervised Domain Adaptation.
    In the Learning Transferable Skills Workshop at Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, December 8-14, 2019 (CORE A*).
    [ PDF ] [ CODE ]

  14. F. Liu, G. Zhang and J. Lu.
    A Novel Fuzzy Neural Network for Unsupervised Domain Adaptation in Heterogeneous Scenarios.
    In IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2019), New Orleans, USA, June 23-26, 2019 (CORE A).
    [ link ] [ Best Student Paper Award ]


Selected Journal Articles

  1. Z. Fang, J. Lu, F. Liu, G. Zhang.
    Semi-supervised Heterogeneous Domain Adaptation: Theory and Algorithms.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 (ERA A*).

  2. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang and M. Sugiyama.
    Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation.
    Preprint, 2021.
    [ arXiv ] [ CODE ]

  3. L. Zhong, Z. Fang, F. Liu, B. Yuan, G. Zhang and J. Lu.
    Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation.
    IEEE Transactions on Neural Networks and Learning Systems, 2021 (ERA A*).
    [ arXiv ]

  4. Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang and J. Lu.
    Learning from a Complementary-label Source Domain: Theory and Algorithms.
    IEEE Transactions on Neural Networks and Learning Systems, 2021 (ERA A*).
    [ arXiv ] [ CODE ]

  5. F. Liu, G. Zhang and J. Lu.
    Multi-source Heterogeneous Unsupervised Domain Adaptation via Fuzzy-relation Neural Networks.
    IEEE Transactions on Fuzzy Systems, 2020 (ERA A*).
    [ link ]

  6. F. Liu, G. Zhang and J. Lu.
    Heterogeneous Domain Adaptation: An Unsupervised Approach.
    IEEE Transactions on Neural Networks and Learning Systems, 2020 (ERA A*).
    [ arXiv ]

  7. S. Qin, H. Ding, Y. Wu and F. Liu.
    High-dimensional Sign-constrained Feature Selection and Grouping.
    Annals of the Institute of Statistical Mathematics, Oct., 2020 (ERA A).
    [ link ]

  8. Z. Fang, J. Lu, F. Liu, J. Xuan and G. Zhang.
    Open Set Domain Adaptation: Theoretical Bound and Algorithm.
    IEEE Transactions on Neural Networks and Learning Systems, 2020 (ERA A*).
    [ arXiv ] [ CODE ]

  9. F. Liu, J. Lu and G. Zhang.
    Unsupervised Heterogeneous Domain Adaptation via Shared Fuzzy Equivalence Relations.
    IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3555–3568, 2018 (ERA A*).
    [ link ] [ CODE ]

  10. H. Zuo, J. Lu, G. Zhang and F. Liu.
    Fuzzy Transfer Learning Using an Infinite Gaussian Mixture Model and Active Learning.
    IEEE Transactions on Fuzzy Systems, vol. 27, no. 2, pp. 291–303, 2018 (ERA A*).
    [ link ]


Theses

  1. Feng Liu.
    Towards Realistic Transfer Learning Methods: Theory and Algorithms.
    Doctoral Thesis, Australian Artificial Intelligence Institute, University of Technology Sydney, Australia, November 2020.

  2. Feng Liu.
    Time Series Interpolation and Prediction for the Electricity Market.
    Master's Thesis, School of Mathematics and Statistics, Lanzhou University, China, June 2015.