About me

I am a Postdoctoral Research Associate at the Department of Computer Science, University of Virginia. My research interest is Responsible AI. In particular, I have investigated the interplay between privacy and fairness in learning and decision-making problems across different settings: privacy/fairness constraints, semi-supervised or supervised learning, and local or distributed deployment.

I love to play soccer and badminton when I have free time.

Short bio

I went to Hanoi University of Science and Technology, Vietnam, for my bachelor's degree and Rutgers University, USA, for my master's. I spent a wonderful time working in industry at Trusting Social and Amazon Alexa.


News

  • [Sep, 2023] Our paper “Data minimization at inference time” was accepted to NeurIPS 2023.
  • [May, 2023] I will serve as a Program Committee member for NeurIPS 2023.

  • [Mar, 2023] Two papers accepted at IJCAI 2023.
  • [Sep, 2022] Our paper “Pruning has a disparate impact on model accuracy” was accepted to NeurIPS 2022. It was nominated for the best paper award.
  • [Aug, 2022] I will serve as a Program Committee member for AAAI 2023 and AAMAS 2023.
  • [Jul, 2022] Congrats! We won the 2022 Caspar Bowden PET Award for Outstanding Research in Privacy. Link
  • [Jun, 2022] I will give a talk on “Differential privacy and fairness for learning and decision problems” at MILA (Quebec Institute for Learning Algorithms).
  • [May, 2022] Check out our new paper “Pruning has a disparate impact on model accuracy” submitted to NeurIPS 2022. See here
  • [Apr, 2022]
    • Our paper on scalable private fair models was submitted to ECML PKDD 2022.

    • Our survey paper “Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey” was accepted to IJCAI 2022 Survey track. See you in Austria this summer.

  • [Feb, 2022] New survey paper on the intersection between fairness and differential privacy submitted to IJCAI 2022. Check out the paper here

    We review the conditions under which privacy and fairness may have aligned or contrasting goals, analyze how and why DP may exacerbate unfairness in decision problems and learning tasks, and describe available mitigation measures for the fairness issues arising in DP systems.

  • [Oct, 2021] A new preprint titled A Fairness Analysis on Private Aggregation of Teacher Ensembles is available. Check out the paper here

    We address unfairness in the PATE framework from both the data-property and model-training angles. An effective mitigation solution is proposed to improve fairness.

  • [Sep, 2021] Paper titled Differentially Private Empirical Risk Minimization under the Fairness Lens is accepted to NeurIPS 2021.
  • [May, 2021] Paper titled Decision Making with Differential Privacy under a Fairness Lens is accepted to IJCAI 2021.
  • [Jan, 2021] Our team won third place in the NIST Differential Privacy Temporal Map Challenge. Congrats! Check out the media coverage here
  • [Dec, 2020] Paper titled Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach is accepted to AAAI 2021.
  • [Nov, 2020] Our paper titled Privacy-preserving and accountable multi-agent learning is accepted to AAMAS 2021.
  • [Jun, 2020] Our paper titled Lagrangian Duality for Constrained Deep Learning is accepted to ECML 2020.

Contact information

The best way to contact me is via email: cutran@syr.edu

Center for Science and Technology 4-288, 111 College Pl, Syracuse, NY 13244