PhD candidate Nils Lukas receives 2024 Mathematics Doctoral Prize’s top honour

Thursday, April 25, 2024

Nils Lukas, a PhD candidate at the Cheriton School of Computer Science, is the first-place winner of the 2024 Faculty of Mathematics Doctoral Prize. Now in its sixth year, this prestigious award recognizes and celebrates the achievements of top doctoral students in the Faculty of Mathematics. As a first-place recipient, Nils will receive $1,500 and is nominated for the university-wide Governor General’s Gold Medal, which is awarded at spring convocation.

“Congratulations to Nils on receiving this prestigious and much-deserved recognition,” said Raouf Boutaba, University Professor and Director of the Cheriton School of Computer Science. “The research he has conducted on trustworthy, secure and privacy-preserving machine learning and published at the top international conferences in these fields is not only academically rigorous but also hugely significant for industry and society.”

Nils works on the most pressing security and privacy problems in machine learning, explains his advisor, Professor Florian Kerschbaum: untrustworthy data, untrustworthy providers, and untrustworthy users.

“Across these areas, Nils has an outstanding publication record, at a level of academic excellence rarely seen even among applicants for faculty positions,” Professor Kerschbaum said. “Since joining my group, Nils has published five first-author papers and another as the supervisor of an undergraduate student, all of them in top venues, with several other papers in submission. His published works include a paper presented at the IEEE Symposium on Security and Privacy in 2022 and another in 2023, a paper presented at the USENIX Security Symposium in 2023, and a paper at the International Conference on Learning Representations in 2021, followed by two more at that venue in 2024.”

[Photo: Nils Lukas in the Davis Centre]

Nils Lukas, a PhD candidate in the Cheriton School of Computer Science’s Cryptography, Security, and Privacy (CrySP) group, focuses on trustworthy machine learning. He has an MSc with distinction in Computer Science from RWTH Aachen University in Germany.

His research explores the threats that arise when deploying deep neural networks from three perspectives: (1) privacy when the model is trained on private data, (2) reliability when the model’s training data cannot be trusted, and (3) model misuse when the users cannot be trusted. His work includes studying privacy attacks against large language models fine-tuned on private datasets, developing defences against data poisoning, and creating multiple methods for controlling model misuse.

Nils received a prestigious 2022–24 Cheriton Graduate Scholarship. Additionally, his research has won two notable poster competitions: the 2023 Cheriton Research Symposium poster competition and the 2019 Cybersecurity and Privacy Institute poster competition.

More about Nils Lukas’s research

Generative AI models have advanced rapidly in recent years and hold great promise to transform businesses and society, but they also pose novel trust, security and privacy challenges. The research Nils conducts is helping to reduce the risks of these technologies.

In his paper titled Analyzing Leakage of Personally Identifiable Information in Language Models, published at the IEEE Symposium on Security & Privacy in 2023 with colleagues from Microsoft Research, Nils introduced novel attack algorithms capable of extracting ten times more personally identifiable information than existing attacks. This work revealed that standard sentence-level differentially private training, while largely reducing the risk of disclosing personally identifiable information, still leaks about 3% of such information. The significance of this work is that it is one of the first comprehensive studies of the risk of personally identifiable information memorization in language models, and it exposed the subtle insufficiency of sentence-level differentially private training for protecting record-level personally identifiable information. Nils has released his code publicly so that others can reproduce his results and conduct further research.
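
To make the general attack recipe concrete, here is a minimal, hypothetical Python sketch (not Nils’s released code): it samples text from a causal language model and scans the generations with a toy regex detector. The model name "gpt2" stands in for a model fine-tuned on private data, and real attacks use far stronger PII taggers and likelihood-based candidate ranking.

```python
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" stands in for a language model fine-tuned on private data.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy PII detector; real attacks use much stronger PII taggers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_pii_candidates(num_samples=50, max_new_tokens=64):
    """Sample unconditioned generations and collect strings that look
    like PII. Strings the model regenerates repeatedly are more likely
    to be memorized from the fine-tuning data."""
    counts = {}
    prompt = torch.tensor([[tokenizer.bos_token_id]])
    for _ in range(num_samples):
        out = model.generate(prompt, do_sample=True, top_k=50,
                             max_new_tokens=max_new_tokens,
                             pad_token_id=tokenizer.eos_token_id)
        text = tokenizer.decode(out[0], skip_special_tokens=True)
        for match in EMAIL_RE.findall(text):
            counts[match] = counts.get(match, 0) + 1
    # Most frequently regenerated candidates first.
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(extract_pii_candidates(num_samples=5))
```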

In SoK: How Robust is Image Classification Deep Neural Network Watermarking?, a paper with Edward Jiang, Xinda Li and Florian Kerschbaum presented at the IEEE Symposium on Security & Privacy in 2022, Nils conducted a systematic evaluation of the robustness of existing watermarking schemes that aim to verify the provenance of machine learning models and to prevent misuse of AI-generated content. Nils found that none of the surveyed watermarking schemes can withstand all removal attacks, demonstrating the importance of a thorough evaluation framework.
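
The evaluation pattern behind such a systematization can be illustrated with a toy harness. The following hypothetical PyTorch example uses a synthetic task and a simple backdoor-style watermark rather than any scheme from the paper: it embeds a watermark, runs a fine-tuning removal attack, then measures how much of the watermark survives and at what cost to task accuracy.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 20

def make_data(n=512):
    x = torch.randn(n, d)
    y = (x[:, 0] > 0).long()              # simple synthetic task
    return x, y

def train(model, x, y, epochs=300, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))

# Backdoor-style watermark: random "trigger" inputs with labels only
# the model owner knows, trained in alongside the real task.
wm_x = torch.randn(32, d)
wm_y = torch.randint(0, 2, (32,))
clean_x, clean_y = make_data()
train(model, torch.cat([clean_x, wm_x]), torch.cat([clean_y, wm_y]))
print("watermark retention before attack:", accuracy(model, wm_x, wm_y))

# Removal attack: the adversary fine-tunes on fresh clean data only.
atk_x, atk_y = make_data()
train(model, atk_x, atk_y)
print("watermark retention after attack: ", accuracy(model, wm_x, wm_y))
print("task accuracy after attack:       ", accuracy(model, atk_x, atk_y))
```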

In Deep Neural Network Fingerprinting by Conferrable Adversarial Examples, a paper with Yuxuan Zhang and Florian Kerschbaum presented at ICLR 2021, Nils developed a fingerprinting method for deep neural networks aimed at detecting the surrogate models an adversary may build by querying a proprietary source model. Nils proposed a new method to generate conferrable adversarial examples and, importantly, demonstrated their superior effectiveness and robustness compared with previous fingerprinting and watermarking methods.
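
The verification side of fingerprinting can be sketched in a few lines. In this hypothetical PyTorch toy, plain FGSM stands in for the conferrable-example optimization (real conferrable examples are specifically optimized to transfer to surrogates but not to independently trained models); a suspect model is flagged by its label agreement with the source on the fingerprint set.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 20

def make_model():
    return nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))

def train(model, x, y, epochs=300, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

x = torch.randn(2048, d)
y = (x[:, :2].sum(dim=1) > 0).long()
source = make_model()
train(source, x, y)

# Surrogate: trained on the source's predicted labels (model stealing).
queries = torch.randn(2048, d)
surrogate = make_model()
train(surrogate, queries, source(queries).argmax(dim=1))

# One FGSM ascent step on the source model to build a fingerprint set;
# a stand-in for the paper's conferrable-example optimization.
fp = torch.randn(64, d, requires_grad=True)
labels = source(fp).argmax(dim=1)
nn.functional.cross_entropy(source(fp), labels).backward()
fingerprint = (fp + 0.5 * fp.grad.sign()).detach()
fp_labels = source(fingerprint).argmax(dim=1)

def agreement(suspect):
    """High agreement suggests the suspect derives from the source."""
    return (suspect(fingerprint).argmax(dim=1) == fp_labels).float().mean().item()

print("surrogate agreement:", agreement(surrogate))
```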

In PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators, a paper with his advisor presented at the 32nd USENIX Security Symposium in 2023, Nils explored image generators, such as those used to create deepfakes. He proposed pivotal tuning watermarking to prevent misuse of image generators, achieving a speedup of three orders of magnitude while obviating the need for any training data. Moreover, Nils revealed an intrinsic trade-off between the undetectability and robustness of watermarks.
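
The core mechanism can be sketched as follows. This hypothetical PyTorch toy, which is not the paper’s implementation, fine-tunes a copy of a pre-trained generator so that a fixed decoder (acting as the watermarking key) recovers a chosen message from every output, while a fidelity term keeps outputs close to the original generator. Notably, only samples drawn from the generator itself are needed, not training data.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
latent, out_dim, msg_bits = 8, 32, 16

# Tiny stand-in for a pre-trained image generator.
pretrained = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                           nn.Linear(64, out_dim))

# Fixed watermark decoder: the owner's secret key. Frozen.
decoder = nn.Linear(out_dim, msg_bits)
for p in decoder.parameters():
    p.requires_grad_(False)

message = torch.randint(0, 2, (msg_bits,)).float()

generator = copy.deepcopy(pretrained)   # the copy we watermark
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(64, latent)
    x = generator(z)
    # Watermark loss: decoder must read the message off every output.
    wm_loss = nn.functional.binary_cross_entropy_with_logits(
        decoder(x), message.expand(64, -1))
    # Fidelity loss: stay close to the original generator's outputs.
    fidelity = nn.functional.mse_loss(x, pretrained(z))
    loss = wm_loss + 10.0 * fidelity
    opt.zero_grad(); loss.backward(); opt.step()

z = torch.randn(256, latent)
bits = (decoder(generator(z)) > 0).float()
print("watermark bit accuracy:", (bits == message).float().mean().item())
```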

In Leveraging Optimization for Adaptive Attacks on Image Watermarks, a paper with Abdulrahman Diaa, Lucas Fenaux, and Florian Kerschbaum presented at ICLR 2024, the authors continued the investigation of image watermarking attacks through the lens of adaptive, learnable attacks. The core idea is that an adaptive attacker who knows the watermarking algorithm can create their own surrogate keys and use them to optimize the parameters of a watermark removal attack. Such adaptive, learnable attacks undermined the robustness of all five state-of-the-art watermarking methods tested, while requiring only limited computational resources. Nils has presented his watermarking results to Google, with the goal of limiting misuse of its image generators and combating misinformation.
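
The adaptive-attacker idea can also be sketched briefly. In this hypothetical PyTorch toy, the attacker instantiates their own surrogate decoder key and optimizes a small perturbation that drives the decoded bits toward chance level while staying small; the paper’s finding is that attacks optimized against such surrogate keys transfer to the defender’s real key.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
out_dim, msg_bits = 32, 16

# The attacker's own surrogate key, built by re-running the (public)
# watermarking algorithm. Frozen during the attack.
surrogate_key = nn.Linear(out_dim, msg_bits)
for p in surrogate_key.parameters():
    p.requires_grad_(False)

x = torch.randn(64, out_dim)             # stand-in watermarked samples
target = torch.full((64, msg_bits), 0.5)  # push bits to chance level

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(500):
    logits = surrogate_key(x + delta)
    # Drive decoded bits toward 50/50 (no recoverable message) while
    # penalizing large perturbations to preserve output quality.
    wm_loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
    loss = wm_loss + 0.1 * delta.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("mean |decoded logit| after attack:",
      surrogate_key(x + delta).abs().mean().item())
print("mean perturbation size:", delta.abs().mean().item())
```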
