Machine Learning for Programming (Seminar)
Quick Facts
Organizer | Michael Pradel
Teaching assistants | TBD
Course type | Advanced seminar
Language | English
Ilias | Ilias course (for discussions, etc.)
Place | Universitätsstr. 38, room 0.453
Content
This seminar is about recent research on improving software and increasing developer productivity by using machine learning, including deep learning. We will discuss research papers that present novel techniques for improving software reliability and security, such as program analyses to detect bugs, to complete partial code, or to de-obfuscate code, based on machine learning models of code.
After the initial kick-off meeting, each student is assigned a research paper. Each student presents his or her paper in a talk during the weekly meetings. Moreover, each student prepares a term paper that summarizes the original research paper and discusses it in the context of closely related work.
Organization
The course will be classroom-first, i.e., to the extent possible, all activities will take place in a physical classroom or in in-person meetings.
Schedule
This is a preliminary schedule and may be subject to change.
Date | Event
Oct 19, 2023, 2:00pm | Kick-off meeting
Oct 26, 2023, 11:59pm | Deadline for choosing topics
Nov 9, 2023, 2:00pm | Talk by Rahul Chandra (topic 6)
Nov 23, 2023, 2:00pm | Talks by Julijan Katic (topic 13) and Marcel Wurm (topic 4)
Nov 30, 2023, 2:00pm | Talk by Max Buchholz (topic 7)
Dec 14, 2023, 2:00pm | Talks by David Augustat (topic 10) and Matthias Brehmer (topic 12)
Dec 21, 2023, 2:00pm | Talks by Anusha Agnihotri (topic 15) and Jan Bothmann (topic 8)
Jan 12, 2024, 11:59pm | Deadline for drafts of term papers
Feb 9, 2024, 11:59pm | Deadline for term papers
Topics
The following research papers are available for discussion. If no link is provided, use Google Scholar to find a copy of the paper. After the kick-off meeting, each student is assigned one paper for presentation.
[1] | Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. [ arXiv | http ] |
[2] | Naman Jain, Skanda Vaidyanath, Arun Iyer, Nagarajan Natarajan, Suresh Parthasarathy, Sriram Rajamani, and Rahul Sharma. Jigsaw: Large language models meet program synthesis. In ICSE, 2022. [ http ] |
[3] | Akshay Utture, Shuyang Liu, Christian Gram Kalhauge, and Jens Palsberg. Striking a balance: Pruning false-positives from static call graphs. In ICSE, 2022. [ http ] |
[4] | Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. CoRR, abs/2204.05999, 2022. [ DOI | arXiv | http ] |
[5] | Elizabeth Dinella, Gabriel Ryan, Todd Mytkowicz, and Shuvendu K. Lahiri. TOGA: A neural method for test oracle generation. In ICSE, 2022. [ DOI | http ] |
[6] | Disha Shrivastava, Hugo Larochelle, and Daniel Tarlow. Repository-level prompt generation for large language models of code. In International Conference on Machine Learning. PMLR, 2023. [ .html ] |
[7] | Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan. Impact of code language models on automated program repair. In ICSE, 2023. [ http ] |
[8] | Sungmin Kang, Juyeon Yoon, and Shin Yoo. Large language models are few-shot testers: Exploring LLM-based general bug reproduction. In ICSE, 2023. [ http ] |
[9] | Noor Nashid, Mifta Sintaha, and Ali Mesbah. Retrieval-based prompt selection for code-related few-shot learning. In ICSE, 2023. [ http ] |
[10] | Chunqiu Steven Xia and Lingming Zhang. Keep the conversation going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT, 2023. [ arXiv ] |
[11] | Sungmin Kang, Bei Chen, Shin Yoo, and Jian-Guang Lou. Explainable automated debugging via large language model-driven scientific debugging, 2023. [ arXiv ] |
[12] | Caroline Lemieux, Jeevana Priya Inala, Shuvendu K. Lahiri, and Siddhartha Sen. CodaMosa: Escaping coverage plateaus in test generation with pre-trained large language models. In ICSE, 2023. [ http ] |
[13] | Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug, 2023. [ arXiv ] |
[14] | Shraddha Barke, Michael B. James, and Nadia Polikarpova. Grounded copilot: How programmers interact with code-generating models. Proc. ACM Program. Lang., 7(OOPSLA1), 2023. [ DOI | http ] |
[15] | Fengjuan Guo, Yu Wang, and Ke Wang. Discrete adversarial attack to models of code. In PLDI, 2023. [ http ] |
[16] | Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Moustafa-Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. StarCoder: May the source be with you! CoRR, abs/2305.06161, 2023. [ DOI | arXiv | http ] |
Template for Term Paper
Please use this LaTeX template for writing your term paper. The page limit is six pages (strict).
Grading
Grading is based on the term paper, the talk, and active participation during the meetings.