For future announcements, join our mailing list.


Table of contents:

- Calendar
- Talks
  - Upcoming
  - Past

Calendar

https://calendar.google.com/calendar/u/0/embed?src=8a656541004ceea17896a9a3f8815ca36bff0a62b9cc86e7b8e4bfa737608217@group.calendar.google.com&ctz=America/New_York

Talks

Upcoming

TBD.

Past

Xinyun Chen (Google DeepMind) - August 4, 2023

https://youtu.be/2Fr77hLrUwc

Title: Leveraging Execution Feedback for Program Synthesis with Large Language Models

Abstract: Recent large-scale language models have demonstrated an impressive ability to generate code and can now complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond translating instructions into code. For example, competitive programming problems, which require an understanding of algorithms and complex natural language, remain extremely challenging. In this talk, I will discuss two lines of work on improving large language models (LLMs) for code generation by leveraging code execution feedback. First, I will discuss AlphaCode, which achieved on average a ranking in the top 54.3% in several Codeforces competitions with more than 5,000 participants. In the second part, I will discuss our recent work on Self-Debugging, which teaches LLMs to debug their own predicted code. In particular, we demonstrate that Self-Debugging can teach LLMs to perform rubber duck debugging; i.e., without any feedback on code correctness or error messages, the model is able to identify its mistakes by explaining the generated code line by line. Self-Debugging achieves state-of-the-art performance on several code generation tasks, including text-to-SQL generation, code translation, and synthesizing short Python functions from text descriptions. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency and can match or outperform baseline models that generate more than 10x as many candidate programs.
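
The self-debugging loop described in the abstract can be pictured as a generate, explain, revise cycle. Below is a minimal sketch in Python, not the authors' implementation; llm_generate and run_tests are hypothetical stand-ins for a language model call and an execution harness.

    # Sketch of a self-debugging loop. `llm_generate` and `run_tests` are
    # hypothetical stand-ins for an LLM call and an execution harness.
    def self_debug(task, llm_generate, run_tests, max_rounds=3):
        code = llm_generate(f"Write a Python function for: {task}")
        for _ in range(max_rounds):
            passed, feedback = run_tests(code)  # execution feedback, if available
            if passed:
                break
            # Rubber-duck step: the model explains its own code line by line,
            # then revises it in light of that explanation and any feedback.
            explanation = llm_generate(f"Explain this code line by line:\n{code}")
            code = llm_generate(
                f"Task: {task}\nCode:\n{code}\nExplanation:\n{explanation}\n"
                f"Feedback: {feedback}\nRevise the code to fix any mistakes."
            )
        return code

In the feedback-free variant described in the abstract, run_tests would report only generic success or failure, leaving the line-by-line explanation as the model's main debugging signal.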

Michael Pradel (University of Stuttgart) - June 26, 2023

https://youtu.be/YIYlkCbIxqc

Title: Neural Software Analysis: Recent Advances on Types, Bugs, and Executions

Abstract: Neural software analysis is an emerging approach for addressing difficult program analysis problems by exploiting the predictive power of deep neural networks. This talk gives an overview of our recent advances in neural software analysis, focusing on three kinds of problems. First, we present a neural program repair technique that fixes static type errors in Python. Second, we present work on neural bug detection, which detects incorrect code by predicting whether two statements are inconsistent with each other. Finally, we present work on learning-guided execution, which enables the execution of incomplete code snippets by filling in missing information via queries to a neural model.
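
The learning-guided execution idea can be pictured as an execute-and-fill loop: run the incomplete snippet and, whenever an undefined name is hit, ask a model for a plausible value and retry. The following is a minimal illustrative sketch, not the tool from the talk; predict_value stands in for the learned predictor, and NameError.name requires Python 3.10+.

    # Sketch of learning-guided execution: run an incomplete snippet and
    # fill undefined names with model-predicted values on each retry.
    def guided_exec(snippet, predict_value, max_fills=10):
        env = {}
        for _ in range(max_fills):
            try:
                exec(snippet, env)  # re-runs from the top after each fill
                return env
            except NameError as err:
                # err.name (Python 3.10+) is the undefined identifier.
                env[err.name] = predict_value(err.name)
        raise RuntimeError("too many undefined names")

    def stub_predictor(name):
        # Trivial placeholder for a learned value predictor.
        return {"items": [1, 2, 3], "n": 3}.get(name, 0)

    guided_exec("total = sum(items) * n\nprint(total)", stub_predictor)  # prints 18

Re-executing from the top is the simplest possible strategy and only suits side-effect-free snippets; a real system would resolve missing values during a single execution.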

Jingxuan He (ETH) - May 31, 2023

https://youtu.be/YXUmEcIiyK0