Is State-of-the-Art LLM A Silver Bullet to Automated Software Engineering?
Abstract: Recently, Large Language Models (LLMs) such as GPT-3 and ChatGPT have attracted great attention from both academia and industry. They have shown substantial gains in solving a variety of problems ranging from Q&A to text summarization. Existing studies have also found that some LLMs can be applied to source code tasks, such as code generation and debugging. However, their performance on various software engineering tasks has not been systematically investigated, and our understanding of LLMs remains fairly limited. It is also unclear how we can build software engineering capabilities on top of LLMs. In this talk, I will discuss the performance of LLMs on various software engineering tasks, including code generation, test generation, program repair, code translation, and documentation generation, and present some LLM-based software engineering applications (e.g., vulnerability management, code search, and code idiom mining).
Xing Hu is an associate professor at the School of Software Technology, Zhejiang University (ZJU). She received her Ph.D. degree in July 2020 from the School of Electronics Engineering and Computer Science (EECS), Peking University, China. Her research interests are intelligent software engineering (e.g., intelligent code generation and test case generation) and mining software repositories. Her work has been published in premier conferences and journals in software engineering and AI.