Dr. Claire Le Goues
Professor, Software and Societal Systems
Bio
Claire is a Professor in the School of Computer Science at Carnegie Mellon University, primarily affiliated with the Software and Societal Systems Department. Her research interests span software engineering and programming languages, especially how to construct, maintain, evolve, improve/debug, and assure high-quality software systems.
Quick professional bio: Ph.D. and M.S. degrees in Computer Science from the University of Virginia; B.A. in Computer Science from Harvard College. Before grad school, she spent a year and a half as a Software Engineer at IBM in Cambridge, MA, where she specialized in rapid XML processing. Although her time in the Real World was brief, it substantively impacted the types of research problems she finds interesting.
Research
Claire's research is in Software Engineering, inspired/informed by program analysis and transformation, with a side of search-based software engineering. She focuses on automatic program improvement and repair (using stochastic or search-based approaches as well as more formal techniques such as SMT-informed semantic code search); assurance and testing, especially in light of the scale and complexity of modern evolving systems; and quality metrics. She studies software from the worlds of open source and desktop all the way to embedded and robotics systems.
Projects
This list is not comprehensive, but these are the projects she gets the most email about. Note that virtually all of her work is done in collaboration with many great colleagues and students!
SearchRepair: extends and then uses semantic code search over large repositories of candidate code to produce high-quality bug patches. (see paper, github)
GenProg: combines stochastic search methods like genetic programming with lightweight program analyses to find patches for real bugs in extant software (a minimal sketch of this style of search appears after this list). The main website provides an overview; a publication list; demo videos; and source code, benchmarks, workloads, and experimental reproduction instructions for all GenProg-related research.
Empirical evaluations: the ManyBugs and IntroClass benchmarks are intended to support evaluations of program repair research. A recent use of the latter established (interesting!) limits and challenges in existing state-of-the-art automated patch generation (see paper and project site).
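For a sense of the generate-and-validate style of repair that GenProg builds on, here is a minimal, hypothetical sketch in Python. It is not GenProg's actual implementation or API: the names program, tests, and mutate are placeholders, and the real system operates on abstract syntax trees, uses fault localization to focus edits, and includes crossover between candidate patches.

    import random

    def repair(program, tests, mutate, pop_size=40, max_generations=100):
        """Illustrative generate-and-validate repair loop (sketch only):
        candidates are mutated variants of the buggy program, and fitness
        is the number of test cases a candidate passes."""

        def fitness(candidate):
            return sum(1 for test in tests if test(candidate))

        # Start from randomly mutated variants of the buggy program.
        population = [mutate(program) for _ in range(pop_size)]

        for _ in range(max_generations):
            population.sort(key=fitness, reverse=True)
            best = population[0]
            if fitness(best) == len(tests):
                return best  # passes every test: a plausible patch
            # Keep the fitter half; refill by mutating the survivors.
            survivors = population[: pop_size // 2]
            population = survivors + [
                mutate(random.choice(survivors))
                for _ in range(pop_size - len(survivors))
            ]

        return None  # search budget exhausted without finding a repair

In GenProg itself, the mutations are statement-level edits (deleting, inserting, or swapping statements) biased toward code implicated by failing tests, which is what keeps this kind of search tractable on real programs.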