Avaljot Singh
Room 2107
Thomas M. Siebel Center
201 Goodwin Avenue
Urbana, Illinois 61801
United States
I am a PhD student at the University of Illinois Urbana-Champaign, working at the intersection of formal methods and artificial intelligence. My research spans two complementary directions: applying formal methods to make neural network systems trustworthy, and using AI and synthesis techniques to automate and scale formal reasoning tools.
In one direction, I develop declarative and formally verified infrastructures for neural network certification, including the ConstraintFlow DSL, automated soundness verification of certifiers (ProveSound), and a compiler for executable certifier specifications (Neuton). In the other direction, I study how AI can be used to generate and improve formal tools, such as synthesizing new neural network certifiers (SAIL), automatically generating data-analysis optimizations (RuleFlow), and assisting theorem proving.
Prior to starting my PhD, I graduated with a Bachelor's and a Master's in Computer Science from IIT Delhi in May 2021, where I was advised by Prof. Sanjiva Prasad (Thesis). I have also had the opportunity to work with Shraddha Barke and Suman Nath at MSR Redmond, Rahul Sharma at MSR India, and Prof. Nate Foster at Cornell University during my research internships.
Research Themes
Formal Methods for AI
This line of work focuses on building principled and trustworthy neural network analysis and certification systems using formal methods. My goal is to make neural network certifiers easier to design, mechanically verifiable for soundness, and efficient to execute, by providing a unified infrastructure spanning specification, verification, and compilation.
ConstraintFlow (DSL)
ConstraintFlow is a declarative domain-specific language for specifying neural network certifiers and analyses. It provides a compact, modular way to express abstract interpretation–based and constraint-based certifiers, significantly reducing implementation effort compared to hand-written certifiers.
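For intuition, here is a minimal illustrative sketch (plain NumPy, my own example, not ConstraintFlow syntax) of the kind of transfer function such certifiers express: interval bounds propagated through an affine layer followed by a ReLU.

# Illustrative only: interval abstract interpretation for one affine + ReLU layer,
# the kind of transfer function a certifier DSL expresses declaratively.
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate elementwise bounds [lo, hi] through y = W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b  # lower bound uses lo where W >= 0, hi where W < 0
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: bound the outputs of a tiny layer for inputs in [-1, 1]^2.
W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([0.0, 1.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = relu_interval(*affine_interval(lo, hi, W, b))
print(lo, hi)  # a sound over-approximation of the reachable outputs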
ProveSound
ProveSound automatically verifies the soundness of neural network certifiers written in ConstraintFlow. It checks that a certifier implementation correctly over-approximates the concrete neural network semantics, providing formal guarantees about the correctness of certification algorithms.
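In abstract-interpretation terms, the property being checked is (roughly) the textbook over-approximation condition: for each network operation f, the certifier's abstract transformer f^♯ and concretization function γ must satisfy

\forall a.\; f(\gamma(a)) \subseteq \gamma(f^{\sharp}(a))

that is, every concrete behavior of the network is contained in what the certifier reports. (The exact formalization used by ProveSound may differ; this is the standard statement.)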
Neuton
Neuton is a compiler and execution framework for ConstraintFlow specifications. It translates high-level declarative certifier specifications into efficient executable implementations, bridging the gap between formally specified analyses and practical large-scale neural network verification.
AI for Formal Methods
This line of work investigates how machine learning and program synthesis techniques can be used to automate and scale formal methods. My focus is on using AI to generate, improve, and adapt formal analysis tools and reasoning procedures, reducing the manual effort traditionally required to design and tune verification systems.
SAIL
SAIL automatically synthesizes new neural network certifiers by leveraging the ConstraintFlow DSL together with its soundness verifier and compiler. It explores the space of certifier designs and produces new certification procedures that are correct by construction.
RuleFlow
RuleFlow automatically generates optimizations for data-analysis programs, such as Pandas pipelines, by combining large language models with compiler and program-analysis techniques. It targets reusable, rule-based transformations that improve performance while preserving program semantics.
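As a toy illustration (my own example, not an actual RuleFlow rule), one such semantics-preserving rewrite replaces a row-wise apply with its vectorized equivalent:

# Illustrative only: a semantics-preserving Pandas rewrite of the kind RuleFlow targets.
import pandas as pd

df = pd.DataFrame({"a": range(10_000), "b": range(10_000)})

# Before: row-wise apply, which loops in Python and is slow.
slow = df.apply(lambda row: row["a"] + row["b"], axis=1)

# After: the equivalent vectorized expression, same result, much faster.
fast = df["a"] + df["b"]

assert slow.equals(fast)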
Automated Theorem Proving
This project studies the use of AI and large language models to assist and automate theorem proving. The goal is to support formal reasoning by guiding proof search, generating intermediate lemmas, and helping interact with modern theorem provers.
Other Research
Additional work that does not sit directly under the two main themes is grouped under Other research on the Publications page: e.g. Syndicate (synergistic synthesis of ranking functions and invariants for termination), AgentRx (diagnosing AI agent failures from execution trajectories), and ProFIt (interpreting robustness proofs of deep neural networks).