News
Microsoft Paper Intros Fully Autonomous AI Framework, Turning Devs into Supervisors
GitHub Copilot is one thing; AutoDev is another, aiming to completely automate software development with autonomous AI agents that do the work themselves and turn developers into supervisors.
A recently published paper from five Microsoft researchers, titled "AutoDev: Automated AI-Driven Development," explains the concept:
We present AutoDev, a fully automated AI-driven software development framework, designed for autonomous planning and execution of intricate software engineering tasks. AutoDev enables users to define complex software engineering objectives, which are assigned to AutoDev's autonomous AI Agents to achieve. These AI agents can perform diverse operations on a codebase, including file editing, retrieval, build processes, execution, testing, and git operations. They also have access to files, compiler output, build and testing logs, static analysis tools, and more. This enables the AI Agents to execute tasks in a fully automated manner with a comprehensive understanding of the contextual information required.
As might be expected, that presentation has stirred up a lot of developer angst on Hacker News and elsewhere.
That might be because of snippets in the paper like: "The developer's role within the AutoDev framework transforms from manual actions and validation of AI suggestions to a supervisor overseeing multi-agent collaboration on tasks, with the option to provide feedback. Developers can monitor AutoDev's progress toward goals by observing the ongoing conversation used for communication among agents and the repository."
Notions like that prompted HN comments like: "Maybe ignorant, but if AI can get to a point of fully automating SWEs, hardly any white-collar knowledge based job is safe."
As for the nuts and bolts of the framework, a figure in the paper illustrates how the AutoDev workflow enables an AI agent to achieve an objective by performing actions in a repository. "The Eval Environment executes the suggested operations, providing the AI Agent with the resulting outcome. In the conversation, purple messages are from the AI agent, while blue messages are responses from the Eval Environment."
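To make that workflow concrete, here is a minimal sketch of what such an agent/eval-environment loop might look like. The class and method names (Conversation, EvalEnvironment, propose_action) are illustrative stand-ins, not names from AutoDev's actual codebase, which has not been released.

```python
# Hedged sketch of the agent / eval-environment loop described in the paper.
# All names are hypothetical; a real implementation would sandbox execution.
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """Shared message log that the agent and eval environment append to."""
    messages: list[dict] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})


class EvalEnvironment:
    """Executes agent-suggested operations (edit, build, test, git, ...)
    in isolation and reports the outcome back to the agent."""

    def execute(self, action: str) -> str:
        # A real system would dispatch to containerized tools here;
        # this placeholder just echoes the action.
        return f"result of '{action}'"


def run_task(agent, objective: str, max_steps: int = 20) -> Conversation:
    conversation = Conversation()
    conversation.add("user", objective)
    env = EvalEnvironment()

    for _ in range(max_steps):
        # The agent reads the whole conversation and proposes the next action.
        action = agent.propose_action(conversation.messages)
        if action == "stop":           # the agent signals the objective is met
            break
        conversation.add("agent", action)
        outcome = env.execute(action)  # execute the action, capture the result
        conversation.add("environment", outcome)
    return conversation
```

The key point the paper makes is that the developer only observes this conversation; the agent decides which actions to take and the environment reports whether they succeeded.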
The framework goes beyond tools like GitHub Copilot by enabling autonomous AI agents to execute operations like those listed above. Its key features include the following (a rough sketch of how such components might interact appears after the list):
- The ability to track and manage conversations between the user and AI agents through a Conversation Manager
- A library of customized Tools for accomplishing a variety of code- and software engineering-related objectives
- The ability to schedule various AI agents to work collaboratively towards a common objective through an Agent Scheduler
- The ability to execute code and run tests through an Evaluation Environment
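The Agent Scheduler is the piece that distinguishes this from a single-assistant tool: it coordinates multiple agents working on one objective. The sketch below shows one simple way such a scheduler could be wired, using a round-robin turn order; the names and the scheduling strategy are assumptions for illustration, not AutoDev's documented API.

```python
# Illustrative Agent Scheduler: multiple agents take turns appending to a
# shared conversation until one of them declares the objective complete.
from typing import Protocol


class Agent(Protocol):
    name: str

    def step(self, messages: list[dict]) -> str:
        """Return the next action given the conversation so far."""
        ...


class AgentScheduler:
    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def run(self, objective: str, max_rounds: int = 10) -> list[dict]:
        messages = [{"role": "user", "content": objective}]
        for _ in range(max_rounds):
            for agent in self.agents:            # simple round-robin turn order
                action = agent.step(messages)
                messages.append({"role": agent.name, "content": action})
                if action == "stop":             # any agent can declare done
                    return messages
        return messages
```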
"We've shifted the responsibility of extracting relevant context for software engineering tasks and validating AI-generated code from users (mainly developers) to the AI agents themselves," the paper said. "Agents are now empowered to retrieve context through Retrieval actions and validate their code generation through Build, Execution, Testing, and Validation actions."
The researchers reported strong results on the HumanEval benchmark and outlined plans for further work "to integrate AutoDev into IDEs as a chatbot experience and incorporate it into CI/CD pipelines and PR review platforms."
About the Author
David Ramel is an editor and writer at Converge 360.