There is a persistent myth in software engineering that AI makes research less important. The thinking goes: if agents can generate code at unprecedented speed, why spend weeks on discovery? Just start building, iterate fast, and course-correct when the output does not match expectations.
This thinking is exactly backwards. AI-native delivery does not reduce the value of research. It dramatically increases it. The quality ceiling of agent-generated output is entirely determined by the quality of the context it receives. Feed an agent shallow, ambiguous, incomplete context and you get shallow, ambiguous, incomplete software – just faster than before. Feed it rich, structured, comprehensive context and you unlock a delivery speed and quality level that traditional methods cannot touch.
At CONFLICT, we invest more in research and discovery now than we did five years ago. Not less. And that investment is the primary reason our AI-native delivery is faster than what traditional agencies produce. This is counterintuitive until you understand the mechanics of why it works.
What Old Discovery Looked Like
The traditional discovery phase has been roughly the same since the early 2000s. Two weeks. Sometimes four. A team of analysts and project managers runs a gauntlet of stakeholder interviews, produces a mountain of Miro boards and whiteboard photos, and synthesizes everything into a requirements document. The document goes through review, gets signed off, and becomes the canonical reference for the build phase.
The process looked something like this:
- Stakeholder interviews. Schedule 8-12 sessions with various stakeholders. Take notes in a shared document. Maybe record the calls, but nobody rewatches them.
- Workshop sessions. Gather people in a room (or a Zoom call) for brainstorming. Fill a Miro board with sticky notes. Cluster them into themes.
- Competitive analysis. A junior team member screenshots competitor products and puts them in a slide deck.
- Requirements synthesis. A business analyst or project manager translates weeks of conversations into a structured requirements document – usually a 30-80 page Word document or a Confluence page with nested tables.
- Sign-off. Stakeholders review the document, argue about scope, and eventually approve a version.
- Handoff. The requirements document gets handed to the engineering team, who read it once and then proceed to build based on their interpretation of what they read.
This process has known failure modes that the industry has studied extensively. The Standish Group’s CHAOS reports have documented for decades that incomplete requirements, lack of user involvement, and changing requirements rank among the top causes of project failure. Year after year, their data shows the same pattern: projects built on weak requirements fail or come in challenged at far higher rates than projects with strong upfront research. The numbers have shifted over the years, but the pattern has not: the quality of what goes into the front end of a project determines the quality of what comes out the other end.
The core problems with traditional discovery are structural, not procedural:
The document is a lossy compression. A requirements document is a compression of hours of conversations, context, nuance, and domain knowledge into a static artifact. It captures what the author thought was important at the time of writing. It loses the reasoning behind decisions, the alternatives that were considered and rejected, the subtle domain context that informed the discussion, and the implicit knowledge that stakeholders carry but never articulate because they assume everyone shares it.
The document decays immediately. The moment a requirements document is signed off, it starts becoming inaccurate. Stakeholder priorities shift. Market conditions change. Technical constraints emerge during implementation. New information surfaces. By week two of the build phase, the document is already partially wrong. By month two, it is a historical artifact that bears only passing resemblance to what is actually being built.
The handoff destroys context. When a requirements document moves from the people who gathered the information to the people who will implement it, context is lost. The engineers were not in the room when the stakeholder explained why the billing system works the way it does. They did not hear the three alternative approaches that were considered before the current direction was chosen. They have the conclusion without the reasoning, which means they cannot make intelligent decisions when they encounter situations the document does not cover.
The document is not machine-readable. This was irrelevant five years ago. It is now the most important problem on the list. A requirements document is written for human consumption. An agent cannot efficiently use it as implementation context. The format is wrong, the granularity is wrong, and the structure is wrong for machine consumption. It is like handing a recipe to a CNC machine and expecting it to manufacture the dish.
What Evolved Discovery Looks Like
Our discovery process at CONFLICT shares some surface-level similarities with the traditional approach – we still talk to stakeholders, still run workshops, still analyze the competitive landscape. But the mechanics are fundamentally different, because we design the entire process around a specific outcome: building a living, machine-readable knowledge system that serves both human decision-making and agent execution.
Deep Sessions, Not Surface Interviews
We run longer, deeper sessions with stakeholders than traditional discovery calls. A typical session is 90 minutes to two hours, not the 30-45 minute interviews that most agencies schedule. We go deeper because we are not just capturing requirements – we are capturing domain knowledge, decision-making context, organizational constraints, and the implicit models that stakeholders carry in their heads.
Eric Evans, in his foundational work Domain-Driven Design, introduced the concept of ubiquitous language – a shared vocabulary that development teams and domain experts use to communicate without translation loss. Evans argued that building this shared language is one of the most important activities in software development, because misalignment between how the domain talks about concepts and how the code models those concepts is a primary source of bugs, missed requirements, and architectural drift.
We take this further. We are not just building a ubiquitous language for human communication. We are building a ubiquitous language that agents can use. Every domain concept, every business rule, every constraint and relationship needs to be captured with enough precision and structure that an agent can reason about it correctly. This requires deeper conversations than a surface-level requirements interview.
Everything Is Recorded and Catalogued
Every discovery session is video-recorded. Not as a backup that nobody watches – as a primary source of knowledge that feeds directly into our knowledge system.
We use PlanOpticon to process these recordings. It extracts transcripts, identifies key decisions and action items, maps relationships between concepts, and builds a knowledge graph that connects information across sessions. When a stakeholder in session three references a decision made in session one, that connection is captured and queryable. When a domain concept appears in multiple sessions with slightly different definitions, the inconsistency is flagged for resolution.
This is not note-taking. This is systematic knowledge extraction. The difference matters because notes are a summary – they capture what the note-taker thought was important. Systematic extraction captures the full information landscape and makes all of it accessible, including the details that did not seem important at the time but become critical during implementation.
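To make the difference between notes and systematic extraction concrete, here is a minimal sketch of the kind of structure such a pipeline might produce. The class and field names are hypothetical illustrations, not PlanOpticon's actual data model: every concept carries provenance back to a specific point in a specific recording, and conflicting definitions across sessions surface automatically instead of hiding in someone's notes.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Provenance:
    session_id: str  # which discovery session this came from
    timestamp: str   # offset into the recording, e.g. "01:02:45"


@dataclass
class ConceptNode:
    name: str
    definition: str
    sources: list  # every Provenance where this definition was stated


class KnowledgeGraph:
    def __init__(self):
        # all recorded definitions per concept, keyed case-insensitively
        self.concepts: dict[str, list[ConceptNode]] = {}

    def add(self, node: ConceptNode) -> None:
        self.concepts.setdefault(node.name.lower(), []).append(node)

    def conflicts(self) -> list[str]:
        """Concepts defined differently across sessions, flagged for resolution."""
        return [
            name for name, nodes in self.concepts.items()
            if len({n.definition for n in nodes}) > 1
        ]


graph = KnowledgeGraph()
graph.add(ConceptNode("Invoice", "a billing request issued monthly",
                      [Provenance("session-1", "00:14:02")]))
graph.add(ConceptNode("Invoice", "a billing request issued per order",
                      [Provenance("session-3", "01:02:45")]))
print(graph.conflicts())  # → ['invoice']
```

The payoff of the structure is the last line: the same concept defined two ways in two sessions is a query result, not a surprise discovered during implementation.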
Building Federated Knowledge
The concept of federated knowledge is central to how we think about discovery. Peter Drucker, often called the father of modern management, wrote extensively about knowledge management – the idea that an organization’s most valuable asset is not its physical infrastructure or its financial capital but the knowledge held by its people and encoded in its processes. Drucker argued that making knowledge accessible, reusable, and actionable is the defining management challenge of the knowledge economy.
We apply this principle to project discovery by building a federated knowledge system – a structured, queryable system that connects knowledge from multiple sources into a unified whole. The sources include:
- Stakeholder session recordings and transcripts. The raw source material from discovery conversations, processed into structured knowledge graphs.
- Existing documentation. Technical specifications, API documentation, database schemas, architectural decision records, internal wikis – anything that captures existing knowledge about the system or domain.
- Code and infrastructure. The existing codebase itself is a knowledge source. Its structure, patterns, naming conventions, and architecture encode decisions and constraints that may not appear in any document.
- Market and competitive intelligence. Industry data, competitor analysis, user research, and market trends that inform product decisions.
- Organizational context. Team structure, deployment processes, compliance requirements, vendor relationships, and operational constraints that shape what can be built and how.
The “federated” part is critical. These knowledge sources are not merged into a single monolithic document. They retain their structure, their provenance, and their update cadence. The federation layer provides a unified query interface across all of them. When an engineer – or an agent – needs to understand the billing domain, the federation layer pulls relevant knowledge from the stakeholder sessions where billing was discussed, the existing billing code, the compliance documentation, and the integration specifications for the payment provider. All of it. In context. With provenance.
This is fundamentally different from the traditional discovery output of a requirements document plus a Confluence wiki plus a Miro board that nobody can find. The federated approach produces a knowledge system that grows, stays current, and serves as a live source of truth throughout the project.
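A federation layer of this kind can be sketched in a few lines. This is an illustrative toy, not our production system: the source classes and example content are invented, but the shape is the point. Each source answers queries in its own terms and keeps its own update cadence; the federation layer fans a query out and returns every fragment with its provenance attached, merging nothing.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Fragment:
    content: str
    source: str   # provenance: which system this came from
    updated: str  # each source keeps its own update cadence


class KnowledgeSource(Protocol):
    name: str
    def query(self, topic: str) -> list["Fragment"]: ...


class SessionTranscripts:
    name = "stakeholder-sessions"

    def query(self, topic: str) -> list[Fragment]:
        if topic == "billing":
            return [Fragment("Invoices are issued per order, not monthly",
                             self.name, "2024-03-02")]
        return []


class CodebaseIndex:
    name = "billing-service-repo"

    def query(self, topic: str) -> list[Fragment]:
        if topic == "billing":
            return [Fragment("InvoiceGenerator runs on a monthly cron",
                             self.name, "2024-02-20")]
        return []


class FederationLayer:
    """Unified query over independent sources; nothing is merged or copied."""

    def __init__(self, sources: list[KnowledgeSource]):
        self.sources = sources

    def query(self, topic: str) -> list[Fragment]:
        return [frag for src in self.sources for frag in src.query(topic)]


layer = FederationLayer([SessionTranscripts(), CodebaseIndex()])
for frag in layer.query("billing"):
    print(f"[{frag.source}] {frag.content}")
```

Note what the toy query surfaces: the stakeholders say per-order invoicing while the code still runs a monthly cron. Because both fragments arrive side by side with provenance, the contradiction is visible the moment anyone, human or agent, asks about billing.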
Machine-Readable Context as the Unlock
Here is where the investment in discovery connects to AI-native delivery speed.
Agents need context to produce correct output. The more relevant, structured, and precise the context, the better the output. This is not a theoretical claim – it is the empirical reality of working with LLMs at scale. An agent that receives a well-structured specification with rich domain context, clear interface contracts, explicit constraints, and referenced architectural decisions will produce implementation that is dramatically closer to correct on the first pass than an agent receiving a vague prompt.
The context that comes out of evolved discovery is designed from the start to be machine-readable. It is structured, not narrative. It is precise, not approximate. It is connected, not isolated. It includes provenance, so an agent can trace a requirement back to the stakeholder conversation that produced it. It includes relationships, so an agent can understand how changing one component affects others.
This machine-readable context feeds directly into our spec-driven development process. When we write specifications for agent execution – the detailed technical specs that define what an agent should build, how it should integrate with the existing system, and what quality criteria it must meet – we are drawing on a rich knowledge system, not a stale requirements document. The specifications are better because the knowledge behind them is better. The agent output is better because the specifications are better.
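As a rough illustration of what "structured, not narrative" means for a specification, here is a hypothetical sketch of a spec record. The fields and example values are invented for this post, not our actual spec schema, but they show the properties that matter: every requirement is traceable to its source session, constraints are enumerated rather than implied, and quality criteria are explicit enough to check against.

```python
from dataclasses import dataclass


@dataclass
class Requirement:
    text: str
    provenance: str  # traceable back to the session that produced it


@dataclass
class Specification:
    component: str
    requirements: list       # what to build
    interface_contract: str  # how it integrates with the existing system
    constraints: list        # explicit domain rules the agent must respect
    quality_criteria: list   # what "done" means, checkable where possible


spec = Specification(
    component="invoice-generator",
    requirements=[
        Requirement("Issue one invoice per completed order",
                    "session-3 @ 01:02:45"),
    ],
    interface_contract="consumes OrderCompleted events, emits InvoiceIssued",
    constraints=["amounts in minor currency units", "EU VAT rules apply"],
    quality_criteria=["every invoice total reconciles against its order total"],
)
```

A spec in this shape can be handed to an agent largely as-is, and a reviewer can trace any line of the resulting implementation back through the requirement to the conversation that produced it.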
The causal chain is direct: deeper discovery produces richer knowledge, richer knowledge produces better specifications, better specifications produce better agent output, and better agent output means faster delivery with fewer iteration cycles.
Why Machine-Readable Context Changes Everything
To understand why machine-readable context is the unlock for AI-native delivery, consider what happens without it.
A traditional agency adopts coding agents. The agents are fast at producing code. But the agents receive their context through a requirements document that a human has partially summarized into a prompt. The agent produces code that matches the prompt, which matches the summary, which matches the author’s interpretation of the requirements document, which matches the analyst’s synthesis of the stakeholder conversations. Four layers of lossy compression between the domain knowledge and the implementation.
The result: the code runs but does not do what the stakeholders actually need. A round of review catches the most obvious mismatches. Changes are made. Another round catches more. The iteration cycle eats up all the time the agent saved on initial implementation.
Now consider the same project with machine-readable context. The agent receives a specification that draws directly from the federated knowledge system. The domain concepts are defined with precision. The business rules are explicit. The constraints are enumerated. The integration contracts are exact. The architectural context includes the reasoning behind decisions, not just the decisions themselves. The agent produces code that matches the specification, which draws directly from the knowledge system, which is sourced from the stakeholder sessions themselves. Two layers between domain knowledge and implementation, and neither layer involves lossy human summarization.
The result: the code runs and does what the stakeholders need, or is close enough that the review cycle is a refinement, not a rearchitecting. The agent’s speed advantage is preserved because it is not consumed by rework.
This is why we say that the speed of AI-native delivery comes from investing more in research, not less. The research is what produces the context that makes the speed possible.
The Living Knowledge System
Discovery in the traditional model is a phase. It has a start date and an end date. It produces a deliverable. When it is over, the team moves on to design and build.
Discovery in our model is not a phase you finish. It is a living knowledge system that you front-load and continuously feed.
Front-loading means investing heavily at the start of the project to build the initial knowledge base. This is where we spend more time than traditional agencies, and where clients sometimes push back. Why are you still in discovery when the competitor would already be coding? Because when we start coding, our agents will have context that makes them effective. When the competitor starts coding, their agents will have context that makes them fast at producing the wrong thing.
Continuously feeding means the knowledge system is updated throughout the project lifecycle. New stakeholder conversations are recorded and processed. Architecture decisions are documented and connected to the domain knowledge they address. Implementation discoveries – the things you learn only when you start building – are fed back into the knowledge system so that subsequent specifications reflect reality, not the initial assumptions.
This continuous feeding is what prevents the knowledge decay that kills traditional requirements documents. The knowledge system is not a snapshot. It is a stream. And because it is structured and machine-readable, the cost of updating it is low enough that it actually happens, unlike the requirements document that everyone agrees should be updated but nobody ever does.
How This Connects to HiVE
Our HiVE methodology – High-Velocity Engineering – is built on the foundation of evolved discovery. The connection is direct and structural.
HiVE is spec-driven. Every piece of work that an agent executes starts with a specification that defines what to build, how it fits into the existing system, what quality criteria it must meet, and what domain constraints it must respect. The quality of those specifications is directly dependent on the quality of the knowledge system they draw from.
HiVE is agent-executed. Agents implement the specifications, run the tests, and produce the artifacts. The agents’ effectiveness is directly dependent on the context they receive, which is directly dependent on the machine-readable knowledge system that evolved discovery produces.
HiVE is human-reviewed. Senior engineers review agent output against the specifications and the domain knowledge. Their review is more effective because they have access to the same knowledge system, including the reasoning and context behind requirements, not just the requirements themselves.
The entire HiVE workflow – specification, execution, review – depends on a foundation of rich, structured, living knowledge. Evolved discovery is how that foundation is built.
Without evolved discovery, HiVE would be spec-driven development with mediocre specifications, which produces mediocre results faster. With evolved discovery, HiVE is spec-driven development with exceptional specifications grounded in comprehensive domain knowledge, which produces exceptional results faster.
The discovery investment is not separate from the delivery methodology. It is the first and most important step in the delivery methodology.
What This Means for Clients
If you are evaluating engineering partners, here is what evolved discovery means for you in practice.
Expect more upfront investment in research. We will spend more time in discovery than agencies that are eager to start writing code. This is a feature, not a bug. The upfront investment pays for itself multiple times over in reduced rework, faster agent-driven implementation, and a final product that actually matches what you need.
Expect to participate more deeply. Traditional discovery asks for a few hours of your time across a couple of weeks. Evolved discovery asks for sustained engagement from your domain experts. The depth of our conversations with your team directly determines the quality of the knowledge system, which directly determines the quality of the software.
Expect a knowledge asset, not just software. At the end of a project, you do not just get an application. You get a structured knowledge system that captures your domain, your decisions, your architecture, and the reasoning behind it all. This knowledge system has value beyond the current project. It accelerates future development, whether with us or with another team. It is institutional knowledge that does not walk out the door when people leave.
Expect faster delivery where it counts. The calendar time from project kickoff to production deployment will be competitive with or faster than traditional agencies, despite the deeper discovery investment. This is because the build phase is dramatically faster when agents have rich context. The time saved on rework, misalignment, and iteration cycles more than compensates for the additional discovery time.
The Structural Advantage
The industry is converging on a recognition that AI-native development is context-dependent development. The agents are powerful, but their power is unlocked by context. The teams that will build the best software in the coming years are not the teams with the most advanced tools. They are the teams that have invested the most in building the knowledge systems that feed those tools.
This is not something you can bolt on after the fact. You cannot run a shallow two-week discovery, produce a thin requirements document, and then try to enrich the context during the build phase. By then, the architectural decisions have been made based on incomplete understanding, the domain model has been built on assumptions instead of knowledge, and the cost of correcting course is measured in weeks, not hours.
Evolved discovery is not a nice-to-have. It is the structural foundation of AI-native delivery. We invest more in it because it is the highest-leverage investment we can make. And after thirteen years of building software and dozens of AI-native engagements, we can say with confidence: the projects that invest in research up front are the projects that deliver on time, on budget, and on target. Every time.

