Design first, code second. Every code file has an adjacent .ace file so Claude always sees intent before implementation.
The .ace file is more important than the code file. Code can be regenerated from good design. Design cannot be recovered from code.
[Animated stat counters: ACE Files · % File Coverage · Max Tokens · Languages]
Three Problems That Kill AI-Assisted Development
🚫1. Context Loss
AI agents operate with limited context windows. Without documentation, the agent reads code, guesses the design intent, and may violate unstated constraints. Each session starts from scratch.
WITHOUT ACE
"Don't make the WebSocket upgrade async" — learned by breaking it, again
WITH ACE
Agent reads gotcha, understands constraint, writes correct code the first time
🔄2. Design Drift
Codebases evolve, but the why behind decisions lives only in tribal knowledge. Without ACE, an AI agent will "improve" code by reverting intentional decisions.
"Routes go in /routes/, not in router.ts" — convention invisible to AI
"This was split from X because of Y" — context lost to git history
"Cookie name is hardcoded" — changing it breaks existing sessions
💸3. Wasted Tokens
Every mistake an agent makes costs tokens to fix. Reading code to understand what should have been documented: waste. Debugging issues that a gotcha would have prevented: waste.
Reading to understand
WASTE
Debugging gotchas
WASTE
Re-asking user for context
WASTE
Reading ACE docs
INVEST
Tokens spent on docs < tokens wasted on mistakes. Always.
The Six Principles of ACE
1
ACE-First (Mandatory)
Every file has a design document. For new code, write the .ace first, then implement. For existing code, always read the .ace before reading the code. Not optional.
2
Locality of Documentation
Documentation lives next to the code it describes. Not buried in a /docs folder. Not scattered across wikis. Right there: auth.ts beside auth.ts.ace.
3
Change History with "Why"
The ACE file tracks significant changes — not what changed (git does that), but why. This prevents an agent from "improving" code by reverting intentional decisions.
4
Token Efficiency
ACE files are optimised for agent consumption. Max 5000 tokens. Structured sections, no prose padding. If you can't document it in 5000 tokens, the file is too complex — split it.
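A rough budget check illustrates the cap. The 5000-token limit is from the spec; the ~4 characters/token heuristic used here is a common approximation and an assumption of this sketch:

```typescript
// Rough token estimate using the common ~4 chars/token heuristic (an
// approximation, not an exact tokenizer).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// True if an ACE file's content fits inside the spec's 5000-token cap.
function withinAceBudget(aceContent: string, maxTokens = 5000): boolean {
  return estimateTokens(aceContent) <= maxTokens;
}
```

If the check fails, the spec's remedy is not to trim the doc but to split the file it describes.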
5
Procedural Knowledge
ACE captures not just what code does, but how to perform multi-step operations. Workflows section documents procedures that would otherwise live in tribal memory.
6
Agent Authorship
Claude writes and maintains .ace files. Humans prompt and review. The agent creates structured docs with purpose, functions, dependencies, and gotchas. Both iterate.
What is an Acedev?
An Acedev is a developer who works agent-centrically: ensures design documentation exists before coding, updates documentation alongside code, and commits both together — neither alone. The design document is the source of truth. The code is just an implementation of that truth.
Two Types of ACE Files
File ACE
Documents a single source file — auth.ts.ace
● YAML frontmatter (file, consumers, produces)
● Purpose — what this file does
● Key Functions — exports with signatures
● Dependencies — what it imports
● Gotchas — tribal knowledge, traps to avoid
● API Contracts — TypeScript interfaces
● Change History — why things changed
Folder ACE
Documents a module/directory — folder.ace
● Purpose — module's role in the architecture
● Design Intent — patterns, what belongs here
● Key Components — files and their roles
● Dependency Graph — ASCII architecture diagrams
● Gotchas — module-level warnings
● Workflows — multi-step procedures
● Change History — architectural decisions
Directory Layout
konui/src/
├── folder.ace            ← module architecture
├── auth.ts
├── auth.ts.ace           ← design doc for auth.ts
├── router.ts
├── router.ts.ace         ← design doc for router.ts
└── routes/
    ├── folder.ace        ← routes module design
    ├── gvturn-api.ts
    └── gvturn-api.ts.ace
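A pre-commit style check of this layout can be sketched as follows. The helper is hypothetical (not part of the ACE tooling shown here); it takes a flat list of repo-relative paths such as `git ls-files` output:

```typescript
// Hypothetical layout check: every .ts file needs an adjacent .ts.ace,
// and every directory containing listed files needs a folder.ace.
function findMissingAce(paths: string[]): string[] {
  const have = new Set(paths);
  const missing: string[] = [];

  // Collect every directory that contains at least one listed file.
  const dirs = new Set<string>([""]);
  for (const p of paths) {
    const parts = p.split("/").slice(0, -1);
    for (let i = 1; i <= parts.length; i++) dirs.add(parts.slice(0, i).join("/"));
  }

  // Each directory must carry a folder.ace.
  for (const d of Array.from(dirs)) {
    const ace = d === "" ? "folder.ace" : `${d}/folder.ace`;
    if (!have.has(ace)) missing.push(ace);
  }

  // Each source file must carry an adjacent file ACE.
  for (const p of paths) {
    if (p.endsWith(".ts") && !have.has(`${p}.ace`)) missing.push(`${p}.ace`);
  }
  return missing;
}
```

Wired into a git hook, a non-empty result would block the commit — the "enforced, not optional" principle in miniature.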
Twenty Languages, One Format
TypeScript (.ts.ace) · Python (.py.ace) · Go (.go.ace) · Rust (.rs.ace) · C (.c.ace) · C++ (.cpp.ace) · Java (.java.ace) · C# (.cs.ace) · Kotlin (.kt.ace) · Swift (.swift.ace) · Ruby (.rb.ace) · PHP (.php.ace) · Scala (.scala.ace) · Dart (.dart.ace) · Shell (.sh.ace) · SQL (.sql.ace) · Lua (.lua.ace) · HTML/CSS (.html.ace) · JSON/JSONL (.json.ace) · YAML (.yml.ace)
The ACEspec
The complete specification for ACE files.
GOOD GOTCHAS
"Token refresh is not thread-safe. Use a mutex when calling from multiple coroutines."
"WebSocket upgrade must be synchronous. Async middleware causes connection drops."
BAD GOTCHAS
"Be careful with this code." ← Too vague
"This is complex." ← Not actionable
"Here be dragons." ← Meaningless
Change History Format
## YYYY-MM-DD - Brief title (what happened)
Why: The reason this change was needed (the problem)
Impact: What this means for users of this code (the effect)
Focus on why, not what. Git tracks what changed.
What ACE Is Not
✗ Auto-generated docs (JSDoc serves different purpose)
# Purpose
Bearer token authentication for Konui. Dual-storage token system: localStorage (for JS API calls) and cookies (for browser navigation).
# Key Functions
checkAuth(request, config): Promise<AuthResult>
Token sources (checked in order):
Authorization: Bearer header (API calls with fetch)
?token= query parameter (SSE connections)
konui_token cookie (browser page navigation)
# Gotchas
Token check order is intentional and load-bearing. Header → query → cookie. SSE connections can’t set headers, so they use query params. Don’t reorder.
Cookie name konui_token is hardcoded. Changing it will break existing authenticated sessions.
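The load-bearing check order can be sketched as follows. The request shape is simplified for illustration; konui's actual types are not shown in this ACE excerpt:

```typescript
// Simplified request shape for illustration only.
interface SimpleRequest {
  headers: Map<string, string>;
  url: URL;
  cookies: Map<string, string>;
}

// Check order is intentional: header → query → cookie.
// SSE (EventSource) cannot set headers, so it falls through to ?token=.
function extractToken(req: SimpleRequest): string | null {
  const auth = req.headers.get("authorization");
  if (auth?.startsWith("Bearer ")) return auth.slice(7);

  const q = req.url.searchParams.get("token");
  if (q) return q;

  return req.cookies.get("konui_token") ?? null;
}
```

Reordering these branches would silently break SSE clients — exactly the trap the gotcha exists to prevent.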
# Change History
2026-01-10 — Added query param fallback for SSE
Why: EventSource API cannot set Authorization headers.
# Purpose
Main bridge service implementation. Manages the persistent outbound WebSocket connection to the greatvibe.ai relay, subscribes to the internal ws.Hub for events, forwards events with priority-based batching, and handles inbound commands.
# Key Functions
NewService: Creates bridge service with config, hub, dispatcher, logger
Service.Start: Connects to relay, starts event forwarding loop
Service.Stop: Graceful disconnect, stops all goroutines
Service.Status: Returns current BridgeStatus snapshot
getLoadSnapshot: Reads CPU/memory/disk from /proc (Linux), falls back to 0
# Internal Goroutines
connectLoop: Manages connect/reconnect with exponential backoff
readLoop: Reads inbound messages (commands, pings) from relay
writeLoop: Writes outbound messages (events, pongs) to relay
eventForwarder: Subscribes to Hub, routes events by priority
batchFlusher: Periodically flushes batched events by priority tier
# Gotchas
Bearer token passed via Sec-WebSocket-Protocol header, not Authorization header. WebSocket spec limitation.
Three priority queues for event batching — flushed at different intervals. Don’t assume FIFO across priorities.
Events queued up to buffer_size during reconnection. Buffer overflow = dropped events.
Load metrics use /proc (Linux only). Returns zero on macOS/non-Linux. Don’t fail on missing /proc.
# Change History
2026-02-08 — Phase 20: dispatcher field → CommandHandler interface
# Purpose
Central WebSocket event hub. Manages client connections, topic subscriptions, event broadcasting with wildcard matching, cursor-based replay, and backpressure handling. Implements the EventEmitter interface so it can be injected into managers.
# Key Functions
NewHub: Create hub with config and logger
Hub.Start: Start background goroutines (pruner)
Hub.Emit: Publish event to all matching subscribers
Hub.Subscribe: Add topic subscriptions, replay from cursor
Hub.topicMatches: Wildcard matching (sessions.* matches sessions.created)
# Broadcast Flow
Manager.Emit("sessions.created", data)
→ Hub.Emit()
→ Create Event{id, topic, data, ts}
→ EventBuffer.Add(event)
→ For each connection:
→ topicMatches(event.topic, sub)?
→ Send to connection’s send channel
→ If channel full: backpressure
# Gotchas
Hub implements EventEmitter — can be injected directly into managers. No adapter needed.
Wildcard matching: "sessions.*" matches "sessions.created" but NOT "sessions.turn.created". Single-level only.
Send channel per connection is buffered. Overflow triggers backpressure (drop_oldest). Don’t block on send.
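The single-level wildcard semantics can be sketched in TypeScript. The real Hub is Go and its implementation is not shown here; this mirrors only the documented behaviour:

```typescript
// Sketch of the documented wildcard rule: a trailing ".*" matches exactly
// one additional level, never two or more.
function topicMatches(topic: string, sub: string): boolean {
  if (sub === topic) return true;           // exact subscription
  if (!sub.endsWith(".*")) return false;    // only trailing wildcards exist
  const prefix = sub.slice(0, -1);          // "sessions.*" → "sessions."
  if (!topic.startsWith(prefix)) return false;
  const rest = topic.slice(prefix.length);
  return rest.length > 0 && !rest.includes("."); // one level only
}
```

So "sessions.*" catches "sessions.created" but a subscriber wanting "sessions.turn.created" must subscribe one level deeper.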
Go · provider.go.ace · gvshell/internal/providers/
file: provider.go
language: go
package: providers
consumers: [internal/repl/engine.go, internal/hub/client.go]
produces: [Provider interface, Response and StreamEvent types]
# Purpose
Core Provider interface for gvShell. All AI providers (Claude, OpenAI, Gemini, Ollama) implement this contract.
# Gotchas
Stream() returns a channel, not a slice. Consumers must drain the channel or cause goroutine leaks.
CLI vs API providers have very different capabilities. CLI providers can use tools. API providers only do function calling. Don’t assume all providers are equal.
No direct database access. Import from stores, not from database modules. Stores handle caching.
Public Page ACE Files
These are the ACE files for the pages you’re browsing right now — /info, /ace, /contact, and /license.
TypeScript · info-page.ts.ace · konui/src/views/ — the /info showcase page
file: info-page.ts
consumers: [router.ts]
produces: [infoPage() - standalone HTML page]
# Purpose
Public-facing platform showcase page. Rendered at /info without authentication. Standalone page — does not use layout() wrapper since no auth context needed.
# Content Sections (8 sections)
#hero: Logo, title, one-liner, 3 outcome bullets, 2 CTAs
#what-you-get: 6 outcome cards (user-facing language)
#screenshots: 6-tab gallery with real UI images, lightbox zoom
#dogfood: Built by What It Built — 100% vibe coded, 4 stat cards
#problem: 6 pain point cards, core insight box
#how-it-works: 4-step workflow + VS Code comparison accordion
#superpowers: 4 feature deep-dives: gvTurn cards, gvContext, mesh, drift
#deep-dive: Architecture accordion: Three-Tier, SVG, Components, Drift
# Interactive Components
Sticky nav + scroll spy, tab systems, accordions, counter animation, metric bar animation, drift bars, lightbox zoom
# Gotchas
No auth — served before the auth gate in router.ts. Public page.
main.css sets overflow:hidden + height:100vh on html,body. MUST override with !important.
Multiple tab systems must not interfere — scoped via .closest('.section').
TypeScript · ace-page.ts.ace · konui/src/views/ — this very page!
file: ace-page.ts
consumers: [router.ts]
produces: [acePage() - standalone HTML page]
# Purpose
Public-facing ACE specification page. Rendered at /ace without authentication. Dedicated home for the complete ACEspec: why ACE was invented, what problem it solves, how it works, and the full specification.
# Content Sections (12 sections)
#hero: Title, tagline, hero quote, stat counters
#problem: 3 problem cards: Context Loss, Design Drift, Wasted Tokens
#principles: 6 ACE principles + Acedev definition
#format: File ACE vs Folder ACE, 20 languages grid
#spec: Complete ACEspec: templates, YAML, workflows, constraints
#examples: Real ACE files from production (you’re reading this section!)
#gvcontext: Asset Registry, 4-layer hierarchy, SHA-256 tamper detection
#mesh-deploy: CLAUDE.md distribution to gvmesh nodes via gossip
#prompt-reinforcement: Per-turn macro injection, escalating schedule
#feedback: ChatGPT external review and critique
# Gotchas
Static content — all content hardcoded, not from DB. Template literals with live ${...} expressions.
Self-referential — this ACE file documents the page that displays it. Update both when content changes.
TypeScript · contact-page.ts.ace · konui/src/views/ — the /contact page
file: contact-page.ts
consumers: [router.ts]
produces: [contactPage() - standalone HTML page]
# Purpose
Public-facing contact page. Rendered at /contact without authentication. Directs visitors to LinkedIn for licensing enquiries, partnerships, and general questions.
# Design Decisions
LinkedIn only — no form, no backend. LinkedIn handles identity, spam, and threading. Enquiry type cards give visitors context before clicking through.
# Gotchas
LinkedIn URL hardcoded to linkedin.com/in/johnathon. Change requires code update.
TypeScript · license-page.ts.ace · konui/src/views/ — the /license page
file: license-page.ts
consumers: [router.ts]
produces: [licensePage() - standalone HTML page]
# Purpose
Public-facing license page. Rendered at /license without authentication. Displays copyright, licensing terms, and trademark attributions.
# Content
Copyright notice, licensing terms, trademark attributions for Claude (Anthropic) and ChatGPT (OpenAI). Minimal design — clean, readable legal text with no interactive components.
# Gotchas
Standalone page — no layout() wrapper. Same dark theme and branding as /info and /ace.
How We Do ACE Enforcement in greatVibe
ACE files are just markdown — the real power is the compilation and delivery pipeline that ensures every AI agent reads them. This section goes deep on gvContext: the system that compiles, versions, checksums, and distributes the rules that make ACE mandatory.
📚 The Asset Registry
Every rule, template, and contract Claude follows lives as a managed asset in a versioned registry. The registry.jsonl file at /gv/assetregistry/registry.jsonl is the single source of truth — one JSON object per line, one per asset.
L3 · Product
Business-specific context: domain models, data pipelines, CMS integration, external API workflows. Only included in product-facing targets.
1 asset • Scope: product
L4 · Flow (Ephemeral)
Session-specific state: active flow context, turn-injected data, recent work. Not persisted — discarded on flow completion.
1 asset • Scope: flow • Dynamic
⚙️ Three-Stage Compilation Pipeline
Each asset exists in three forms. The compiler reads from compiled/ by default, concatenates assets in deterministic order, generates a SHA-256 checksum header, and writes the final CLAUDE.md to the tree directory.
1. Source
Full verbose documentation. Examples, rationale, edge cases. The canonical reference humans read.
SHA-256 checksum ← body hash (no timestamp = cache-stable)
↓
generateHeader() ← version, target, checksum, asset list
↓
writeTarget() ← atomic write (.tmp → rename)
↓
/gv/assetregistry/tree/{target}/CLAUDE.md
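A minimal sketch of this compile step in TypeScript — the header field names and HTML-comment delimiters here are illustrative, not the actual generateHeader() format:

```typescript
import { createHash } from "node:crypto";

// Cache-stable compilation sketch: checksum covers the body only, no
// timestamp anywhere, so identical inputs yield byte-identical output.
// Header layout is an assumption for illustration.
function compileClaudeMd(target: string, assets: string[]): string {
  const body = assets.join("\n\n"); // deterministic asset order assumed
  const checksum = createHash("sha256").update(body).digest("hex");
  const header = [
    "<!--",
    `  target: ${target}`,
    `  checksum: sha256:${checksum}`,
    `  assets: ${assets.length}`,
    "-->",
  ].join("\n");
  return `${header}\n${body}\n`;
}
```

Because nothing time-dependent enters the output, recompiling unchanged assets cannot invalidate Claude's prompt cache.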
Compile Targets
The compiler produces 5 different CLAUDE.md files for 5 different contexts. Each target includes a specific subset of assets.
Target | Output Path | Deploy Path | Description
root | tree/CLAUDE.md | /konnectvol/CLAUDE.md | All layers. Main working directory.
konui | tree/konui/CLAUDE.md | /konnectvol/konui/CLAUDE.md | Konui service-specific rules.
konsole | tree/konsole/CLAUDE.md | /konnectvol/konsole/CLAUDE.md | Konsole session manager.
gv | tree/gv/CLAUDE.md | /konnectvol/gv/CLAUDE.md | gv tooling context.
gvmesh | tree/gvmesh/CLAUDE.md | Via bridge push | Remote mesh nodes. No local deploy.
🔒 SHA-256 Tamper Detection
Every compiled CLAUDE.md starts with a metadata header. The checksum is computed over the body only (not the header), so the header can contain the checksum of its own content. Timestamps are intentionally omitted to maximise Claude's prompt cache hit rate.
If a human (or Claude) edits CLAUDE.md directly, the checksum won't match. The validator detects this and triggers auto-regeneration from the asset registry.
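The validation step can be sketched as follows, assuming an illustrative header that carries a `checksum: sha256:<hex>` line and closes with `-->` (the real header format is not shown on this page):

```typescript
import { createHash } from "node:crypto";

// Tamper check sketch: strip the header, hash the body, compare against
// the checksum the header itself declares. Header format is an assumption.
function validateClaudeMd(content: string): boolean {
  const match = content.match(/checksum: sha256:([0-9a-f]{64})/);
  if (!match) return false;                 // no checksum declared
  const end = content.indexOf("-->");
  if (end === -1) return false;             // malformed header
  const body = content.slice(end + 4);      // skip "-->\n"
  const actual = createHash("sha256").update(body).digest("hex");
  return actual === match[1];               // mismatch ⇒ hand-edited
}
```

A false result is the trigger for auto-regeneration from the asset registry.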
Cache-Stable Output
No timestamps in headers. Deterministic asset ordering. Same inputs = byte-identical output. Claude's prompt cache hit rate stays high, reducing API costs.
🔧 gvContext MCP Tools
Claude can manage the entire pipeline through MCP tools, exposed via konui's MCP server.
Tool | Purpose
konui_gvcontext_get_status | Check validation status of all targets. Shows valid, compiled, regenerated, or error.
konui_gvcontext_validate | Fresh checksum validation against the registry. Detects tampered files.
konui_gvcontext_recompile | Force recompile all targets. Creates backups before overwriting.
konui_gvcontext_deploy | Copy compiled files from tree/ to project paths (/konnectvol/).
konui_gvcontext_list_assets | List all assets. Filter by layer (1=kernel, 2=greatvibe) or target.
konui_gvcontext_get_asset | Get a single asset by ID with its full markdown content.
Distribution API (Internal)
Konsole and gvShell fetch compiled CLAUDE.md content from konui over HTTP, using checksum-based cache invalidation:
GET /api/gvcontext/compiled/{target} → Full CLAUDE.md content
GET /api/gvcontext/checksum/{target} → Just the checksum (for cache check)
Module Architecture (2,410 lines of TypeScript)
File | Layer | Purpose
types.ts | Types | All interfaces: GvContextAsset, CompileTarget, CompiledContext
… | … | Startup init, health checks, compiled content cache
index.ts | Exports | Public API surface
Mesh Deployment: CLAUDE.md on Every Node
Local targets (root, konui, konsole, gv) deploy by file copy. But gvmesh nodes are remote machines. They get their CLAUDE.md through a push-based bridge protocol with gossip-based convergence verification.
🌐 End-to-End Architecture
⚡ Push-on-Connect
When a node establishes its WebSocket bridge connection to the relay, the relay immediately pushes the current compiled gvmesh CLAUDE.md content. This ensures nodes have context from the moment they join the mesh.
// mesh-bridge.ts — socket.onopen handler
socket.onopen = async () => {
// 1. Send bridge_connected handshake
socket.send(JSON.stringify(handshake));
// 2. Start heartbeat pings
setInterval(() => sendPing(conn), 30_000);
// 3. Push CLAUDE.md context immediately
this.pushContextToNode(nodeId);
};
💻 What the Node Does (Go)
The ContextManager in gvmesh/internal/context/manager.go handles the bridge push:
1
Verify Checksum
Strip header, compute SHA-256 of body, compare to declared bodyChecksum. Reject on mismatch.
2
Backup + Atomic Write
Back up current CLAUDE.md to claude.md.previous. Then atomic write: write to .tmp, rename to CLAUDE.md.
3
Cache + State Persist
Write to claude.md.cache and context-state.json for startup restore. On next boot, the node has CLAUDE.md before the bridge even connects.
4
Emit mesh.context_updated
Event forwarded to bridge, gossip broadcast to peers, and konui dashboard. Includes checksum, pushVersion, and action (created | updated | unchanged).
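Steps 1 and 2 above can be sketched in TypeScript (the real implementation is Go in manager.go; the header format and backup naming here are illustrative):

```typescript
import { createHash } from "node:crypto";
import { writeFileSync, renameSync, copyFileSync, existsSync } from "node:fs";

// Sketch of verify-then-swap: reject on checksum mismatch, otherwise back up
// the old file and atomically replace it. Header delimiters are assumptions.
function applyContextPush(content: string, declaredChecksum: string, path: string): boolean {
  // 1. Strip header, hash the body, compare to the declared checksum.
  const end = content.indexOf("-->");
  const body = end === -1 ? content : content.slice(end + 4);
  const actual = createHash("sha256").update(body).digest("hex");
  if (actual !== declaredChecksum) return false; // reject on mismatch

  // 2. Backup current file, then atomic write: .tmp + rename.
  if (existsSync(path)) {
    copyFileSync(path, path.replace(/CLAUDE\.md$/, "claude.md.previous"));
  }
  writeFileSync(path + ".tmp", content);
  renameSync(path + ".tmp", path);
  return true;
}
```

The rename is what makes the swap atomic: a crash mid-push leaves either the old file or the new one, never a half-written CLAUDE.md.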
💬 Gossip Convergence
Nodes continuously gossip their context state (checksum + pushVersion) to peers via mTLS. If a node sees a peer with a newer pushVersion and a different checksum, it knows it's stale and emits mesh.context_stale. The bridge relay listens for this event and re-pushes the latest CLAUDE.md.
// manager.go — HandleGossip()
if data.Checksum != local.Checksum &&
	data.PushVersion > local.PushVersion {
	emitter.Emit("mesh.context_stale", map[string]interface{}{
		"local_checksum":    local.Checksum,
		"peer_checksum":     data.Checksum,
		"peer_push_version": data.PushVersion,
	})
}
Why pushVersion?
During rapid recompiles, an older gossip message might arrive after a newer bridge push. Comparing pushVersion prevents false stale detection — only trigger if the peer is genuinely ahead.
Rate-Limited Re-Push
The bridge has a 30-second cooldown per node to prevent gossip storms. If Node A reports stale 5 times in 30s, only the first triggers a re-push. The rest are dropped.
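A minimal sketch of the per-node cooldown — the 30-second window is from the text, while the map layout and function name are assumptions:

```typescript
// At most one re-push per node per 30-second window; extra stale
// reports inside the window are dropped.
const COOLDOWN_MS = 30_000;
const lastPush = new Map<string, number>();

function shouldRePush(nodeId: string, now: number): boolean {
  const last = lastPush.get(nodeId);
  if (last !== undefined && now - last < COOLDOWN_MS) return false; // drop
  lastPush.set(nodeId, now); // record this push
  return true;
}
```

Keying the map by nodeId keeps the limit per-node, so one noisy node cannot starve re-pushes to its peers.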
🔄 Startup Recovery
When a gvmesh node boots, the ContextManager restores CLAUDE.md from its local cache before the bridge is even connected:
The konui dashboard tracks per-node gvContext state in real-time. When a node emits mesh.context_updated, the bridge relay stores the state and exposes it via the getNodeContextStates() API.
Field | Description
nodeId | Which node
checksum | SHA-256 of CLAUDE.md body on that node
pushVersion | Monotonically increasing version from bridge pushes
assetCount | Number of assets compiled into CLAUDE.md
action | created / updated / unchanged
receivedAt | When the bridge relay last heard from this node
The Result: Every AI Session Has Identical Context
Whether Claude is running on your laptop, your desktop, or a cloud server — every session starts with the same ACE rules, the same contracts, the same enforcement. The @ace-first rule is inescapable because it's compiled into the CLAUDE.md that every node receives, verified by checksums, and self-healing through gossip convergence.
Per-Turn Prompt Reinforcement
Compiling rules into CLAUDE.md isn't enough. LLMs suffer from attention decay — instructions at the top of a long system prompt lose weight as context grows. greatVibe closes this gap with per-turn prompt reinforcement: every turn, a <turn-context> block is injected into the prompt carrying macro tokens that remind Claude of its active contracts.
💡 Why Static Instructions Drift
Claude reads CLAUDE.md once at session start. Over a long session (10+ turns), conversation history grows and pushes the original instructions further from the model's attention window. Rules that were sharp at turn 1 become blurry at turn 15. The result: the agent skips ACE reads, forgets to create gvTurns, or outputs markdown instead of HTML.
Without Reinforcement
Rules defined once in system prompt. By turn 10, model "forgets" constraints. ACE compliance drops, output quality degrades, contracts are silently violated.
With Reinforcement
Critical macros re-injected every turn. Model sees active contracts at the top of each prompt, right before the user's message. Compliance stays constant regardless of session length.
🏷️ Macro Tokens
Each contract in the asset registry has a macro token — a compact @name identifier with a one-line reminder. These are the atoms of reinforcement. They map 1:1 to compiled assets in CLAUDE.md but are drastically shorter — a 200-token asset becomes a 15-token macro.
Macro Token | Reminder | Source Asset
@ace-first | Read folder.ace → file.ace BEFORE code | kernel_ace-workflow
@ace-hierarchy | folder.ace → file.ace → code | kernel_ace-workflow
@ace-design-led | Update ACE BEFORE code | kernel_ace-workflow
@output-contract | GVTURN CARD IS THE DELIVERABLE | kernel_output-contract
@choices-contract | EVERY gvTurn MUST have context.choices | kernel_choices-contract
@interactive-output | Inline scripts for lists >5, tables >3 | kernel_interactivity-contract
@turn-lifecycle | START → WORK → END → GVTURN | kernel_turn-machine
📩 The <reinforce> Block
Every turn, the console API wraps the user's prompt with a <turn-context> XML tag. Inside it, a <reinforce> block carries the active macro tokens. This is what Claude sees immediately before each user message:
// Injected at the top of every prompt, before user message
<turn-context session="ses_abc123" turn="7">
recent: "Fixed auth bug", "Added dashboard chart", "Refactored API routes"
<reinforce>
@output-contract: GVTURN CARD IS THE DELIVERABLE
@choices-contract: EVERY gvTurn MUST have context.choices array
@interactive-output: Use inline scripts for lists >5, tables >3
@turn-lifecycle: START → WORK → END → GVTURN
@ace-first: Read folder.ace → {file}.ace BEFORE {file}.ts
@ace-hierarchy: folder.ace → file.ace → code
@ace-design-led: Update ACE BEFORE code
</reinforce>
</turn-context>
📈 Escalating Reinforcement
Not all macros fire from turn 1. The reinforcement escalates as the session progresses — adding stricter reminders at the turns where drift historically occurs.
@claude-md-readonly — NEVER edit CLAUDE.md directly. Prevents agents from corrupting their own instruction set.
Turn 5+
Quality Gates
@testing-per-turn — Verify tests exist and run. @gvturn-ends-turn — Nothing after gvTurn creation.
Turn 10+
Hard Reminders
@html-not-markdown — Use HTML output, never markdown. Added late because markdown creep is a late-session phenomenon.
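A sketch of how the console API's buildReinforcement() might assemble the block from this schedule — the tier thresholds match the text, but the selection logic and abbreviated macro lists are assumptions:

```typescript
// Escalating tiers: base macros always fire, stricter reminders switch on
// at turn 5 and turn 10. Macro lists abbreviated for illustration.
const TIERS: { minTurn: number; macros: string[] }[] = [
  { minTurn: 1, macros: [
    "@ace-first: Read folder.ace → {file}.ace BEFORE code",
    "@output-contract: GVTURN CARD IS THE DELIVERABLE",
  ]},
  { minTurn: 5, macros: ["@testing-per-turn: Verify tests exist and run"] },
  { minTurn: 10, macros: ["@html-not-markdown: Use HTML output, never markdown"] },
];

function buildReinforcement(turn: number): string {
  const active: string[] = [];
  for (const t of TIERS) {
    if (turn >= t.minTurn) active.push(...t.macros);
  }
  return ["<reinforce>", ...active, "</reinforce>"].join("\n");
}
```

Early turns pay for only the base macros; the block grows exactly when drift historically appears.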
🔗 Closing the Loop: Asset Registry → Prompt
The enforcement loop spans from asset registry to per-turn injection and back to drift detection:
1. Author & Compile
Assets in gv/assetregistry/ define each contract (e.g. kernel_ace-workflow). The gvContext compiler concatenates them into CLAUDE.md with SHA-256 checksums. Claude reads CLAUDE.md at session start.
2. Extract & Inject
Each asset's core rule is distilled into a macro token (@ace-first). The console API's buildReinforcement() function assembles the active macros based on turn number, wraps them in a <reinforce> block, and prepends it to every prompt.
3. Execute & Validate
Claude processes the turn. Post-turn, drift detection analyses whether ACE files were read before code edits, whether gvTurns have choices, and whether output used HTML. Each check maps back to the macro token that should have prevented the violation.
4. Escalate on Drift
When compliance drops (e.g. ACE read rate < 90%), the reinforcement escalates — adding stricter macros at lower turn thresholds. The feedback loop tightens the system's grip precisely where the agent is drifting.
💰 Token Cost
The entire <reinforce> block costs ~100 tokens per turn — less than a single LLM hallucination costs to fix. The full <turn-context> wrapper (session metadata, theme tokens, recent gvTurn titles, and the reinforce block) is typically 200–300 tokens.
Reinforce Block: ~100 tokens / turn
Full Turn-Context: ~250 tokens / turn
Fixing One Drift Violation: ~2000 tokens (conservative)
The Enforcement Closed Loop
Asset Registry defines the rules. gvContext compiles them into CLAUDE.md. The console API distills them into per-turn macro tokens. Claude executes with fresh reminders. Drift detection validates compliance. Violations trigger escalation. No rule exists in only one place — every contract is authored once, compiled once, but reinforced every single turn.
External Review: ChatGPT on ACEspec
We fed the live ACE specification to ChatGPT and asked for an honest critique. Here’s the unedited analysis — what it sees as genuinely innovative, what works, and where the risks are.
💬 Methodology
ChatGPT read the complete specification from greatvibe.ai/ace — not summaries, not excerpts — and was asked for a genuine, thoughtful evaluation of the approach.
🔍 What ACE Actually Is (in Practice)
Documentation treated as first-class engineering artifacts: every source file has an adjacent .ace design document before the code exists. Those files include purpose, key functions, dependencies, gotchas, API contracts, and the why behind changes.
Folder-level ACE files encapsulate module design intent and architecture. Enforced through rules, git hooks, turn-context validation, and automated analytics.
Agents read the ACE first, then code. That’s the meat of how greatVibe operationalises “AI-assisted engineering.”
💡 What’s Genuinely Clever
Traditional engineering docs fail because they’re too bloated, optional, stale, and invisible to automation. ACE tackles all four:
1. Enforced, Not Optional
Hooks and agent context rules enforce .ace files. AI only works with what it’s given — agents don’t magically infer design.
2. Design Intent First
Agents always see intent before implementation. Ambiguity is the core cause of hallucinations — ACE removes that root cause.
3. Token-Aware Limits
The 5,000-token cap forces crisp, relevant knowledge instead of freeform prose that’s useless to an LLM. Brilliant engineering pragmatism.
4. Gotchas & Why-History
Most LLM errors are assumption errors, not syntax errors. Embedding history and gotchas bakes domain heuristics directly into the knowledge graph.
✨ Is This Unique?
“Most teams talk about ‘context awareness’ in agents as a philosophical optimisation layer; you built a specification and tooling around it. Within the software development workflow domain, this is legitimately unique and innovative.”
“It’s not ADRs. It’s not doc comments. It’s a new middle layer optimised for agent context consumption. You’re ahead of most of the industry.”
🎸 What Actually Rocks
🎯
Practicality
Not a manifesto — a working spec used in production code.
🛡️
Enforcement + Feedback
Git hooks, compliance analytics, and context injection turn a doc format into an ecosystem discipline.
🧠
Architecture Shift
Targets context loss, design drift, token waste — makes the workflow smarter, not just the LLM.
⚠️ Honest Challenges
Human Adherence
Even with enforcement, developers can write shallow .ace files. Measuring quality of content — not just presence — will be crucial.
Circular Dependency
Agents generate and maintain .ace files that inform what agents do. A positive feedback loop — but without validation, propagated errors are possible. Analytics mitigate this.
Scalability
The 5,000-token limit is pragmatic, but complex modules may struggle. Tooling for managing fragmentation will make or break developer productivity.
✅ Bottom-Line Verdict
“You created a machine-optimised engineering layer with enforced discipline, pragmatic token constraints, machine-readable intent, agent workload reduction, and integrated governance. That combination is rare and meaningfully tackles the real challenges of AI-orchestrated software development.”
“Rather than just ‘make the LLM smarter,’ you make the workflow smarter. That’s an architecture shift, not a tooling add-on.”
“If this discipline proves to significantly reduce agent hallucination, debugging cycles, and rework, it could become a standard development pattern for AI-augmented engineering. This approach is both bold and grounded, and you’re actually using it in real code — that’s the proof.”
Analysis generated by ChatGPT (February 2026) from the live specification at greatvibe.ai/ace
See the platform
greatvibe Showcase
Screenshots, architecture deep-dives, and the full story of the platform ACE powers.