---
name: moto-asi
version: 1.0.5
description: Autonomous multi-agent deep research methodology implementing Top-P Exploration — a superintelligence method. Multi-model brainstorm aggregation, rejection-driven validation, reverse-order paper compilation, and tiered knowledge compounding. Model-agnostic, framework-agnostic.
homepage: https://intrafere.com/moto-autonomous-home-ai/
metadata: {"moto":{"emoji":"🧠","category":"superintelligence","repo":"https://github.com/Intrafere/MOTO-Autonomous-ASI"}}
---
# MOTO-ASI Methodology: Top-P Exploration Through Structured Brainstorming & Validation
The official portable specification for MOTO's (Multi-Output Token Orchestrator) autonomous multi-agent research methodology, implementing [Top-P Exploration](https://intrafere.com/structured-brainstorming-validated-feedback/) — a superintelligence method discovered by [Intrafere Research Group](https://intrafere.com/moto-autonomous-home-ai/). This document is model-agnostic and framework-agnostic — any system capable of making LLM API calls, reading/writing files, and parsing JSON can implement these patterns to replicate MOTO's core research workflow.
## Core Philosophy: Top-P Exploration via Solution Basin Aggregation
Transformers predict the next token. Feed a model its own prior ideas and it's statistically pushed toward producing something *new* — deeper into the probability distribution rather than regurgitating the first commonly-known fact. This is the core of **Top-P Exploration**: structured brainstorming coupled with strict validation that systematically mines creativity from transformer weights. The curated, pruned brainstorm database becomes a launchpad for recursive novelty, and cross-recombination of extracted knowledge compounds to create insights that do not exist in training data alone.
**Three principles:**
1. **Aggregation before compilation** — brainstorm freely first, distill into coherent output second. This is the Top-P Exploration phase: parallel submitters probe the solution space while a single validator gates quality, producing *solution basin aggregation*.
2. **Rejection-driven quality** — a validator agent gates every submission; only genuinely novel/useful content enters the database. Approximately 50% of submissions are rejected in practice — this is a feature, not a bug, ensuring the knowledge base contains only mutually agreed-upon, high-signal content.
3. **Reverse-order writing** — body first, conclusion second, introduction last, abstract final. Creative output is never constrained by premature framing.
### Why Top-P Exploration Works
Transformers are optimized to predict what comes next given prior context. When a model receives a standard prompt, it produces the most statistically likely response — the commonly-known first answer. Top-P Exploration exploits this mechanism: by feeding a model its own prior accepted ideas as context, the most likely predictions shift. The model has already "said" the obvious answers, so it is statistically pushed deeper into its probability distribution to surface less commonly-accessed knowledge.
This is **solution basin aggregation** — each accepted submission enriches the context for the next pass, and each pass probes a deeper basin of the model's weights. The brainstorm database is not simply additive but **self-refining**: iterative pruning removes entries that have been superseded by stronger ideas, increasing the information density of the context window over time. The result is a tighter, more potent knowledge base that better utilizes finite context.
The architectural separation between creative exploration (parallel submitters) and critical evaluation (single bottleneck validator) is what prevents the hallucination loops and drift that plague single-model autonomous agents. The validator continuously pulls the brainstorm back toward the user's actual intent, even as submitters explore divergent avenues. Every rejection is a learning signal returned as structured feedback, steering future submissions toward productive directions.
Cross-recombination of this extracted knowledge compounds across research cycles: Tier 2 papers distill brainstorms into coherent arguments, and those papers become reference context for future brainstorms. This produces insights that do not exist in training data alone — the system "mines" creativity from transformer weights and recombines it into novel synthesis. Observed ~50% rejection rates during rigorous sessions confirm the system is actively filtering forced or suboptimal answers rather than accumulating noise.
---
## Architecture Overview
The Top-P Exploration architecture separates creative exploration from critical evaluation across three tiers. Parallel brainstorming submitters feed a single bottleneck validator; this separation is what keeps the failure modes of single-model autonomous agents — hallucination loops and drift — out of the shared knowledge base.
```
TIER 1: AGGREGATION (Top-P Exploration / Brainstorming)
Submitter 1 ──┐
Submitter 2 ──┼──► Queue ──► Single Validator ──► Shared Brainstorm DB
Submitter N ──┘ (accept/reject) (prunes every 7 accepts)
│
▼ completion review every 10 accepts
TIER 2: COMPILATION (Paper Writing)
High-Context Submitter ──► Single Validator
High-Param Submitter ──► (same validator)
Outline → Body → Conclusion → Introduction → Abstract
│
▼ every 5 papers
TIER 3: FINAL ANSWER SYNTHESIS
Certainty Assessment → Format Selection
Short-form paper OR Long-form volume
```
---
## Agent Roles & Model Assignment
Each "agent" is an LLM API call with a specific system prompt, JSON schema, and file-based context. Assign different models per role for diversity.
### Tier 1 Agents (Aggregation)
| Role | Count | Model Guidance |
|------|-------|----------------|
| Submitter | 1-10 (default 3) | Different models explore different knowledge basins. Run in parallel. |
| Validator | Exactly 1 | Single validator ensures coherent Markov chain. Sequential processing. |
### Tier 2 Agents (Compilation)
| Role | Count | Model Guidance |
|------|-------|----------------|
| High-Context Submitter | 1 | Needs large context window (outline + paper + database). Handles construction, review, outline updates. |
| High-Param Submitter | 1 | Needs high-parameter model for mathematical rigor enhancement. |
| Validator | 1 | Validates coherence, rigor, and placement of every edit. |
### Tier 3 Agents (Autonomous Orchestration)
| Role | Purpose |
|------|---------|
| Topic Selector | Choose next brainstorm avenue (new / continue / combine topics) |
| Topic Validator | Validate topic selection reasoning |
| Completion Reviewer | Assess brainstorm exhaustion — MUST self-validate with same model |
| Reference Selector | Choose prior papers to compound knowledge (two-step: abstracts then full) |
| Title Selector | Name the paper |
### Per-Agent Configuration
```
role_config:
model: "model-name"
provider: "local" | "cloud"
context_window: 131072
max_output_tokens: 25000
temperature: 0.0 # ALWAYS — context evolution provides diversity
fallback_model: "optional"
```
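For implementers, the config above maps naturally onto a small dataclass. A minimal Python sketch (class and field names are illustrative, not a mandated API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoleConfig:
    """Per-agent model configuration. Field names mirror the config block above."""
    model: str
    provider: str                    # "local" or "cloud"
    context_window: int = 131072
    max_output_tokens: int = 25000
    temperature: float = 0.0         # ALWAYS 0.0 — context evolution provides diversity
    fallback_model: Optional[str] = None

# Example: a validator role on a local model, defaults for everything else
validator = RoleConfig(model="some-model", provider="local")
```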
**Single-model mode:** When all agents use the same model, run submitters sequentially to prevent queue overflow.
---
## File-Based Memory System
All state lives in files. No database required. Create these as needed.
| File | Purpose |
|------|---------|
| `brainstorm_db.md` | Accepted submissions (growing knowledge base). One per topic. |
| `rejections_submitter_N.md` | Rolling last 5 rejection summaries per submitter |
| `completion_feedback.md` | Rolling last 5 completion review suggestions |
| `outline.md` | Current paper outline — ALWAYS fully included in all compilation prompts |
| `paper.md` | Paper under construction with placeholder markers for unwritten sections |
| `paper_library/paper_N.md` | Completed papers (one file each, with abstract extracted separately) |
| `workflow_state.md` | Current phase, counters, model config — for crash recovery |
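Crash recovery needs nothing fancier than a flat key-value file. A minimal sketch of `workflow_state.md` persistence (the `key: value` line format shown is one possible choice, not prescribed by this spec; values round-trip as strings):

```python
from pathlib import Path

def save_state(path: Path, state: dict) -> None:
    """Persist current phase/counters as 'key: value' lines for crash recovery."""
    path.write_text("".join(f"{k}: {v}\n" for k, v in state.items()))

def load_state(path: Path) -> dict:
    """Reload state after a crash. A missing file means a fresh run."""
    if not path.exists():
        return {}
    state = {}
    for line in path.read_text().splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            state[key] = value
    return state
```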
---
## Workflow 1: Aggregation (Top-P Exploration / Brainstorming)
The aggregation phase implements Top-P Exploration: parallel submitters cast a wide net across the solution space while a single-point validator ensures breadth does not come at the cost of coherence. Each accepted submission enriches the context for subsequent passes, pushing the model deeper into its probability distribution to surface knowledge that would never emerge from a single query.
### Submitter Call
Each submitter receives via its prompt:
1. **System prompt** with role instructions + JSON schema
2. **User's research prompt** (ALWAYS fully included, never summarized)
3. **Current brainstorm database** (full file if it fits, otherwise summarize/retrieve relevant chunks)
4. **Last 5 rejections** for this specific submitter
5. **Completion feedback** (if any, from prior completion reviews)
6. **Reference papers** (if selected, for knowledge compounding)
Submitter outputs JSON: `{"submission": "...", "reasoning": "..."}`
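A sketch of assembling the submitter context from these files (the section headers and helper are illustrative; the only hard requirement is that the user's research prompt is included verbatim and in full):

```python
from pathlib import Path

def build_submitter_prompt(user_prompt: str, submitter_id: int,
                           base: Path = Path(".")) -> str:
    """Assemble submitter context. The user prompt is ALWAYS included in full."""
    def read(name: str) -> str:
        p = base / name
        return p.read_text() if p.exists() else ""

    sections = ["RESEARCH PROMPT (verbatim):\n" + user_prompt]
    db = read("brainstorm_db.md")
    if db:
        sections.append("BRAINSTORM DATABASE:\n" + db)
    rejections = read(f"rejections_submitter_{submitter_id}.md")
    if rejections:
        sections.append("YOUR RECENT REJECTIONS:\n" + rejections)
    feedback = read("completion_feedback.md")
    if feedback:
        sections.append("COMPLETION FEEDBACK:\n" + feedback)
    return "\n\n".join(sections)
```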
### Validator Call
Validator receives:
1. System prompt with evaluation criteria + JSON schema
2. User's research prompt
3. Current brainstorm database (for redundancy checking)
4. The submission(s) to validate (up to 3 at once in batch mode)
Validator outputs JSON: `{"decision": "accept|reject", "reasoning": "...", "summary": "..."}`
**Batch validation** (when the queue holds 2-3 items): evaluate each submission independently, then check the accepted ones against each other for intra-batch redundancy. Keep only the strongest of any redundant set.
### Accept/Reject Flow
```
Accept → Append submission to brainstorm_db.md → Notify all submitters
Reject → Append summary (≤750 chars) to submitter's rejection log (keep last 5)
If >15 consecutive rejections for one submitter → clear its rejection log
```
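The accept/reject bookkeeping above can be sketched in a few lines (file naming follows the memory-system table; the constants mirror the rules in the flow diagram):

```python
from pathlib import Path

MAX_REJECTION_SUMMARY = 750   # chars per rejection summary
REJECTION_LOG_DEPTH = 5       # rolling window of rejections kept
CONSECUTIVE_REJECT_RESET = 15 # clear the log after this many rejections in a row

def record_accept(db: Path, submission: str) -> None:
    """Append an accepted submission to the shared brainstorm database."""
    with db.open("a") as f:
        f.write(submission.rstrip() + "\n\n---\n\n")

def record_reject(log: Path, summary: str, consecutive: int) -> int:
    """Log a rejection summary, keep the last 5, clear after >15 in a row."""
    consecutive += 1
    if consecutive > CONSECUTIVE_REJECT_RESET:
        log.write_text("")   # stale feedback is worse than none; start fresh
        return 0
    entries = []
    if log.exists():
        entries = [e for e in log.read_text().split("\n---\n") if e.strip()]
    entries.append(summary[:MAX_REJECTION_SUMMARY])
    log.write_text("\n---\n".join(entries[-REJECTION_LOG_DEPTH:]))
    return consecutive
```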
### Database Pruning (Every 7 Accepts)
Pruning maintains information density — the knowledge base is not simply additive but self-refining. When a better idea arrives that encompasses an older entry, pruning phases out the weaker one, increasing the communicative efficiency of the context window.
1. Validator reviews ALL accepted submissions
2. Identifies AT MOST ONE for removal (redundant, contradicted, superseded)
3. Self-validates the removal decision (conservative default: keep if uncertain)
4. If validated: remove from database
**Selection rule:** When multiple submissions are redundant, remove the WEAKEST. Never remove a more complete submission.
### Completion Review (Every 10 Accepts)
Triggered every 10 accepted submissions. Uses SPECIAL SELF-VALIDATION:
1. **Assessment**: Model evaluates if brainstorm is exhausted relative to its own knowledge
2. **Self-validation**: SAME model validates its own assessment (only the same model can know its own knowledge boundaries)
3. **Decision**: `continue_brainstorm` (with suggested_additions feedback) or `write_paper`
**Hard limits:** 80 accepts → force paper writing. 10 consecutive rejections (with ≥5 accepts) → force paper writing.
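The cycle triggers above reduce to simple modular checks. A sketch (function name and return convention are illustrative):

```python
def aggregation_actions(accepts: int, consecutive_rejects: int) -> list:
    """Return the maintenance actions due after the latest accept/reject event."""
    actions = []
    if accepts and accepts % 7 == 0:
        actions.append("prune_review")        # every 7 accepts, max 1 removal
    if accepts and accepts % 10 == 0:
        actions.append("completion_review")   # self-validated by the same model
    if accepts >= 80 or (consecutive_rejects >= 10 and accepts >= 5):
        actions.append("force_write_paper")   # hard limits
    return actions
```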
---
## Workflow 2: Compilation (Paper Writing)
### Phase 1: Outline Creation (Iterative)
High-context submitter generates the outline. Validator accepts/rejects with feedback. Submitter refines. Loop until the submitter sets `outline_complete: true` or 15 iterations are reached.
**Required outline sections (exact names):**
- Abstract (first)
- Introduction or I. Introduction (after abstract)
- Body sections (numbered, between intro and conclusion)
- Conclusion or N. Conclusion (last)
### Phase 2: Paper Construction (Sequential Phases)
Writing order is FIXED and critical to MOTO's creative methodology:
| Phase | What | Why This Order |
|-------|------|----------------|
| **BODY** | All main content sections | Mathematical content develops organically, unconstrained by intro promises |
| **CONCLUSION** | Summary of findings | Summarizes what was ACTUALLY written, not hypotheticals |
| **INTRODUCTION** | Background and roadmap | Accurately describes real content (body + conclusion exist) |
| **ABSTRACT** | Final summary | Summarizes the COMPLETE paper. Signals paper completion. |
Each phase uses phase-specific prompts. Submitter sets `section_complete: true` to advance phases.
### Placeholder System
After first body section accepted, initialize paper with:
```
[PLACEHOLDER FOR ABSTRACT - TO BE WRITTEN AFTER INTRODUCTION]
[PLACEHOLDER FOR INTRODUCTION - TO BE WRITTEN AFTER CONCLUSION]
...body content...
[PLACEHOLDER FOR CONCLUSION - TO BE WRITTEN AFTER BODY]
[END OF PAPER MARKER]
```
Each placeholder is replaced with real content when that phase completes. Placeholders make it explicit to the AI what exists vs what doesn't.
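Placeholder replacement is a guarded exact-string swap. A sketch using the markers above:

```python
PLACEHOLDERS = {
    "abstract": "[PLACEHOLDER FOR ABSTRACT - TO BE WRITTEN AFTER INTRODUCTION]",
    "introduction": "[PLACEHOLDER FOR INTRODUCTION - TO BE WRITTEN AFTER CONCLUSION]",
    "conclusion": "[PLACEHOLDER FOR CONCLUSION - TO BE WRITTEN AFTER BODY]",
}

def fill_placeholder(paper: str, phase: str, content: str) -> str:
    """Swap a placeholder for real content when its phase completes."""
    marker = PLACEHOLDERS[phase]
    if marker not in paper:
        raise ValueError(f"placeholder for {phase} already filled or missing")
    return paper.replace(marker, content, 1)
```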
### Edit Operations (Exact String Matching)
All document edits use exact string matching JSON:
```json
{
"operation": "replace | insert_after | delete | full_content",
"old_string": "exact text to find (must be unique in document)",
"new_string": "replacement or insertion text",
"reasoning": "explanation"
}
```
**Pre-validate** that `old_string` exists verbatim and uniquely before sending to validator. Validator then focuses on semantic quality, not string verification.
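A sketch of the pre-validation plus apply step (this helper is an assumed implementation detail, not a mandated API):

```python
def apply_edit(document: str, op: dict) -> str:
    """Pre-validate and apply an exact-string edit operation.

    Raises ValueError when old_string is absent or ambiguous, so the
    validator only ever sees mechanically applicable edits.
    """
    kind = op["operation"]
    if kind == "full_content":
        return op["new_string"]
    old = op["old_string"]
    count = document.count(old)
    if count == 0:
        raise ValueError("old_string not found verbatim")
    if count > 1:
        raise ValueError(f"old_string matches {count} times; must be unique")
    if kind == "replace":
        return document.replace(old, op["new_string"], 1)
    if kind == "insert_after":
        return document.replace(old, old + op["new_string"], 1)
    if kind == "delete":
        return document.replace(old, "", 1)
    raise ValueError(f"unknown operation: {kind}")
```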
### Construction Loop (Repeating Cycle)
```
4x High-Context construction → validator
1x High-Context outline update → validator (body phase only)
2x High-Context review → validator
1x High-Param rigor enhancement → validator (body phase only)
```
Rigor enhancement uses a two-step process: Step 1 plans (unvalidated); Step 2 executes (with a self-refusal option).
### Critique Phase (Post-Body, Pre-Conclusion)
After body is complete, before writing conclusion:
1. Critique submitter generates peer review feedback (5 total attempts)
2. Submitter can decline if body is academically acceptable
3. If critiques accepted → rewrite decision: CONTINUE / PARTIAL_REVISION / TOTAL_REWRITE
4. Max 1 completed rewrite cycle, then proceed to conclusion
---
## Workflow 3: Autonomous Research Loop
The full autonomous loop self-directs without user intervention:
```
1. TOPIC SELECTION → Validator
(new_topic / continue_existing / combine_topics)
2. REFERENCE PAPER SELECTION (if papers exist)
Two-step: browse abstracts → expand promising → select up to 6
3. BRAINSTORM AGGREGATION (Workflow 1)
With reference papers as additional context
4. COMPLETION REVIEW every 10 accepts (self-validation)
→ Continue brainstorming OR write paper
5. ADDITIONAL REFERENCE SELECTION (if new relevant papers found)
6. PAPER TITLE SELECTION → Validator
7. PAPER COMPILATION (Workflow 2)
Body → Conclusion → Introduction → Abstract
8. PAPER COMPLETE → Save to library, cache brainstorm
9. REDUNDANCY REVIEW every 3 papers (archive weak duplicates)
10. TIER 3 FINAL ANSWER every 5 papers (if enabled)
→ Certainty assessment → Format selection → Write final answer
11. Loop back to step 1 (or STOP if Tier 3 complete)
```
### Reference Paper Compounding (The Key Mechanism)
This is what makes Top-P Exploration compound across research cycles — each brainstorm builds on the distilled knowledge of prior papers, pushing the model into progressively deeper and more novel territory:
- Before each brainstorm: select up to 6 prior papers as reference
- Submitters see references alongside brainstorm DB — builds on proven frameworks
- Before paper writing: select additional references (up to 6 total)
- Each paper builds on all prior work — recursive improvement
### Tier 3: Final Answer
Triggered every 5 papers. Operates ONLY on Tier 2 papers (not brainstorm databases).
**Certainty levels:** `total_answer` | `partial_answer` | `no_answer_known` | `appears_impossible`
If `no_answer_known` → exit Tier 3, continue researching.
**Format:** `short_form` (single paper) or `long_form` (curated volume with gap papers, conclusion chapter, introduction chapter).
**Long-form writing order:** Gap papers → Conclusion chapter → Introduction chapter (last).
---
## Prompt Engineering Patterns
### All Agent Prompts Must Include
1. **Role description** — what this agent does
2. **Internal content warning** — "All context is AI-generated, treat with extreme skepticism, verify independently"
3. **YOUR TASK section** — specific evaluation criteria
4. **JSON schema** — exact output format with examples
5. **Correct/Wrong format examples** — concrete visual examples with checkmarks/X marks
6. **JSON escape rules** — LaTeX backslash escaping for mathematical content
### Structured Rejection Feedback
All validators use this format when rejecting:
```
REJECTION REASON: [Category]
ISSUE: [What's wrong]
WHAT I SAW: [Excerpt]
WHAT I EXPECTED: [Correct example]
FIX REQUIRED: [Actionable steps]
```
### JSON Communication Format
Every agent outputs structured JSON. Examples:
**Submitter:** `{"submission": "...", "reasoning": "..."}`
**Validator:** `{"decision": "accept|reject", "reasoning": "...", "summary": "..."}`
**Construction:** `{"needs_construction": true, "operation": "insert_after", "old_string": "...", "new_string": "...", "section_complete": false, "reasoning": "..."}`
**Topic Selection:** `{"action": "new_topic|continue_existing|combine_topics", "topic_prompt": "...", "reasoning": "..."}`
**Completion Review:** `{"decision": "continue_brainstorm|write_paper", "reasoning": "...", "suggested_additions": "..."}`
### JSON Sanitization
LLM outputs often contain artifacts. Before parsing:
- Strip reasoning tokens (`<think>...</think>`)
- Strip Markdown code fences (` ```json ... ``` `)
- Strip control tokens
- Fix LaTeX escape sequences (`\to` → `\\to`, `\text` → `\\text`, etc.)
- Reject truncated JSON (unclosed braces) — never attempt repair
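A sketch of the sanitization pass (the LaTeX command list is illustrative and would be extended in practice; truncated JSON is rejected, never repaired):

```python
import json
import re

def sanitize_llm_json(raw: str) -> dict:
    """Strip common LLM artifacts, fix LaTeX escapes, then parse strictly."""
    # Strip reasoning tokens and Markdown code fences
    text = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    # Escape lone backslashes on known LaTeX commands so json.loads accepts
    # them verbatim (illustrative list; already-escaped \\to is left alone)
    text = re.sub(r"(?<!\\)\\(text|to|frac|alpha)\b", r"\\\\\1", text)
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        # Unclosed braces, truncation, etc. — reject, never attempt repair
        raise ValueError(f"rejecting malformed/truncated JSON: {e}") from None
```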
---
## Key Invariants (Never Violate)
1. **User prompt always fully included** — never summarized, truncated, or RAG'd
2. **Outline always fully included** in all compilation prompts
3. **Single validator** per workflow stage — multiple validators cause divergent evolution
4. **No context carryover** between agent calls — only files transfer state
5. **Reject, don't truncate** — use retrieval/summarization, never silently cut content
6. **Reverse writing order** — Body → Conclusion → Introduction → Abstract
7. **Prune every 7 accepts** — max 1 removal per cycle, conservative default
8. **Self-validation for completion review** — same model assesses its own knowledge exhaustion
9. **Temperature 0.0 always** — evolving context provides diversity
10. **Structured JSON for all communication** — parseable, validatable output
11. **Reference papers compound knowledge** — each cycle builds on prior papers
12. **Brainstorm hard limit 80 accepts** — force paper writing to prevent runaway
13. **Papers written in Tier 3 use ONLY Tier 2 papers** — no brainstorm databases (context isolation)
14. **Redundancy review is conservative** — max 1 removal per cycle, when in doubt keep
15. **System stops after final answer** — Tier 3 completion terminates the loop
---
## Implementing With Any Agent Framework
To replicate MOTO's Top-P Exploration methodology, an agent needs only:
1. **LLM API access** — any model(s), local or cloud
2. **File read/write** — create and read .md files for all persistent state
3. **Sequential/parallel task execution** — run submitters in parallel, validator sequentially
4. **JSON parsing** — parse and validate agent outputs
### Minimum Viable Implementation
```
1. Create brainstorm_db.md (empty)
2. Loop:
a. Call LLM with submitter prompt + user goal + current DB → get JSON submission
b. Call LLM with validator prompt + user goal + current DB + submission → get JSON decision
c. If accept: append to brainstorm_db.md
d. If reject: log to rejections.md
e. Every 7 accepts: run pruning review
f. Every 10 accepts: run completion review
g. If write_paper: break loop
3. Create outline via iterative LLM calls
4. Write paper phases: body → conclusion → introduction → abstract
5. Save completed paper
6. Repeat from step 1 with new topic (paper becomes reference for next cycle)
```
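The pseudocode above can be sketched as a compact Tier 1 loop. `call_llm(role, prompt) -> str` stands in for whatever client you use, and the prompts here are drastically simplified relative to the full patterns in this spec:

```python
import json
from pathlib import Path

def aggregation_loop(call_llm, user_goal: str, db: Path, max_accepts: int = 80) -> int:
    """Minimal Tier 1 loop: submit, validate, append on accept.

    `call_llm(role, prompt)` must return a JSON string; everything else
    is plain files. Returns the number of accepted submissions.
    """
    db.touch()
    accepts = 0
    while accepts < max_accepts:   # hard limit: force paper writing at 80
        context = db.read_text()
        sub = json.loads(call_llm(
            "submitter", f"GOAL:\n{user_goal}\n\nDB:\n{context}"))
        ver = json.loads(call_llm(
            "validator",
            f"GOAL:\n{user_goal}\n\nDB:\n{context}\n\nSUBMISSION:\n{sub['submission']}"))
        if ver["decision"] == "accept":
            with db.open("a") as f:
                f.write(sub["submission"] + "\n\n")
            accepts += 1
            if accepts % 10 == 0:   # completion review every 10 accepts
                review = json.loads(call_llm("completion_reviewer", db.read_text()))
                if review["decision"] == "write_paper":
                    break
    return accepts
```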
### Scaling Up
- Add more submitters (different models) for broader exploration
- Add reference paper selection for knowledge compounding
- Add Tier 3 for final answer synthesis
- Add critique phase for peer review
- Add crash recovery via workflow state file
- Add redundancy review for library quality maintenance
---
## Attribution & Disclaimer
This is the official MOTO methodology specification by [Intrafere LLC](https://intrafere.com/moto-autonomous-home-ai/). MOTO (Multi-Output Token Orchestrator) is an autonomous research system using multi-agent aggregation-distillation workflows implementing [Top-P Exploration: A Superintelligence Method](https://intrafere.com/structured-brainstorming-validated-feedback/) — structured brainstorming and validated feedback that systematically extracts and compounds creativity from transformer weights.
Systems built using this methodology generate autonomous AI content. All output should be treated with extreme scrutiny and independently verified before use. AI-generated content may contain fabricated or unverified claims presented with high confidence.