Fiction Forge
Novel-Writing Engine
The Spark
Define the soul of your story. Choose a genre, target length, and plant the seed of a premise.
Determines the cast size and story complexity.
Forging the premise...
The Character Forge
Define your main cast, and let the AI generate the supporting mid-level and minor characters based on your story's length.
Forging core and supporting characters...
Core Cast
Supporting Cast (Mid-Level & Minor)
The Worldbuilder
Construct the Iceberg: Surface, Shallows, Deep, and Abyss.
Plumbing the depths...
The Architect
Fractal Outlining: From skeleton to full chapter beats.
Drafting skeleton...
The Forge
The ghostwriter loop. Outline on the left, manuscript on the right.
Generating Beat...
The Bindery
Design your book's cover, craft the perfect title, and write the back-cover blurb before publishing.
No Cover Art
Painting masterpiece...
How to Use Fiction Forge
Welcome to the ultimate AI novel-writing engine. Follow the 6 phases to build your masterpiece.
Phase 1: The Spark
Define your genre, word count, and premise. The engine will dynamically interview you about the world based on your initial prompt.
Phase 2: Character Forge
Cast sizes scale automatically. We generate psychological profiles and MBTI traits, and can paint visual portraits via the Flux model.
Phase 3: Worldbuilder
Using the 4-tier Iceberg theory, we establish Surface, Shallows, Deep, and Abyss plot subtexts to give your story dimension.
Phase 4: Architect
Fractal outlining maps a 7-point skeleton, calculates chapter counts, and mathematically divides the story into micro-beats.
Phase 5: The Forge
The infinite ghostwriter loop. The engine feeds on your outline tracker and rolling memory cache to write seamlessly. Let it run!
Phase 6: The Bindery
Generate a title, blurb, and AI cover art. Finally, hit Export to dynamically compile the entire manuscript into print-ready PDF format.
v1.5 - The Co-Writer (Mid Tier) Overhaul
MATH CALIBRATION: Scaled Mid Tier pacing down from 1,000 words/beat to 800 words/beat (~2,400 words per chapter at 3 beats/chapter). This aligns with the natural output limits of rapid models like Haiku and Flash, eliminating prose padding and rambling.
CLAUDE HAIKU PROMOTION: DeepSeek V3 was still handling structural logic for the Mid Tier, risking the same API timeouts seen in v1.4. Claude 3.5 Haiku has been promoted to the primary structural brain for the Mid Tier due to its lightning-fast reasoning and low cost.
v1.4 - The Gemini Flash Promotion
MATH CALIBRATION: Adjusted Premium tier chapter calculations to estimate 3,400 words per chapter. This corrects a pacing bug where the engine would over-generate text (giving you 180k words on a 150k word target).
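The recalibration is effectively a one-line change to the chapter estimator. A minimal sketch of the math (function names are illustrative, not the engine's actual identifiers; the 2,800-word figure is the v1.2 Premium pacing of 4 beats at ~700 words each):

```typescript
// Estimate chapter count from a target length and an assumed words-per-chapter.
function estimateChapters(targetWords: number, wordsPerChapter: number): number {
  return Math.ceil(targetWords / wordsPerChapter);
}

const oldPlan = estimateChapters(150_000, 2_800); // → 54 chapters (v1.2 pacing)
const newPlan = estimateChapters(150_000, 3_400); // → 45 chapters (v1.4 pacing)

// If the model actually writes ~3,400 words per chapter, the old 54-chapter
// plan yields ~183.6k words on a 150k target -- the over-generation bug above.
```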
WORKER BEE SWAP: DeepSeek V3 was suffering from severe API latency and timeouts under high load. Gemini 2.5 Flash has been promoted to the primary prose-generating worker bee to drastically improve generation speed and eliminate loop interruptions.
v1.3 - The Director / Worker Bee Architecture
DEEPSEEK INTEGRATION: Re-architected the Masterwork tier to split brain functions. Expensive, high-logic "Director" models (like Gemini 3.1 Pro) handle planning, world-building, and macro-structuring. The heavy lifting of raw prose generation is now dynamically outsourced to ultra-efficient "Worker Bee" models like DeepSeek V3.
MASSIVE COST REDUCTION: By combining this model-routing split with the Rolling Context Window, Masterwork prose generation costs have plummeted to roughly 18-26 credits per beat—saving approximately 93% (1/15th the cost) compared to previous legacy generations that forced premium models for all tasks.
v1.2 - The Masterwork LLM Optimization
HARDWIRED LLM ROUTING: Premium Tier explicitly routes Logic/World building to Gemini 3.1 Pro Preview and Prose/Characters to Claude 4.6 Sonnet for maximum literary quality. Mid-Tier balances cost and speed using DeepSeek V3 and Claude 3.5 Haiku.
ROLLING CONTEXT WINDOW (ANTI-SNOWBALL): To prevent exponential token bloat, the Forge Engine restricts the injection of previous prose to exactly 800 words. Combined with the Master Outline and Cache, inputs are strictly capped around ~1,450 tokens. This guarantees a flat-rate cost structure per beat.
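The flat-rate guarantee follows from truncating prose context to a fixed-size tail. A minimal sketch, using an assumed ~1.33 tokens-per-word heuristic (the engine's real token accounting is not specified here):

```typescript
// Keep only the trailing maxWords words of prior prose, so prompt size
// stays constant no matter how long the manuscript grows.
function rollingTail(prose: string, maxWords = 800): string {
  return prose.trim().split(/\s+/).slice(-maxWords).join(" ");
}

// Rough token estimate to sanity-check the ~1,450-token input cap.
function approxTokens(text: string): number {
  return Math.ceil(text.trim().split(/\s+/).length * 1.33);
}
```

At roughly 1.33 tokens/word, an 800-word tail costs on the order of 1,060 tokens, leaving headroom under the ~1,450-token cap for the Master Outline and Cache.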
FRACTAL BEAT SCALING: Output instructions adapt dynamically to the AI Tier. Premium forces 4 beats per chapter at ~700 words each (highly detailed, sensory prose). Mid-Tier forces 3 beats per chapter at ~1,000 words each (plot-driven, utilitarian pacing).
TOKEN & CREDIT ECONOMICS: A full 50,000-word novel scales predictably:
• Premium (Sonnet): 18 chapters / 72 beats. ~20 credits/beat. Total cost: ~1,440 credits.
• Mid-Tier (Haiku/DeepSeek): 17 chapters / 51 beats. ~7 credits/beat. Total cost: ~357 credits.
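Both totals follow mechanically from the pacing rules above. A worked check (the 2,800 figure is Premium's 4 beats at ~700 words; the 3,000 figure is Mid-Tier's 3 beats at ~1,000 words; type and function names are illustrative):

```typescript
interface TierPacing {
  wordsPerChapter: number;
  beatsPerChapter: number;
  creditsPerBeat: number;
}

// Scale a target word count into chapters, beats, and total credit cost.
function novelCost(targetWords: number, t: TierPacing) {
  const chapters = Math.ceil(targetWords / t.wordsPerChapter);
  const beats = chapters * t.beatsPerChapter;
  return { chapters, beats, credits: beats * t.creditsPerBeat };
}

const premium = novelCost(50_000, { wordsPerChapter: 2_800, beatsPerChapter: 4, creditsPerBeat: 20 });
// → { chapters: 18, beats: 72, credits: 1440 }
const midTier = novelCost(50_000, { wordsPerChapter: 3_000, beatsPerChapter: 3, creditsPerBeat: 7 });
// → { chapters: 17, beats: 51, credits: 357 }
```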
v1.1 - The Gutenberg Print Update
NATIVE DARK MODE PDF: Added explicit rendering toggles to bypass default browser print settings, enabling true dark mode PDFs with #121212 backgrounds and #e0e0e0 text.
MANUSCRIPT REFLOW: Refactored text parsing engine to discard pre-wrap logic and dynamically wrap newlines into semantic HTML paragraphs with text-indent and justification. Resolves ragged edges on mobile PDF viewers.
ANTI-RASTERIZATION: Stripped all CSS filters and backdrop-blurs during the print compilation phase to prevent mobile browsers from treating text as low-res screenshot images.
v1.0 - The Masterpiece Architecture
STATE & STORAGE: Migrated from standard localStorage (1MB cloud limit) to IndexedDB for manuscript payloads. This allows up to ~500,000+ words to be cached locally without triggering quota limits, while the structural metadata (outlines/characters) syncs to the cloud. Library architecture uses 'Active' and 'Basement' logic for cold storage.
LLM ROUTING: The EngineManager dynamically routes tasks. Draft Tier strictly hits free models (GPT-4o Mini, Gemini, Llama 3). Co-Writer Tier routes logic/prose to DeepSeek V3 and Claude 3.5 Haiku. Masterwork Tier leverages GPT-5 and Claude Opus capabilities. Regex-based JSON "extraction claws" recover broken model responses.
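The JSON-recovery step can be sketched as a two-stage extraction: prefer a fenced block, then fall back to the outermost braces. This is a simplified illustration, not the engine's actual regex:

```typescript
// Pull the first JSON object out of a messy model response, tolerating
// markdown code fences and surrounding chatter.
function extractJson(raw: string): unknown {
  // Prefer the contents of a ```json fenced block if one exists.
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : raw;
  // Fall back to the outermost {...} span.
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(candidate.slice(start, end + 1));
  } catch {
    return null; // give up rather than crash the generation loop
  }
}
```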
PHASE 1 (SPARK): Procedural contextual generation. Options feed recursively back into a deep-dive questionnaire engine to exponentially expand worldbuilding lore.
PHASE 2 (CHARACTERS): Math-based algorithmic scaling. Target wordcount dictates cast size (e.g. >50k words spawns up to 13 unique characters). Deep profile mapping via MBTI traits, interconnected motivations, and native image-generation calls to Flux via MiniappsAI.
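The scaling rule can be sketched as a threshold function. Only the >50k-words → 13-characters breakpoint comes from the text above; the smaller tiers are illustrative placeholders:

```typescript
// Map target word count to total cast size. The >50k ceiling of 13 characters
// is from the changelog; the smaller breakpoints here are illustrative guesses.
function castSize(targetWords: number): number {
  if (targetWords > 50_000) return 13;
  if (targetWords > 20_000) return 8; // illustrative
  return 5;                           // illustrative
}
```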
PHASE 3 (WORLDBUILDER): Thematic scaffolding using the 4-tier Iceberg Theory (Surface, Shallows, Deep, Abyss) to force models to account for subtext during prose generation.
PHASE 4 (ARCHITECT): True fractal outlining. Creates a 7-point skeleton -> Estimates total chapters via (TargetWords / 3000) -> Batches chapter chunks to the LLM to write granular micro-beats. It maintains a rolling [CACHE] state (Location/Time/Status) that is passed forward to preserve agent memory.
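The rolling [CACHE] hand-off amounts to threading a small state record through each outlining batch. A minimal sketch (type and function names are illustrative; the LLM call is stubbed as a function parameter):

```typescript
// The Location/Time/Status triple carried between outlining batches.
interface BeatCache {
  location: string;
  time: string;
  status: string;
}

interface ChapterBeats {
  beats: string[];
  cache: BeatCache; // updated state to pass to the next batch
}

// Stub for the LLM call: takes a chapter brief plus the incoming cache,
// returns micro-beats and the updated cache.
type OutlineBatchFn = (chapterBrief: string, cache: BeatCache) => ChapterBeats;

function outlineChapters(briefs: string[], outline: OutlineBatchFn): string[][] {
  let cache: BeatCache = { location: "unset", time: "unset", status: "unset" };
  const result: string[][] = [];
  for (const brief of briefs) {
    const { beats, cache: next } = outline(brief, cache);
    result.push(beats);
    cache = next; // agent memory carried forward between batches
  }
  return result;
}
```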
PHASE 5 (FORGE): Infinite Ghostwriter Async Loop. Injects previous 800 words + current cache + specific beat text into prompt. Uses dynamic tier-based chunking (e.g., Free tier generates micro 300-word sections, Masterwork generates massive 1500-word blocks). Implements continuity-editor boundary-smoothing for fragmented models.
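The tier-based chunk sizes read naturally as a lookup table (the 300- and 1,500-word figures come from this entry; the Mid value is the v1.2 pacing, later recalibrated to 800 in v1.5):

```typescript
// Words generated per Forge-loop section, keyed by engine tier.
const CHUNK_WORDS: Record<"free" | "mid" | "masterwork", number> = {
  free: 300,         // micro sections for free models
  mid: 1_000,        // v1.2 pacing (recalibrated to 800 in v1.5)
  masterwork: 1_500, // massive blocks for premium models
};
```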
PHASE 6 (BINDERY): Auto-generates final metadata (blurb/title/cover). Injects data into the DOM and replaces the standard HTML tree with native print-CSS classes (break-before: page, text-indent, orphans/widows management), bypassing standard render limits and feeding directly into the browser's PDF engine.
