Fiction Forge
Novel-Writing Engine
The Spark
Define your story's soul. Choose a genre and a target length, and plant the seed of a core idea.
This determines the cast size and the complexity of the story.
Forging the premise...
The Character Forge
Define your main cast and let the AI generate mid-tier and minor supporting characters based on your story's length.
Forging main and supporting characters...
Main Cast
Supporting Cast (mid-tier and minor)
The Worldbuilder
Build the Iceberg: Surface, Shallows, Deep, and Abyss.
Sounding the depths...
The Architect
Fractal outlining: from the skeleton to complete chapter beats.
Drafting the skeleton...
The Forge
The ghostwriter loop. Outline on the left, manuscript on the right.
Generating Beat...
The Bindery
Design your book cover, craft the perfect title, and write the back-cover blurb before publishing.
No Cover Art
Painting masterpiece...
How to Use Fiction Forge
Welcome to the ultimate AI novel-writing engine. Follow the six phases to build your masterpiece.
Phase 1: The Spark
Define your genre, word count, and premise. The engine will dynamically interview you about the world based on your initial prompt.
Phase 2: Character Forge
Cast size scales automatically. We generate psychological profiles and MBTI traits, and can create visual portraits via the Flux model.
Phase 3: Worldbuilder
Using the 4-tier Iceberg Theory, we establish Surface, Shallows, Deep, and Abyss plot subtexts to give your story dimension.
Phase 4: Architect
The fractal outline maps a 7-point skeleton, calculates the chapter count, and mathematically divides the story into micro-beats.
Phase 5: The Forge
The infinite ghostwriter loop. The engine feeds on your outline tracker and dynamic memory cache to write without interruption. Let it run!
Phase 6: The Bindery
Generate a title, a blurb, and an AI cover. Finally, hit Export to dynamically compile the entire manuscript into a print-ready PDF.
v1.5 - The Mid-Tier Co-Writer, Fully Revamped
MATH CALIBRATION: Scaled Mid Tier pacing down from 1,000 words/beat to 800 words/beat (2,400 words per chapter at 3 beats per chapter). This aligns with the natural output limits of fast models like Haiku and Flash, eliminating prose padding and rambling.
CLAUDE HAIKU PROMOTION: DeepSeek V3 was still handling structural logic for the Mid Tier, risking the same timeouts. Claude 3.5 Haiku has been promoted to the primary structural brain for Mid Tier due to its lightning-fast reasoning and low cost.
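The calibration above works out as a simple multiplication; here is a minimal sketch of the pacing math, where the helper name and the rounding of the chapter count are assumptions (the 3-beats-per-chapter figure appears in the v1.2 notes below):

```typescript
// Hypothetical sketch of the v1.5 Mid Tier pacing:
// 3 beats/chapter × 800 words/beat = 2,400 words per chapter.
const MID_TIER = { beatsPerChapter: 3, wordsPerBeat: 800 };

function midTierPlan(targetWords: number) {
  const wordsPerChapter = MID_TIER.beatsPerChapter * MID_TIER.wordsPerBeat;
  const chapters = Math.round(targetWords / wordsPerChapter); // rounding assumed
  return { wordsPerChapter, chapters, beats: chapters * MID_TIER.beatsPerChapter };
}
```

Under this sketch a 50,000-word target yields 21 chapters of 2,400 words (63 beats), versus 17 chapters under the old 1,000-words/beat pacing.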
v1.4 - The Gemini Flash Promotion
MATH CALIBRATION: Adjusted Premium tier chapter calculations to estimate 3,400 words per chapter. This corrects a pacing bug where the engine would over-generate text (giving you 180k words on a 150k word target).
WORKER BEE SWAP: DeepSeek V3 was suffering from severe API latency and timeouts under high load. Gemini 2.5 Flash has been promoted to the primary prose-generating worker bee to drastically improve generation speed and eliminate loop interruptions.
v1.3 - The Director / Worker Bee Architecture
DEEPSEEK INTEGRATION: Re-architected the Masterwork tier to split brain functions. Expensive, high-logic "Director" models (like Gemini 3.1 Pro) handle planning, world-building, and macro-structuring. The heavy lifting of raw prose generation is now dynamically outsourced to ultra-efficient "Worker Bee" models like DeepSeek V3.
MASSIVE COST REDUCTION: By combining this model-routing split with the Rolling Context Window, Masterwork prose generation costs have plummeted to roughly 18-26 credits per beat—saving approximately 93% (1/15th the cost) compared to previous legacy generations that forced premium models for all tasks.
v1.2 - The Masterwork LLM Optimization
HARDWIRED LLM ROUTING: Premium Tier explicitly routes Logic/World building to Gemini 3.1 Pro Preview and Prose/Characters to Claude 4.6 Sonnet for maximum literary quality. Mid-Tier balances cost and speed using DeepSeek V3 and Claude 3.5 Haiku.
ROLLING CONTEXT WINDOW (ANTI-SNOWBALL): To prevent exponential token bloat, the Forge Engine restricts the injection of previous prose to exactly 800 words. Combined with the Master Outline and Cache, inputs are strictly capped around ~1,450 tokens. This guarantees a flat-rate cost structure per beat.
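A minimal sketch of the rolling window, assuming the 800-word cap is applied by word count on the raw prior prose (the function name is hypothetical):

```typescript
// Anti-snowball rolling window: inject only the last `maxWords` words of
// previous prose into the prompt, keeping per-beat input size flat.
function rollingWindow(previousProse: string, maxWords = 800): string {
  const words = previousProse.split(/\s+/).filter(Boolean);
  return words.slice(-maxWords).join(" ");
}
```

Because the injected context never grows past the cap, each beat's prompt stays roughly the same size no matter how long the manuscript gets.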
FRACTAL BEAT SCALING: Output instructions adapt dynamically to the AI Tier. Premium forces 4 beats per chapter at ~700 words each (highly detailed, sensory prose). Mid-Tier forces 3 beats per chapter at ~1,000 words each (plot-driven, utilitarian pacing).
TOKEN & CREDIT ECONOMICS: A full 50,000-word novel scales predictably:
• Premium (Sonnet): 18 chapters / 72 beats. ~20 credits/beat. Total cost: ~1,440 credits.
• Mid-Tier (Haiku/DeepSeek): 17 chapters / 51 beats. ~7 credits/beat. Total cost: ~357 credits.
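Both rows of the table follow from one formula: beats = chapters × beats-per-chapter, and total cost = beats × credits-per-beat. A sketch with a hypothetical helper name:

```typescript
// Reproduces the v1.2 credit economics (figures from the changelog).
function novelCost(chapters: number, beatsPerChapter: number, creditsPerBeat: number) {
  const beats = chapters * beatsPerChapter;
  return { beats, total: beats * creditsPerBeat };
}
```

For example, the Premium row is novelCost(18, 4, 20) and the Mid-Tier row is novelCost(17, 3, 7).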
v1.1 - The Gutenberg Print Update
NATIVE DARK MODE PDF: Added explicit rendering toggles to bypass default browser print settings, enabling true dark mode PDFs with #121212 backgrounds and #e0e0e0 text.
MANUSCRIPT REFLOW: Refactored text parsing engine to discard pre-wrap logic and dynamically wrap newlines into semantic HTML paragraphs with text-indent and justification. Resolves ragged edges on mobile PDF viewers.
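A sketch of what such a reflow step might look like, assuming newline-delimited raw text and inline print styles (the function name and exact CSS values are illustrative, not the engine's actual code):

```typescript
// Reflow raw manuscript text into semantic paragraphs: split on newlines,
// drop empty runs, and wrap each paragraph with print-friendly styling
// (first-line indent + justification) instead of relying on pre-wrap.
function reflowToParagraphs(raw: string): string {
  return raw
    .split(/\n+/)
    .map(line => line.trim())
    .filter(line => line.length > 0)
    .map(line => `<p style="text-indent:1.5em;text-align:justify">${line}</p>`)
    .join("\n");
}
```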
ANTI-RASTERIZATION: Stripped all CSS filters and backdrop-blurs during the print compilation phase to prevent mobile browsers from treating text as low-res screenshot images.
v1.0 - The Masterpiece Architecture
STATE & STORAGE: Migrated from standard localStorage (1MB cloud limit) to IndexedDB for manuscript payloads. This allows up to ~500,000+ words to be cached locally without triggering quota limits, while the structural metadata (outlines/characters) syncs to the cloud. Library architecture uses 'Active' and 'Basement' logic for cold storage.
LLM ROUTING: A dynamic EngineManager routes tasks by tier. The Draft Tier strictly hits free models (GPT-4o Mini, Gemini, Llama 3), the Co-Writer Tier routes logic/prose to DeepSeek V3 and Claude 3.5 Haiku, and the Masterwork Tier leverages GPT-5 and Claude Opus. Advanced regex extraction claws recover valid JSON from broken model responses.
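One common shape for such an extraction claw is a greedy brace match plus a guarded parse; the engine's actual regex is not documented, so this is a sketch of the general technique rather than its implementation:

```typescript
// Recover the first JSON object from a model response that wrapped it in
// chatter or a markdown fence. Greedy match grabs the outermost braces;
// the try/catch guards against responses with no valid JSON inside.
function extractJson(response: string): unknown {
  const match = response.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]);
  } catch {
    return null; // braces found, but the payload was malformed
  }
}
```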
PHASE 1 (SPARK): Procedural contextual generation. Options feed recursively back into a deep-dive questionnaire engine to exponentially expand worldbuilding lore.
PHASE 2 (CHARACTERS): Math-based algorithmic scaling. Target wordcount dictates cast size (e.g. >50k words spawns up to 13 unique characters). Deep profile mapping via MBTI traits, interconnected motivations, and native image generation calls via Flux via MiniappsAI.
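A hypothetical sketch of the scaling rule: only the 13-character cap above 50k words comes from the text; the linear ramp and small-cast floor below that threshold are invented for illustration.

```typescript
// Math-based cast scaling (sketch): target wordcount dictates cast size.
// The >50k => 13 cap is from the changelog; the ramp below it is assumed.
function castSize(targetWords: number): number {
  if (targetWords > 50_000) return 13;
  return Math.max(4, Math.round(targetWords / 5_000)); // assumed ramp + floor
}
```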
PHASE 3 (WORLDBUILDER): Thematic scaffolding using the 4-tier Iceberg Theory (Surface, Shallows, Deep, Abyss) to force models to account for subtext during prose generation.
PHASE 4 (ARCHITECT): True fractal outlining. Creates a 7-point skeleton -> Estimates total chapters via (TargetWords / 3000) -> Batches chapter chunks to the LLM to write granular micro-beats. It maintains a rolling [CACHE] state (Location/Time/Status) passing forward to preserve agent memory.
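The rolling [CACHE] hand-off can be sketched as a plain merge of the previous state with the beat's updates; the field names come from the changelog, while the types and helper are assumptions:

```typescript
// Rolling [CACHE] state carried forward from beat to beat so the agent
// keeps continuity of Location/Time/Status without re-reading old prose.
interface BeatCache {
  location: string;
  time: string;
  status: string;
}

function advanceCache(prev: BeatCache, updates: Partial<BeatCache>): BeatCache {
  // Unchanged fields persist; the new beat overrides only what moved.
  return { ...prev, ...updates };
}
```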
PHASE 5 (FORGE): Infinite Ghostwriter Async Loop. Injects previous 800 words + current cache + specific beat text into prompt. Uses dynamic tier-based chunking (e.g., Free tier generates micro 300-word sections, Masterwork generates massive 1500-word blocks). Implements continuity-editor boundary-smoothing for fragmented models.
PHASE 6 (BINDERY): Auto-generates final metadata (blurb/title/cover). Injects data into DOM and replaces standard HTML tree with native print-CSS classes (break-before: page, text-indent, orphans/widows management) bypassing standard render limits directly into browser PDF engine.
