Fiction Forge
The Novel Writing Engine
The Spark
Define your story's soul. Choose a genre and target length, and plant the seed of your premise.
Decide the cast size and story complexity.
Forging your premise…
Character Forge
Lock in your protagonist lineup, then let the AI generate supporting and minor characters based on story length.
Forging core and supporting characters…
Core Characters
Supporting Cast (Mid-Tier & Minor Characters)
Worldbuilder
Build the iceberg: Surface, Shallows, Deep, and Abyss.
Plumbing the abyss…
The Architect
Fractal outlining: from skeleton to full chapter beats.
Drafting the skeleton…
The Forge
The ghostwriting loop. Outline on the left, manuscript on the right.
Generating Beat...
The Bindery
Design your book cover, craft the perfect title, and write the back-cover blurb before publishing.
No Cover Art
Painting masterpiece...
How to Use Fiction Forge
Welcome to the ultimate AI novel-writing engine. Follow the six phases below to craft your masterpiece.
Phase 1: The Spark
Define your genre, word count, and premise. The engine dynamically interviews you about your world based on your initial prompt.
Phase 2: Character Forge
Cast size scales automatically. We generate psychological profiles and MBTI personality types, with optional visual portraits via the Flux model.
Phase 3: Worldbuilder
Using the four-tier "Iceberg Theory," we build Surface, Shallows, Deep, and Abyss layers of subtext to give your story depth.
Phase 4: Architect
The fractal outliner builds a seven-point skeleton, calculates the chapter count, and mathematically divides the story into micro-beats.
Phase 5: The Forge
The infinite ghostwriting loop. The engine writes seamlessly using your outline tracker and rolling memory cache. Let it run!
Phase 6: The Bindery
Generate your title, blurb, and AI cover art. Then hit "Export" to dynamically compile the full manuscript into a print-ready PDF.
v1.5 - The Co-Writer (Mid-Tier) Overhaul
MATH CALIBRATION: Scaled Mid-Tier pacing down from 1,000 words/beat to 800 words/beat (~2,400 words per chapter at three beats). This aligns with the natural output limits of rapid models like Haiku and Flash, eliminating prose padding and rambling.
CLAUDE HAIKU PROMOTION: DeepSeek V3 was still handling structural logic for the Mid-Tier, risking the same timeouts addressed in v1.4. Claude 3.5 Haiku has been promoted to the primary structural brain for the Mid-Tier thanks to its lightning-fast reasoning and low cost.
v1.4 - The Gemini Flash Promotion
MATH CALIBRATION: Adjusted Premium tier chapter calculations to estimate 3,400 words per chapter. This corrects a pacing bug where the engine would over-generate text (giving you 180k words on a 150k word target).
WORKER BEE SWAP: DeepSeek V3 was suffering from severe API latency and timeouts under high load. Gemini 2.5 Flash has been promoted to the primary prose-generating worker bee to drastically improve generation speed and eliminate loop interruptions.
v1.3 - Director / Worker Bee Architecture
DEEPSEEK INTEGRATION: Re-architected the Masterwork tier to split brain functions. Expensive, high-logic "Director" models (like Gemini 3.1 Pro) handle planning, world-building, and macro-structuring. The heavy lifting of raw prose generation is now dynamically outsourced to ultra-efficient "Worker Bee" models like DeepSeek V3.
MASSIVE COST REDUCTION: By combining this model-routing split with the Rolling Context Window, Masterwork prose generation costs have plummeted to roughly 18-26 credits per beat—saving approximately 93% (1/15th the cost) compared to previous legacy generations that forced premium models for all tasks.
v1.2 - Masterwork LLM Optimization
HARDWIRED LLM ROUTING: Premium Tier explicitly routes Logic/World building to Gemini 3.1 Pro Preview and Prose/Characters to Claude 4.6 Sonnet for maximum literary quality. Mid-Tier balances cost and speed using DeepSeek V3 and Claude 3.5 Haiku.
ROLLING CONTEXT WINDOW (ANTI-SNOWBALL): To prevent exponential token bloat, the Forge Engine restricts the injection of previous prose to exactly 800 words. Combined with the Master Outline and Cache, inputs are strictly capped around ~1,450 tokens. This guarantees a flat-rate cost structure per beat.
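The 800-word cap above can be sketched as a small helper. This is a minimal sketch; the function name and signature are assumptions, not the engine's real API.

```typescript
// Anti-snowball window: keep only the trailing `maxWords` words of
// previously generated prose before injecting it into the next prompt.
function rollingContext(prose: string, maxWords: number = 800): string {
  const words = prose.trim().split(/\s+/).filter(Boolean);
  return words.slice(-maxWords).join(" ");
}
```

Because the window is fixed, prompt size (and therefore per-beat cost) stays flat no matter how long the manuscript grows.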
FRACTAL BEAT SCALING: Output instructions adapt dynamically to the AI Tier. Premium forces 4 beats per chapter at ~700 words each (highly detailed, sensory prose). Mid-Tier forces 3 beats per chapter at ~1,000 words each (plot-driven, utilitarian pacing).
TOKEN & CREDIT ECONOMICS: A full 50,000-word novel scales predictably:
• Premium (Sonnet): 18 chapters / 72 beats. ~20 credits/beat. Total cost: ~1,440 credits.
• Mid-Tier (Haiku/DeepSeek): 17 chapters / 51 beats. ~7 credits/beat. Total cost: ~357 credits.
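The v1.2 pacing math above can be reproduced from the per-tier beat settings. A hypothetical reconstruction follows; the names `TIERS` and `estimateNovel` and the use of ceiling rounding are assumptions.

```typescript
type Tier = { beatsPerChapter: number; wordsPerBeat: number; creditsPerBeat: number };

// Numbers taken from the v1.2 entries above (Premium: 4 × ~700 words,
// ~20 credits/beat; Mid-Tier: 3 × ~1,000 words, ~7 credits/beat).
const TIERS: Record<string, Tier> = {
  premium: { beatsPerChapter: 4, wordsPerBeat: 700, creditsPerBeat: 20 },
  mid: { beatsPerChapter: 3, wordsPerBeat: 1000, creditsPerBeat: 7 },
};

function estimateNovel(targetWords: number, tier: Tier) {
  const wordsPerChapter = tier.beatsPerChapter * tier.wordsPerBeat;
  const chapters = Math.ceil(targetWords / wordsPerChapter);
  const beats = chapters * tier.beatsPerChapter;
  const credits = beats * tier.creditsPerBeat;
  return { chapters, beats, credits };
}
```

For a 50,000-word target this yields exactly the figures listed: 18 chapters / 72 beats / ~1,440 credits on Premium, and 17 chapters / 51 beats / ~357 credits on Mid-Tier.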
v1.1 - The Gutenberg Print Update
NATIVE DARK MODE PDF: Added explicit rendering toggles to bypass default browser print settings, enabling true dark mode PDFs with #121212 backgrounds and #e0e0e0 text.
MANUSCRIPT REFLOW: Refactored text parsing engine to discard pre-wrap logic and dynamically wrap newlines into semantic HTML paragraphs with text-indent and justification. Resolves ragged edges on mobile PDF viewers.
ANTI-RASTERIZATION: Stripped all CSS filters and backdrop-blurs during the print compilation phase to prevent mobile browsers from treating text as low-res screenshot images.
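The manuscript-reflow step above can be sketched as a pure string transform. The class name and exact markup are assumptions; the point is discarding pre-wrap newlines and emitting semantic paragraphs.

```typescript
// Reflow raw newline-separated prose into semantic HTML paragraphs that
// CSS can indent and justify (resolves ragged edges in PDF viewers).
function reflowManuscript(raw: string): string {
  return raw
    .split(/\n+/) // collapse runs of blank lines
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => `<p class="ms-para">${line}</p>`)
    .join("\n");
}
```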
v1.0 - Masterwork Architecture
STATE & STORAGE: Migrated from standard localStorage (1MB cloud limit) to IndexedDB for manuscript payloads. This allows up to ~500,000+ words to be cached locally without triggering quota limits, while the structural metadata (outlines/characters) syncs to the cloud. Library architecture uses 'Active' and 'Basement' logic for cold storage.
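The Active/Basement storage split described above boils down to one routing decision. This is an illustrative sketch with an assumed record shape, not the real persistence layer.

```typescript
// Bulky manuscript prose stays local in IndexedDB; small structural
// metadata (outlines, characters) syncs to the cloud.
type RecordKind = "manuscript" | "outline" | "characters";

function storageTarget(kind: RecordKind): "indexeddb" | "cloud" {
  // Prose payloads can exceed the ~1 MB cloud quota, so they never sync.
  return kind === "manuscript" ? "indexeddb" : "cloud";
}
```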
LLM ROUTING: A dynamic EngineManager routes tasks per tier. Draft Tier strictly hits free models (GPT-4o Mini, Gemini, Llama 3). Co-Writer Tier routes logic/prose to DeepSeek V3 and Claude 3.5 Haiku. Masterwork Tier leverages GPT-5 and Claude Opus capabilities. Regex-based "extraction claws" recover JSON from broken model responses.
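The per-tier routing can be pictured as a lookup table. The table shape, function name, and model identifier strings below are illustrative assumptions reconstructed from the entry above, not real API values.

```typescript
type Task = "logic" | "prose";

// Tier → task → model, per the routing described above (identifiers assumed).
const ROUTES: Record<string, Record<Task, string>> = {
  draft: { logic: "gpt-4o-mini", prose: "gpt-4o-mini" },
  cowriter: { logic: "deepseek-v3", prose: "claude-3.5-haiku" },
  masterwork: { logic: "gpt-5", prose: "claude-opus" },
};

function routeModel(tier: string, task: Task): string {
  return ROUTES[tier]?.[task] ?? ROUTES.draft[task]; // fall back to the free tier
}
```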
PHASE 1 (SPARK): Procedural contextual generation. Options feed recursively back into a deep-dive questionnaire engine to exponentially expand worldbuilding lore.
PHASE 2 (CHARACTERS): Math-based algorithmic scaling. Target word count dictates cast size (e.g. >50k words spawns up to 13 unique characters). Deep profile mapping via MBTI traits and interconnected motivations, plus native image generation via Flux through MiniappsAI.
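Only one datapoint of the cast-scaling curve is documented (>50k words → up to 13 characters); the smaller thresholds in this sketch are illustrative assumptions.

```typescript
// Map target word count to a maximum cast size.
function maxCastSize(targetWords: number): number {
  if (targetWords > 50_000) return 13; // documented ceiling for novel-length work
  if (targetWords > 20_000) return 8; // assumed novella tier
  return 5; // assumed short-form floor
}
```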
PHASE 3 (WORLDBUILDER): Thematic scaffolding using the 4-tier Iceberg Theory (Surface, Shallows, Deep, Abyss) to force models to account for subtext during prose generation.
PHASE 4 (ARCHITECT): True fractal outlining. Creates a 7-point skeleton -> Estimates total chapters via (TargetWords / 3000) -> Batches chapter chunks to the LLM to write granular micro-beats. It maintains a rolling [CACHE] state (Location/Time/Status) passing forward to preserve agent memory.
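The chapter estimate is stated explicitly as TargetWords / 3000; the rounding mode and the cache field names in this sketch are assumptions.

```typescript
// Rolling [CACHE] state passed forward between beats to preserve agent memory.
interface BeatCache {
  location: string;
  time: string;
  status: string;
}

function estimateChapters(targetWords: number): number {
  return Math.max(1, Math.ceil(targetWords / 3000));
}
```

A 50,000-word target thus yields 17 chapters, matching the Mid-Tier figure in the v1.2 economics above.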
PHASE 5 (FORGE): Infinite Ghostwriter Async Loop. Injects previous 800 words + current cache + specific beat text into prompt. Uses dynamic tier-based chunking (e.g., Free tier generates micro 300-word sections, Masterwork generates massive 1500-word blocks). Implements continuity-editor boundary-smoothing for fragmented models.
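One iteration of the ghostwriter loop assembles a prompt from the three inputs named above. The section labels and function signature here are assumptions; only the 800-word window and the three ingredients come from the source.

```typescript
// Build one Forge prompt: trailing prose window + rolling cache + current beat.
function buildBeatPrompt(
  prevProse: string,
  cache: string,
  beat: string,
  maxWords: number = 800
): string {
  const tail = prevProse.trim().split(/\s+/).filter(Boolean).slice(-maxWords).join(" ");
  return `[PREVIOUS PROSE]\n${tail}\n\n[CACHE]\n${cache}\n\n[CURRENT BEAT]\n${beat}`;
}
```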
PHASE 6 (BINDERY): Auto-generates final metadata (blurb/title/cover). Injects data into the DOM and replaces the standard HTML tree with native print-CSS classes (break-before: page, text-indent, orphans/widows management), bypassing standard render limits and feeding output directly into the browser's PDF engine.
