# Forkify Command Line: AI Conversation Context Manager
A powerful command-line tool for managing conversational context windows with Claude AI. Forkify enables intelligent document processing and "forking" of conversation branches, allowing you to explore multiple conversational paths from a shared context trunk.
## Overview
Forkify Command Line redefines AI document analysis by enabling sophisticated context window management with Anthropic's Claude AI. The name "Forkify" comes from its core ability to "fork" conversation contexts—creating branches that explore different paths while maintaining a shared base trunk.
The application manages the entire document lifecycle:
- Loading documents into Claude's cache (`input-docs/`)
- Processing them into AI-optimized summaries (`processed-docs/`)
- Storing conversation outputs for future reference (`output-docs/`)
This command-line tool provides a lightweight yet powerful solution for context-aware document processing and conversation management.
## Key Features
- Context Window Management: Precisely control how much conversational history is included in Claude's attention window
- Document Processing Pipeline: Three-stage pipeline with raw documents, processed summaries, and conversation outputs
- Conversation Branching: Fork conversations to explore different paths while maintaining a shared context trunk
- System Prompt Customization: Choose different system prompts for analysis, QA, or content generation tasks
- Response Length Control: Adjust response length from extremely short (128 tokens) to very detailed (4096+ tokens)
- Token Usage Monitoring: Track token usage and associated costs across all conversations
- Document Source Tracking: View which documents are being used in the current conversation
- Session Persistence: Automatically save and restore conversation state between sessions
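The context-window management described above can be illustrated with a minimal sketch. This is not Forkify's actual implementation; `trim_to_budget` is a hypothetical helper, and `len(text) // 4` is a crude token estimate standing in for a real tokenizer:

```python
# Minimal sketch (assumed, not Forkify's real code) of one way to cap a
# context window: keep the most recent messages that fit a token budget.
def trim_to_budget(messages, max_tokens=4096):
    kept, total = [], 0
    for msg in reversed(messages):           # walk from newest to oldest
        cost = len(msg["content"]) // 4      # rough token estimate
        if total + cost > max_tokens:
            break                            # budget exhausted; drop older messages
        kept.append(msg)
        total += cost
    return list(reversed(kept))              # restore chronological order
```

Walking from newest to oldest ensures the most recent turns survive trimming, which matches how most chat tools prioritize recency.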
## Conversation Branching
The core innovation of Forkify is its conversation branching capability:
```
                    ┌── Branch A: Explore technical details
                    │
Main Conversation ──┼── Branch B: Focus on business implications
                    │
                    └── Branch C: Investigate edge cases
```
How it works:
- You establish a base context "trunk" with your initial documents and conversation
- Create branches to explore different tangents or conversation paths
- Each branch maintains its own context window but shares the base trunk
- Switch between branches without losing your place in either conversation
This approach allows you to:
- Explore multiple analytical angles from the same document set
- Try different prompting strategies without starting over
- Maintain separate conversations that share foundational context
- Preserve trunk attention mechanisms while exploring new directions
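The trunk-and-branch model above can be sketched as a simple data structure. This is a hypothetical illustration, not Forkify's internals; the `Conversation` class and its fields are assumptions:

```python
# Sketch of conversation forking: each branch keeps its own messages but
# shares the trunk's history, so the base context is stored only once.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    name: str
    trunk: "Conversation | None" = None      # shared base context, if any
    messages: list = field(default_factory=list)

    def history(self):
        """Full context window: trunk messages first, then this branch's own."""
        base = self.trunk.history() if self.trunk else []
        return base + self.messages

main = Conversation("research_project")
main.messages.append({"role": "user", "content": "Analyze the paper"})

branch = Conversation("technical_details", trunk=main)
branch.messages.append({"role": "user", "content": "Focus on section 3"})
```

Because the branch holds only a reference to the trunk, switching branches never mutates the main conversation, and new messages on the trunk become visible to every branch automatically.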
## Working with Documents
Forkify implements a three-stage document processing pipeline:
- Input Documents (`input-docs/`): Raw documents loaded into Claude's context
- Processed Documents (`processed-docs/`): AI-optimized summaries and analysis
- Output Documents (`output-docs/`): Conversation history and generated content
This approach ensures efficient token usage while maintaining comprehensive context.
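The pipeline's first two stages might look something like the sketch below. This is an assumed illustration: `summarize` is a placeholder for a real Claude API call, and the file-naming scheme is hypothetical:

```python
# Sketch of the input-docs/ -> processed-docs/ stage: each raw document is
# summarized and the summary written alongside for later reuse.
from pathlib import Path

INPUT, PROCESSED, OUTPUT = Path("input-docs"), Path("processed-docs"), Path("output-docs")

def summarize(text: str) -> str:
    # Placeholder for an actual model call that produces an AI-optimized summary.
    return text[:200]

def process_documents():
    PROCESSED.mkdir(exist_ok=True)
    for doc in INPUT.glob("*"):
        summary = summarize(doc.read_text())
        (PROCESSED / f"{doc.stem}.summary.txt").write_text(summary)
```

Caching summaries on disk means a conversation can load the compact processed form instead of re-sending the full raw document, which is what keeps token usage efficient.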
## Sample Session
```
> /sw research_project
Created new conversation 'research_project'
> I need to analyze @@research_paper.pdf
Processing document 'research_paper.pdf'...
Document analysis complete.
> What are the key findings in this paper?
The key findings from the research paper include:
[Claude generates response analyzing the document]
> /branch technical_details
Created new branch 'technical_details' from 'research_project'
> Let's focus on the implementation details in section 3
[Claude responds with technical analysis]
> /sw research_project
Switched to 'research_project' conversation
> Let's discuss the business implications instead
[Claude responds with business analysis while maintaining original context]
```
## Command Reference
Forkify includes a comprehensive set of commands:
| Command | Description |
| ------- | ----------- |
| `/sw <name>` | Create new conversation or switch to existing |
| `/reload` | Reprocess all documents for current conversation |
| `/docs` | Show document sources for current conversation |
| `/p <type>` | Switch system prompt type (`analysis`\|`qa`\|`generation`) |
| `/m` to `/xxl` | Control response length from medium to extremely long |
| `/usage` | Display total token usage and costs |
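A command loop for the table above could be dispatched as in this sketch. The handler bodies are hypothetical stand-ins; only the command names come from the table:

```python
# Sketch of slash-command dispatch; state is a plain dict holding the
# current conversation name and a running token count (assumed fields).
def handle_command(line: str, state: dict) -> str:
    cmd, *args = line.split()
    if cmd == "/sw":
        state["conversation"] = args[0]      # create-or-switch semantics
        return f"Switched to '{args[0]}' conversation"
    if cmd == "/usage":
        return f"Total tokens: {state.get('tokens', 0)}"
    return f"Unknown command: {cmd}"
```

Keeping dispatch in one function makes it easy to add the remaining commands (`/reload`, `/docs`, `/p`, the length controls) as further branches or a lookup table.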
## Development
The tool is built with Python 3.8+ and structured around the following directories:
- `input-docs/` - Place your input documents here
- `processed-docs/` - Contains processed document data
- `output-docs/` - Contains output generated from conversations
- `sessions/` - Contains conversation session data
The architecture focuses on efficient context management, intelligent document processing, and seamless conversation branching.