Key Features
- AI Integration: Uses a KoboldCpp server with the Google Gemma model.
- Flexible Configuration: Can be configured to use OpenAI models through a simple YAML configuration file.
- OS Interaction: Capable of interacting with the operating system, suggesting terminal commands and executing them upon user confirmation; see the example sessions.
- Optional Wolfram|Alpha Integration: Automatically attempts to augment the LLM’s context via communication with the official Wolfram|Alpha Short Answers API.
- Introspective Contextual Augmentation (ICA): Enhances AI responses by automatically gathering and incorporating relevant contextual information through introspective reasoning. See Introspective Contextual Augmentation for more details.
- Multi-Platform: Runs on Linux, macOS, and Windows, adapting to each environment automatically by collecting context about the running system.
- Dual-Mode Operation: Functions both as a GUI application and a stand-alone terminal tool for quick command generation and embedding of answers in scripts.
- Pastime Mode: Engage in human-like conversations with an AI companion, optionally impersonating specific characters or personalities.
- Intelligent Caching: Implements a request caching system for both Wolfram|Alpha and LLM queries, reducing API usage and improving response times.
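
The YAML configuration mentioned above might look like the following sketch. All key names and values here are illustrative assumptions, not the project's actual schema; consult the sample configuration shipped with the tool for the real option names.

```yaml
# Hypothetical configuration sketch -- key names are assumptions,
# not the project's documented schema.
backend: openai              # or e.g. "koboldcpp" for a local server
model: gpt-4o                # model name passed to the chosen backend
api_key_env: OPENAI_API_KEY  # environment variable holding the API key

wolfram_alpha:
  enabled: true              # optional Short Answers API augmentation
  app_id_env: WOLFRAM_APP_ID

cache:
  enabled: true              # cache both Wolfram|Alpha and LLM responses
```

Keeping secrets out of the file itself (referencing environment variables instead of embedding keys) is a common convention for configurations like this.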