# Using LLM Mode
Master natural language commands with the AI assistant.
## Setup

- Get an API key from platform.openai.com
- Add to your `.env`:

  ```
  OPENAI_API_KEY=sk-your-key-here
  OPENAI_MODEL=gpt-4.1-nano
  ```

- Restart the server
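If you're curious how these settings get picked up, here's a minimal sketch, assuming a Node.js/TypeScript server that uses the `dotenv` and `openai` packages (the project's actual wiring may differ):

```typescript
// config.ts - hypothetical module; loads the OpenAI settings from .env
import "dotenv/config";
import OpenAI from "openai";

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  // Fail fast at startup instead of erroring on the first /llm request.
  throw new Error("OPENAI_API_KEY is not set - add it to your .env file");
}

// Fall back to a default model when OPENAI_MODEL is not provided.
export const model = process.env.OPENAI_MODEL ?? "gpt-4.1-nano";
export const openai = new OpenAI({ apiKey });
```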
## Basic Usage

### Enable LLM Mode

In Telegram:

```
/llm
```

You'll see:

```
LLM mode enabled! Send natural language commands.
```
### Send a Request

Just type what you want to do:

```
show me the git status
```

The AI will:

- Figure out the right command (`git status`)
- Run it
- Show you the output
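In other words, the bot asks the model to translate your message into a shell command, runs it, and sends back the output. A rough sketch of that loop, reusing the hypothetical `config.ts` module from the Setup sketch (not the project's actual code):

```typescript
import { exec } from "node:child_process";
import { promisify } from "node:util";
import { openai, model } from "./config"; // hypothetical module from the Setup sketch

const sh = promisify(exec);

// Turn a natural-language request into one shell command, run it,
// and return the command plus its output for the Telegram reply.
export async function handleRequest(text: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model,
    messages: [
      {
        role: "system",
        content:
          "Reply with exactly one shell command that fulfils the user's request, and nothing else.",
      },
      { role: "user", content: text },
    ],
  });

  const command = completion.choices[0].message.content?.trim() ?? "";
  const { stdout, stderr } = await sh(command);

  return `Running: ${command}\n${stdout}${stderr}`;
}
```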
### Disable LLM Mode

Send `/llm` again to toggle it off:

```
/llm
```
## Example Workflows

### Git Operations

```
You: show me recent commits
AI: Running: git log --oneline -10
[shows commits]

You: what changed in the last one?
AI: Running: git show HEAD --stat
[shows changes]

You: create a branch called feature-auth
AI: Running: git checkout -b feature-auth
Switched to a new branch 'feature-auth'
```
### Code Exploration

```
You: find all config files
AI: Running: find . -name "*.config.*" -o -name "config.*"
[shows files]

You: what's in the main config?
AI: Running: cat config/default.json
[shows contents]

You: find where the API endpoint is defined
AI: Running: grep -r "api" --include="*.js" -l
[shows files]
```
### Project Management

```
You: run the tests
AI: Running: npm test
[shows test output]

You: are there any failing tests?
AI: Based on the output, all 42 tests passed.

You: build the project
AI: Running: npm run build
[shows build output]
```
### File Operations

```
You: list files in the src folder
AI: Running: ls -la src/
[shows files]

You: show me the first 20 lines of index.js
AI: Running: head -20 src/index.js
[shows code]

You: find TODO comments
AI: Running: grep -rn "TODO" --include="*.js"
[shows TODOs with line numbers]
```
## Tips for Better Results

### Be Specific

- ❌ "show files"
- ✅ "show all JavaScript files in the src directory"

### Provide Context

- ❌ "what's wrong?"
- ✅ "the build failed, what's the error in the output?"
### Ask Follow-ups

The AI remembers the conversation:

```
You: show me the package.json
AI: [shows file]

You: what version of express is installed?
AI: Based on the package.json, express version is 4.21.0
```
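Follow-ups work because the bot keeps earlier turns and sends them along with each new request. A minimal sketch of that idea (hypothetical helper names; the real storage may differ):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Running conversation per Telegram chat id.
const history = new Map<number, ChatMessage[]>();

// Record a turn (your message or the AI's reply) for later context.
export function remember(chatId: number, role: ChatMessage["role"], content: string): void {
  const messages = history.get(chatId) ?? [];
  messages.push({ role, content });
  history.set(chatId, messages);
}

// Every request to the model includes the earlier turns, which is why
// "what version of express is installed?" can refer back to package.json.
export function contextFor(chatId: number): ChatMessage[] {
  return history.get(chatId) ?? [];
}

// /llm_clear simply drops the stored turns for that chat.
export function clearHistory(chatId: number): void {
  history.delete(chatId);
}
```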
## Conversation Management

### Clear History

If the AI gets confused or you want to start fresh:

```
/llm_clear
```

### Check Mode

Send `/llm` to see if LLM mode is enabled. The same command toggles it on and off.
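Under the hood this is presumably just a per-chat flag that flips each time `/llm` arrives; a hypothetical sketch:

```typescript
// Chat ids that currently have LLM mode enabled.
const llmEnabled = new Set<number>();

// Called whenever a chat sends /llm; returns the new state so the bot
// can reply "LLM mode enabled!" or the disabled equivalent.
export function toggleLlmMode(chatId: number): boolean {
  if (llmEnabled.has(chatId)) {
    llmEnabled.delete(chatId);
    return false;
  }
  llmEnabled.add(chatId);
  return true;
}
```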
## Model Selection

Choose a model based on your needs:

| Model | Speed | Cost | Best For |
|---|---|---|---|
| `gpt-4.1-nano` | Fast | Low | Quick commands |
| `gpt-4o-mini` | Medium | Medium | Balanced |
| `gpt-4o` | Slow | High | Complex analysis |

Set it in `.env`:

```
OPENAI_MODEL=gpt-4o-mini
```
## Limitations

- Commands time out after 2 minutes
- Long outputs are truncated
- Maximum of 10 commands per conversation turn
- Requires an internet connection
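The timeout and truncation limits map naturally onto Node's child-process options. A hedged sketch of how such limits could be enforced (the exact values and mechanism in this project may differ):

```typescript
import { exec } from "node:child_process";
import { promisify } from "node:util";

const sh = promisify(exec);

const COMMAND_TIMEOUT_MS = 2 * 60 * 1000; // kill a command after 2 minutes
const MAX_OUTPUT_CHARS = 4000; // assumed cap; Telegram messages max out near 4096 chars

export async function runLimited(command: string): Promise<string> {
  const { stdout, stderr } = await sh(command, {
    timeout: COMMAND_TIMEOUT_MS, // terminate long-running commands
    maxBuffer: 10 * 1024 * 1024, // refuse to buffer unbounded output
  });

  const output = stdout + stderr;
  return output.length > MAX_OUTPUT_CHARS
    ? output.slice(0, MAX_OUTPUT_CHARS) + "\n[output truncated]"
    : output;
}
```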