A CLI tool for analyzing code and errors using Large Language Model (LLM) capabilities, with a Godspeed backend handling the LLM integration.
- Analyze code and error logs with AI assistance.
- Interactive chat sessions for real-time debugging.
- Continue previous sessions to maintain conversation history.
- Automated repomap generation for project context.
- Save and load analysis context for future reference.
The CLI supports various options for its commands. For example:

```sh
# Specify a directory to work in and continue an existing session
code-help --chat --directory /path/to/project --continue

# You can also pass files directly on the command line
code-help chat --file <filename1> <filename2> ...
```

All available commands and their subcommands are listed by `code-help --help`, which will help you get started.
- `frontend/`: CLI application written in TypeScript.
- `backend/`: Godspeed server with LLM integration.
- Navigate to the backend directory, install dependencies, and start the server:

  ```sh
  cd backend
  npm install
  godspeed serve
  ```
- Ensure you have `tokenjs.yaml` configured for your datasource.
- Configure your LLM settings as per the token.js docs:

  ```yaml
  type: tokenjs
  config:
    provider: "your-provider"
    model: "your-model"
  ```

  For example, using an Ollama model:

  ```yaml
  type: tokenjs
  config:
    provider: openai-compatible
    baseURL: http://localhost:11434/v1
    models:
      - name: <model_name>
        config:
          temperature: 0.7
          max_tokens: 1000
  ```
For more details, refer to the token.js docs: https://docs.tokenjs.ai/providers
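With the `openai-compatible` provider, requests to the model follow the OpenAI chat-completions shape against Ollama's `/v1` endpoint. As a rough illustration of what such a request looks like (the model name `llama3` is a placeholder, and the actual network call is shown but not executed here):

```typescript
// Sketch of the OpenAI-compatible chat request sent to an Ollama
// server exposed at baseURL. "llama3" is a placeholder model name;
// substitute whatever model you configured in tokenjs.yaml.
const baseURL = "http://localhost:11434/v1";

const body = {
  model: "llama3", // placeholder; must match a model pulled into Ollama
  messages: [{ role: "user", content: "Explain this stack trace" }],
  temperature: 0.7,
  max_tokens: 1000,
};

// The actual call (requires a running Ollama instance) would be:
// await fetch(`${baseURL}/chat/completions`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```

The `temperature` and `max_tokens` values mirror the per-model `config` block in the YAML example above.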
- Navigate to the frontend directory, install dependencies, and build:

  ```sh
  cd frontend
  npm install
  npm run build
  ```

- Launch the CLI tool:

  ```sh
  code-help
  ```
All session history and project context are automatically saved under the `.superdebugger` folder in your specified directory.
The backend exposes the following routes:
- `POST /api/code/context`

  Processes code analysis with the LLM.

  Request body:

  ```json
  {
    "analysisContext": {
      "files": [{ "name": "file.ts", "content": "file content" }],
      "errorLog": "Full error log here",
      "projectContext": "Formatted project context",
      "timestamp": "ISO8601 timestamp"
    },
    "prompt": "User question"
  }
  ```

  Responses:
  - 200: Returns the answer, model, and usage stats.
  - 400: Invalid request.
  - 503: Service unavailable.
- `POST /repomap`

  Generates a repository map based on the provided Git repository URL.

  Request body:

  ```json
  { "gitUrl": "https://github.com/example/repo.git" }
  ```

  Response: returns a structured repo map.
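A client only needs to assemble the documented request body and POST it. The sketch below builds a `/api/code/context` payload matching the schema above; the backend host and port (`localhost:3000`) are an assumption, and the `projectContext` formatting here is simplified for illustration:

```typescript
// Sketch of a client for POST /api/code/context.
// Host/port are assumptions; adjust to your Godspeed server config.
interface AnalysisFile {
  name: string;
  content: string;
}

interface ContextRequest {
  analysisContext: {
    files: AnalysisFile[];
    errorLog: string;
    projectContext: string;
    timestamp: string; // ISO8601
  };
  prompt: string;
}

function buildRequest(
  files: AnalysisFile[],
  errorLog: string,
  prompt: string
): ContextRequest {
  return {
    analysisContext: {
      files,
      errorLog,
      // Simplified stand-in for the real formatted project context
      projectContext: files.map((f) => f.name).join("\n"),
      timestamp: new Date().toISOString(),
    },
    prompt,
  };
}

const req = buildRequest(
  [{ name: "file.ts", content: "export {};" }],
  "TypeError: x is undefined",
  "Why does this fail?"
);

// The actual call (requires a running backend) would be:
// await fetch("http://localhost:3000/api/code/context", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req),
// });
```

A 200 response carries the answer along with the model name and usage stats, per the response table above.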
Repomap functionality is based on a PageRank algorithm. Special thanks to Paul Gauthier of Aider-AI for his Python code, which was ported to TypeScript for this project.
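The ranking idea can be sketched as plain power-iteration PageRank over a file dependency graph: files that many other files reference accumulate rank and surface first in the repo map. This is an illustrative re-implementation, not the ported Aider code:

```typescript
// Illustrative PageRank by power iteration.
// Nodes are file names; edges point from a file to files it references.
type Graph = Map<string, string[]>;

function pageRank(
  graph: Graph,
  damping = 0.85,
  iterations = 50
): Map<string, number> {
  const nodes = [...graph.keys()];
  const n = nodes.length;
  let rank = new Map<string, number>(
    nodes.map((node): [string, number] => [node, 1 / n])
  );

  for (let i = 0; i < iterations; i++) {
    // Every node starts each round with the teleport share (1 - d) / n.
    const next = new Map<string, number>(
      nodes.map((node): [string, number] => [node, (1 - damping) / n])
    );
    for (const [node, outs] of graph) {
      // Distribute this node's rank evenly across its outgoing edges.
      const share = (rank.get(node) ?? 0) / Math.max(outs.length, 1);
      for (const target of outs) {
        if (next.has(target)) {
          next.set(target, (next.get(target) ?? 0) + damping * share);
        }
      }
    }
    rank = next;
  }
  return rank;
}

// A file that everything depends on (utils.ts) ranks highest:
const graph: Graph = new Map([
  ["main.ts", ["utils.ts", "api.ts"]],
  ["api.ts", ["utils.ts"]],
  ["utils.ts", []],
]);
const ranks = pageRank(graph);
```

In a repomap, the highest-ranked files are the ones worth including in the LLM's project context first.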