This project demonstrates the concept of an AI middleware system, showcasing how artificial intelligence can be integrated with project management tools like Pivotal Tracker. It serves as a proof of concept for using natural language processing to interact with structured data from external APIs.
- Natural Language Processing: Accepts user queries in plain natural language.
- AI Interpretation: Uses AI to interpret the intent of the query.
- API Integration: Fetches relevant data from Pivotal Tracker based on the interpreted query.
- Intelligent Response: Generates a response using AI, combining the query intent with the fetched data.
- Prompting: Effective prompting strategies for Anthropic LLMs.
- Seamless integration between AI language models and external APIs.
- Flexible query handling for different types of project management questions.
- Demonstration of how AI can enhance data retrieval and interpretation.
Set up the environment:
mv .template.env .env
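After copying the template, `.env` holds the credentials the middleware needs. The key names below are placeholders for illustration; the authoritative list is whatever `.template.env` defines:

```
ANTHROPIC_API_KEY=your-anthropic-key
PIVOTAL_TRACKER_TOKEN=your-tracker-token
PIVOTAL_PROJECT_ID=your-project-id
```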
Edit `.env` and fill in your API keys and other required values. Then run one of the following commands:
This command returns a standard response without calling the Pivotal Tracker API:
python main.py "Hi, how are you?"
This command calls the Pivotal Tracker API and returns any unstarted tasks:
python main.py "What unstarted tasks do we have?"
This command calls the Pivotal Tracker API and returns any started tasks:
python main.py "List the current sprint's started stories."
- User input is processed by an AI model to determine the query type.
- Based on the AI's interpretation, the middleware fetches relevant data from Pivotal Tracker.
- Another AI call combines the fetched data with the original query to generate a human-readable response.
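The three steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names are assumptions, and both AI calls are replaced by stubs (a keyword check standing in for intent classification, a plain summary standing in for response generation), while the Tracker request uses the real v5 REST endpoint shape:

```python
import json
from urllib import request

TRACKER_API = "https://www.pivotaltracker.com/services/v5"

def classify_intent(query: str) -> str:
    """Step 1: determine the query type.

    A real implementation would prompt the LLM; this keyword
    stub stands in for that first AI call."""
    lowered = query.lower()
    if "unstarted" in lowered:
        return "unstarted"
    if "started" in lowered:
        return "started"
    return "chat"

def fetch_stories(project_id: str, token: str, state: str) -> list:
    """Step 2: fetch stories in the given state from Pivotal Tracker."""
    url = f"{TRACKER_API}/projects/{project_id}/stories?with_state={state}"
    req = request.Request(url, headers={"X-TrackerToken": token})
    with request.urlopen(req) as resp:
        return json.load(resp)

def answer(query: str, stories: list) -> str:
    """Step 3: combine fetched data with the original query.

    Stubbed as a plain summary in place of the second AI call."""
    titles = ", ".join(s["name"] for s in stories) or "none"
    return f"Stories matching your query: {titles}"

def handle(query: str, fetch=fetch_stories) -> str:
    """End-to-end flow; `fetch` is injectable for offline testing."""
    intent = classify_intent(query)
    if intent == "chat":
        return "Standard response (no Tracker call)."
    stories = fetch("PROJECT_ID", "TOKEN", intent)
    return answer(query, stories)
```

Injecting `fetch` keeps the sketch testable without network access, mirroring how the middleware separates interpretation from data retrieval.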
This example also demonstrates effective prompting techniques for Anthropic models.
- `middleware/prompts/default.prompt`: Shows how to write prompt instructions for an Anthropic LLM to determine user intent and either provide a standard response or request clarification.
- `middleware/prompts/detail.prompt`: Contains prompt instructions that force the LLM to respond precisely and in detail, based solely on data obtained from the API call.
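One plausible way to wire a prompt file into a model call, assuming the official `anthropic` Python SDK; the model name and function names here are illustrative assumptions, not the project's actual code:

```python
from pathlib import Path

def load_prompt(prompt_path: str) -> str:
    """Read a prompt template, e.g. from middleware/prompts/."""
    return Path(prompt_path).read_text().strip()

def ask(prompt_path: str, query: str) -> str:
    """Send the prompt file as the system message and the query as the user turn."""
    import anthropic  # imported lazily so load_prompt() works without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    resp = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed; use your configured model
        max_tokens=512,
        system=load_prompt(prompt_path),
        messages=[{"role": "user", "content": query}],
    )
    return resp.content[0].text
```

Keeping prompts in standalone files, as this project does, lets you iterate on instructions without touching the middleware code.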