# A Demo showing RedisAI-MLflow integration
This is a self-contained demo built on a Flask server that serves both the frontend and the backend. The server needs a RedisAI instance running on the default port on the same host. Follow the steps below to set up and run the demo; a detailed walkthrough is available at medium.com.
- Run a RedisAI instance. You can follow the documentation for this, but to quickly spin up a RedisAI server, use the official Docker image:

  ```
  docker run -p 6379:6379 redisai/redisai:latest
  ```
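Before moving on, it can help to confirm the instance is actually up. Below is a minimal sketch, using only the Python standard library, that speaks just enough of the Redis protocol to ping the server; the host, port, and function name are illustrative, with defaults matching the Docker command above:

```python
import socket


def check_redisai(host="localhost", port=6379, timeout=2.0):
    """Send a RESP PING to the server and return True on +PONG."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\r\n")  # inline (non-array) RESP command
            reply = sock.recv(64)
            return reply.startswith(b"+PONG")
    except OSError:
        return False  # server not reachable


print("RedisAI reachable:", check_redisai())
```

If this prints `False`, check that the Docker container is running and that port 6379 is not taken by another process.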
- Clone this repository, make a conda environment from the given `env.yml` file, and activate it:

  ```
  conda env create -f env.yml
  conda activate redisai_mlflow_demo
  ```
- Train the model and save it as a TorchScript model in MLflow. MLflow runs the `train.py` file, which mocks the training for the sake of this example and saves the trained TorchScript model:

  ```
  mlflow run . --no-conda
  ```
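`mlflow run .` resolves the project's `MLproject` file to find the entry point to execute. The actual file ships with the repository, but a hypothetical `MLproject` for this layout, assuming `train.py` takes no parameters, could look something like:

```yaml
# Hypothetical sketch -- see the MLproject file in the repo for the real spec
name: redisai_mlflow_demo

entry_points:
  main:
    command: "python train.py"
```

The `--no-conda` flag tells MLflow to run the entry point in the currently active environment instead of building a fresh conda environment from the project spec.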
- Deploy the trained model into RedisAI. For this, you need the run ID from the previous step:

  ```
  mlflow deployments create -t redisai -m runs:/<RUN ID PLACEHOLDER>/model --name gptmodel
  ```
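The `-m` argument is an MLflow model URI: `runs:/<run id>/model` points at the `model` artifact of a given run. The same deployment can also be driven from Python through MLflow's deployments API. A sketch follows; the run ID is a placeholder, and the `get_deploy_client` call needs the RedisAI plugin installed and a live RedisAI instance, so it is left commented out:

```python
def runs_uri(run_id, artifact_path="model"):
    """Build the runs:/ model URI that `mlflow deployments create -m` expects."""
    return f"runs:/{run_id}/{artifact_path}"


uri = runs_uri("0123456789abcdef")  # placeholder run ID, not a real run
print(uri)

# With the plugin installed and RedisAI running, the CLI call above is
# roughly equivalent to:
#
#   from mlflow.deployments import get_deploy_client
#   client = get_deploy_client("redisai")
#   client.create_deployment("gptmodel", uri)
```

You can find the run ID in the output of the previous `mlflow run` step, or by listing runs in the MLflow UI.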
- Now that MLflow has deployed the model into RedisAI, we can bring up our Flask application. It is a self-contained application that serves both the frontend and the backend:

  ```
  cd server
  flask run
  ```
- Go to the URL `http://127.0.0.1:5000/` and you should see a UI similar to the one given below. Click the `Start` button at the top right and see the AI in action.
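If the page does not load, a quick standard-library check can tell you whether the Flask server is answering at all. This sketch assumes the default Flask address from the step above; the function name is illustrative:

```python
from urllib.request import urlopen
from urllib.error import URLError


def frontend_up(url="http://127.0.0.1:5000/", timeout=2.0):
    """Return True if the demo's Flask server answers at the given URL."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


print("demo UI reachable:", frontend_up())
```

A `False` here usually means `flask run` is not running, or is bound to a different port.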