Search through the Tigris documentation and blog with the power of AI!
Clone the docs and blog repos.
cd var
git clone https://github.com/tigrisdata/tigris-blog
git clone https://github.com/tigrisdata/tigris-os-docs
Then run the ingest script to load everything into LanceDB:
node ingest.ts
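`ingest.ts` itself isn't reproduced in this post. As a rough sketch of the shape it takes, assuming the cloned repos are walked for Markdown files and each one is embedded into a `content` table on Tigris (the file walking, title, and URL handling below are illustrative, not the script's actual contents):

```typescript
// Sketch of an ingest pass: walk the cloned repos, embed each Markdown file,
// and write rows into a LanceDB table stored in a Tigris bucket.
// The paths, helpers, and bucket wiring here are assumptions, not the real script.
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";
import * as lancedb from "@lancedb/lancedb";
import "@lancedb/lancedb/embedding/openai";
import { LanceSchema, getRegistry } from "@lancedb/lancedb/embedding";
import { Utf8 } from "apache-arrow";

// OpenAI embedding function from the LanceDB registry (uses OPENAI_API_KEY).
const func = getRegistry()
  .get("openai")
  ?.create({ model: "text-embedding-3-small" });

// `text` is the source field; LanceDB fills in `vector` automatically.
const contentSchema = LanceSchema({
  text: func.sourceField(new Utf8()),
  vector: func.vectorField(),
  title: new Utf8(),
  url: new Utf8(),
});

// LanceDB speaks S3, so point it at the Tigris bucket.
const db = await lancedb.connect(`s3://${process.env.BUCKET_NAME}`, {
  storageOptions: { endpoint: "https://fly.storage.tigris.dev", region: "auto" },
});
const tbl = await db.createEmptyTable("content", contentSchema, {
  mode: "overwrite",
});

for (const repo of ["var/tigris-os-docs", "var/tigris-blog"]) {
  // Node 20+ can walk a directory tree in one readdir call.
  const entries = await readdir(repo, { recursive: true });
  for (const entry of entries) {
    if (!entry.endsWith(".md") && !entry.endsWith(".mdx")) continue;
    const path = join(repo, entry);
    const text = await readFile(path, "utf8");
    // A real script would chunk long pages, pull the title from frontmatter,
    // and map the file path to its published URL.
    await tbl.add([{ text, title: path, url: path }]);
  }
}
```

The important part is the schema: because `text` is registered as the source field for the OpenAI embedding function, LanceDB computes the vectors for you on `add` and at query time.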
To query the table, connect to the same bucket, open the table with the same schema, and run a search:

import * as lancedb from "@lancedb/lancedb";
import "@lancedb/lancedb/embedding/openai";
import { LanceSchema, getRegistry } from "@lancedb/lancedb/embedding";
import { Utf8 } from "apache-arrow";

// OpenAI embedding function from the LanceDB embedding registry.
const func = getRegistry()
  .get("openai")
  ?.create({ model: "text-embedding-3-small" });

// Same schema as ingest: `text` is the embedded source field, `vector` holds
// the embeddings, and `title`/`url` are plain metadata columns.
const contentSchema = LanceSchema({
  text: func.sourceField(new Utf8()),
  vector: func.vectorField(),
  title: new Utf8(),
  url: new Utf8(),
});

// Connect to the LanceDB data in the Tigris bucket. (Assumption: the endpoint
// shown is Tigris' S3 endpoint; match whatever connection ingest.ts uses.)
const db = await lancedb.connect(`s3://${process.env.BUCKET_NAME}`, {
  storageOptions: { endpoint: "https://fly.storage.tigris.dev", region: "auto" },
});
const tbl = await db.openTable("content", contentSchema);

// Embed the query string and pull back the 25 nearest matches.
const query = magic_get_query_somehow(); // fill this in
const actual = await tbl.search(query).limit(25).toArray();

console.log(`found ${actual.length} results:`);
for (const result of actual) {
  console.log(`* ${result.title}: ${result.url}\n${result.text}\n`);
}
Envvars you need:

| Name | Description |
| --- | --- |
| `AWS_ACCESS_KEY_ID` | Tigris access key ID |
| `AWS_SECRET_ACCESS_KEY` | Tigris secret access key |
| `BUCKET_NAME` | The Tigris bucket holding the LanceDB data (`xe-stream-lancedb` in this walkthrough) |
| `OPENAI_API_KEY` | API key for the OpenAI API (needed to generate embeddings with `text-embedding-3-small`) |

Put them in a `.env` file.
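As a sketch, the `.env` for this setup could look something like this (all values are placeholders):

```
AWS_ACCESS_KEY_ID=<your Tigris access key ID>
AWS_SECRET_ACCESS_KEY=<your Tigris secret access key>
BUCKET_NAME=xe-stream-lancedb
OPENAI_API_KEY=<your OpenAI API key>
```

If the scripts don't load `.env` themselves, Node 20.6+ can do it for you with `node --env-file=.env ingest.ts`.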
Install the dependencies:
npm install
Run the server:
node index.js
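`index.js` isn't shown in this post either. As a hedged sketch of the shape such a server could take, using only Node's built-in HTTP server and the same LanceDB table as before (the routes, HTML, and response shape are all assumptions, not the demo's actual code):

```typescript
// Sketch of a tiny search server: serve a bare form on "/", and answer
// /search?q=... with the nearest matches from the LanceDB table on Tigris.
import { createServer } from "node:http";
import * as lancedb from "@lancedb/lancedb";
import "@lancedb/lancedb/embedding/openai";

const db = await lancedb.connect(`s3://${process.env.BUCKET_NAME}`, {
  storageOptions: { endpoint: "https://fly.storage.tigris.dev", region: "auto" },
});
const tbl = await db.openTable("content");

createServer(async (req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost:3000");
  if (url.pathname === "/search") {
    // The registered OpenAI embedding function turns the query string into a
    // vector before the nearest-neighbour lookup.
    const q = url.searchParams.get("q") ?? "";
    const results = await tbl.search(q).limit(25).toArray();
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(results.map((r) => ({ title: r.title, url: r.url }))));
    return;
  }
  // Index page: a minimal search form that hits /search.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(`<form action="/search"><input name="q" /><button>Search</button></form>`);
}).listen(3000);
```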
You'll see an index page on port 3000. Enter some words and you'll see results!
And that's how easy it is to have your own data in a vector database on Tigris! You can use this as the basis for a RAG (retrieval-augmented generation) chatbot pipeline.
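For instance, a minimal RAG step could feed the top matches to a chat model as context. This sketch uses the `openai` npm package; the model name, prompt wording, and result shaping are assumptions, not part of the demo:

```typescript
// RAG sketch: retrieve context from LanceDB, then ask a chat model to answer
// using only that context. Model and prompt below are illustrative.
import * as lancedb from "@lancedb/lancedb";
import "@lancedb/lancedb/embedding/openai";
import OpenAI from "openai";

const db = await lancedb.connect(`s3://${process.env.BUCKET_NAME}`, {
  storageOptions: { endpoint: "https://fly.storage.tigris.dev", region: "auto" },
});
const tbl = await db.openTable("content");
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function answer(question: string): Promise<string> {
  // Pull the five closest chunks and stuff them into the prompt as context.
  const hits = await tbl.search(question).limit(5).toArray();
  const context = hits
    .map((h) => `## ${h.title} (${h.url})\n${h.text}`)
    .join("\n\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Answer the question using only the provided Tigris docs context. Cite the URLs you used.",
      },
      { role: "user", content: `Context:\n${context}\n\nQuestion: ${question}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

console.log(await answer("How do I make a bucket public?"));
```

From there it's mostly tuning: how many chunks to retrieve, how to cite them, and what to do when nothing relevant comes back.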