diff --git a/.gitignore b/.gitignore
index b73165d..50f7c60 100644
--- a/.gitignore
+++ b/.gitignore
@@ -86,7 +86,6 @@ target/
# Jupyter Notebook
.ipynb_checkpoints
-notebooks/
# IPython
profile_default/
diff --git a/notebooks/Vanilla_RAG_v2.ipynb b/notebooks/Vanilla_RAG_v2.ipynb
new file mode 100644
index 0000000..e480cb3
--- /dev/null
+++ b/notebooks/Vanilla_RAG_v2.ipynb
@@ -0,0 +1,723 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mz33G3t6gbOl"
+ },
+ "source": [
+ "# RAG\n",
+ "\n",
+ "Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of pre-trained language models with the ability to retrieve information from a large corpus of documents. RAG **enables the language model to produce more informed, accurate, and contextually relevant answers** than by relying on its pre-trained knowledge alone.\n",
+ "\n",
+ "At Cohere, all RAG calls come with... **precise citations**! 🎉\n",
+ "The model cites which groups of words, in the RAG chunks, were used to generate the final answer. \n",
+ "These citations make it easy to check where the model’s generated response claims are coming from and they help users gain visibility into the model reasoning. \n",
+ "\n",
+ "RAG consists of 3 steps:\n",
+ "- Step 1: Indexing and given a user query, retrieve the relevant chunks from the index\n",
+ "- Step 2: Optionally, rerank the retrieved chunks\n",
+ "- Step 3: Generate the model final answer with **precise citations**, given the retrieved and reranked chunks\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nSB0pnt0gbOo"
+ },
+ "source": [
+ "## Step 0 - Imports & Getting some data\n",
+ "\n",
+ "In this example, we'll use a recent piece of text, that wasn't in the training data: the Wikipedia page of the movie \"Dune 2\". \n",
+ "\n",
+ "In practice, you would typically do RAG on much longer text, that doesn't fit in the context window of the model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "rACbepFGgbOo"
+ },
+ "outputs": [],
+ "source": [
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=\"YOUR_COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
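+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a minimal sketch, you can also read the key from an environment variable instead of hardcoding it (this assumes you have exported `COHERE_API_KEY` in your shell):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Alternative: load the API key from an environment variable (assumes it is set)\n",
+ "import os\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=os.environ[\"COHERE_API_KEY\"])"
+ ]
+ },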
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "QdvbqfFrgbOq",
+ "outputId": "3882c95c-46bf-4dcc-99a2-453b3c2fc7c4"
+ },
+ "outputs": [],
+ "source": [
+ "# we'll get some wikipedia data\n",
+ "# ! pip install wikipedia -qq\n",
+ "\n",
+ "import wikipedia"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "xP-bWt9XgbOq",
+ "outputId": "72276fb2-0d6b-415d-af74-452a013ae84b"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The text has roughly 5896 words.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# let's get the wikipedia article about Dune Part Two\n",
+ "article = wikipedia.page('Dune Part Two')\n",
+ "text = article.content\n",
+ "print(f\"The text has roughly {len(text.split())} words.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-1aJ7hKGgbOr"
+ },
+ "source": [
+ "## Step 1 - Indexing and given a user query, retrieve the relevant chunks from the index\n",
+ "\n",
+ "We index the document in a vector database. This requires getting the documents, chunking them, embedding, and indexing them in a vector database. Then we retrieved relevant results based on the users' query.\n",
+ "\n",
+ "### We split the document into chunks of roughly 512 words"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "ZUph1JX41665",
+ "outputId": "6c63a93f-6999-47af-e704-d4a88727bc75"
+ },
+ "outputs": [],
+ "source": [
+ "# For chunking let's use langchain to help us split the text\n",
+ "! pip install -qU langchain-text-splitters -qq\n",
+ "\n",
+ "from langchain_text_splitters import RecursiveCharacterTextSplitter"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "uhXW7iHC1-Q6",
+ "outputId": "d68ac348-4b73-4c6a-a445-6c510bdb0881"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The text has been broken down in 108 chunks.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Create basic configurations to chunk the text\n",
+ "text_splitter = RecursiveCharacterTextSplitter(\n",
+ " chunk_size=512,\n",
+ " chunk_overlap=50,\n",
+ " length_function=len,\n",
+ " is_separator_regex=False,\n",
+ ")\n",
+ "\n",
+ "# Split the text into chunks with some overlap\n",
+ "chunks_ = text_splitter.create_documents([text])\n",
+ "chunks = [c.page_content for c in chunks_]\n",
+ "print(f\"The text has been broken down in {len(chunks)} chunks.\")"
+ ]
+ },
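+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's peek at one chunk to see what the splitter produced:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Inspect a sample chunk (each chunk is at most 512 characters)\n",
+ "print(chunks[0])"
+ ]
+ },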
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "P8g0sE2hgbOs"
+ },
+ "source": [
+ "### Embed every text chunk\n",
+ "\n",
+ "Cohere embeddings are state-of-the-art."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "KEarMPEqgbOs",
+ "outputId": "7da0e06d-f637-4470-8e01-6de8249be64b"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "We just computed 108 embeddings.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Because the texts being embedded are the chunks we are searching over, we set the input type as search_doc\n",
+ "model = \"embed-english-v3.0\"\n",
+ "\n",
+ "def batch_embed(texts, batch_size=96):\n",
+ " all_embeddings = []\n",
+ " for i in range(0, len(texts), batch_size):\n",
+ " batch = texts[i:i+batch_size]\n",
+ " response = co.embed(\n",
+ " texts=batch,\n",
+ " model=model,\n",
+ " input_type=\"search_document\",\n",
+ " embedding_types=['float']\n",
+ " )\n",
+ " all_embeddings.extend(response.embeddings.float)\n",
+ " return all_embeddings\n",
+ "\n",
+ "embeddings = batch_embed(chunks)\n",
+ "print(f\"We just computed {len(embeddings)} embeddings.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HM6vKeypgbOs"
+ },
+ "source": [
+ "### Store the embeddings in a vector database\n",
+ "\n",
+ "We use the simplest vector database ever: a python dictionary using `np.array()`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "id": "sdW7M8HLvB-9"
+ },
+ "outputs": [],
+ "source": [
+ "# We use the simplest vector database ever: a python dictionary\n",
+ "! pip install numpy -qq"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "id": "H2srFH-IgbOs"
+ },
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "vector_database = {i: np.array(embedding) for i, embedding in enumerate(embeddings)}\n",
+ "# { 0: array([...]), 1: array([...]), 2: array([...]), ..., 10: array([...]) }"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "q6NGVurZgbOs"
+ },
+ "source": [
+ "## Given a user query, retrieve the relevant chunks from the vector database\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eC05yJQ7jlek"
+ },
+ "source": [
+ "### Define the user question"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "id": "Y2HTxspKgbOs"
+ },
+ "outputs": [],
+ "source": [
+ "query = \"Name everyone involved in writing the script, directing, and producing 'Dune: Part Two'?\"\n",
+ "\n",
+ "# Note: the relevant passage in the wikipedia page we're looking for is:\n",
+ "# \"[...] Dune: Part Two was originally scheduled to be released on October 20, 2023, but was delayed to November 17, 2023, before moving forward two weeks to November 3, 2023, to adjust to changes in release schedules from other studios. It was later postponed by over four months to March 15, 2024, due to the 2023 Hollywood labor disputes. After the strikes were resolved, the film moved once more up two weeks to March 1, 2024. [...]\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9oULg1tOjjOW"
+ },
+ "source": [
+ "### Embed the user question\n",
+ "\n",
+ "Cohere embeddings are state-of-the-art."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "yrUuS6vXgbOs",
+ "outputId": "0c64a930-f817-43c2-d775-1d9145cb304e"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "query_embedding: [-0.068603516, -0.02947998, -0.06274414, -0.015449524, -0.033294678, 0.0056877136, -0.047210693, 0.04714966, -0.024871826, 0.008148193, '...']\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Because the text being embedded is the search query, we set the input type as search_query\n",
+ "response = co.embed(\n",
+ " texts=[query],\n",
+ " model=model,\n",
+ " input_type=\"search_query\",\n",
+ " embedding_types=['float']\n",
+ ")\n",
+ "query_embedding = response.embeddings.float[0]\n",
+ "print(\"query_embedding: \", query_embedding[:10] + [\"...\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8K8B87CGgbOt"
+ },
+ "source": [
+ "### Retrieve the most relevant chunks from the vector database\n",
+ "\n",
+ "We use cosine similarity to find the most similar chunks"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "nik3es32gbOt",
+ "outputId": "a1c30024-52e1-42c7-8836-a2c590559aca"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "similarity scores: [0.6880419226352857, 0.3923392545434105, 0.6386815248269733, 0.43618145115797535, 0.35780784367307705, 0.3923210354894715, 0.3406860631010847, 0.2957357005793907, 0.4255159555931599, 0.14503223489330883, 0.3997846789722547, 0.3743933002525113, 0.4083288113267294, 0.26194266616864986, 0.31982912150076953, 0.3531164824914523, 0.23203650717727414, 0.497148799385201, 0.34487158221487835, 0.28824423308146263, 0.5782290303802001, 0.5489319961327446, 0.778415400224538, 0.5236541350938922, 0.5481962732642285, 0.7135549552247115, 0.5205999648021981, 0.5872268968480292, 0.26489409375108064, 0.6410858426784221, 0.5367759890829636, 0.6816158875713925, 0.39144361423311, 0.4827300300834518, 0.4502725966061075, 0.2462873309174519, 0.443300705486635, 0.39120172781317253, 0.20620678706300988, 0.4371599199261209, 0.37572638119299867, 0.4614621126654143, 0.29906070488176123, 0.36477353501504023, 0.36024747243912564, 0.3929914083901088, 0.23114430973151348, 0.4646991530089856, 0.1187422018303958, 0.4217084543208151, 0.36510789830148477, 0.26769253214700306, 0.36598547319281477, 0.3792997473359336, 0.4596196436425069, 0.43869634271638114, 0.20089603894036867, 0.42191807980549356, 0.4387291730859993, 0.4488941445207048, 0.1299006327023146, 0.3832393275291005, 0.14269299802866453, 0.5386575762368317, 0.29891892602821285, 0.4128023121085502, 0.15622874884471297, 0.5037411178038066, 0.3403791753246366, 0.4518658626088099, 0.3588444212378783, 0.39024780342329135, 0.48874362388129355, 0.2970250944589432, 0.4890213355379532, 0.2601457789171118, 0.5098062623890619, 0.41745989801233, 0.18695460607524733, 0.5309005848922221, 0.5226886534429271, 0.5478914338259878, 0.5004863890173412, 0.1968363843203233, 0.4210806632601615, 0.5341083271502703, 0.28640413417722466, 0.31654174678868613, 0.21754347066657245, 0.46012838286736324, 0.14089744261122422, 0.20637564356989352, 0.5713764499847603, 0.3808494424653835, 0.5832486106225127, 0.41007417198464485, 0.2604390796614135, 0.3801380463864371, 0.2867894923131921, 0.3135170641699137, 0.3470364583530195, 0.21980436849993726, 0.1414052619528618, 0.468511284019416, 0.4496650446352944, 0.5239361429406763, 0.19838970836663622, 0.42025656379414894]\n",
+ "Here are the indices of the top 10 chunks after retrieval: [22 25 0 31 29 2 27 94 20 92]\n",
+ "Here are the top 10 chunks after retrieval: \n",
+ "== Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there's a logical place to stop the [first] movie before the book is over\".\n",
+ "== On October 26, 2021, Legendary officially greenlit Dune: Part Two, with a spokesperson for the company stating, \"We would not have gotten to this point without the extraordinary vision of Denis and the amazing work of his talented crew, the writers, our stellar cast, our partners at Warner Bros., and of course the fans! Here's to more Dune.\" Production work had occurred back-to-back with the first film, as Villeneuve and his wife Lapointe immediately took a flight to Budapest in order to begin\n",
+ "== Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier\n",
+ "== Between the release of Dune and the confirmation of Dune: Part Two, Villeneuve started working the script in a way that production could begin immediately once the film was greenlit. By February 2021, Roth created a full treatment for the sequel, with writing beginning that August. He confirmed that Feyd-Rautha would appear in the film, and stated he will be a \"very important character\". In March 2022, Villeneuve had mostly finished writing the screenplay. Craig Mazin and Roth wrote additional literary\n",
+ "== Eric Roth was hired to co-write the screenplay in April 2017 for the Dune films, and Jon Spaihts was later confirmed to be co-writing the script alongside Roth and Villeneuve. Game of Thrones language creator David Peterson was confirmed to be developing languages for the film in April 2019. Villeneuve and Peterson had created the Chakobsa language, which was used by actors on set. In November 2019, Spaihts stepped down as show-runner for Dune: Prophecy to focus on Dune: Part Two. In June 2020, Greig\n",
+ "== Development began after Legendary Entertainment acquired film and television rights for the Dune franchise in 2016. Villeneuve signed on as director in 2017, intending to make a two-part adaptation of the novel due to its complexity. Production contracts were only secured for the first film before the second film was greenlit by Legendary in October 2021, subject to the success of the first. Principal photography took place in Budapest, Italy, Jordan, and Abu Dhabi between July and December 2022.\n",
+ "== theatrical experience is at the very heart of the cinematic language for me\". With Dune: Part Two being greenlit, Villeneuve said that his primary concern was to complete the filming as soon as possible, with the earliest he expected to start in the last quarter of 2022. He noted that production would be expedited by the work already done for the first film.\n",
+ "== Richard Roeper, writing for the Chicago Sun-Times, gave the film three stars out of four, praising the technical and narrative aspects, saying, \"Even as we marvel at the stunning and immersive and Oscar-level cinematography, editing, score, visual effects, production design and sound in Denis Villeneuve's Dune: Part Two, we're reminded at every turn that this is an absolutely bat-bleep [sic] crazy story.\"\n",
+ "== In November 2016, Legendary Pictures obtained the film and TV rights for the Dune franchise, based on the eponymous 1965 novel by Frank Herbert. Vice chair of worldwide production for Legendary Mary Parent began discussing with Denis Villeneuve about directing a film adaptation, quickly hiring him after realizing his passion for Dune. In February 2018, Villeneuve was confirmed to be hired as director, and intended to adapt the novel as a two-part film series. Villeneuve ultimately secured a two-film deal\n",
+ "== The film \"largely received rave reviews from critics\", and was praised for its visual effects and cast performances. Some reviews considered it one of the greatest science fiction films ever made. On the review aggregator website Rotten Tomatoes, 92% of 442 critics' reviews are positive, with an average rating of 8.3/10. The website's consensus reads: \"Visually thrilling and narratively epic, Dune: Part Two continues Denis Villeneuve's adaptation of the beloved sci-fi series in spectacular form.\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "def cosine_similarity(a, b):\n",
+ " return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
+ "\n",
+ "# Calculate similarity between the user question & each chunk\n",
+ "similarities = [cosine_similarity(query_embedding, chunk) for chunk in embeddings]\n",
+ "print(\"similarity scores: \", similarities)\n",
+ "\n",
+ "# Get indices of the top 10 most similar chunks\n",
+ "sorted_indices = np.argsort(similarities)[::-1]\n",
+ "\n",
+ "# Keep only the top 10 indices\n",
+ "top_indices = sorted_indices[:10]\n",
+ "print(\"Here are the indices of the top 10 chunks after retrieval: \", top_indices)\n",
+ "\n",
+ "# Retrieve the top 10 most similar chunks\n",
+ "top_chunks_after_retrieval = [chunks[i] for i in top_indices]\n",
+ "print(\"Here are the top 10 chunks after retrieval: \")\n",
+ "for t in top_chunks_after_retrieval:\n",
+ " print(\"== \" + t)"
+ ]
+ },
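+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a side note, the similarity loop above can be vectorized with NumPy. Here is a minimal sketch of the same retrieval step, assuming the `embeddings` and `query_embedding` variables from the cells above:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Vectorized cosine similarity over all chunk embeddings at once (sketch)\n",
+ "emb_matrix = np.array(embeddings)  # shape: (num_chunks, embedding_dim)\n",
+ "q = np.array(query_embedding)\n",
+ "\n",
+ "# Cosine similarity is the dot product of L2-normalized vectors\n",
+ "emb_norm = emb_matrix / np.linalg.norm(emb_matrix, axis=1, keepdims=True)\n",
+ "q_norm = q / np.linalg.norm(q)\n",
+ "scores = emb_norm @ q_norm\n",
+ "\n",
+ "# This should match the top 10 indices found by the loop-based version above\n",
+ "print(np.argsort(scores)[::-1][:10])"
+ ]
+ },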
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qzcpds3VgbOt"
+ },
+ "source": [
+ "## Step 2 - Rerank the chunks retrieved from the vector database\n",
+ "\n",
+ "We rerank the 10 chunks retrieved from the vector database. Reranking boosts retrieval accuracy.\n",
+ "\n",
+ "Reranking lets us go from 10 chunks retrieved from the vector database, to the 3 most relevant chunks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "2J4LywVygbOt",
+ "outputId": "7a4c89bf-fc5e-409f-9304-fce006b9d8bf"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here are the top 3 chunks after rerank: \n",
+ "== Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there's a logical place to stop the [first] movie before the book is over\".\n",
+ "== Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier\n",
+ "== On October 26, 2021, Legendary officially greenlit Dune: Part Two, with a spokesperson for the company stating, \"We would not have gotten to this point without the extraordinary vision of Denis and the amazing work of his talented crew, the writers, our stellar cast, our partners at Warner Bros., and of course the fans! Here's to more Dune.\" Production work had occurred back-to-back with the first film, as Villeneuve and his wife Lapointe immediately took a flight to Budapest in order to begin\n"
+ ]
+ }
+ ],
+ "source": [
+ "response = co.rerank(\n",
+ " query=query,\n",
+ " documents=top_chunks_after_retrieval,\n",
+ " top_n=3,\n",
+ " model=\"rerank-english-v3.0\",\n",
+ ")\n",
+ "\n",
+ "# top_chunks_after_rerank = [result.document['text'] for result in response]\n",
+ "\n",
+ "top_chunks_after_rerank = [top_chunks_after_retrieval[result.index] for result in response.results]\n",
+ "\n",
+ "print(\"Here are the top 3 chunks after rerank: \")\n",
+ "for t in top_chunks_after_rerank:\n",
+ " print(\"== \" + t)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KuPL0VUXgbOt"
+ },
+ "source": [
+ "## Step 3 - Generate the model final answer, given the retrieved and reranked chunks"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "id": "oCNXWH8GgbOt"
+ },
+ "outputs": [],
+ "source": [
+ "# preamble containing instructions about the task and the desired style for the output.\n",
+ "preamble = \"\"\"\n",
+ "## Task & Context\n",
+ "You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.\n",
+ "\n",
+ "## Style Guide\n",
+ "Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "BevatShtgbOt",
+ "outputId": "af71f4a9-787a-4ee3-9598-20692fb3bf16"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Final answer:\n",
+ "*Dune: Part Two* is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The film was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers. Kevin J. Anderson was the creative consultant.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# retrieved documents\n",
+ "documents = [\n",
+ " {\"data\": {\"title\": \"chunk 0\", \"snippet\": top_chunks_after_rerank[0]}},\n",
+ " {\"data\": {\"title\": \"chunk 1\", \"snippet\": top_chunks_after_rerank[1]}},\n",
+ " {\"data\": {\"title\": \"chunk 2\", \"snippet\": top_chunks_after_rerank[2]}},\n",
+ " ]\n",
+ "\n",
+ "# get model response\n",
+ "response = co.chat(\n",
+ " model=\"command-r-08-2024\",\n",
+ " messages=[{\"role\" : \"system\", \"content\" : preamble},\n",
+ " {\"role\" : \"user\", \"content\" : query}],\n",
+ " documents=documents, \n",
+ " temperature=0.3\n",
+ ")\n",
+ "\n",
+ "print(\"Final answer:\")\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "20wcn-EjlXZd"
+ },
+ "source": [
+ "Note: this is indeed the answer you'd expect, and here was the passage of text in wikipedia explaining it!\n",
+ "\n",
+ "\" [...] Dune: Part Two was originally scheduled to be released on October 20, 2023, but was delayed to November 17, 2023, before moving forward two weeks to November 3, 2023, to adjust to changes in release schedules from other studios. It was later postponed by over four months to March 15, 2024, due to the 2023 Hollywood labor disputes. After the strikes were resolved, the film moved once more up two weeks to March 1, 2024. [...]\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "RoSVDXSsgbOt"
+ },
+ "source": [
+ "## Bonus: Citations come for free with Cohere! 🎉\n",
+ "\n",
+ "At Cohere, all RAG calls come with... precise citations! 🎉\n",
+ "The model cites which groups of words, in the RAG chunks, were used to generate the final answer. \n",
+ "These citations make it easy to check where the model’s generated response claims are coming from. \n",
+ "They help users gain visibility into the model reasoning, and sanity check the final model generation. \n",
+ "These citations are optional — you can decide to ignore them.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "BVTuQdmDgbOt",
+ "outputId": "f843b262-d8bb-45ba-cbfb-9915da104eda"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Citations that support the final answer:\n",
+ "start=22 end=61 text='2024 American epic science fiction film' sources=[DocumentSource(type='document', id='doc:1', document={'id': 'doc:1', 'snippet': 'Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier', 'title': 'chunk 1'})]\n",
+ "start=62 end=106 text='directed and co-produced by Denis Villeneuve' sources=[DocumentSource(type='document', id='doc:1', document={'id': 'doc:1', 'snippet': 'Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier', 'title': 'chunk 1'})]\n",
+ "start=112 end=153 text='co-wrote the screenplay with Jon Spaihts.' sources=[DocumentSource(type='document', id='doc:1', document={'id': 'doc:1', 'snippet': 'Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier', 'title': 'chunk 1'})]\n",
+ "start=167 end=189 text='produced by Villeneuve' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=191 end=202 text='Mary Parent' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=208 end=219 text='Cale Boyter' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=226 end=240 text='Tanya Lapointe' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=242 end=255 text='Brian Herbert' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=257 end=270 text='Byron Merritt' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=272 end=283 text='Kim Herbert' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=285 end=296 text='Thomas Tull' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=298 end=319 text='Richard P. Rubinstein' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=321 end=334 text='John Harrison' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=340 end=355 text='Herbert W. Gain' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=367 end=387 text='executive producers.' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=388 end=405 text='Kevin J. Anderson' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n",
+ "start=414 end=434 text='creative consultant.' sources=[DocumentSource(type='document', id='doc:0', document={'id': 'doc:0', 'snippet': 'Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there\\'s a logical place to stop the [first] movie before the book is over\".', 'title': 'chunk 0'})]\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"Citations that support the final answer:\")\n",
+ "for cite in response.message.citations:\n",
+ " print(cite)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "IueXaIJggbOu",
+ "outputId": "c816af51-74be-42c9-e94e-9820bbf95f79"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "*Dune: Part Two* is a 2024 American epic science fiction film[1] directed and co-produced by Denis Villeneuve[1], who co-wrote the screenplay with Jon Spaihts.[1] The film was produced by Villeneuve[0], Mary Parent[0], and Cale Boyter[0], with Tanya Lapointe[0], Brian Herbert[0], Byron Merritt[0], Kim Herbert[0], Thomas Tull[0], Richard P. Rubinstein[0], John Harrison[0], and Herbert W. Gain[0] serving as executive producers.[0] Kevin J. Anderson[0] was the creative consultant.[0]\n",
+ "\n",
+ "Source documents:\n",
+ "[0] title: chunk 0, snippet: Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there's a logical place to stop the [first] movie before the book is over\".\n",
+ "[1] title: chunk 1, snippet: Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier\n"
+ ]
+ }
+ ],
+ "source": [
+ "def insert_inline_citations(text, citations, field='text'):\n",
+ " sorted_citations = sorted(citations, key=lambda c: c.start, reverse=True)\n",
+ " \n",
+ " for citation in sorted_citations:\n",
+ " source_ids = [source.id.split(':')[-1] for source in citation.sources]\n",
+ " citation_text = f\"[{','.join(source_ids)}]\"\n",
+ " text = text[:citation.end] + citation_text + text[citation.end:]\n",
+ " \n",
+ " return text\n",
+ "\n",
+ "def list_sources(citations, fields=['text']):\n",
+ " unique_sources = set()\n",
+ " for citation in citations:\n",
+ " for source in citation.sources:\n",
+ " source_data = tuple((field, source.document[field]) for field in fields if field in source.document)\n",
+ " unique_sources.add((source.id.split(':')[-1], source_data))\n",
+ " \n",
+ " footnotes = []\n",
+ " for source_id, source_data in sorted(unique_sources):\n",
+ " footnote = f\"[{source_id}] \" + \", \".join(f\"{key}: {value}\" for key, value in source_data)\n",
+ " footnotes.append(footnote)\n",
+ " \n",
+ " return \"\\n\".join(footnotes)\n",
+ "\n",
+ "# Use the functions\n",
+ "cited_text = insert_inline_citations(response.message.content[0].text, response.message.citations)\n",
+ "\n",
+ "# Print the result with inline citations\n",
+ "print(cited_text)\n",
+ "\n",
+ "# Print footnotes\n",
+ "if response.message.citations:\n",
+ " print(\"\\nSource documents:\")\n",
+ " print(list_sources(response.message.citations, fields=['title','snippet']))"
+ ]
+ },
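+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The variant below does the same thing, except that `list_sources` prints every available field of each source document instead of a fixed list of fields:"
+ ]
+ },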
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {
+ "id": "Kp4c_HkYIEn_"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "*Dune: Part Two* is a 2024 American epic science fiction film[1] directed and co-produced by Denis Villeneuve[1], who co-wrote the screenplay with Jon Spaihts.[1] The film was produced by Villeneuve[0], Mary Parent[0], and Cale Boyter[0], with Tanya Lapointe[0], Brian Herbert[0], Byron Merritt[0], Kim Herbert[0], Thomas Tull[0], Richard P. Rubinstein[0], John Harrison[0], and Herbert W. Gain[0] serving as executive producers.[0] Kevin J. Anderson[0] was the creative consultant.[0]\n",
+ "\n",
+ "Source documents:\n",
+ "[0] snippet: Dune: Part Two was produced by Villeneuve, Mary Parent, and Cale Boyter, with Tanya Lapointe, Brian Herbert, Byron Merritt, Kim Herbert, Thomas Tull, Jon Spaihts, Richard P. Rubinstein, John Harrison, and Herbert W. Gain serving as executive producers and Kevin J. Anderson as creative consultant. Legendary CEO Joshua Grode confirmed in April 2019 that they plan to make a sequel, adding that \"there's a logical place to stop the [first] movie before the book is over\"., title: chunk 0\n",
+ "[1] snippet: Dune: Part Two is a 2024 American epic science fiction film directed and co-produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. The sequel to Dune (2021), it is the second of a two-part adaptation of the 1965 novel Dune by Frank Herbert. It follows Paul Atreides as he unites with the Fremen people of the desert planet Arrakis to wage war against House Harkonnen. Timothée Chalamet, Rebecca Ferguson, Josh Brolin, Stellan Skarsgård, Dave Bautista, Zendaya, Charlotte Rampling, and Javier, title: chunk 1\n"
+ ]
+ }
+ ],
+ "source": [
+ "def insert_inline_citations(text, citations, field='text'):\n",
+ " sorted_citations = sorted(citations, key=lambda c: c.start, reverse=True)\n",
+ " \n",
+ " for citation in sorted_citations:\n",
+ " source_ids = [source.id.split(':')[-1] for source in citation.sources]\n",
+ " citation_text = f\"[{','.join(source_ids)}]\"\n",
+ " text = text[:citation.end] + citation_text + text[citation.end:]\n",
+ " \n",
+ " return text\n",
+ "\n",
+ "def list_sources(citations):\n",
+ " unique_sources = set()\n",
+ " for citation in citations:\n",
+ " for source in citation.sources:\n",
+ " source_data = tuple((key, value) for key, value in source.document.items() if key != 'id')\n",
+ " unique_sources.add((source.id.split(':')[-1], source_data))\n",
+ " \n",
+ " footnotes = []\n",
+ " for source_id, source_data in sorted(unique_sources):\n",
+ " footnote = f\"[{source_id}] \" + \", \".join(f\"{key}: {value}\" for key, value in source_data)\n",
+ " footnotes.append(footnote)\n",
+ " \n",
+ " return \"\\n\".join(footnotes)\n",
+ "\n",
+ "# Use the functions\n",
+ "cited_text = insert_inline_citations(response.message.content[0].text, response.message.citations)\n",
+ "\n",
+ "# Print the result with inline citations\n",
+ "print(cited_text)\n",
+ "\n",
+ "# Print footnotes\n",
+ "if response.message.citations:\n",
+ " print(\"\\nSource documents:\")\n",
+ " print(list_sources(response.message.citations))"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/notebooks/agents/Multi_Step_Tool_Use_Spotify_v2.ipynb b/notebooks/agents/Multi_Step_Tool_Use_Spotify_v2.ipynb
new file mode 100644
index 0000000..bb1960b
--- /dev/null
+++ b/notebooks/agents/Multi_Step_Tool_Use_Spotify_v2.ipynb
@@ -0,0 +1,605 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Multi-Step Tool Use with Spotify Dataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This example demonstrates an agent that performs analysis on a Spotify tracks dataset (via a Python interpreter tool) while also having access to a web search tool."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "import os\n",
+ "\n",
+ "import cohere\n",
+ "\n",
+ "co = cohere.ClientV2(\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "! pip install tavily-python --q\n",
+ "\n",
+ "from tavily import TavilyClient\n",
+ "tavily_client = TavilyClient(api_key=\"TAVILY_API_KEY\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Step 1: Define the tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here, we define the web search tool, which uses the Tavily Python client to perform web searches.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# here's a web search engine\n",
+ "def web_search(query: str) -> list[dict]:\n",
+ " response = tavily_client.search(query, max_results=3)['results']\n",
+ " return {\"results\": response}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# the LLM is equipped with a description of the web search engine\n",
+ "web_search_tool = {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"web_search\",\n",
+ " \"description\": \"Returns a list of relevant document snippets for a textual query retrieved from the internet\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"query\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"Query to search the internet with\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"query\"]\n",
+ " }\n",
+ " }\n",
+ "}\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here, we define the Python interpreter tool, which uses the `exec` function to execute Python code."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# here's a python console, which can be used to access the spreadsheet, but also more generally to code and plot stuff\n",
+ "import io, contextlib\n",
+ "def python_interpreter(code: str) -> list[dict]:\n",
+ " output = io.StringIO()\n",
+ " try:\n",
+ " # Redirect stdout to capture print statements\n",
+ " with contextlib.redirect_stdout(output):\n",
+ " exec(code, globals())\n",
+ " except Exception as e:\n",
+ " return {\n",
+ " \"error\": str(e),\n",
+ " \"executed_code\": code\n",
+ " }\n",
+ " # Get stdout\n",
+ " return {\n",
+ " \t\t\"console_output\": output.getvalue(),\n",
+ " \"executed_code\": code\n",
+ " \t}\n",
+ "\n"
+ ]
+ },
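+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick, illustrative sanity check, we can call the interpreter tool directly on a trivial snippet:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sanity check: run a trivial snippet through the tool (illustrative)\n",
+ "result = python_interpreter(\"print(1 + 2)\")\n",
+ "print(result)  # expected: {'console_output': '3\\n', 'executed_code': 'print(1 + 2)'}"
+ ]
+ },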
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# the LLM is equipped with a description of a python console\n",
+ "python_interpreter_tool = {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"python_interpreter\",\n",
+ " \"description\": \"Executes python code and returns the result. The code runs in a static sandbox without internet access and without interactive mode, so print output or save output to a file.\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"code\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"Python code to execute\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"code\"]\n",
+ " }\n",
+ " }\n",
+ "}\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "functions_map = {\n",
+ " \"web_search\": web_search,\n",
+ " \"python_interpreter\": python_interpreter,\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We'll also need the spotify_data dataset, which contains information about Spotify tracks such as the track information, release information, popularity metrics, and musical characteristics. You can find the [dataset here](https://github.com/cohere-ai/notebooks/blob/main/notebooks/guides/advanced_rag/spotify_dataset.csv)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n",
+ "\n",
+ "
\n",
+ " \n",
+ " \n",
+ " | \n",
+ " track_name | \n",
+ " artist(s)_name | \n",
+ " artist_count | \n",
+ " released_year | \n",
+ " released_month | \n",
+ " released_day | \n",
+ " in_spotify_playlists | \n",
+ " in_spotify_charts | \n",
+ " streams | \n",
+ " in_apple_playlists | \n",
+ " ... | \n",
+ " key | \n",
+ " mode | \n",
+ " danceability | \n",
+ " valence | \n",
+ " energy | \n",
+ " acousticness | \n",
+ " instrumentalness | \n",
+ " liveness | \n",
+ " speechiness | \n",
+ " release_date | \n",
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ " 0 | \n",
+ " Seven (feat. Latto) (Explicit Ver.) | \n",
+ " Latto, Jung Kook | \n",
+ " 2 | \n",
+ " 2023 | \n",
+ " 7 | \n",
+ " 14 | \n",
+ " 553 | \n",
+ " 147 | \n",
+ " 141381703.0 | \n",
+ " 43 | \n",
+ " ... | \n",
+ " B | \n",
+ " Major | \n",
+ " 80 | \n",
+ " 89 | \n",
+ " 83 | \n",
+ " 31 | \n",
+ " 0 | \n",
+ " 8 | \n",
+ " 4 | \n",
+ " 2023-07-14 | \n",
+ "
\n",
+ " \n",
+ " 1 | \n",
+ " LALA | \n",
+ " Myke Towers | \n",
+ " 1 | \n",
+ " 2023 | \n",
+ " 3 | \n",
+ " 23 | \n",
+ " 1474 | \n",
+ " 48 | \n",
+ " 133716286.0 | \n",
+ " 48 | \n",
+ " ... | \n",
+ " C# | \n",
+ " Major | \n",
+ " 71 | \n",
+ " 61 | \n",
+ " 74 | \n",
+ " 7 | \n",
+ " 0 | \n",
+ " 10 | \n",
+ " 4 | \n",
+ " 2023-03-23 | \n",
+ "
\n",
+ " \n",
+ " 2 | \n",
+ " vampire | \n",
+ " Olivia Rodrigo | \n",
+ " 1 | \n",
+ " 2023 | \n",
+ " 6 | \n",
+ " 30 | \n",
+ " 1397 | \n",
+ " 113 | \n",
+ " 140003974.0 | \n",
+ " 94 | \n",
+ " ... | \n",
+ " F | \n",
+ " Major | \n",
+ " 51 | \n",
+ " 32 | \n",
+ " 53 | \n",
+ " 17 | \n",
+ " 0 | \n",
+ " 31 | \n",
+ " 6 | \n",
+ " 2023-06-30 | \n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
3 rows × 25 columns
\n",
+ "
"
+ ],
+ "text/plain": [
+ " track_name artist(s)_name artist_count \\\n",
+ "0 Seven (feat. Latto) (Explicit Ver.) Latto, Jung Kook 2 \n",
+ "1 LALA Myke Towers 1 \n",
+ "2 vampire Olivia Rodrigo 1 \n",
+ "\n",
+ " released_year released_month released_day in_spotify_playlists \\\n",
+ "0 2023 7 14 553 \n",
+ "1 2023 3 23 1474 \n",
+ "2 2023 6 30 1397 \n",
+ "\n",
+ " in_spotify_charts streams in_apple_playlists ... key mode \\\n",
+ "0 147 141381703.0 43 ... B Major \n",
+ "1 48 133716286.0 48 ... C# Major \n",
+ "2 113 140003974.0 94 ... F Major \n",
+ "\n",
+ " danceability valence energy acousticness instrumentalness liveness \\\n",
+ "0 80 89 83 31 0 8 \n",
+ "1 71 61 74 7 0 10 \n",
+ "2 51 32 53 17 0 31 \n",
+ "\n",
+ " speechiness release_date \n",
+ "0 4 2023-07-14 \n",
+ "1 4 2023-03-23 \n",
+ "2 6 2023-06-30 \n",
+ "\n",
+ "[3 rows x 25 columns]"
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Display the first few rows of the dataset\n",
+ "import pandas as pd\n",
+ "file_path = './spotify_dataset.csv'\n",
+ "spotify_data = pd.read_csv(file_path)\n",
+ "spotify_data.head(3)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here is the task that the agent needs to perform:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "message = \"\"\"What's the age and citizenship of the artists who had the top 3 most streamed songs on Spotify in 2023?\n",
+ "\n",
+ "You have access to a dataset with information about Spotify songs from the past 10 years, located at ./spotify_dataset.csv.\n",
+ "You also have access to the internet to search for information not available in the dataset.\n",
+ "You must use the dataset when you can, and if stuck you can use the internet.\n",
+ "Remember to inspect the dataset and get a list of its columnsto understand its structure before trying to query it. Take it step by step.\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Step 2: Run the tool use workflow\n",
+ "\n",
+ "Next, we run the tool use workflow involving for steps:\n",
+ "\n",
+ "- Get the user message\n",
+ "- Model generates tool calls, if any\n",
+ "- Execute tools based on the tool calls generated by the model\n",
+ "- Model either generates more tool calls or returns a response with citations\n",
+ "\n",
+ "Looking at the example output, the agent performs the task in a sequence of 3 steps:\n",
+ "\n",
+ "- Inspect the dataset and get a list of its columns.\n",
+ "- Write and execute Python code to find the top 3 most streamed songs on Spotify in 2023 and their respective artists.\n",
+ "- Search for the age and citizenship of each artist on the internet."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "USER MESSAGE:\n",
+ "What's the age and citizenship of the artists who had the top 3 most streamed songs on Spotify in 2023?\n",
+ "\n",
+ "You have access to a dataset with information about Spotify songs from the past 10 years, located at ./spotify_dataset.csv.\n",
+ "You also have access to the internet to search for information not available in the dataset.\n",
+ "You must use the dataset when you can, and if stuck you can use the internet.\n",
+ "Remember to inspect the dataset and get a list of its columnsto understand its structure before trying to query it. Take it step by step.\n",
+ "\n",
+ "==================================================\n",
+ "\n",
+ "TOOL PLAN:\n",
+ "I will first inspect the dataset to understand its structure and the columns it contains. Then, I will write and execute Python code to find the top 3 most streamed songs on Spotify in 2023 and their respective artists. Finally, I will search for the age and citizenship of each artist online.\n",
+ "\n",
+ "TOOL CALLS:\n",
+ "Tool name: python_interpreter\n",
+ " import pandas as pd\n",
+ " \n",
+ " df = pd.read_csv('spotify_dataset.csv')\n",
+ " \n",
+ " print(df.columns)\n",
+ "None\n",
+ "\n",
+ "TOOL RESULTS:\n",
+ "{'console_output': \"Index(['track_name', 'artist(s)_name', 'artist_count', 'released_year',\\n 'released_month', 'released_day', 'in_spotify_playlists',\\n 'in_spotify_charts', 'streams', 'in_apple_playlists', 'in_apple_charts',\\n 'in_deezer_playlists', 'in_deezer_charts', 'in_shazam_charts', 'bpm',\\n 'key', 'mode', 'danceability', 'valence', 'energy', 'acousticness',\\n 'instrumentalness', 'liveness', 'speechiness', 'release_date'],\\n dtype='object')\\n\", 'executed_code': \"import pandas as pd\\n\\ndf = pd.read_csv('spotify_dataset.csv')\\n\\nprint(df.columns)\"} \n",
+ "\n",
+ "\n",
+ "TOOL PLAN:\n",
+ "I now know the column names of the dataset. I will now find the top 3 most streamed songs on Spotify in 2023 and then find the age and citizenship of the artists of those songs.\n",
+ "\n",
+ "TOOL CALLS:\n",
+ "Tool name: python_interpreter\n",
+ " import pandas as pd\n",
+ " \n",
+ " df = pd.read_csv('spotify_dataset.csv')\n",
+ " \n",
+ " # Filter for songs released in 2023\n",
+ " df_2023 = df[df['released_year'] == 2023]\n",
+ " \n",
+ " # Sort by streams in descending order\n",
+ " df_2023 = df_2023.sort_values(by='streams', ascending=False)\n",
+ " \n",
+ " # Get the top 3 songs\n",
+ " top_3_songs = df_2023.head(3)\n",
+ " \n",
+ " print(\"Top 3 most streamed songs on Spotify in 2023:\")\n",
+ " print(top_3_songs[['track_name', 'artist(s)_name']])\n",
+ "None\n",
+ "\n",
+ "TOOL RESULTS:\n",
+ "{'console_output': 'Top 3 most streamed songs on Spotify in 2023:\\n track_name artist(s)_name\\n12 Flowers Miley Cyrus\\n6 Ella Baila Sola Eslabon Armado, Peso Pluma\\n133 Shakira: Bzrp Music Sessions, Vol. 53 Shakira, Bizarrap\\n', 'executed_code': 'import pandas as pd\\n\\ndf = pd.read_csv(\\'spotify_dataset.csv\\')\\n\\n# Filter for songs released in 2023\\ndf_2023 = df[df[\\'released_year\\'] == 2023]\\n\\n# Sort by streams in descending order\\ndf_2023 = df_2023.sort_values(by=\\'streams\\', ascending=False)\\n\\n# Get the top 3 songs\\ntop_3_songs = df_2023.head(3)\\n\\nprint(\"Top 3 most streamed songs on Spotify in 2023:\")\\nprint(top_3_songs[[\\'track_name\\', \\'artist(s)_name\\']])'} \n",
+ "\n",
+ "\n",
+ "RESPONSE:\n",
+ "The top 3 most streamed songs on Spotify in 2023 were:\n",
+ "1. *Flowers* by Miley Cyrus\n",
+ "2. *Ella Baila Sola* by Eslabon Armado and Peso Pluma\n",
+ "3. *Shakira: Bzrp Music Sessions, Vol. 53* by Shakira and Bizarrap\n",
+ "\n",
+ "Miley Cyrus is 30 years old and American. Eslabon Armado is a Mexican group, and Peso Pluma is a Mexican singer. Shakira is 46 years old and Colombian.\n",
+ "\n",
+ "CITATIONS:\n",
+ "Start: 59 | End: 66 | Text: 'Flowers'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 71 | End: 82 | Text: 'Miley Cyrus'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 87 | End: 102 | Text: 'Ella Baila Sola'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 107 | End: 121 | Text: 'Eslabon Armado'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 126 | End: 136 | Text: 'Peso Pluma'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 141 | End: 178 | Text: 'Shakira: Bzrp Music Sessions, Vol. 53'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 183 | End: 190 | Text: 'Shakira'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n",
+ "Start: 195 | End: 203 | Text: 'Bizarrap'\n",
+ "Sources:\n",
+ "python_interpreter_ryqfeye95gay:0\n",
+ "--------------------------------------------------\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = \"command-r-plus-08-2024\"\n",
+ "tools = [web_search_tool,python_interpreter_tool]\n",
+ "\n",
+ "# Step 1: get user message\n",
+ "print(f\"USER MESSAGE:\\n{message}\")\n",
+ "print(\"=\"*50)\n",
+ "\n",
+ "messages = [{'role': 'user','content': message}]\n",
+ "\n",
+ "# 2 - Model generates tool calls, if any\n",
+ "res = co.chat(model=model,\n",
+ " messages=messages,\n",
+ " tools=tools,\n",
+ " temperature=0)\n",
+ "\n",
+ "# Keep invoking tools as long as the model generates tool calls\n",
+ "while res.message.tool_calls:\n",
+ " # Tool plan and tool calls\n",
+ " print(\"\\nTOOL PLAN:\")\n",
+ " print(res.message.tool_plan)\n",
+ "\n",
+ " print(\"\\nTOOL CALLS:\")\n",
+ " for tc in res.message.tool_calls:\n",
+ " if tc.function.name == \"python_interpreter\":\n",
+ " print(f\"Tool name: {tc.function.name}\")\n",
+ " tool_call_prettified = print(\"\\n\".join(f\" {line}\" for line_num, line in enumerate(json.loads(tc.function.arguments)[\"code\"].splitlines())))\n",
+ " print(tool_call_prettified)\n",
+ " else:\n",
+ " print(f\"Tool name: {tc.function.name} | Parameters: {tc.function.arguments}\")\n",
+ "\n",
+ " messages.append({'role': 'assistant',\n",
+ " 'tool_calls': res.message.tool_calls,\n",
+ " 'tool_plan': res.message.tool_plan})\n",
+ "\n",
+ " # 3 - Execute tools based on the tool calls generated by the model\n",
+ " print(\"\\nTOOL RESULTS:\")\n",
+ " for tc in res.message.tool_calls:\n",
+ " tool_result = functions_map[tc.function.name](**json.loads(tc.function.arguments))\n",
+ " tool_content = [json.dumps(tool_result)]\n",
+ " print(tool_result, \"\\n\")\n",
+ " \n",
+ " messages.append({'role': 'tool',\n",
+ " 'tool_call_id': tc.id,\n",
+ " 'tool_content': tool_content}) \n",
+ "\n",
+ " # 4 - Model either generates more tool calls or returns a response\n",
+ " res = co.chat(model=model,\n",
+ " messages=messages,\n",
+ " tools=tools,\n",
+ " temperature=0)\n",
+ " \n",
+ "messages.append({\"role\": \"assistant\", \"content\": res.message.content[0].text})\n",
+ "\n",
+ "print(\"\\nRESPONSE:\")\n",
+ "print(res.message.content[0].text)\n",
+ "\n",
+ "if res.message.citations:\n",
+ " print(\"\\nCITATIONS:\")\n",
+ " for citation in res.message.citations:\n",
+ " print(f\"Start: {citation.start} | End: {citation.end} | Text: '{citation.text}'\")\n",
+ " print(\"Sources:\")\n",
+ " if citation.sources:\n",
+ " for source in citation.sources:\n",
+ " print(source.id)\n",
+ " print(\"-\"*50)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/agents/Vanilla_Tool_Use_v2.ipynb b/notebooks/agents/Vanilla_Tool_Use_v2.ipynb
new file mode 100644
index 0000000..8af952b
--- /dev/null
+++ b/notebooks/agents/Vanilla_Tool_Use_v2.ipynb
@@ -0,0 +1,626 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YN-eakfxtLGd"
+ },
+ "source": [
+ "# Tool Use\n",
+ "\n",
+ "Tool use allows customers to **connect their large language models to external tools like search engines, APIs, functions, databases**, etc.\n",
+ "\n",
+ "This allows the customer's model to unlock a richer set of behaviors by leveraging data stored in tools, taking actions through APIs, interacting with a vector database, querying a search engine, etc.\n",
+ "\n",
+ "This is particularly valuable for enterprise customers, since a lot of enterprise data lives in external sources.\n",
+ "\n",
+ "Tool Use consists of 4 steps:\n",
+ "- Step 1: the user configures the request to the model\n",
+ "- Step 2: the **model smartly decides which tool(s) to use and how**\n",
+ "- Step 3: the tool calls are executed to mock database\n",
+ "- Step 4: the **model generates a final answer with precise citations based on the tool results**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "us5dkKrLCbXW",
+ "outputId": "94c97f62-77fb-4492-a4e4-d9eeee4e438c"
+ },
+ "outputs": [],
+ "source": [
+ "# ! pip install cohere --quiet"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import cohere\n",
+ "import os\n",
+ "import json\n",
+ "co = cohere.ClientV2(api_key=\"YOUR_COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0T7yc1PltLGp"
+ },
+ "source": [
+ "## Step 0: Create a mock database\n",
+ "First, we'll define the mock data that our tools will query. This data represents sales reports and a product catalog."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "yZffY8xItLGp"
+ },
+ "outputs": [],
+ "source": [
+ "# Mock database containing daily sales reports\n",
+ "sales_database = {\n",
+ " '2023-09-28': {\n",
+ " 'total_sales_amount': 5000,\n",
+ " 'total_units_sold': 100,\n",
+ " },\n",
+ " '2023-09-29': {\n",
+ " 'total_sales_amount': 10000,\n",
+ " 'total_units_sold': 250,\n",
+ " },\n",
+ " '2023-09-30': {\n",
+ " 'total_sales_amount': 8000,\n",
+ " 'total_units_sold': 200,\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Mock product catalog\n",
+ "product_catalog = {\n",
+ " 'Electronics': [\n",
+ " {'product_id': 'E1001', 'name': 'Smartphone', 'price': 500, 'stock_level': 20},\n",
+ " {'product_id': 'E1002', 'name': 'Laptop', 'price': 1000, 'stock_level': 15},\n",
+ " {'product_id': 'E1003', 'name': 'Tablet', 'price': 300, 'stock_level': 25},\n",
+ " ],\n",
+ " 'Clothing': [\n",
+ " {'product_id': 'C1001', 'name': 'T-Shirt', 'price': 20, 'stock_level': 100},\n",
+ " {'product_id': 'C1002', 'name': 'Jeans', 'price': 50, 'stock_level': 80},\n",
+ " {'product_id': 'C1003', 'name': 'Jacket', 'price': 100, 'stock_level': 40},\n",
+ " ]\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6TGWYiOdtLGp"
+ },
+ "source": [
+ "Now, we'll define the tools that simulate querying this database. \n",
+ "You could for example use the API of an enterprise sales platform.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "YuIH4us8tLGp"
+ },
+ "outputs": [],
+ "source": [
+ "def query_daily_sales_report(day: str) -> dict:\n",
+ " \"\"\"\n",
+ " Function to retrieve the sales report for the given day\n",
+ " \"\"\"\n",
+ " report = sales_database.get(day, {})\n",
+ " if report:\n",
+ " return {\n",
+ " 'date': day,\n",
+ " 'summary': f\"Total Sales Amount: {report['total_sales_amount']}, Total Units Sold: {report['total_units_sold']}\"\n",
+ " }\n",
+ " else:\n",
+ " return {'date': day, 'summary': 'No sales data available for this day.'}\n",
+ "\n",
+ "\n",
+ "def query_product_catalog(category: str) -> dict:\n",
+ " \"\"\"\n",
+ " Function to retrieve products for the given category\n",
+ " \"\"\"\n",
+ " products = product_catalog.get(category, [])\n",
+ " return {\n",
+ " 'category': category,\n",
+ " 'products': products\n",
+ " }\n",
+ "\n",
+ "\n",
+ "functions_map = {\n",
+ " \"query_daily_sales_report\": query_daily_sales_report,\n",
+ " \"query_product_catalog\": query_product_catalog\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HZRhTu4ftLGp"
+ },
+ "source": [
+ "## Step 1 - the user configures the request to the model\n",
+ "\n",
+ "The developer provides a few things to the model:\n",
+ "- A preamble containing instructions about the task and the desired style for the output.\n",
+ "- The user request.\n",
+ "- A list of tools to the model.\n",
+ "- (Optionally) a chat history for the model to work with.\n",
+ "\n",
+ "\n",
+ "You can specify one or many tools to the model. Every tool needs to be described with a JSON schema, indicating the tool name, description, and parameters (code snippets below).\n",
+ "\n",
+ "In our example, we provide two tools to the model: `daily_sales_report` and `product_catalog`.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "id": "aIk-of_OtLGp"
+ },
+ "outputs": [],
+ "source": [
+ "# tool descriptions that the model has access to\n",
+ "# note: Cohere always adds a \"directly_answer\" tool under the hood, so that the model can decide to not leverage any tool, if they're not needed.\n",
+ "tools = [\n",
+ " {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"query_daily_sales_report\",\n",
+ " \"description\": \"Connects to a database to retrieve overall sales volumes and sales information for a given day.\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"day\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"Retrieves sales data for this day, formatted as YYYY-MM-DD.\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"day\"]\n",
+ " }\n",
+ " }\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"query_product_catalog\",\n",
+ " \"description\": \"Connects to a product catalog with information about all the products being sold, including categories, prices, and stock levels.\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"category\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"Retrieves product information data for all products in this category.\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"category\"]\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "k2AHJRnztLGp"
+ },
+ "source": [
+ "Now let's define the user request. \n",
+ "\n",
+ "In our example we'll use: \"Can you provide a sales summary for 29th September 2023, and also give me the details of all products in the 'Electronics' category that were sold that day, including their prices and stock levels?\"\n",
+ "\n",
+ "Only a langage model with Tool Use can answer this request: it requires looking up information in the right external tools (step 2), and then providing a final answer based on the tool results (step 4)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "id": "JuDgJ7fjtLGq"
+ },
+ "outputs": [],
+ "source": [
+ "# preamble containing instructions about the task and the desired style for the output.\n",
+ "preamble = \"\"\"\n",
+ "## Task and Context\n",
+ "You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.\n",
+ "\n",
+ "## Style Guide\n",
+ "Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.\n",
+ "\"\"\"\n",
+ "\n",
+ "# user request\n",
+ "message = \"Can you provide a sales summary for 29th September 2023, and also give me some details about the products in the 'Electronics' category, for example their prices and stock levels?\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1NhW-G_JtLGq"
+ },
+ "source": [
+ "## Step 2 – the model smartly decides which tool(s) to use and how\n",
+ "The model intelligently selects the right tool(s) to call -- and the right parameters for each tool call -- based on the content of the user message."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "o79_n99GtLGq",
+ "outputId": "81789d00-01b9-4c17-d1b0-1668d75a2b86"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The model recommends doing the following tool calls:\n",
+ "\n",
+ "Tool plan:\n",
+ "I will search for the sales summary for 29th September 2023 and the details of products in the 'Electronics' category. \n",
+ "\n",
+ "Tool calls:\n",
+ "Tool name: query_daily_sales_report | Parameters: {\"day\":\"2023-09-29\"}\n",
+ "Tool name: query_product_catalog | Parameters: {\"category\":\"Electronics\"}\n"
+ ]
+ }
+ ],
+ "source": [
+ "messages=[{\"role\": \"system\", \"content\": preamble},\n",
+ " {\"role\": \"user\", \"content\": message}]\n",
+ "\n",
+ "response = co.chat(\n",
+ " model=\"command-r-plus\",\n",
+ " messages=messages,\n",
+ " tools=tools\n",
+ ")\n",
+ "\n",
+ "print(\"The model recommends doing the following tool calls:\\n\")\n",
+ "print(\"Tool plan:\")\n",
+ "print(response.message.tool_plan,\"\\n\")\n",
+ "print(\"Tool calls:\")\n",
+ "for tc in response.message.tool_calls:\n",
+ " print(f\"Tool name: {tc.function.name} | Parameters: {tc.function.arguments}\")\n",
+ " \n",
+ "# append the chat history\n",
+ "messages.append({'role': 'assistant', 'tool_calls': response.message.tool_calls, 'tool_plan': response.message.tool_plan})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "md_9QPcxtLGq"
+ },
+ "source": [
+ "## Step 3 – the tool calls are executed\n",
+ "\n",
+ "You can then execute the appropriate calls, using the tool calls and tool parameters generated by the model. \n",
+ "These tool calls return tool results that will be fed to the model in Step 4."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "1LuDCRpFtLGr",
+ "outputId": "42ead35e-225a-4b9a-c954-b526f2865350"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tool results that will be fed back to the model in step 4:\n",
+ "{\n",
+ " \"date\": \"2023-09-29\",\n",
+ " \"summary\": \"Total Sales Amount: 10000, Total Units Sold: 250\"\n",
+ "}\n",
+ "{\n",
+ " \"category\": \"Electronics\",\n",
+ " \"products\": [\n",
+ " {\n",
+ " \"product_id\": \"E1001\",\n",
+ " \"name\": \"Smartphone\",\n",
+ " \"price\": 500,\n",
+ " \"stock_level\": 20\n",
+ " },\n",
+ " {\n",
+ " \"product_id\": \"E1002\",\n",
+ " \"name\": \"Laptop\",\n",
+ " \"price\": 1000,\n",
+ " \"stock_level\": 15\n",
+ " },\n",
+ " {\n",
+ " \"product_id\": \"E1003\",\n",
+ " \"name\": \"Tablet\",\n",
+ " \"price\": 300,\n",
+ " \"stock_level\": 25\n",
+ " }\n",
+ " ]\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "tool_content = []\n",
+ "# Iterate over the tool calls generated by the model\n",
+ "for tc in response.message.tool_calls:\n",
+ " # here is where you would call the tool recommended by the model, using the parameters recommended by the model\n",
+ " tool_result= functions_map[tc.function.name](**json.loads(tc.function.arguments))\n",
+ " # store the output in a list\n",
+ " tool_content.append(json.dumps(tool_result))\n",
+ " # append the chat history\n",
+ " messages.append({'role': 'tool', 'tool_call_id': tc.id, 'tool_content': tool_content}) \n",
+ "\n",
+ "print(\"Tool results that will be fed back to the model in step 4:\")\n",
+ "for result in tool_content:\n",
+ " print(json.dumps(json.loads(result), indent=2))\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8cKlLk18tLGr"
+ },
+ "source": [
+ "## Step 4 - the model generates a final answer based on the tool results\n",
+ "Finally, the developer calls the Cohere model, providing the tools results, in order to generate the model's final answer. \n",
+ "\n",
+ "Bonus: At Cohere, all Tool Use calls come with... **precise citations**! 🎉\n",
+ "The model cites which tool results were used to generate the final answer. \n",
+ "These citations make it easy to check where the model’s generated response claims are coming from. \n",
+ "They help users gain visibility into the model reasoning, and sanity check the final model generation. \n",
+ "These citations are optional — you can decide to ignore them.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "metadata": {
+ "id": "MKnjXVfXtLGr"
+ },
+ "outputs": [],
+ "source": [
+ "response = co.chat(\n",
+ " model=\"command-r-plus\",\n",
+ " messages=messages,\n",
+ " tools=tools\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "jlxKTsaztLGr",
+ "outputId": "b2cd8667-bca9-4928-c423-61930b4b49fa"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Final answer:\n",
+ "On 29 September 2023, the total sales amount was $10,000, and 250 units were sold. \n",
+ "\n",
+ "Here are the details for the products in the 'Electronics' category:\n",
+ "- Smartphone: $500, 20 in stock\n",
+ "- Laptop: $1,000, 15 in stock\n",
+ "- Tablet: $300, 25 in stock\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"Final answer:\")\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "niMkAr2PN9j4"
+ },
+ "source": [
+ "## Bonus: Citations come for free with Cohere! 🎉\n",
+ "\n",
+ "At Cohere, model generations come with... precise citations! 🎉\n",
+ "The model cites which groups of words, in the tool results, were used to generate the final answer. \n",
+ "These citations make it easy to check where the model’s generated response claims are coming from. \n",
+ "They help users gain visibility into the model reasoning, and sanity check the final model generation. \n",
+ "These citations are optional — you can decide to ignore them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "9wuoCUBwtLGr",
+ "outputId": "da3c0dc5-6b87-42ea-d64b-e7e85c40273e"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Citations that support the final answer:\n",
+ "start=26 end=56 text='total sales amount was $10,000' sources=[Source_Tool(id='query_daily_sales_report_6c5v4phf8d4f:0', tool_output={'date': '2023-09-29', 'summary': 'Total Sales Amount: 10000, Total Units Sold: 250'}, type='tool'), Source_Tool(id='query_product_catalog_7395zjrh8m8w:0', tool_output={'date': '2023-09-29', 'summary': 'Total Sales Amount: 10000, Total Units Sold: 250'}, type='tool')]\n",
+ "Start: 26 | End: 56 | Text: 'total sales amount was $10,000'\n",
+ "start=62 end=81 text='250 units were sold' sources=[Source_Tool(id='query_daily_sales_report_6c5v4phf8d4f:0', tool_output={'date': '2023-09-29', 'summary': 'Total Sales Amount: 10000, Total Units Sold: 250'}, type='tool'), Source_Tool(id='query_product_catalog_7395zjrh8m8w:0', tool_output={'date': '2023-09-29', 'summary': 'Total Sales Amount: 10000, Total Units Sold: 250'}, type='tool')]\n",
+ "Start: 62 | End: 81 | Text: '250 units were sold'\n",
+ "start=156 end=185 text='Smartphone: $500, 20 in stock' sources=[Source_Tool(id='query_daily_sales_report_6c5v4phf8d4f:1', tool_output={'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}, type='tool'), Source_Tool(id='query_product_catalog_7395zjrh8m8w:1', tool_output={'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}, type='tool')]\n",
+ "Start: 156 | End: 185 | Text: 'Smartphone: $500, 20 in stock'\n",
+ "start=188 end=215 text='Laptop: $1,000, 15 in stock' sources=[Source_Tool(id='query_daily_sales_report_6c5v4phf8d4f:1', tool_output={'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}, type='tool'), Source_Tool(id='query_product_catalog_7395zjrh8m8w:1', tool_output={'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}, type='tool')]\n",
+ "Start: 188 | End: 215 | Text: 'Laptop: $1,000, 15 in stock'\n",
+ "start=218 end=243 text='Tablet: $300, 25 in stock' sources=[Source_Tool(id='query_daily_sales_report_6c5v4phf8d4f:1', tool_output={'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}, type='tool'), Source_Tool(id='query_product_catalog_7395zjrh8m8w:1', tool_output={'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}, type='tool')]\n",
+ "Start: 218 | End: 243 | Text: 'Tablet: $300, 25 in stock'\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"Citations that support the final answer:\")\n",
+ "for citation in response.message.citations:\n",
+ " print(citation)\n",
+ " print(f\"Start: {citation.start} | End: {citation.end} | Text: '{citation.text}'\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 63,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "On 29 September 2023, the total sales amount was $10,000[3,4], and 250 units were sold[3,4]. \n",
+ "\n",
+ "Here are the details for the products in the 'Electronics' category:\n",
+ "- Smartphone: $500, 20 in stock[1,2]\n",
+ "- Laptop: $1,000, 15 in stock[1,2]\n",
+ "- Tablet: $300, 25 in stock[1,2]\n",
+ "\n",
+ "Source tools:\n",
+ "[1]: query_daily_sales_report_6c5v4phf8d4f:1 | {'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}\n",
+ "[2]: query_product_catalog_7395zjrh8m8w:1 | {'category': 'Electronics', 'products': '[{\"name\":\"Smartphone\",\"price\":500,\"product_id\":\"E1001\",\"stock_level\":20},{\"name\":\"Laptop\",\"price\":1000,\"product_id\":\"E1002\",\"stock_level\":15},{\"name\":\"Tablet\",\"price\":300,\"product_id\":\"E1003\",\"stock_level\":25}]'}\n",
+ "[3]: query_daily_sales_report_6c5v4phf8d4f:0 | {'date': '2023-09-29', 'summary': 'Total Sales Amount: 10000, Total Units Sold: 250'}\n",
+ "[4]: query_product_catalog_7395zjrh8m8w:0 | {'date': '2023-09-29', 'summary': 'Total Sales Amount: 10000, Total Units Sold: 250'}\n"
+ ]
+ }
+ ],
+ "source": [
+ "def insert_inline_citations(text, citations):\n",
+ " sorted_citations = sorted(citations, key=lambda c: c.start, reverse=True)\n",
+ " source_index = {}\n",
+ " current_index = 1\n",
+ " \n",
+ " for citation in sorted_citations:\n",
+ " citation_ids = []\n",
+ " for source in citation.sources:\n",
+ " if source.id not in source_index:\n",
+ " source_index[source.id] = current_index\n",
+ " current_index += 1\n",
+ " citation_ids.append(str(source_index[source.id]))\n",
+ " \n",
+ " citation_text = f\"[{','.join(citation_ids)}]\"\n",
+ " text = text[:citation.end] + citation_text + text[citation.end:]\n",
+ " \n",
+ " return text, source_index\n",
+ "\n",
+ "def list_sources(citations, source_index):\n",
+ " unique_sources = {}\n",
+ " for citation in citations:\n",
+ " for source in citation.sources:\n",
+ " if hasattr(source, 'tool_output'):\n",
+ " source_text = str(source.tool_output)\n",
+ " else:\n",
+ " source_text = str(source.document)\n",
+ " unique_sources[source.id] = source_text\n",
+ " \n",
+ " footnotes = []\n",
+ " for source_id, source_text in sorted(unique_sources.items(), key=lambda x: source_index[x[0]]):\n",
+ " index = source_index[source_id]\n",
+ " footnotes.append(f\"[{index}]: {source_id} | {source_text}\")\n",
+ " \n",
+ " return \"Source tools:\\n\" + \"\\n\".join(footnotes)\n",
+ "\n",
+ "# Usage example:\n",
+ "cited_text, source_index = insert_inline_citations(response.message.content[0].text, response.message.citations)\n",
+ "\n",
+ "print(cited_text)\n",
+ "\n",
+ "print(\"\\n\" + list_sources(response.message.citations, source_index))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "RwH2O3FptLGr"
+ },
+ "source": [
+ "Yiha. You've used Cohere for Tool Use. Tool use opens up a wide range of new use cases. Here are a few examples:\n",
+ "\n",
+ "- **Function calling**: It's now possible to ask the model to output a JSON object with specific function parameters.\n",
+ "For instance, this allows your chatbot to interact with your CRM to change the status of a deal, or to engage with a Python interpreter to conduct data science analyses.\n",
+ "\n",
+ "- **Query transformation**: You can transform a user message into a search query for a vector database or any search engine.\n",
+ "For instance, this enables your work assistant to automatically retrieve the appropriate data from your company's documentation by creating the right query for your vector database.\n",
+ "\n",
+ "- **Advanced searches**: You can transform a user message into one-or-many queries, to do multiple subtasks based on the content of the message.\n",
+ "For instance, this allows your chatbot to search across different databases and platforms to retrieve relevant information or to conduct comparative analysis.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "FXOSzfqRCLBH"
+ },
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt1_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt1_v2.ipynb
new file mode 100644
index 0000000..4a2d6d5
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt1_v2.ipynb
@@ -0,0 +1,122 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Cohere Tutorial\n",
+ "\n",
+ "#### Build your first Cohere application: An onboarding assistant for new hires"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Welcome to the Cohere tutorial – a hands-on introduction to Cohere!\n",
+ "\n",
+ "In this tutorial, you will learn how to use the Cohere API, specifically three endpoints: Chat, Embed, and Rerank.\n",
+ "\n",
+ "This tutorial is split over seven parts, with each part focusing on one use case, as follows:\n",
+ "\n",
+ "- Part 1: Installation and Setup (Pre-requisite)\n",
+ "- Part 2: Text Generation\n",
+ "- Part 3: Chatbots\n",
+ "- Part 4: Semantic Search\n",
+ "- Part 5: Reranking\n",
+ "- Part 6: Retrieval-Augmented Generation (RAG)\n",
+ "- Part 7: Agents with Tool Use\n",
+ "\n",
+ "You'll learn about these use cases by building an onboarding assistant that helps new hires onboard to a fictitious company called Co1t. The assistant can help write introductions, answer user questions about the company, search for information from e-mails, and create meeting appointments.\n",
+ "\n",
+ "We recommend that you follow the parts sequentially. However, feel free to skip to specific parts if you want (apart from Part 1, which is a pre-requisite) because each part also works as a standalone tutorial.\n",
+ "\n",
+ "Total Duration: ~15 minutes"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Installation and Setup"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Cohere platform lets developers access large language model (LLM) capabilities with a few lines of code. These LLMs can solve a broad spectrum of natural language use cases, including classification, semantic search, paraphrasing, summarization, and content generation.\n",
+ "\n",
+ "Cohere's models can be accessed through the playground, SDK, and CLI tool. We support SDKs in four different languages: Python, Typescript, Java, and Go.\n",
+ "\n",
+ "This tutorial uses the Python SDK and accesses the models through the Cohere platform.\n",
+ "\n",
+ "To get started, first install the Cohere Python SDK."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "! pip install -U cohere"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Next, we'll import the `cohere` library and create a client to be used throughout the examples. We create a client by passing the Cohere API key as an argument. To get an API key, [sign up with Cohere](https://dashboard.cohere.com/welcome/register) and get the API key [from the dashboard](https://dashboard.cohere.com/api-keys)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import cohere\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=\"YOUR_COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In Part 2, we'll get started with the first use case - text generation."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt2_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt2_v2.ipynb
new file mode 100644
index 0000000..473566f
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt2_v2.ipynb
@@ -0,0 +1,538 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Text Generation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Command is Cohere’s flagship LLM. It generates a response based on a user message or prompt. It is trained to follow user commands and to be instantly useful in practical business applications, like summarization, copywriting, extraction, and question-answering.\n",
+ "\n",
+ "Command R and Command R+ are the most recent models in the Command family. They are the market-leading models that balance high efficiency with strong accuracy to enable enterprises to move from proof of concept into production-grade AI.\n",
+ "\n",
+ "You'll use Chat, the Cohere endpoint for accessing the Command models.\n",
+ "\n",
+ "In this tutorial, you'll learn about:\n",
+ "- Basic text generation\n",
+ "- Prompt engineering\n",
+ "- Parameters for controlling output\n",
+ "- Structured output generation\n",
+ "- Streamed output\n",
+ "\n",
+ "You'll learn these by building an onboarding assistant for new hires."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "To get started, first we need to install the `cohere` library and create a Cohere client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "import json\n",
+ "\n",
+ "co = cohere.ClientV2(api_key\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Basic text generation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To get started with Chat, we need to pass two parameters, `model` for the LLM model ID and `messages`, which we add a single user message. We then call the Chat endpoint through the client we created earlier.\n",
+ "\n",
+ "The response contains several objects. For simplicity, what we want right now is the `message.content[0].text` object.\n",
+ "\n",
+ "Here's an example of the assistant responding to a new hire's query asking for help to make introductions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Sure! Here is a draft of an introduction message: \n",
+ "\n",
+ "\"Hi everyone! My name is [Your Name], and I am thrilled to be joining the Co1t team today. I am excited to get to know you all and contribute to the amazing work being done at this startup. A little about me: [Brief description of your role, experience, and interests]. Outside of work, I enjoy [Hobbies and interests]. I look forward to collaborating with you all and being a part of Co1t's journey. Let's connect and make something great together!\" \n",
+ "\n",
+ "Feel free to edit and personalize the message to your liking. Good luck with your new role at Co1t!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I'm joining a new startup called Co1t today. Could you help me write a short introduction message to my teammates.\"\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}])\n",
+ " # messages=[cohere.UserMessage(content=message)])\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Chat endpoint API reference](https://docs.cohere.com/v2/reference/chat)\n",
+ "- [Documentation on Chat fine-tuning](https://docs.cohere.com/docs/chat-fine-tuning)\n",
+ "- [Documentation on Command R+](https://docs.cohere.com/docs/command-r-plus)\n",
+ "- [LLM University module on text generation](https://cohere.com/llmu#text-generation)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Prompt engineering"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Prompting is at the heart of working with LLMs. The prompt provides context for the text that we want the model to generate. The prompts we create can be anything from simple instructions to more complex pieces of text, and they are used to encourage the model to produce a specific type of output.\n",
+ "\n",
+ "In this section, we'll look at a couple of prompting techniques.\n",
+ "\n",
+ "The first is to add more specific instructions to the prompt. The more instructions you provide in the prompt, the closer you can get to the response you need.\n",
+ "\n",
+ "The limit of how long a prompt can be is dependent on the maximum context length that a model can support (in the case Command R/R+, it's 128k tokens).\n",
+ "\n",
+ "Below, we'll add one additional instruction to the earlier prompt: the length we need the response to be."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\"Hi everyone, my name is [Your Name], and I am thrilled to join the Co1t team today as a [Your Role], eager to contribute my skills and ideas to the company's growth and success!\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I'm joining a new startup called Co1t today. Could you help me write a one-sentence introduction message to my teammates.\"\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}])\n",
+ " # messages=[cohere.UserMessage(content=message)])\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "All our prompts so far use what is called zero-shot prompting, which means that provide instruction without any example. But in many cases, it is extremely helpful to provide examples to the model to guide its response. This is called few-shot prompting.\n",
+ "\n",
+ "Few-shot prompting is especially useful when we want the model response to follow a particular style or format. Also, it is sometimes hard to explain what you want in an instruction, and easier to show examples.\n",
+ "\n",
+ "Below, we want the response to be similar in style and length to the convention, as we show in the examples."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Ticket title: \"Server Access Permissions Issue\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "user_input = \"Why can't I access the server? Is it a permissions issue?\"\n",
+ "\n",
+ "# Create a prompt containing example outputs\n",
+ "message=f\"\"\"Write a ticket title for the following user request:\n",
+ "\n",
+ "User request: Where are the usual storage places for project files?\n",
+ "Ticket title: Project File Storage Location\n",
+ "\n",
+ "User request: Emails won't send. What could be the issue?\n",
+ "Ticket title: Email Sending Issues\n",
+ "\n",
+ "User request: How can I set up a connection to the office printer?\n",
+ "Ticket title: Printer Connection Setup\n",
+ "\n",
+ "User request: {user_input}\n",
+ "Ticket title:\"\"\"\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}])\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on prompt engineering](https://docs.cohere.com/docs/crafting-effective-prompts)\n",
+ "- [LLM University module on prompt engineering](https://cohere.com/llmu#prompt-engineering)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Parameters for controlling output"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Chat endpoint provides developers with an array of options and parameters.\n",
+ "\n",
+ "For example, you can choose from several variations of the Command model. Different models produce different output profiles, such as quality and latency."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\"Hi, I'm [Your Name] and I'm thrilled to join the Co1t team today as a [Your Role], eager to contribute my skills and ideas to help drive innovation and success for our startup!\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I'm joining a new startup called Co1t today. Could you help me write a one-sentence introduction message to my teammates.\"\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}])\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Often, you’ll need to control the level of randomness of the output. You can control this using a few parameters.\n",
+ "\n",
+ "The most commonly used parameter is `temperature`, which is a number used to tune the degree of randomness. You can enter values between 0.0 to 1.0.\n",
+ "\n",
+ "A lower temperature gives more predictable outputs, and a higher temperature gives more \"creative\" outputs.\n",
+ "\n",
+ "Here's an example of setting `temperature` to 0."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1: \"Revolution Enthusiast\"\n",
+ "\n",
+ "2: \"Revolution Enthusiast\"\n",
+ "\n",
+ "3: \"Revolution Enthusiast\"\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I like learning about the industrial revolution and how it shapes the modern world. How I can introduce myself in five words or less.\"\n",
+ "\n",
+ "# Generate the response multiple times by specifying a low temperature value\n",
+ "for idx in range(3):\n",
+ " response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}],\n",
+ " temperature=0)\n",
+ "\n",
+ " print(f\"{idx+1}: {response.message.content[0].text}\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And here's an example of setting `temperature` to 1."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1: Here is a suggestion: \n",
+ "\n",
+ "\"Revolution Enthusiast. History Fan.\" \n",
+ "\n",
+ "This introduction highlights your passion for the industrial revolution and its impact on history while keeping within the word limit.\n",
+ "\n",
+ "2: \"Revolution fan.\"\n",
+ "\n",
+ "3: \"IR enthusiast.\"\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I like learning about the industrial revolution and how it shapes the modern world. How I can introduce myself in five words or less.\"\n",
+ "\n",
+ "# Generate the response multiple times by specifying a low temperature value\n",
+ "for idx in range(3):\n",
+ " response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}],\n",
+ " temperature=1)\n",
+ "\n",
+ " print(f\"{idx+1}: {response.message.content[0].text}\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Available models for the Chat endpoint](https://docs.cohere.com/docs/models#command)\n",
+ "- [Documentation on predictable outputs](https://docs.cohere.com/v2/docs/predictable-outputs)\n",
+ "- [Documentation on advanced generation parameters](https://docs.cohere.com/docs/advanced-generation-hyperparameters)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Structured output generation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "By adding the `response_format` parameter, you can get the model to generate the output as a JSON object. By generating JSON objects, you can structure and organize the model's responses in a way that can be used in downstream applications.\n",
+ "\n",
+ "The `response_format` parameter allows you to specify the schema the JSON object must follow. It takes the following parameters:\n",
+ "- `message`: The user message\n",
+ "- `response_format`: The schema of the JSON object"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The `response_format.schema` parameter is an experimental feature and may change in future releases.\n",
+ "To suppress this warning, set `log_warning_experimental_features=False` when initializing the client.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'title': 'Unable to Access Server', 'category': 'access', 'status': 'open'}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "user_input = \"Why can't I access the server? Is it a permissions issue?\"\n",
+ "message = f\"\"\"Create an IT ticket for the following user request. Generate a JSON object.\n",
+ "{user_input}\"\"\"\n",
+ "\n",
+ "# Generate the response multiple times by adding the JSON schema\n",
+ "response = co.chat(\n",
+ " model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}],\n",
+ " response_format={\n",
+ " \"type\": \"json_object\",\n",
+ " \"schema\": {\n",
+ " \"type\": \"object\",\n",
+ " \"required\": [\"title\", \"category\", \"status\"],\n",
+ " \"properties\": {\n",
+ " \"title\": { \"type\": \"string\"},\n",
+ " \"category\": { \"type\" : \"string\", \"enum\" : [\"access\", \"software\"]},\n",
+ " \"status\": { \"type\" : \"string\" , \"enum\" : [\"open\", \"closed\"]}\n",
+ " }\n",
+ " }\n",
+ " },\n",
+ ")\n",
+ "\n",
+ "json_object = json.loads(response.message.content[0].text)\n",
+ "\n",
+ "print(json_object)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on Structured Generations (JSON)](https://docs.cohere.com/docs/structured-outputs-json)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Streaming responses"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "All the previous examples above generate responses in a non-streamed manner. This means that the endpoint would return a response object only after the model has generated the text in full.\n",
+ "\n",
+ "The Chat endpoint also provides streaming support. In a streamed response, the endpoint would return a response object for each token as it is being generated. This means you can display the text incrementally without having to wait for the full completion.\n",
+ "\n",
+ "To activate it, use `co.chat_stream()` instead of `co.chat()`.\n",
+ "\n",
+ "In streaming mode, the endpoint will generate a series of objects. To get the actual text contents, we take objects whose `event_type` is `content-delta`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\"Hi, I'm [Your Name] and I'm thrilled to join the Co1t team today as a [Your Role], passionate about [Your Expertise], and excited to contribute to our shared mission of [Startup's Mission]!\""
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I'm joining a new startup called Co1t today. Could you help me write a one-sentence introduction message to my teammates.\"\n",
+ "\n",
+ "# Generate the response by streaming it\n",
+ "response = co.chat_stream(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{\"role\": \"user\", \"content\": message}])\n",
+ "\n",
+ "for event in response:\n",
+ " if event:\n",
+ " if event.type == \"content-delta\":\n",
+ " print(event.delta.message.content.text, end=\"\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on streaming responses](https://docs.cohere.com/docs/streaming)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this tutorial, you learned about:\n",
+ "- How to get started with a basic text generation\n",
+ "- How to improve outputs with prompt engineering\n",
+ "- How to control outputs using parameter changes\n",
+ "- How to generate structured outputs\n",
+ "- How to stream text generation outputs\n",
+ "\n",
+ "However, we have only done all this using direct text generations. As its name implies, the Chat endpoint can also support building chatbots, which require features to support multi-turn conversations and maintain the conversation state. \n",
+ "\n",
+ "In Part 3, you'll learn how to build chatbots with the Chat endpoint."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt3_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt3_v2.ipynb
new file mode 100644
index 0000000..197acf3
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt3_v2.ipynb
@@ -0,0 +1,388 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Chatbots"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As its name implies, the Chat endpoint enables developers to build chatbots that can handle conversations. At the core of a conversation is a multi-turn dialog between the user and the chatbot. This requires the chatbot to have the state (or “memory”) of all the previous turns to maintain the state of the conversation.\n",
+ "\n",
+ "In this tutorial, you'll learn about:\n",
+ "- Creating a custom preamble\n",
+ "- Creating a single-turn conversation\n",
+ "- Building the conversation memory\n",
+ "- Running a multi-turn conversation\n",
+ "- Viewing the chat history\n",
+ "\n",
+ "You'll learn these by building an onboarding assistant for new hires."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "To get started, first we need to install the `cohere` library and create a Cohere client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "\n",
+ "co = cohere.ClientV2(api_key\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Creating a custom preamble"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A conversation starts with a system message, or a preamble, to help steer a chatbot’s response toward certain characteristics.\n",
+ "\n",
+ "For example, if we want the chatbot to adopt a formal style, the preamble can be used to encourage the generation of more business-like and professional responses.\n",
+ "\n",
+ "The recommended approach is to use two H2 Markdown headers: \"Task and Context\" and \"Style Guide\" in the exact order.\n",
+ "\n",
+ "In the example below, the preamble provides context for the assistant's task (task and context) and encourages the generation of rhymes as much as possible (style guide)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Sure, here's a rhyme to break the ice,\n",
+ "A warm welcome to the team, so nice,\n",
+ "\n",
+ "Hi, I'm [Your Name], a new face,\n",
+ "Ready to join the Co1t space,\n",
+ "\n",
+ "A journey begins, a path unknown,\n",
+ "But together we'll make our mark, a foundation stone,\n",
+ "\n",
+ "Excited to learn and contribute my part,\n",
+ "Let's create, innovate, and leave a lasting art,\n",
+ "\n",
+ "Looking forward to our adventures yet untold,\n",
+ "With teamwork and passion, let's achieve our goals!\n",
+ "\n",
+ "Cheers to a great start!\n",
+ "Your enthusiastic new mate.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I'm joining a new startup called Co1t today. Could you help me write a short introduction message to my teammates.\"\n",
+ "\n",
+ "# Create a custom system message\n",
+ "system_message=\"\"\"## Task and Context\n",
+ "You are an assistant who assist new employees of Co1t with their first week.\n",
+ "\n",
+ "## Style Guide\n",
+ "Try to speak in rhymes as much as possible. Be professional.\"\"\"\n",
+ "\n",
+ "# Add the messages\n",
+ "messages = [{\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": message}]\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=messages)\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on preambles](https://docs.cohere.com/docs/preambles)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Starting the first conversation turn"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's start with the first conversation turn.\n",
+ "\n",
+ "Here, we are also adding a custom preamble or system message for generating a concise response, just to keep the outputs brief for this tutorial."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\"Hello, teammates! I'm thrilled to join the Co1t family today and looking forward to getting to know you all and contributing to our shared success.\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user message\n",
+ "message = \"I'm joining a new startup called Co1t today. Could you help me write a short introduction message to my teammates.\"\n",
+ "\n",
+ "# Create a custom system message\n",
+ "system_message=\"\"\"## Task and Context\n",
+ "Generate concise responses, with maximum one-sentence.\"\"\"\n",
+ "\n",
+ "# Add the messages\n",
+ "messages = [{\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": message}]\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=messages)\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Building the conversation memory"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, we want the model to refine the earlier response. This requires the next generation to have access to the state, or memory, of the conversation.\n",
+ "\n",
+ "To do this, we append the `messages` with the model's previous response using the `assistant` role.\n",
+ "\n",
+ "Next, we also append a new user message (for the second turn) to the `messages` list.\n",
+ "\n",
+ "Looking at the response, we see that the model is able to get the context from the chat history. The model is able to capture that \"it\" in the user message refers to the introduction message it had generated earlier."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\"Hey, future Co1t buddies! Stoked to join this awesome team, let's get to know each other and make some startup magic together!\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Append the previous response\n",
+ "messages.append({'role' : 'assistant', 'content': response.message.content[0].text})\n",
+ "\n",
+ "# Add the user message\n",
+ "message = \"Make it more upbeat and conversational.\"\n",
+ "\n",
+ "# Append the user message\n",
+ "messages.append({\"role\": \"user\", \"content\": message})\n",
+ "\n",
+ "# Generate the response with the current chat history as the context\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=messages)\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on using the Chat endpoint](https://docs.cohere.com/docs/chat-api)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Running a multi-turn conversation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "You can continue doing this for any number of turns by continuing to append the chatbot's response and the new user message to the `messages` list."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\"Hi, boss! So excited to dive into my new role at Co1t and eager to learn from your mentorship and guidance. Let's crush it!\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Append the previous response\n",
+ "messages.append({\"role\": \"assistant\", \"content\": response.message.content[0].text})\n",
+ "\n",
+ "# Add the user message\n",
+ "message = \"Thanks. Could you create another one for my DM to my manager.\"\n",
+ "\n",
+ "# Append the user message\n",
+ "messages.append({\"role\": \"user\", \"content\": message})\n",
+ "\n",
+ "# Generate the response with the current chat history as the context\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=messages)\n",
+ "\n",
+ "print(response.message.content[0].text)"
+ ]
+ },
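+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a sketch, this append-and-generate pattern can be wrapped in a small helper function. Note that `chat_turn` is a hypothetical name used for illustration; it is not part of the SDK."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal helper for running one conversation turn (hypothetical, for illustration)\n",
+ "def chat_turn(messages, user_message):\n",
+ "    # Append the new user message\n",
+ "    messages.append({\"role\": \"user\", \"content\": user_message})\n",
+ "\n",
+ "    # Generate the response with the current chat history as the context\n",
+ "    response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ "                       messages=messages)\n",
+ "\n",
+ "    # Append the model's response so the next turn can see it\n",
+ "    messages.append({\"role\": \"assistant\", \"content\": response.message.content[0].text})\n",
+ "\n",
+ "    return response.message.content[0].text"
+ ]
+ },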
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Viewing the chat history"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To look at the current chat history, you can print the `messages` list, which contains a list of `user` and `assistant` turns in the same sequence as they were created."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'role': 'system', 'content': '## Task and Context\\nGenerate concise responses, with maximum one-sentence.'} \n",
+ "\n",
+ "{'role': 'user', 'content': \"I'm joining a new startup called Co1t today. Could you help me write a short introduction message to my teammates.\"} \n",
+ "\n",
+ "{'role': 'assistant', 'content': '\"Hello, teammates! I\\'m thrilled to join the Co1t family today and looking forward to getting to know you all and contributing to our shared success.\"'} \n",
+ "\n",
+ "{'role': 'user', 'content': 'Make it more upbeat and conversational.'} \n",
+ "\n",
+ "{'role': 'assistant', 'content': '\"Hey, future Co1t buddies! Stoked to join this awesome team, let\\'s get to know each other and make some startup magic together!\"'} \n",
+ "\n",
+ "{'role': 'user', 'content': 'Thanks. Could you create another one for my DM to my manager.'} \n",
+ "\n",
+ "{'role': 'assistant', 'content': '\"Hi, boss! So excited to dive into my new role at Co1t and eager to learn from your mentorship and guidance. Let\\'s crush it!\"'} \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Append the previous response\n",
+ "messages.append({\"role\": \"assistant\", \"content\": response.message.content[0].text})\n",
+ "\n",
+ "# View the chat history\n",
+ "for message in messages:\n",
+ " print(message,\"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this tutorial, you learned about:\n",
+ "- How to create a custom preamble\n",
+ "- How to create a single-turn conversation\n",
+ "- How to build the conversation memory\n",
+ "- How to run a multi-turn conversation\n",
+ "- How to view the chat history\n",
+ "\n",
+ "You will use the same method for running a multi-turn conversation when you learn about other use cases such as RAG (Part 6) and tool use (Part 7).\n",
+ "\n",
+ "But to fully leverage these other capabilities, you will need another type of language model that generates text representations, or embeddings.\n",
+ "\n",
+ "In Part 4, you will learn how text embeddings can power an important use case for RAG, which is semantic search."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt4_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt4_v2.ipynb
new file mode 100644
index 0000000..fea6183
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt4_v2.ipynb
@@ -0,0 +1,456 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Semantic Search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Text embeddings are a list of numbers that represent the context or meaning inside a piece of text. This is particularly useful in search or information retrieval applications. With text embeddings, this is called semantic search.\n",
+ "\n",
+ "Semantic search solves the problem faced by the more traditional approach of lexical search, which is great at finding keyword matches, but struggles to capture the context or meaning of a piece of text.\n",
+ "\n",
+ "With Cohere, you can generate text embeddings through the Embed endpoint (Embed v3 being the latest model), which supports over 100 languages.\n",
+ "\n",
+ "In this tutorial, you'll learn about:\n",
+ "- Embedding the documents\n",
+ "- Embedding the query\n",
+ "- Performing semantic search\n",
+ "- Multilingual semantic search\n",
+ "- Changing embedding compression types\n",
+ "\n",
+ "You'll learn these by building an onboarding assistant for new hires."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "To get started, first we need to install the `cohere` library and create a Cohere client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "import numpy as np\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Embedding the documents"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Embed endpoint takes in texts as input and returns embeddings as output.\n",
+ "\n",
+ "For semantic search, there are two types of documents we need to turn into embeddings.\n",
+ "- The list of documents that we want to search from.\n",
+ "- The query that will be used to search the documents.\n",
+ "\n",
+ "Right now, we are doing the former. We call the Embed endpoint using `co.embed()` and pass the following arguments:\n",
+ "- `model`: Here we choose `embed-english-v3.0`, which generates embeddings of size 1024\n",
+ "- `input_type`: We choose `search_document` to ensure the model treats these as the documents for search\n",
+ "- `texts`: The list of texts (the FAQs)\n",
+ "- `embedding_types`: We choose `float` to get the float embeddings."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the documents\n",
+ "faqs_long = [\n",
+ " {\"text\": \"Joining Slack Channels: You will receive an invite via email. Be sure to join relevant channels to stay informed and engaged.\"},\n",
+ " {\"text\": \"Finding Coffee Spots: For your caffeine fix, head to the break room's coffee machine or cross the street to the café for artisan coffee.\"},\n",
+ " {\"text\": \"Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!\"},\n",
+ " {\"text\": \"Working Hours Flexibility: We prioritize work-life balance. While our core hours are 9 AM to 5 PM, we offer flexibility to adjust as needed.\"},\n",
+ " {\"text\": \"Side Projects Policy: We encourage you to pursue your passions. Just be mindful of any potential conflicts of interest with our business.\"},\n",
+ " {\"text\": \"Reimbursing Travel Expenses: Easily manage your travel expenses by submitting them through our finance tool. Approvals are prompt and straightforward.\"},\n",
+ " {\"text\": \"Working from Abroad: Working remotely from another country is possible. Simply coordinate with your manager and ensure your availability during core hours.\"},\n",
+ " {\"text\": \"Health and Wellness Benefits: We care about your well-being and offer gym memberships, on-site yoga classes, and comprehensive health insurance.\"},\n",
+ " {\"text\": \"Performance Reviews Frequency: We conduct informal check-ins every quarter and formal performance reviews twice a year.\"},\n",
+ " {\"text\": \"Proposing New Ideas: Innovation is welcomed! Share your brilliant ideas at our weekly team meetings or directly with your team lead.\"},\n",
+ "]\n",
+ "\n",
+ "# Embed the documents\n",
+ "doc_emb = co.embed(\n",
+ " model=\"embed-english-v3.0\",\n",
+ " input_type=\"search_document\",\n",
+ " texts=[doc['text'] for doc in faqs_long],\n",
+ " embedding_types=[\"float\"]).embeddings.float"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Embed endpoint API reference](https://docs.cohere.com/reference/embed)\n",
+ "- [Documentation on the Embed endpoint](https://docs.cohere.com/docs/embeddings)\n",
+ "- [Documentation on the models available on the Embed endpoint](https://docs.cohere.com/docs/cohere-embed)\n",
+ "- [LLM University module on Text Representation](https://cohere.com/llmu#text-representation)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Embedding the query"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Next, we add a query, which asks about how to stay connected to company updates.\n",
+ "\n",
+ "We choose `search_query` as the `input_type` to ensure the model treats this as the query (instead of documents) for search."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 39,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Add the user query\n",
+ "query = \"Ways to connect with my teammates\"\n",
+ "\n",
+ "# Embed the query\n",
+ "query_emb = co.embed(\n",
+ " model=\"embed-english-v3.0\",\n",
+ " input_type=\"search_query\",\n",
+ " texts=[query],\n",
+ " embedding_types=[\"float\"]).embeddings.float"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Perfoming semantic search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, we want to search for the most relevant documents to the query. We do this by computing the similarity between the embeddings of the query and each of the documents.\n",
+ "\n",
+ "There are various approaches to compute similarity between embeddings, and we'll choose the dot product approach. For this, we use the `numpy` library which comes with the implementation.\n",
+ "\n",
+ "Each query-document pair returns a score, which represents how similar the pair is. We then sort these scores in descending order and select the top-most similar pairs, which we choose 2 (this is an arbitrary choice, you can choose any number).\n",
+ "\n",
+ "Here, we show the most relevant documents with their similarity scores."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.442758615743984\n",
+ "Document: {'text': \"Finding Coffee Spots: For your caffeine fix, head to the break room's coffee machine or cross the street to the café for artisan coffee.\"}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 0.32783563708365726\n",
+ "Document: {'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Compute dot product similarity and display results\n",
+ "def return_results(query_emb, doc_emb, documents):\n",
+ " n = 2 # customize your top N results\n",
+ " scores = np.dot(query_emb, np.transpose(doc_emb))[0]\n",
+ " max_idx = np.argsort(-scores)[:n]\n",
+ " \n",
+ " for rank, idx in enumerate(max_idx):\n",
+ " print(f\"Rank: {rank+1}\")\n",
+ " print(f\"Score: {scores[idx]}\")\n",
+ " print(f\"Document: {documents[idx]}\\n\")\n",
+ " \n",
+ "return_results(query_emb, doc_emb, faqs_long)"
+ ]
+ },
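+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an aside, another common way to compute similarity is cosine similarity. Below is a minimal sketch using `numpy` only; when the embeddings are normalized to unit length, cosine similarity and the dot product produce the same ranking."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Cosine similarity as an alternative to the raw dot product (a sketch)\n",
+ "def cosine_similarity(query_emb, doc_emb):\n",
+ "    query = np.array(query_emb)[0]\n",
+ "    docs = np.array(doc_emb)\n",
+ "    # Divide each dot product by the product of the vector norms\n",
+ "    return docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))\n",
+ "\n",
+ "print(cosine_similarity(query_emb, doc_emb))"
+ ]
+ },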
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Multilingual semantic search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Embed endpoint also supports multilingual semantic search via the `embed-multilingual-...` models. This means you can perform semantic search on texts in different languages.\n",
+ "\n",
+ "Specifically, you can do both multilingual and cross-lingual searches using one single model.\n",
+ "\n",
+ "Multilingual search happens when the query and the result are of the same language. For example, an English query of “places to eat” returning an English result of “Bob's Burgers.” You can replace English with other languages and use the same model for performing search.\n",
+ "\n",
+ "Cross-lingual search happens when the query and the result are of a different language. For example, a Hindi query of “खाने की जगह” (places to eat) returning an English result of “Bob's Burgers.”\n",
+ "\n",
+ "In the example below, we repeat the steps of performing semantic search with one difference – changing the model type to the multilingual version. Here, we use the `embed-multilingual-v3.0` model. Here, we are searching a French version of the FAQ list using an English query."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 47,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.442758615743984\n",
+ "Document: {'text': \"Travailler de l'étranger : Il est possible de travailler à distance depuis un autre pays. Il suffit de coordonner avec votre responsable et de vous assurer d'être disponible pendant les heures de travail.\"}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 0.32783563708365726\n",
+ "Document: {'text': 'Avantages pour la santé et le bien-être : Nous nous soucions de votre bien-être et proposons des adhésions à des salles de sport, des cours de yoga sur site et une assurance santé complète.'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define the documents\n",
+ "faqs_short_fr = [\n",
+ " {\"text\" : \"Remboursement des frais de voyage : Gérez facilement vos frais de voyage en les soumettant via notre outil financier. Les approbations sont rapides et simples.\"},\n",
+ " {\"text\" : \"Travailler de l'étranger : Il est possible de travailler à distance depuis un autre pays. Il suffit de coordonner avec votre responsable et de vous assurer d'être disponible pendant les heures de travail.\"},\n",
+ " {\"text\" : \"Avantages pour la santé et le bien-être : Nous nous soucions de votre bien-être et proposons des adhésions à des salles de sport, des cours de yoga sur site et une assurance santé complète.\"},\n",
+ " {\"text\" : \"Fréquence des évaluations de performance : Nous organisons des bilans informels tous les trimestres et des évaluations formelles deux fois par an.\"}\n",
+ "]\n",
+ "\n",
+ "# Embed the documents\n",
+ "doc_emb = co.embed(\n",
+ " model=\"embed-multilingual-v3.0\",\n",
+ " input_type=\"search_document\",\n",
+ " texts=[doc['text'] for doc in faqs_short_fr],\n",
+ " embedding_types=[\"float\"]).embeddings.float\n",
+ "\n",
+ "# Add the user query\n",
+ "query = \"What's your remote-working policy?\"\n",
+ "\n",
+ "# Embed the query\n",
+ "query_emb = co.embed(\n",
+ " model=\"embed-multilingual-v3.0\",\n",
+ " input_type=\"search_query\",\n",
+ " texts=[query],\n",
+ " embedding_types=[\"float\"]).embeddings.float\n",
+ "\n",
+ "# Compute dot product similarity and display results\n",
+ "return_results(query_emb, doc_emb, faqs_short_fr)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [The list of supported languages for multilingual Embed](https://docs.cohere.com/docs/cohere-embed#list-of-supported-languages)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Changing embedding compression types"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Semantic search over large datasets can require a lot of memory, which is expensive to host in a vector database. Changing the embeddings compression type can help reduce the memory footprint.\n",
+ "\n",
+ "A typical embedding model generates embeddings as float32 format (consuming 4 bytes). By compressing the embeddings to int8 format (1 byte), we can reduce the memory 4x while keeping 99.99% of the original search quality.\n",
+ "\n",
+ "We can go even further and use the binary format (1 bit), which reduces the needed memory 32x while keeping 90-98% of the original search quality.\n",
+ "\n",
+ "The Embed endpoint supports the following formats: `float`, `int8`, `unint8`, `binary`, and `ubinary`. You can get these different compression levels by passing the `embedding_types` parameter.\n",
+ "\n",
+ "In the example below, we embed the documents in two formats: `float` and `int8`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Embed the documents with the given embedding types\n",
+ "doc_emb = co.embed(\n",
+ " model=\"embed-english-v3.0\",\n",
+ " input_type=\"search_document\",\n",
+ " texts=[doc['text'] for doc in faqs_long],\n",
+ " embedding_types=[\"float\",\"int8\"]).embeddings\n",
+ "\n",
+ "# Add the user query\n",
+ "query = \"Ways to connect with my teammates\"\n",
+ "\n",
+ "# Embed the query\n",
+ "query_emb = co.embed(\n",
+ " model=\"embed-english-v3.0\",\n",
+ " input_type=\"search_query\",\n",
+ " texts=[query],\n",
+ " embedding_types=[\"float\",\"int8\"]).embeddings"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here are the search results of using the `float` embeddings (same as the earlier example)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.3872984617627964\n",
+ "Document: {'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 0.3272549670724577\n",
+ "Document: {'text': 'Proposing New Ideas: Innovation is welcomed! Share your brilliant ideas at our weekly team meetings or directly with your team lead.'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Compute dot product similarity and display results\n",
+ "return_results(query_emb.float, doc_emb.float, faqs_long)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And here are the search results of using the `int8` embeddings."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 613377\n",
+ "Document: {'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 515890\n",
+ "Document: {'text': 'Proposing New Ideas: Innovation is welcomed! Share your brilliant ideas at our weekly team meetings or directly with your team lead.'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Compute dot product similarity and display results\n",
+ "return_results(query_emb.int8, doc_emb.int8, faqs_long)"
+ ]
+ },
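+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the memory savings concrete, here is a quick back-of-the-envelope calculation. The corpus size of one million documents is an assumed example, not a requirement."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Approximate memory footprint for 1M embeddings of size 1024 (example numbers)\n",
+ "num_docs = 1_000_000\n",
+ "dim = 1024\n",
+ "\n",
+ "float32_bytes = num_docs * dim * 4   # 4 bytes per dimension\n",
+ "int8_bytes = num_docs * dim          # 1 byte per dimension (4x smaller)\n",
+ "binary_bytes = num_docs * dim / 8    # 1 bit per dimension (32x smaller)\n",
+ "\n",
+ "print(f\"float32: {float32_bytes / 1e9:.2f} GB\")\n",
+ "print(f\"int8:    {int8_bytes / 1e9:.2f} GB\")\n",
+ "print(f\"binary:  {binary_bytes / 1e9:.3f} GB\")"
+ ]
+ },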
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on embeddings compression levels](https://docs.cohere.com/docs/embeddings#compression-levels)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this tutorial, you learned about:\n",
+ "- How to embed documents for search\n",
+ "- How to embed queries\n",
+ "- How to perform semantic search\n",
+ "- How to perform multilingual semantic search\n",
+ "- How to change the embedding compression types\n",
+ "\n",
+ "A high-performance and modern search system typically includes a reranking stage, which further boosts the search results.\n",
+ "\n",
+ "In Part 5, you will learn how to add reranking to a search system."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt5_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt5_v2.ipynb
new file mode 100644
index 0000000..ae98708
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt5_v2.ipynb
@@ -0,0 +1,502 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Reranking"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Reranking is a technique that leverages embeddings as the last stage of a retrieval process, and is especially useful in RAG systems.\n",
+ "\n",
+ "We can rerank results from semantic search as well as any other search systems such as lexical search. This means that companies can retain an existing keyword-based (also called “lexical”) or semantic search system for the first-stage retrieval and integrate the Rerank endpoint in the second-stage reranking.\n",
+ "\n",
+ "In this tutorial, you'll learn about:\n",
+ "- Reranking lexical/semantic search results\n",
+ "- Reranking semi-structured data\n",
+ "- Reranking tabular data\n",
+ "- Multilingual reranking\n",
+ "\n",
+ "You'll learn these by building an onboarding assistant for new hires."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "To get started, first we need to install the `cohere` library and create a Cohere client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Reranking lexical/semantic search results"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Rerank requires just a single line of code to implement.\n",
+ "\n",
+ "Suppose we have a list of search results of an FAQ list, which can come from semantic, lexical, or any other types of search systems. But this list may not be optimally ranked for relevance to the user query.\n",
+ "\n",
+ "This is where Rerank can help. We call the endpoint using `co.rerank()` and pass the following arguments:\n",
+ "- `query`: The user query\n",
+ "- `documents`: The list of documents\n",
+ "- `top_n`: The top reranked documents to select\n",
+ "- `model`: We choose Rerank English 3"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the documents\n",
+ "faqs_short = [\n",
+ " {\"text\": \"Reimbursing Travel Expenses: Easily manage your travel expenses by submitting them through our finance tool. Approvals are prompt and straightforward.\"},\n",
+ " {\"text\": \"Working from Abroad: Working remotely from another country is possible. Simply coordinate with your manager and ensure your availability during core hours.\"},\n",
+ " {\"text\": \"Health and Wellness Benefits: We care about your well-being and offer gym memberships, on-site yoga classes, and comprehensive health insurance.\"},\n",
+ " {\"text\": \"Performance Reviews Frequency: We conduct informal check-ins every quarter and formal performance reviews twice a year.\"}\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "id='2fa5bc0d-28aa-4c99-8355-7de78dbf3c86' results=[RerankResponseResultsItem(document=None, index=2, relevance_score=0.01798621), RerankResponseResultsItem(document=None, index=3, relevance_score=8.463939e-06)] meta=ApiMeta(api_version=ApiMetaApiVersion(version='1', is_deprecated=None, is_experimental=None), billed_units=ApiMetaBilledUnits(input_tokens=None, output_tokens=None, search_units=1.0, classifications=None), tokens=None, warnings=None)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user query\n",
+ "query = \"Are there fitness-related perks?\"\n",
+ "\n",
+ "# Rerank the documents\n",
+ "results = co.rerank(query=query,\n",
+ " documents=faqs_short,\n",
+ " top_n=2,\n",
+ " model='rerank-english-v3.0')\n",
+ "\n",
+ "print(results)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.01798621\n",
+ "Document: {'text': 'Health and Wellness Benefits: We care about your well-being and offer gym memberships, on-site yoga classes, and comprehensive health insurance.'}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 8.463939e-06\n",
+ "Document: {'text': 'Performance Reviews Frequency: We conduct informal check-ins every quarter and formal performance reviews twice a year.'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Display the reranking results\n",
+ "def return_results(results, documents): \n",
+ " for idx, result in enumerate(results.results):\n",
+ " print(f\"Rank: {idx+1}\") \n",
+ " print(f\"Score: {result.relevance_score}\")\n",
+ " print(f\"Document: {documents[result.index]}\\n\")\n",
+ " \n",
+ "return_results(results, faqs_short)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Rerank endpoint API reference](https://docs.cohere.com/reference/rerank)\n",
+ "- [Documentation on Rerank](https://docs.cohere.com/docs/overview)\n",
+ "- [Documentation on Rerank fine-tuning](https://docs.cohere.com/docs/rerank-fine-tuning)\n",
+ "- [Documentation on Rerank best practices](https://docs.cohere.com/docs/reranking-best-practices)\n",
+ "- [LLM University module on Text Representation](https://cohere.com/llmu#text-representation)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Reranking semi-structured data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Rerank 3 model supports multi-aspect and semi-structured data like emails, invoices, JSON documents, code, and tables. By setting the rank fields, you can select which fields the model should consider for reranking.\n",
+ "\n",
+ "In the following example, we'll use an email data example. It is a semi-stuctured data that contains a number of fields – `from`, `to`, `date`, `subject`, and `text`. \n",
+ "\n",
+ "Suppose the new hire now wants to search for any emails about check-in sessions. Let's pretend we have a list of 5 emails retrieved from the email provider's API.\n",
+ "\n",
+ "To perform reranking over semi-structured data, we add an additional parameter, `rank_fields`, which contains the list of available fields.\n",
+ "\n",
+ "The model will rerank based on order of the fields passed in. For example, given rank_fields=['title','author','text'], the model will rerank using the values in title, author, and text sequentially. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the documents\n",
+ "emails = [\n",
+ " {\"from\": \"hr@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"A Warm Welcome to Co1t!\", \"text\": \"We are delighted to welcome you to the team! As you embark on your journey with us, you'll find attached an agenda to guide you through your first week.\"},\n",
+ " {\"from\": \"it@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"Setting Up Your IT Needs\", \"text\": \"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\"},\n",
+ " {\"from\": \"john@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"First Week Check-In\", \"text\": \"Hello! I hope you're settling in well. Let's connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it's a great opportunity to get to know your colleagues!\"}\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.1979091\n",
+ "Document: {'from': 'john@co1t.com', 'to': 'david@co1t.com', 'date': '2024-06-24', 'subject': 'First Week Check-In', 'text': \"Hello! I hope you're settling in well. Let's connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it's a great opportunity to get to know your colleagues!\"}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 9.535461e-05\n",
+ "Document: {'from': 'hr@co1t.com', 'to': 'david@co1t.com', 'date': '2024-06-24', 'subject': 'A Warm Welcome to Co1t!', 'text': \"We are delighted to welcome you to the team! As you embark on your journey with us, you'll find attached an agenda to guide you through your first week.\"}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user query\n",
+ "query = \"Any email about check ins?\"\n",
+ "\n",
+ "# Rerank the documents\n",
+ "results = co.rerank(query=query,\n",
+ " documents=emails,\n",
+ " top_n=2,\n",
+ " model='rerank-english-v3.0',\n",
+ " rank_fields=[\"from\", \"to\", \"date\", \"subject\", \"body\"])\n",
+ "\n",
+ "return_results(results, emails)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Reranking tabular data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Many enterprises rely on tabular data, such as relational databases, CSVs, and Excel. To perform reranking, you can transform a dataframe into a list of JSON records and use Rerank 3's JSON capabilities to rank them.\n",
+ "\n",
+ "Here's an example of reranking a CSV file that contains employee information."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "\n",
+ "
\n",
+ " \n",
+ " \n",
+ " | \n",
+ " name | \n",
+ " role | \n",
+ " join_date | \n",
+ " email | \n",
+ " status | \n",
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ " 0 | \n",
+ " Rebecca Lee | \n",
+ " Senior Software Engineer | \n",
+ " 2024-07-01 | \n",
+ " rebecca@co1t.com | \n",
+ " Full-time | \n",
+ "
\n",
+ " \n",
+ " 1 | \n",
+ " Emma Williams | \n",
+ " Product Designer | \n",
+ " 2024-06-15 | \n",
+ " emma@co1t.com | \n",
+ " Full-time | \n",
+ "
\n",
+ " \n",
+ " 2 | \n",
+ " Michael Jones | \n",
+ " Marketing Manager | \n",
+ " 2024-05-20 | \n",
+ " michael@co1t.com | \n",
+ " Full-time | \n",
+ "
\n",
+ " \n",
+ " 3 | \n",
+ " Amelia Thompson | \n",
+ " Sales Representative | \n",
+ " 2024-05-20 | \n",
+ " amelia@co1t.com | \n",
+ " Part-time | \n",
+ "
\n",
+ " \n",
+ " 4 | \n",
+ " Ethan Davis | \n",
+ " Product Designer | \n",
+ " 2024-05-25 | \n",
+ " ethan@co1t.com | \n",
+ " Contractor | \n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
"
+ ],
+ "text/plain": [
+ " name role join_date email \\\n",
+ "0 Rebecca Lee Senior Software Engineer 2024-07-01 rebecca@co1t.com \n",
+ "1 Emma Williams Product Designer 2024-06-15 emma@co1t.com \n",
+ "2 Michael Jones Marketing Manager 2024-05-20 michael@co1t.com \n",
+ "3 Amelia Thompson Sales Representative 2024-05-20 amelia@co1t.com \n",
+ "4 Ethan Davis Product Designer 2024-05-25 ethan@co1t.com \n",
+ "\n",
+ " status \n",
+ "0 Full-time \n",
+ "1 Full-time \n",
+ "2 Full-time \n",
+ "3 Part-time \n",
+ "4 Contractor "
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import pandas as pd\n",
+ "from io import StringIO\n",
+ "\n",
+ "# Create a demo CSV file\n",
+ "data = \"\"\"name,role,join_date,email,status\n",
+ "Rebecca Lee,Senior Software Engineer,2024-07-01,rebecca@co1t.com,Full-time\n",
+ "Emma Williams,Product Designer,2024-06-15,emma@co1t.com,Full-time\n",
+ "Michael Jones,Marketing Manager,2024-05-20,michael@co1t.com,Full-time\n",
+ "Amelia Thompson,Sales Representative,2024-05-20,amelia@co1t.com,Part-time\n",
+ "Ethan Davis,Product Designer,2024-05-25,ethan@co1t.com,Contractor\"\"\"\n",
+ "data_csv = StringIO(data)\n",
+ "\n",
+ "# Load the CSV file\n",
+ "df = pd.read_csv(data_csv)\n",
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.986828\n",
+ "Document: {'name': 'Emma Williams', 'role': 'Product Designer', 'join_date': '2024-06-15', 'email': 'emma@co1t.com', 'status': 'Full-time'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define the documents and rank fields\n",
+ "employees = df.to_dict('records')\n",
+ "rank_fields = df.columns.tolist()\n",
+ "\n",
+ "# Add the user query\n",
+ "query = \"Any full-time product designers who joined recently?\"\n",
+ "\n",
+ "# Rerank the documents\n",
+ "results = co.rerank(query=query,\n",
+ " documents=employees,\n",
+ " top_n=1,\n",
+ " model='rerank-english-v3.0',\n",
+ " rank_fields=rank_fields)\n",
+ "\n",
+ "return_results(results, employees)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Multilingual reranking"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Rerank endpoint also supports multilingual semantic search via the `rerank-multilingual-...` models. This means you can perform semantic search on texts in different languages.\n",
+ "\n",
+ "In the example below, we repeat the steps of performing reranking with one difference – changing the model type to a multilingual one. Here, we use the `rerank-multilingual-v3.0` model. Here, we are reranking the FAQ list using an Arabic query."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.42232594\n",
+ "Document: {'text': 'Health and Wellness Benefits: We care about your well-being and offer gym memberships, on-site yoga classes, and comprehensive health insurance.'}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 0.00025118678\n",
+ "Document: {'text': 'Performance Reviews Frequency: We conduct informal check-ins every quarter and formal performance reviews twice a year.'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define the query\n",
+ "query = \"هل هناك مزايا تتعلق باللياقة البدنية؟\" # Are there fitness benefits?\n",
+ "\n",
+ "# Rerank the documents\n",
+ "results = co.rerank(query=query,\n",
+ " documents=faqs_short,\n",
+ " top_n=2,\n",
+ " model='rerank-multilingual-v3.0')\n",
+ "\n",
+ "return_results(results, faqs_short)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this tutorial, you learned about:\n",
+ "- How to rerank lexical/semantic search results\n",
+ "- How to rerank semi-structured data\n",
+ "- How to rerank tabular data\n",
+ "- How to perform Multilingual reranking\n",
+ "\n",
+ "We have now seen two critical components of a powerful search system - semantic search, or dense retrieval (Part 4) and reranking (Part 5). These building blocks are essential for implementing RAG solutions.\n",
+ "\n",
+ "In Part 6, you will learn how to implement RAG."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt6_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt6_v2.ipynb
new file mode 100644
index 0000000..f818553
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt6_v2.ipynb
@@ -0,0 +1,590 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# RAG"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Chat endpoint provides comprehensive support for various text generation use cases, including retrieval-augmented generation (RAG). \n",
+ "\n",
+ "While LLMs are good at maintaining the context of the conversation and generating responses, they can be prone to hallucinate and include factually incorrect or incomplete information in their responses.\n",
+ "\n",
+ "RAG enables a model to access and utilize supplementary information from external documents, thereby improving the accuracy of its responses.\n",
+ "\n",
+ "When using RAG with the Chat endpoint, these responses are backed by fine-grained citations linking to the source documents. This makes the responses easily verifiable.\n",
+ "\n",
+ "In this tutorial, you'll learn about:\n",
+ "- Basic RAG\n",
+ "- Search query generation\n",
+ "- Retrieval with Embed\n",
+ "- Reranking with Rerank\n",
+ "- Response and citation generation\n",
+ "\n",
+ "You'll learn these by building an onboarding assistant for new hires."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "To get started, first we need to install the `cohere` library and create a Cohere client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "import numpy as np\n",
+ "import json\n",
+ "from typing import List\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Basic RAG\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To see how RAG works, let's define the documents that the application has access to. We'll use a short list of documents consisting of internal FAQs about the fictitious company Co1t (in production, these documents are massive).\n",
+ "\n",
+ "In this example, each document is a `data` object with one field, `text`. But we can define any number of fields we want, depending on the nature of the documents. For example, emails could contain `title` and `text` fields."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "documents = [\n",
+ " {\n",
+ " \"data\": {\n",
+ " \"text\": \"Reimbursing Travel Expenses: Easily manage your travel expenses by submitting them through our finance tool. Approvals are prompt and straightforward.\"\n",
+ " }\n",
+ " },\n",
+ " {\n",
+ " \"data\": {\n",
+ " \"text\": \"Working from Abroad: Working remotely from another country is possible. Simply coordinate with your manager and ensure your availability during core hours.\"\n",
+ " }\n",
+ " },\n",
+ " {\n",
+ " \"data\": {\n",
+ " \"text\": \"Health and Wellness Benefits: We care about your well-being and offer gym memberships, on-site yoga classes, and comprehensive health insurance.\"\n",
+ " }\n",
+ " }\n",
+ "]"
+ ]
+ },
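+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For instance, a document with multiple fields might look like the sketch below (a hypothetical example, reusing one of the emails from the earlier tutorials):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A sketch of documents with more than one field (hypothetical example)\n",
+ "email_documents = [\n",
+ "    {\n",
+ "        \"data\": {\n",
+ "            \"title\": \"Setting Up Your IT Needs\",\n",
+ "            \"text\": \"Please refer to the attached guide to set up your work accounts.\"\n",
+ "        }\n",
+ "    }\n",
+ "]"
+ ]
+ },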
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To call the Chat API with RAG, pass the following parameters at a minimum. This tells the model to run in RAG-mode and use these documents in its response.\n",
+ "\n",
+ "- `model` for the model ID\n",
+ "- `messages` for the user's query.\n",
+ "- `documents` for defining the documents.\n",
+ "\n",
+ "Let's create a query asking about the company's support for personal well-being, which is not going to be available to the model based on the data its trained on. It will need to use external documents.\n",
+ "\n",
+ "RAG introduces additional objects in the Chat response. One of them is `citations`, which contains details about:\n",
+ "- specific text spans from the retrieved documents on which the response is grounded.\n",
+ "- the documents referenced in the citations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Add the user query\n",
+ "query = \"Are there health benefits?\"\n",
+ "\n",
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{'role': 'user', 'content': query}],\n",
+ " documents=documents)\n",
+ "\n",
+ "# Display the response\n",
+ "print(response.message.content[0].text)\n",
+ "\n",
+ "# Display the citations and source documents\n",
+ "if response.message.citations:\n",
+ " print(\"\\nCITATIONS:\")\n",
+ " for citation in response.message.citations:\n",
+ " print(citation, \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Search query generation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The previous example showed how to get started with RAG, and in particular, the augmented generation portion of RAG. But as its name implies, RAG consists of other steps, such as retrieval. \n",
+ "\n",
+ "In a basic RAG application, the steps involved are:\n",
+ "\n",
+ "- Transforming the user message into search queries\n",
+ "- Retrieving relevant documents for a given search query\n",
+ "- Generating the response and citations\n",
+ "\n",
+ "Let's now look at the first step—search query generation. The chatbot needs to generate an optimal set of search queries to use for retrieval. \n",
+ "\n",
+ "There are different possible approaches to this. In this example, we'll take a [tool use](v2/docs/tool-use) approach.\n",
+ "\n",
+ "Here, we build a tool that takes a user query and returns a list of relevant document snippets for that query. The tool can generate zero, one or multiple search queries depending on the user query.\n",
+ "\n",
+ "We'll not do a detailed walkthrough of tool use here as we'll do that in the next tutorial, which is a dedicated tutorial on tool use.\n",
+ "\n",
+ "We recommend using the v1 API for this functionality in order to leverage the `force_single_step` feature, as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "co_v1 = cohere.Client(api_key=\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys\n",
+ "\n",
+ "def generate_search_queries(message: str) -> List[str]:\n",
+ " \n",
+ " # Define the query generation tool\n",
+ " query_gen_tool = [\n",
+ " {\n",
+ " \"name\": \"internet_search\",\n",
+ " \"description\": \"Returns a list of relevant document snippets for a textual query retrieved from the internet\",\n",
+ " \"parameter_definitions\": {\n",
+ " \"queries\": {\n",
+ " \"description\": \"a list of queries to search the internet with.\",\n",
+ " \"type\": \"List[str]\",\n",
+ " \"required\": True\n",
+ " }\n",
+ " }\n",
+ " }]\n",
+ "\n",
+ " # Define a preamble to optimize search query generation\n",
+ " instructions = \"Write a search query that will find helpful information for answering the user's question accurately. If you need more than one search query, write a list of search queries. If you decide that a search is very unlikely to find information that would be useful in constructing a response to the user, you should instead directly answer.\"\n",
+ "\n",
+ " # Generate search queries (if any)\n",
+ " search_queries = []\n",
+ "\n",
+ " response = co_v1.chat(\n",
+ " preamble=instructions,\n",
+ " model=\"command-r-08-2024\",\n",
+ " message=message,\n",
+ " force_single_step=True,\n",
+ " tools=query_gen_tool\n",
+ " )\n",
+ "\n",
+ " if response.tool_calls:\n",
+ " search_queries = response.tool_calls[0].parameters[\"queries\"]\n",
+ "\n",
+ " return search_queries"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In the example above, the tool breaks down the user message into two separate queries."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "['how to stay connected with the company', 'does the company organize team events']\n"
+ ]
+ }
+ ],
+ "source": [
+ "query = \"How to stay connected with the company, and do you organize team events?\"\n",
+ "queries_for_search = generate_search_queries(query)\n",
+ "print(queries_for_search)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And in the example below, the tool decides that one query is sufficient."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "['how flexible are the working hours at the company']\n"
+ ]
+ }
+ ],
+ "source": [
+ "query = \"How flexible are the working hours\"\n",
+ "queries_for_search = generate_search_queries(query)\n",
+ "print(queries_for_search)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And in the example below, the tool decides that no retrieval is needed to answer the query."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n"
+ ]
+ }
+ ],
+ "source": [
+ "query = \"What is 2 + 2\"\n",
+ "queries_for_search = generate_search_queries(query)\n",
+ "print(queries_for_search)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Retrieval with Embed"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Given the search query, we need a way to retrieve the most relevant documents from a large collection of documents.\n",
+ "\n",
+ "This is where we can leverage text embeddings through the Embed endpoint. It enables semantic search, which lets us to compare the semantic meaning of the documents and the query. It solves the problem faced by the more traditional approach of lexical search, which is great at finding keyword matches, but struggles at capturing the context or meaning of a piece of text.\n",
+ "\n",
+ "The Embed endpoint takes in texts as input and returns embeddings as output.\n",
+ "\n",
+ "First, we need to embed the documents to search from. We call the Embed endpoint using `co.embed()` and pass the following arguments:\n",
+ "\n",
+ "- `model`: Here we choose `embed-english-v3.0`, which generates embeddings of size 1024\n",
+ "- `input_type`: We choose `search_document` to ensure the model treats these as the documents (instead of the query) for search\n",
+ "- `texts`: The list of texts (the FAQs)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the documents\n",
+ "faqs_long = [\n",
+ " {\"text\": \"Joining Slack Channels: You will receive an invite via email. Be sure to join relevant channels to stay informed and engaged.\"},\n",
+ " {\"text\": \"Finding Coffee Spots: For your caffeine fix, head to the break room's coffee machine or cross the street to the café for artisan coffee.\"},\n",
+ " {\"text\": \"Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!\"},\n",
+ " {\"text\": \"Working Hours Flexibility: We prioritize work-life balance. While our core hours are 9 AM to 5 PM, we offer flexibility to adjust as needed.\"},\n",
+ " {\"text\": \"Side Projects Policy: We encourage you to pursue your passions. Just be mindful of any potential conflicts of interest with our business.\"},\n",
+ " {\"text\": \"Reimbursing Travel Expenses: Easily manage your travel expenses by submitting them through our finance tool. Approvals are prompt and straightforward.\"},\n",
+ " {\"text\": \"Working from Abroad: Working remotely from another country is possible. Simply coordinate with your manager and ensure your availability during core hours.\"},\n",
+ " {\"text\": \"Health and Wellness Benefits: We care about your well-being and offer gym memberships, on-site yoga classes, and comprehensive health insurance.\"},\n",
+ " {\"text\": \"Performance Reviews Frequency: We conduct informal check-ins every quarter and formal performance reviews twice a year.\"},\n",
+ " {\"text\": \"Proposing New Ideas: Innovation is welcomed! Share your brilliant ideas at our weekly team meetings or directly with your team lead.\"},\n",
+ "]\n",
+ "\n",
+ "# Embed the documents\n",
+ "doc_emb = co.embed(\n",
+ " model=\"embed-english-v3.0\",\n",
+ " input_type=\"search_document\",\n",
+ " texts=[doc['text'] for doc in faqs_long],\n",
+ " embedding_types=[\"float\"]).embeddings.float"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Next, we add a query, which asks about how to get to know the team.\n",
+ "\n",
+ "We choose `search_query` as the `input_type` to ensure the model treats this as the query (instead of the documents) for search."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Search query: how to get to know teammates\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Add the user query\n",
+ "query = \"How to get to know my teammates\"\n",
+ "\n",
+ "# Generate the search query\n",
+ "# Note: For simplicity, we are assuming only one query generated. For actual implementations, you will need to perform search for each query.\n",
+ "queries_for_search = generate_search_queries(query)[0]\n",
+ "print(\"Search query: \", queries_for_search)\n",
+ "\n",
+ "# Embed the search query\n",
+ "query_emb = co.embed(\n",
+ " model=\"embed-english-v3.0\",\n",
+ " input_type=\"search_query\",\n",
+ " texts=[queries_for_search],\n",
+ " embedding_types=[\"float\"]).embeddings.float"
+ ]
+ },
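+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a sketch of what a full implementation would do, the loop below runs the embed-and-search step once per generated query and pools the results, de-duplicating documents that appear for more than one query."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A sketch: embed and search once per generated query, then pool the results\n",
+ "all_hits = []\n",
+ "for q in generate_search_queries(query):\n",
+ "    q_emb = co.embed(\n",
+ "        model=\"embed-english-v3.0\",\n",
+ "        input_type=\"search_query\",\n",
+ "        texts=[q],\n",
+ "        embedding_types=[\"float\"]).embeddings.float\n",
+ "\n",
+ "    # Keep the top 5 documents for this query\n",
+ "    q_scores = np.dot(q_emb, np.transpose(doc_emb))[0]\n",
+ "    all_hits.extend(faqs_long[i] for i in np.argsort(-q_scores)[:5])\n",
+ "\n",
+ "# De-duplicate by text while preserving order\n",
+ "pooled_documents = list({doc[\"text\"]: doc for doc in all_hits}.values())"
+ ]
+ },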
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, we want to search for the most relevant documents to the query. For this, we make use of the `numpy` library to compute the similarity between each query-document pair using the dot product approach.\n",
+ "\n",
+ "Each query-document pair returns a score, which represents how similar the pair are. We then sort these scores in descending order and select the top most similar pairs, which we choose 5 (this is an arbitrary choice, you can choose any number).\n",
+ "\n",
+ "Here, we show the most relevant documents with their similarity scores."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.34212792245283796\n",
+ "Document: {'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 0.2883222063024371\n",
+ "Document: {'text': 'Proposing New Ideas: Innovation is welcomed! Share your brilliant ideas at our weekly team meetings or directly with your team lead.'}\n",
+ "\n",
+ "Rank: 3\n",
+ "Score: 0.278128283997032\n",
+ "Document: {'text': 'Joining Slack Channels: You will receive an invite via email. Be sure to join relevant channels to stay informed and engaged.'}\n",
+ "\n",
+ "Rank: 4\n",
+ "Score: 0.19474858706643985\n",
+ "Document: {'text': \"Finding Coffee Spots: For your caffeine fix, head to the break room's coffee machine or cross the street to the café for artisan coffee.\"}\n",
+ "\n",
+ "Rank: 5\n",
+ "Score: 0.13713692506528824\n",
+ "Document: {'text': 'Side Projects Policy: We encourage you to pursue your passions. Just be mindful of any potential conflicts of interest with our business.'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Compute dot product similarity and display results\n",
+ "n = 5\n",
+ "scores = np.dot(query_emb, np.transpose(doc_emb))[0]\n",
+ "max_idx = np.argsort(-scores)[:n]\n",
+ "\n",
+ "retrieved_documents = [faqs_long[item] for item in max_idx]\n",
+ "\n",
+ "for rank, idx in enumerate(max_idx):\n",
+ " print(f\"Rank: {rank+1}\")\n",
+ " print(f\"Score: {scores[idx]}\")\n",
+ " print(f\"Document: {retrieved_documents[rank]}\\n\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Reranking can boost the results from semantic or lexical search further. The Rerank endpoint takes a list of search results and reranks them according to the most relevant documents to a query. This requires just a single line of code to implement.\n",
+ "\n",
+ "We call the endpoint using `co.rerank()` and pass the following arguments:\n",
+ "\n",
+ "- `query`: The user query\n",
+ "- `documents`: The list of documents we get from the semantic search results\n",
+ "- `top_n`: The top reranked documents to select\n",
+ "- `model`: We choose Rerank English 3\n",
+ "\n",
+ "Looking at the results, we see that since the query is about getting to know the team, the document that talks about joining Slack channels is now ranked higher (1st) compared to earlier (3rd).\n",
+ "\n",
+ "Here we select `top_n` to be 2, which will be the documents we will pass next for response generation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Rank: 1\n",
+ "Score: 0.0020507434\n",
+ "Document: {'text': 'Joining Slack Channels: You will receive an invite via email. Be sure to join relevant channels to stay informed and engaged.'}\n",
+ "\n",
+ "Rank: 2\n",
+ "Score: 0.0014158706\n",
+ "Document: {'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Rerank the documents\n",
+ "results = co.rerank(query=queries_for_search,\n",
+ " documents=retrieved_documents,\n",
+ " top_n=2,\n",
+ " model='rerank-english-v3.0')\n",
+ "\n",
+ "# Display the reranking results\n",
+ "for idx, result in enumerate(results.results):\n",
+ " print(f\"Rank: {idx+1}\") \n",
+ " print(f\"Score: {result.relevance_score}\")\n",
+ " print(f\"Document: {retrieved_documents[result.index]}\\n\")\n",
+ " \n",
+ "reranked_documents = [retrieved_documents[result.index] for result in results.results]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Finally we reach the step that we saw in the earlier `Basic RAG` section.\n",
+ "\n",
+ "To call the Chat API with RAG, we pass the following parameters. This tells the model to run in RAG-mode and use these documents in its response.\n",
+ "\n",
+ "- `model` for the model ID\n",
+ "- `messages` for the user's query.\n",
+ "- `documents` for defining the documents.\n",
+ "\n",
+ "The response is then generated based on the the query and the documents retrieved.\n",
+ "\n",
+ "RAG introduces additional objects in the Chat response. One of them is `citations`, which contains details about:\n",
+ "- specific text spans from the retrieved documents on which the response is grounded.\n",
+ "- the documents referenced in the citations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "You can get to know your teammates by joining your company's Slack channels and taking part in team-building activities. You will receive an invite to join relevant Slack channels via email. Your company also organises monthly outings and weekly game nights to foster team spirit.\n",
+ "\n",
+ "CITATIONS:\n",
+ "start=38 end=75 text=\"joining your company's Slack channels\" sources=[Source_Document(id='doc:0:0', document={'id': 'doc:0:0', 'text': 'Joining Slack Channels: You will receive an invite via email. Be sure to join relevant channels to stay informed and engaged.'}, type='document')] \n",
+ "\n",
+ "start=95 end=120 text='team-building activities.' sources=[Source_Document(id='doc:0:1', document={'id': 'doc:0:1', 'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}, type='document')] \n",
+ "\n",
+ "start=130 end=190 text='receive an invite to join relevant Slack channels via email.' sources=[Source_Document(id='doc:0:0', document={'id': 'doc:0:0', 'text': 'Joining Slack Channels: You will receive an invite via email. Be sure to join relevant channels to stay informed and engaged.'}, type='document')] \n",
+ "\n",
+ "start=219 end=280 text='monthly outings and weekly game nights to foster team spirit.' sources=[Source_Document(id='doc:0:1', document={'id': 'doc:0:1', 'text': 'Team-Building Activities: We foster team spirit with monthly outings and weekly game nights. Feel free to suggest new activity ideas anytime!'}, type='document')] \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Generate the response\n",
+ "response = co.chat(model=\"command-r-plus-08-2024\",\n",
+ " messages=[{'role': 'user', 'content': query}],\n",
+ " documents=reranked_documents)\n",
+ "\n",
+ "# Display the response\n",
+ "print(response.message.content[0].text)\n",
+ "\n",
+ "# Display the citations and source documents\n",
+ "if response.message.citations:\n",
+ " print(\"\\nCITATIONS:\")\n",
+ " for citation in response.message.citations:\n",
+ " print(citation, \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/guides/getting-started/v2/tutorial_pt7_v2.ipynb b/notebooks/guides/getting-started/v2/tutorial_pt7_v2.ipynb
new file mode 100644
index 0000000..aa7779c
--- /dev/null
+++ b/notebooks/guides/getting-started/v2/tutorial_pt7_v2.ipynb
@@ -0,0 +1,554 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Agents with Tool Use"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Tool use extends the ideas from RAG, where external systems are used to guide the response of an LLM, but by leveraging a much bigger set of tools than what’s possible with RAG. The concept of tool use leverages LLMs' useful feature of being able to act as a reasoning and decision-making engine.\n",
+ "\n",
+ "While RAG enables applications that can *answer questions*, tool use enables those that can *automate tasks*.\n",
+ "\n",
+ "Tool use also enables developers to build agentic applications that can take actions, that is, doing both read and write operations on an external system.\n",
+ "\n",
+ "In this tutorial, you'll learn about:\n",
+ "- Creating tools\n",
+ "- Tool planning and calling\n",
+ "- Tool execution\n",
+ "- Response and citation generation\n",
+ "- Multi-step tool use\n",
+ "\n",
+ "You'll learn these by building an onboarding assistant for new hires."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "To get started, first we need to install the `cohere` library and create a Cohere client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# pip install cohere\n",
+ "\n",
+ "import cohere\n",
+ "import json\n",
+ "\n",
+ "co = cohere.ClientV2(api_key=\"COHERE_API_KEY\") # Get your free API key: https://dashboard.cohere.com/api-keys"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Creating tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The pre-requisite, before we can run a tool use workflow, is to set up the tools. Let's create three tools:\n",
+ "- `search_faqs`: A tool for searching the FAQs. For simplicity, we'll not implement any retrieval logic, but we'll simply pass a list of pre-defined documents, which are the FAQ documents we had used in the Text Embeddings section.\n",
+ "- `search_emails`: A tool for searching the emails. Same as above, we'll simply pass a list of pre-defined emails from the Reranking section.\n",
+ "- `create_calendar_event`: A tool for creating new calendar events. Again, for simplicity, we'll not implement actual event bookings, but will return a mock success event. In practice, we can connect to a calendar service API and implement all the necessary logic here.\n",
+ "\n",
+ "Here, we are defining a Python function for each tool, but more broadly, the tool can be any function or service that can receive and send objects."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Create the tools\n",
+ "def search_faqs(query):\n",
+ " faqs = [\n",
+ " {\"text\": \"Reimbursing Travel Expenses: Easily manage your travel expenses by submitting them through our finance tool. Approvals are prompt and straightforward.\"},\n",
+ " {\"text\": \"Working from Abroad: Working remotely from another country is possible. Simply coordinate with your manager and ensure your availability during core hours.\"}\n",
+ " ]\n",
+ " return {\"faqs\" : faqs}\n",
+ "\n",
+ "def search_emails(query):\n",
+ " emails = [\n",
+ " {\"from\": \"it@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"Setting Up Your IT Needs\", \"text\": \"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\"},\n",
+ " {\"from\": \"john@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"First Week Check-In\", \"text\": \"Hello! I hope you're settling in well. Let's connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it's a great opportunity to get to know your colleagues!\"}\n",
+ " ]\n",
+ " return {\"emails\" : emails}\n",
+ " \n",
+ "def create_calendar_event(date: str, time: str, duration: int):\n",
+ " # You can implement any logic here\n",
+ " return {\"is_success\": True,\n",
+ " \"message\": f\"Created a {duration} hour long event at {time} on {date}\"}\n",
+ " \n",
+ "functions_map = {\n",
+ " \"search_faqs\": search_faqs,\n",
+ " \"search_emails\": search_emails,\n",
+ " \"create_calendar_event\": create_calendar_event\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The second and final setup step is to define the tool schemas in a format that can be passed to the Chat endpoint. The schema must contain the following fields: `name`, `description`, and `parameters` in the format shown below. \n",
+ "\n",
+ "This schema informs the LLM about what the tool does, and the LLM decides whether to use a particular tool based on it. Therefore, the more descriptive and specific the schema, the more likely the LLM will make the right tool call decisions."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Further reading:\n",
+ "- [Documentation on parameter types in tool use](https://docs.cohere.com/v2/docs/parameter-types-in-tool-use)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the tools\n",
+ "tools = [\n",
+ " {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"search_faqs\",\n",
+ " \"description\": \"Given a user query, searches a company's frequently asked questions (FAQs) list and returns the most relevant matches to the query.\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"query\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The query from the user\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"query\"]\n",
+ " }\n",
+ " }\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"search_emails\",\n",
+ " \"description\": \"Given a user query, searches a person's emails and returns the most relevant matches to the query.\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"query\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The query from the user\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"query\"]\n",
+ " }\n",
+ " }\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"create_calendar_event\",\n",
+ " \"description\": \"Creates a new calendar event of the specified duration at the specified time and date. A new event cannot be created on the same time as an existing event.\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"date\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"the date on which the event starts, formatted as mm/dd/yy\"\n",
+ " },\n",
+ " \"time\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"the time of the event, formatted using 24h military time formatting\"\n",
+ " },\n",
+ " \"duration\": {\n",
+ " \"type\": \"number\",\n",
+ " \"description\": \"the number of hours the event lasts for\"\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"date\", \"time\", \"duration\"]\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Tool planning and calling"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can now run the tool use workflow. We can think of a tool use system as consisting of four components:\n",
+ "- The user\n",
+ "- The application\n",
+ "- The LLM\n",
+ "- The tools\n",
+ "\n",
+ "At its most basic, these four components interact in a workflow through four steps:\n",
+ "- **Step 1: Get user message** – The LLM gets the user message (via the application)\n",
+ "- **Step 2: Tool planning and calling** – The LLM makes a decision on the tools to call (if any) and generates - the tool calls\n",
+ "- **Step 3: Tool execution** - The application executes the tools and the results are sent to the LLM\n",
+ "- **Step 4: Response and citation generation** – The LLM generates the response and citations to back to the user"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tool plan:\n",
+ "I will search the user's emails for any messages about getting set up with IT. \n",
+ "\n",
+ "Tool calls:\n",
+ "Tool name: search_emails | Parameters: {\"query\":\"IT setup\"}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Create custom system message\n",
+ "system_message=\"\"\"## Task and Context\n",
+ "You are an assistant who assist new employees of Co1t with their first week. You respond to their questions and assist them with their needs. Today is Monday, June 24, 2024\"\"\"\n",
+ "\n",
+ "\n",
+ "# Step 1: Get user message\n",
+ "message = \"Is there any message about getting setup with IT?\"\n",
+ "\n",
+ "# Add the system and user messages to the chat history\n",
+ "messages = [{\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": message}]\n",
+ "\n",
+ "# Step 2: Tool planning and calling\n",
+ "response = co.chat(\n",
+ " model=\"command-r-plus-08-2024\",\n",
+ " messages=messages,\n",
+ " tools=tools\n",
+ " )\n",
+ "\n",
+ "if response.message.tool_calls: \n",
+ " print(\"Tool plan:\")\n",
+ " print(response.message.tool_plan,\"\\n\")\n",
+ " print(\"Tool calls:\")\n",
+ " for tc in response.message.tool_calls:\n",
+ " print(f\"Tool name: {tc.function.name} | Parameters: {tc.function.arguments}\")\n",
+ " \n",
+ " # Append tool calling details to the chat history\n",
+ " messages.append({\"role\": \"assistant\", \"tool_calls\": response.message.tool_calls, \"tool_plan\": response.message.tool_plan})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Given three tools to choose from, the model is able to pick the right tool (in this case, `search_emails`) based on what the user is asking for.\n",
+ "\n",
+ "Also, notice that the model first generates a plan about what it should do (\"I will do ...\") before actually generating the tool call(s)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Tool execution"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tool results:\n",
+ "{\"emails\": [{\"from\": \"it@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"Setting Up Your IT Needs\", \"text\": \"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\"}, {\"from\": \"john@co1t.com\", \"to\": \"david@co1t.com\", \"date\": \"2024-06-24\", \"subject\": \"First Week Check-In\", \"text\": \"Hello! I hope you're settling in well. Let's connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon\\u2014it's a great opportunity to get to know your colleagues!\"}]}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Step 3: Tool execution\n",
+ "tool_content = []\n",
+ "for tc in response.message.tool_calls:\n",
+ " tool_result = functions_map[tc.function.name](**json.loads(tc.function.arguments))\n",
+ " tool_content.append(json.dumps(tool_result))\n",
+ " # Append tool results to the chat history\n",
+ " messages.append({\"role\": \"tool\", \"tool_call_id\": tc.id, \"tool_content\": tool_content}) \n",
+ "\n",
+ "print(\"Tool results:\") \n",
+ "for result in tool_content:\n",
+ " print(result)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Response and citation generation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Response:\n",
+ "Yes, there is an email from IT with a comprehensive guide attached.\n",
+ "==================================================\n",
+ "\n",
+ "CITATIONS:\n",
+ "start=17 end=30 text='email from IT' sources=[Source_Tool(type='tool', id='search_emails_dy73yjrx50xq:0', tool_output={'emails': '[{\"date\":\"2024-06-24\",\"from\":\"it@co1t.com\",\"subject\":\"Setting Up Your IT Needs\",\"text\":\"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\",\"to\":\"david@co1t.com\"},{\"date\":\"2024-06-24\",\"from\":\"john@co1t.com\",\"subject\":\"First Week Check-In\",\"text\":\"Hello! I hope you\\'re settling in well. Let\\'s connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it\\'s a great opportunity to get to know your colleagues!\",\"to\":\"david@co1t.com\"}]'})] \n",
+ "\n",
+ "start=38 end=66 text='comprehensive guide attached' sources=[Source_Tool(type='tool', id='search_emails_dy73yjrx50xq:0', tool_output={'emails': '[{\"date\":\"2024-06-24\",\"from\":\"it@co1t.com\",\"subject\":\"Setting Up Your IT Needs\",\"text\":\"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\",\"to\":\"david@co1t.com\"},{\"date\":\"2024-06-24\",\"from\":\"john@co1t.com\",\"subject\":\"First Week Check-In\",\"text\":\"Hello! I hope you\\'re settling in well. Let\\'s connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it\\'s a great opportunity to get to know your colleagues!\",\"to\":\"david@co1t.com\"}]'})] \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Step 4: Response and citation generation\n",
+ "response = co.chat(\n",
+ " model=\"command-r-plus-08-2024\",\n",
+ " messages=messages,\n",
+ " tools=tools\n",
+ ")\n",
+ "\n",
+ "# Append assistant response to the chat history\n",
+ "messages.append({\"role\": \"assistant\", \"content\": response.message.content[0].text})\n",
+ "\n",
+ "# Print final response\n",
+ "print(\"Response:\")\n",
+ "print(response.message.content[0].text)\n",
+ "print(\"=\"*50)\n",
+ "\n",
+ "# Print citations (if any)\n",
+ "if response.message.citations:\n",
+ " print(\"\\nCITATIONS:\")\n",
+ " for citation in response.message.citations:\n",
+ " print(citation, \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Multi-step tool use"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The model can execute more complex tasks in tool use – tasks that require tool calls to happen in a sequence. This is referred to as \"multi-step\" tool use. \n",
+ "\n",
+ "Let's create a function to called `run_assistant` to implement these steps, and along the way, print out the key events and messages. Optionally, this function also accepts the chat history as an argument to keep the state in a multi-turn conversation. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model = \"command-r-plus-08-2024\"\n",
+ "\n",
+ "system_message=\"\"\"## Task and Context\n",
+ "You are an assistant who assists new employees of Co1t with their first week. You respond to their questions and assist them with their needs. Today is Monday, June 24, 2024\"\"\"\n",
+ "\n",
+ "def run_assistant(query, messages=None):\n",
+ " if messages is None:\n",
+ " messages = []\n",
+ " \n",
+ " if \"system\" not in {m.get(\"role\") for m in messages}:\n",
+ " messages.append({\"role\": \"system\", \"content\": system_message})\n",
+ " \n",
+ " # Step 1: get user message\n",
+ " print(f\"Question:\\n{query}\")\n",
+ " print(\"=\"*50)\n",
+ " \n",
+ " messages.append({\"role\": \"user\", \"content\": query})\n",
+ "\n",
+ " # Step 2: Generate tool calls (if any)\n",
+ " response = co.chat(\n",
+ " model=model,\n",
+ " messages=messages,\n",
+ " tools=tools\n",
+ " )\n",
+ "\n",
+ " while response.message.tool_calls:\n",
+ " \n",
+ " print(\"Tool plan:\")\n",
+ " print(response.message.tool_plan,\"\\n\")\n",
+ " print(\"Tool calls:\")\n",
+ " for tc in response.message.tool_calls:\n",
+ " print(f\"Tool name: {tc.function.name} | Parameters: {tc.function.arguments}\")\n",
+ " print(\"=\"*50)\n",
+ "\n",
+ " messages.append({\"role\": \"assistant\", \"tool_calls\": response.message.tool_calls, \"tool_plan\": response.message.tool_plan}) \n",
+ " \n",
+ " # Step 3: Get tool results\n",
+ " tool_content = []\n",
+ " for idx, tc in enumerate(response.message.tool_calls):\n",
+ " tool_result = functions_map[tc.function.name](**json.loads(tc.function.arguments))\n",
+ " tool_content.append(json.dumps(tool_result))\n",
+ " messages.append({\"role\": \"tool\", \"tool_call_id\": tc.id, \"tool_content\": tool_content})\n",
+ " \n",
+ " # Step 4: Generate response and citations \n",
+ " response = co.chat(\n",
+ " model=model,\n",
+ " messages=messages,\n",
+ " tools=tools\n",
+ " )\n",
+ " \n",
+ " messages.append({\"role\": \"assistant\", \"content\": response.message.content[0].text})\n",
+ " \n",
+ " # Print final response\n",
+ " print(\"Response:\")\n",
+ " print(response.message.content[0].text)\n",
+ " print(\"=\"*50)\n",
+ " \n",
+ " # Print citations (if any)\n",
+ " if response.message.citations:\n",
+ " print(\"\\nCITATIONS:\")\n",
+ " for citation in response.message.citations:\n",
+ " print(citation, \"\\n\")\n",
+ " \n",
+ " return messages"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To illustrate the concept of multi-step tool user, let's ask the assistant to block time for any lunch invites received in the email.\n",
+ "\n",
+ "This requires tasks to happen over multiple steps in a sequence. Here, we see the assistant running these steps:\n",
+ "- First, it calls the `search_emails` tool to find any lunch invites, which it found one.\n",
+ "- Next, it calls the `create_calendar_event` tool to create an event to block the person's calendar on the day mentioned by the email.\n",
+ "\n",
+ "This is also an example of tool use enabling a write operation instead of just a read operation that we saw with RAG."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Question:\n",
+ "Can you check if there are any lunch invites, and for those days, create a one-hour event on my calendar at 12PM.\n",
+ "==================================================\n",
+ "Tool plan:\n",
+ "I will search the user's emails for lunch invites and then create a calendar event for each day they are invited to lunch. \n",
+ "\n",
+ "Tool calls:\n",
+ "Tool name: search_emails | Parameters: {\"query\":\"lunch invite\"}\n",
+ "==================================================\n",
+ "Tool plan:\n",
+ "I have found an email inviting the user to a welcoming lunch on Thursday at noon. I will now create a calendar event for this. \n",
+ "\n",
+ "Tool calls:\n",
+ "Tool name: create_calendar_event | Parameters: {\"date\":\"06/27/24\",\"duration\":1,\"time\":\"12:00\"}\n",
+ "==================================================\n",
+ "Response:\n",
+ "Sure, I found an email from John inviting you to a welcoming lunch this Thursday at noon. I've created a one-hour event on your calendar for this Thursday at 12 pm.\n",
+ "==================================================\n",
+ "\n",
+ "CITATIONS:\n",
+ "start=17 end=32 text='email from John' sources=[Source_Tool(type='tool', id='search_emails_j72zv2xhq0sj:0', tool_output={'emails': '[{\"date\":\"2024-06-24\",\"from\":\"it@co1t.com\",\"subject\":\"Setting Up Your IT Needs\",\"text\":\"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\",\"to\":\"david@co1t.com\"},{\"date\":\"2024-06-24\",\"from\":\"john@co1t.com\",\"subject\":\"First Week Check-In\",\"text\":\"Hello! I hope you\\'re settling in well. Let\\'s connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it\\'s a great opportunity to get to know your colleagues!\",\"to\":\"david@co1t.com\"}]'})] \n",
+ "\n",
+ "start=51 end=88 text='welcoming lunch this Thursday at noon' sources=[Source_Tool(type='tool', id='search_emails_j72zv2xhq0sj:0', tool_output={'emails': '[{\"date\":\"2024-06-24\",\"from\":\"it@co1t.com\",\"subject\":\"Setting Up Your IT Needs\",\"text\":\"Greetings! To ensure a seamless start, please refer to the attached comprehensive guide, which will assist you in setting up all your work accounts.\",\"to\":\"david@co1t.com\"},{\"date\":\"2024-06-24\",\"from\":\"john@co1t.com\",\"subject\":\"First Week Check-In\",\"text\":\"Hello! I hope you\\'re settling in well. Let\\'s connect briefly tomorrow to discuss how your first week has been going. Also, make sure to join us for a welcoming lunch this Thursday at noon—it\\'s a great opportunity to get to know your colleagues!\",\"to\":\"david@co1t.com\"}]'})] \n",
+ "\n",
+ "start=105 end=163 text='one-hour event on your calendar for this Thursday at 12 pm' sources=[Source_Tool(type='tool', id='create_calendar_event_vs7mxjzk9jzs:0', tool_output={'is_success': 'true', 'message': 'Created a 1 hour long event at 12:00 on 06/27/24'})] \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "messages = run_assistant(\"Can you check if there are any lunch invites, and for those days, create a one-hour event on my calendar at 12PM.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this tutorial, you learned about:\n",
+ "- How to create tools\n",
+ "- How tool planning and calling happens\n",
+ "- How tool execution happens\n",
+ "- How to generate the response and citations\n",
+ "- How to run tool use in a multi-step scenario\n",
+ "\n",
+ "And that concludes our 7-part Cohere tutorial. We hope that they have provided you with a foundational understanding of the Cohere API, the available models and endpoints, and the types of use cases that you can build with them.\n",
+ "\n",
+ "To continue your learning, check out:\n",
+ "- [LLM University - A range of courses and step-by-step guides to help you start building](https://cohere.com/llmu)\n",
+ "- [Cookbooks - A collection of basic to advanced example applications](https://docs.cohere.com/page/cookbooks)\n",
+ "- [Cohere's documentation](https://docs.cohere.com/docs/the-cohere-platform)\n",
+ "- [The Cohere API reference](https://docs.cohere.com/reference/about)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "base"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}