
Commit

Sync files
jgbradley1 authored Jun 6, 2024
1 parent c47ec05 commit a21685c
Showing 5 changed files with 242 additions and 139 deletions.
34 changes: 34 additions & 0 deletions TRANSPARENCY.md
@@ -0,0 +1,34 @@
# GraphRAG: Responsible AI FAQ

## What is GraphRAG?
GraphRAG is an AI-based content interpretation and search capability. Using LLMs, it parses data to create a knowledge graph and answer user questions about a user-provided private dataset.

## What can GraphRAG do?
GraphRAG is able to connect information across large volumes of text and use these connections to answer questions that are difficult or impossible to answer using keyword- and vector-based search mechanisms. This lets a system using GraphRAG answer questions whose answers span many documents, as well as thematic questions such as “What are the top themes in this dataset?”

## What are GraphRAG’s intended use(s)?
GraphRAG is intended to support critical information discovery and analysis use cases where the information required to arrive at a useful insight spans many documents, is noisy, or is mixed with misinformation and/or disinformation, or where the questions users aim to answer are more abstract or thematic than the underlying data can directly answer.

GraphRAG is designed to be used in settings where users are already trained on responsible analytic approaches and where critical reasoning is expected. GraphRAG is capable of providing high degrees of insight on complex information topics; however, human analysis of the answers by a domain expert is needed to verify and augment GraphRAG’s generated responses.

GraphRAG is intended to be deployed and used with a domain-specific corpus of text data. GraphRAG itself does not collect user data, but users are encouraged to verify the data privacy policies of the LLM they configure GraphRAG to use.

## How was GraphRAG evaluated? What metrics are used to measure performance?

GraphRAG has been evaluated in multiple ways. The primary concerns are 1) accurate representation of the dataset, 2) providing transparency and groundedness of responses, 3) resilience to prompt and data corpus injection attacks, and 4) low hallucination rates. Details on how each of these has been evaluated are outlined below by number.
1. Accurate representation of the dataset has been tested by both manual inspection and automated testing against a “gold answer” created from randomly selected subsets of a test corpus.
1. Transparency and groundedness of responses are tested via automated answer coverage evaluation and human inspection of the underlying context returned.
1. We test both user prompt injection attacks (“jailbreaks”) and cross-prompt injection attacks (“data attacks”) using manual and semi-automated techniques.
1. Hallucination rates are evaluated using claim coverage metrics, manual inspection of answers and sources, and adversarial attacks that attempt to force hallucinations using exceptionally challenging datasets.

## What are the limitations of GraphRAG? How can users minimize the impact of GraphRAG’s limitations when using the system?
GraphRAG depends on well-constructed indexing prompts. For general applications (e.g., content oriented around people, places, organizations, things, etc.) we provide example indexing prompts. For unique datasets, effective indexing can depend on proper identification of domain-specific concepts.

Indexing is a relatively expensive operation; a best practice for mitigating indexing cost is to create a small test dataset in the target domain to verify indexer performance prior to large indexing operations.

## What operational factors and settings allow for effective and responsible use of GraphRAG?
GraphRAG is designed for use by users with domain sophistication and experience working through difficult information challenges. While the approach is generally robust to injection attacks and to identifying conflicting sources of information, the system is designed for trusted users. Proper human analysis of responses is important to generate reliable insights, and the provenance of information should be traced to ensure human agreement with the inferences made as part of answer generation.

GraphRAG yields the most effective results on natural language text data that is collectively focused on an overall topic or theme, and that is entity-rich, where entities are people, places, things, or objects that can be uniquely identified.

While GraphRAG has been evaluated for its resilience to prompt and data corpus injection attacks, and has been probed for specific types of harms, the LLM that the user configures with GraphRAG may produce inappropriate or offensive content, which may make it inappropriate to deploy in sensitive contexts without additional mitigations that are specific to the use case and model. Developers should assess outputs for their context and use available safety classifiers, model-specific safety filters and features (such as [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety)), or custom solutions appropriate for their use case.
10 changes: 10 additions & 0 deletions docs/DEPLOYMENT-GUIDE.md
@@ -39,6 +39,16 @@ Login with Azure CLI and set the appropriate Azure subscription.
> az account set --subscription "<subscription_id>"
```

The Azure subscription that you deploy the accelerator into must have the `Microsoft.OperationsManagement` resource provider registered.
This can be accomplished via the [Portal](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal) or the following [Azure CLI](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types#azure-cli) commands:

```shell
# Register provider
az provider register --namespace Microsoft.OperationsManagement
# Verify provider was registered
az provider show --namespace Microsoft.OperationsManagement -o table
```

## 3. Deploy Azure Container Registry (ACR) and host the `graphrag` docker image in the registry
ACR may be deployed using the [Portal](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli) or [Azure CLI](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).
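
The detailed ACR steps are collapsed in this diff; as a rough sketch of the CLI route, with `<resource_group>`, `<region>`, and `<registry_name>` as placeholders and `graphrag:latest` assumed to be a locally built image, the commands might look like:

```shell
# Create a resource group and a container registry (all names are placeholders)
az group create --name <resource_group> --location <region>
az acr create --resource-group <resource_group> --name <registry_name> --sku Standard

# Authenticate the local docker client against the new registry
az acr login --name <registry_name>

# Tag the locally built graphrag image with the registry's login server and push it
docker tag graphrag:latest <registry_name>.azurecr.io/graphrag:latest
docker push <registry_name>.azurecr.io/graphrag:latest
```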

5 changes: 4 additions & 1 deletion frontend/README.md
@@ -4,6 +4,9 @@ Add the following variables to a `.env` file

* APIM_SUBSCRIPTION_KEY
* DEPLOYMENT_URL
* AI_SEARCH_URL
* AI_SEARCH_KEY
* DEPLOYER_EMAIL
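
As an illustrative sketch only (the exact value formats depend on your deployment), a populated `.env` might look like the following, where every value is a placeholder:

```
APIM_SUBSCRIPTION_KEY=<apim-subscription-key>
DEPLOYMENT_URL=<api-deployment-url>
AI_SEARCH_URL=<ai-search-endpoint-url>
AI_SEARCH_KEY=<ai-search-api-key>
DEPLOYER_EMAIL=<deployer-email-address>
```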

The frontend can run natively as a streamlit app:
@@ -16,4 +19,4 @@ or as a docker container:
> docker run --env-file <env_file> -p 8080:8080 graphrag:frontend
```
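
The `graphrag:frontend` image referenced above is assumed to be built locally beforehand; one way to do that (the build context and Dockerfile location are assumptions and may differ in this repository) is:

```
> docker build -t graphrag:frontend .
```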

To access the app, visit `localhost:8080` in your browser