
LangChain

This guide walks through how to use the ArthurCallbackHandler, an integration that allows you to send LLM inferences to the Arthur platform through LangChain.

Register your ArthurModel

You can skip this step if your generative text model is already registered with Arthur.

If you do not have a model currently onboarded to Arthur, follow the steps in our Generative Text Onboarding Guide.

You will need the ID of your registered model before you create an ArthurCallbackHandler for your LangChain LLM.

Create your LangChain LLM with the ArthurCallbackHandler

First, get your Arthur login info and the ID of your registered ArthurModel:

arthur_url = "https://app.arthur.ai"  
arthur_login = "your-arthur-login-username-here"  
arthur_model_id = "your-arthur-model-id-here"
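
This example uses LangChain's ChatOpenAI chat model, so the OpenAI client also needs an API key. A minimal sketch, assuming you want to set it from your script rather than your shell (the placeholder value below is not a real key):

import os

# ChatOpenAI reads the key from the OPENAI_API_KEY environment variable
# when one is not passed to the constructor explicitly.
os.environ["OPENAI_API_KEY"] = "your-openai-api-key-here"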

Next, we create a LangChain ChatOpenAI LLM, passing your Arthur credential info to the ArthurCallbackHandler.

Note that we also configure the LLM with LangChain's useful StreamingStdOutCallbackHandler, which streams responses token by token instead of returning the entire result at once - this typically makes for a better experience during development and testing.

from langchain.callbacks import ArthurCallbackHandler  
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler  
from langchain.chat_models import ChatOpenAI

# Stream tokens to stdout and log every inference to Arthur
chatgpt = ChatOpenAI(
    streaming=True,
    temperature=0.1,
    callbacks=[
        StreamingStdOutCallbackHandler(),
        ArthurCallbackHandler.from_credentials(
            arthur_model_id,
            arthur_url=arthur_url,
            arthur_login=arthur_login,
        ),
    ],
)

You can now use the LLM in your LangChain application, with its input text, output text, and other monitored attributes recorded to the Arthur platform for each inference.
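
For example, a single chat call like the minimal sketch below (the prompt text is only illustrative) streams the response to stdout and records one inference for your ArthurModel:

from langchain.schema import HumanMessage

# One chat turn: the StreamingStdOutCallbackHandler prints the tokens as
# they arrive, and the ArthurCallbackHandler logs the prompt/response pair
# to the Arthur platform as a single inference.
response = chatgpt([HumanMessage(content="What does Arthur monitor for an LLM?")])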

Note that the attributes from each LLM response will only be saved to Arthur if you have registered those attributes with your ArthurModel. For more information on registering generative text models with additional attributes beyond input text and output text, visit the Generative Text Onboarding Guide.

Run the model to log inferences to the Arthur platform

Here we define a run function that executes a chat loop between a user and an LLM until the user types q to quit.

Note that this function is not required to use the ArthurCallbackHandler - it is just a quick demonstration of how to use a LangChain LLM.

from langchain.schema import HumanMessage

def run(llm):
    history = []
    while True:
        user_input = input("\n>>> input >>>\n>>>: ")
        if user_input == "q":
            break
        # Send the full chat history so the LLM has context; each call is
        # recorded to the Arthur platform as its own inference.
        history.append(HumanMessage(content=user_input))
        history.append(llm(history))

Each user/LLM exchange in this loop is logged as its own inference in the Arthur platform.

For example, here is an execution of this run function with the ChatOpenAI LLM created above.

run(chatgpt)
>>> input >>>
>>>: What is a callback handler?
A callback handler, also known as a callback function or callback method, is a piece of code that is executed in response to a specific event or condition. It is commonly used in programming languages that support event-driven or asynchronous programming paradigms.

The purpose of a callback handler is to provide a way for developers to define custom behavior that should be executed when a certain event occurs. Instead of waiting for a result or blocking the execution, the program registers a callback function and continues with other tasks. When the event is triggered, the callback function is invoked, allowing the program to respond accordingly.

Callback handlers are commonly used in various scenarios, such as handling user input, responding to network requests, processing asynchronous operations, and implementing event-driven architectures. They provide a flexible and modular way to handle events and decouple different components of a system.

>>> input >>>
>>>: What do I need to do to get the full benefits of this
To get the full benefits of using a callback handler, you should consider the following:

1. Understand the event or condition: Identify the specific event or condition that you want to respond to with a callback handler. This could be user input, network requests, or any other asynchronous operation.

2. Define the callback function: Create a function that will be executed when the event or condition occurs. This function should contain the desired behavior or actions you want to take in response to the event.

3. Register the callback function: Depending on the programming language or framework you are using, you may need to register or attach the callback function to the appropriate event or condition. This ensures that the callback function is invoked when the event occurs.

4. Handle the callback: Implement the necessary logic within the callback function to handle the event or condition. This could involve updating the user interface, processing data, making further requests, or triggering other actions.

5. Consider error handling: It's important to handle any potential errors or exceptions that may occur within the callback function. This ensures that your program can gracefully handle unexpected situations and prevent crashes or undesired behavior.

6. Maintain code readability and modularity: As your codebase grows, it's crucial to keep your callback handlers organized and maintainable. Consider using design patterns or architectural principles to structure your code in a modular and scalable way.

By following these steps, you can leverage the benefits of callback handlers, such as asynchronous and event-driven programming, improved responsiveness, and modular code design.

>>> input >>>
>>>: q