ChatContextual
This will help you get started with Contextual AI's Grounded Language Model chat models.
To learn more about Contextual AI, please visit our documentation.
This integration requires the contextual-client Python SDK. Learn more about it here.
Overview
This integration invokes Contextual AI's Grounded Language Model.
Integration details
| Class | Package | Local | Serializable | JS support |
| --- | --- | --- | --- | --- |
| ChatContextual | langchain-contextual | ❌ | beta | ❌ |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
Setup
To access Contextual models you'll need to create a Contextual AI account, get an API key, and install the langchain-contextual integration package.
Credentials
Head to app.contextual.ai to sign up for Contextual and generate an API key. Once you've done this, set the CONTEXTUAL_AI_API_KEY environment variable:
```python
import getpass
import os

if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )
```
If you want to get automated tracing of your model calls, you can also set your LangSmith API key by uncommenting below:

```python
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
```
Installation
The LangChain Contextual integration lives in the langchain-contextual package:
```python
%pip install -qU langchain-contextual
```
Instantiation
Now we can instantiate our model object and generate chat completions.
The chat client can be instantiated with the following additional settings:
| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |
| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |
| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |
```python
from langchain_contextual import ChatContextual

llm = ChatContextual(
    model="v1",  # defaults to `v1`
    api_key="",  # optional when CONTEXTUAL_AI_API_KEY is set in the environment
    temperature=0,  # defaults to 0
    top_p=0.9,  # defaults to 0.9
    max_new_tokens=1024,  # defaults to 1024
)
```
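Since the CONTEXTUAL_AI_API_KEY environment variable was set in the Credentials step, you can also rely on that fallback rather than passing the key explicitly; a minimal sketch, assuming the variable is exported:

```python
# Sketch: with CONTEXTUAL_AI_API_KEY exported, no api_key argument is needed;
# this client behaves the same as the one above
llm_from_env = ChatContextual(model="v1")
```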
Invocation
The Contextual Grounded Language Model accepts additional kwargs when calling the ChatContextual.invoke method.
These additional inputs are:
| Parameter | Type | Description |
| --- | --- | --- |
| knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |
| system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |
| avoid_commentary | Optional[bool] | Optional (defaults to False): Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |
```python
# include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# provide your own knowledge from your knowledge base here as a list of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# invoke the GLM by providing the knowledge strings and the optional system prompt;
# to turn off the GLM's commentary, pass True to the `avoid_commentary` argument
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)
```
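Because avoid_commentary defaults to False, you can simply omit the flag when conversational framing around the grounded answer is acceptable; a minimal sketch reusing the messages and knowledge defined above:

```python
# Sketch: omitting `avoid_commentary` (defaults to False) may yield a more
# conversational response around the grounded answer
conversational_msg = llm.invoke(messages, knowledge=knowledge)
print(conversational_msg.content)
```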
Chaining
We can chain the Contextual model with output parsers.
```python
from langchain_core.output_parsers import StrOutputParser

chain = llm | StrOutputParser()

chain.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)
```
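If you'd rather not thread the grounding kwargs through every chain call, the standard LangChain Runnable .bind() method can pre-fill them; a minimal sketch, assuming the same knowledge and messages as above:

```python
# Sketch: pre-bind the call-time kwargs with the standard Runnable `.bind()`
# method so the chain can be invoked with the messages alone
bound_chain = llm.bind(knowledge=knowledge, avoid_commentary=True) | StrOutputParser()
bound_chain.invoke(messages)
```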
API reference
For detailed documentation of all ChatContextual features and configurations, head to the API reference: https://python.langchain.com/api_reference/en/latest/chat_models/langchain_contextual.chat_models.ChatContextual.html
Related
- Chat model conceptual guide
- Chat model how-to guides