# Invokables
> **Note:** You will never need to instantiate this class directly. You should always use one of the child classes.
`Invokable`s are a core construct in Beehive. An `Invokable` is anything that uses an LLM in its internal architecture to reason through and execute a user's task.
## Base Attributes
> **Info:** Note that the `Invokable` class is a Pydantic `BaseModel`.
| Attribute | Type | Description |
| --- | --- | --- |
| `name` | `str` | The invokable name. |
| `backstory` | `str` | Backstory for the AI actor. This is used to prompt the AI actor and direct tasks towards it. Default is `'You are a helpful AI assistant.'` |
| `model` | `BHChatModel \| BaseChatModel` | Chat model used by the invokable to execute its function. This can be a `BHChatModel` or a Langchain `ChatModel`. |
| `state` | `list[BHMessage \| BHToolMessage] \| list[BaseMessage]` | List of messages that this actor has seen. This enables the actor to build off of previous conversations / outputs. |
| `history` | `bool` | Whether to use previous interactions / messages when responding to the current task. Default is `False`. |
| `history_lookback` | `int` | Number of days' worth of previous messages to use for answering the current task. |
| `feedback` | `bool` | Whether to use feedback from the invokable's previous interactions. Feedback enables the LLM to improve its responses over time. Note that only feedback from tasks with a similar embedding is used. |
| `feedback_embedder` | `BHEmbeddingModel \| None` | Embedding model used to calculate embeddings of tasks. These embeddings are stored in a vector database. When a user prompts the `Invokable`, the `Invokable` searches against this vector database using the task embedding. It then takes the suggestions generated for similar, previous tasks and concatenates them to the task prompt. Default is `None`. |
| `feedback_model` | `BHChatModel \| BaseChatModel` | Language model used to generate feedback for the invokable. If `None`, this defaults to the `model` attribute. |
| `feedback_embedding_distance` | `EmbeddingDistance` | Distance method of the embedding space. See the ChromaDB documentation for more information: https://docs.trychroma.com/guides#changing-the-distance-function. |
| `n_feedback_results` | `int` | Amount of feedback to incorporate into answering the current task. This takes the *n* tasks whose embeddings are most similar to the current one and incorporates their feedback into the `Invokable`'s model. Default is `1`. |
| `color` | `str` | Color used to represent the invokable in verbose printing. This can be a HEX code, an RGB code, or a standard color supported by the Rich API. See https://rich.readthedocs.io/en/stable/appendix/colors.html for more details. Default is `chartreuse2`. |
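These attributes are set at construction time on any concrete invokable. As a minimal sketch, using the `BeehiveAgent` class covered below and purely illustrative values:

```python
from beehive.invokable.agent import BeehiveAgent
from beehive.models.openai_model import OpenAIModel

# Illustrative configuration of the base attributes on a concrete invokable.
agent = BeehiveAgent(
    name="Summarizer",
    backstory="You are a helpful AI assistant that summarizes documents.",
    model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    history=True,
    history_lookback=7,    # use the past week's worth of messages
    feedback=True,
    n_feedback_results=3,  # use feedback from the 3 most similar past tasks
    color="cyan",
)
```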
"invoke" method¶
In order to have your invokable execute a task, you can use the invoke
method. You'll see several examples of this throughout the documentation.
| Argument | Type | Description |
| --- | --- | --- |
| `task` | `str` | Task to execute. |
| `retry_limit` | `int` | Maximum number of retries before the `Invokable` returns an error. Default is `100`. |
| `pass_back_model_errors` | `bool` | Boolean controlling whether to pass the contents of an error back to the LLM via a prompt. Default is `False`. |
| `verbose` | `bool` | Beautify stdout logs with the `rich` package. Default is `True`. |
| `context` | `list[Invokable] \| None` | List of `Invokable`s whose state should be treated as context for this invocation. |
| `stream` | `bool` | Stream the output of the agent character-by-character. Default is `False`. |
| `stdout_printer` | `output.printer.Printer \| None` | Printer object to handle stdout messages. Default is `None`. |
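For example, reusing the `agent` from the sketch above, a fully explicit call might look like this (the values shown are simply the defaults from the table):

```python
# Only `task` is required; every other argument falls back to the
# defaults listed in the table above.
result = agent.invoke(
    task="Summarize the key points of our last conversation.",
    retry_limit=100,
    pass_back_model_errors=False,
    verbose=True,
    stream=False,
)
```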
Beehive offers several invokables out of the box:

- `BeehiveAgent`
- `BeehiveLangchainAgent`
- `BeehiveEnsemble`
- `BeehiveDebateTeam`

We'll cover each of these in detail next.
## BeehiveAgent

`BeehiveAgent`s are the most basic type of `Invokable`. They are autonomous units programmed to execute complex tasks by combining tool usage and memory.

Here are the additional fields supported by the `BeehiveAgent` class.
| Argument | Type | Description |
| --- | --- | --- |
| `temperature` | `int` | Temperature setting for the agent's chat model. |
| `tools` | `list[Callable[..., Any]]` | Functions that this agent can use to answer questions. These functions are converted to tools that can be interpreted and executed by LLMs. Note that the language model must support tool calling for these tools to be properly invoked. |
| `response_model` | `type[BaseModel] \| None` | Pydantic `BaseModel` defining the desired schema for the agent's output. When specified, Beehive will prompt the agent to make sure that its responses fit the model's schema. Default is `None`. |
| `termination_condition` | `Callable[..., bool] \| None` | Condition which, if met, breaks the agent out of the chat loop. This should be a function that takes a `response_model` instance as input. Default is `None`. |
| `chat_loop` | `int` | Number of times the model should loop when responding to a task. Usually, this will be 1, but certain prompting patterns (e.g., CoT, reflection) may require more loops. This should always be used with a `response_model` and a `termination_condition`. |
| `docstring_format` | `DocstringFormat \| None` | Docstring format in functions. Beehive uses these docstrings to convert functions into LLM-compatible tools. If `None`, then Beehive will autodetect the docstring format and parse the arg descriptions. Default is `None`. |
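To make `response_model`, `termination_condition`, and `chat_loop` concrete, here is a minimal sketch. The schema and condition below are hypothetical examples, not part of Beehive's API:

```python
from pydantic import BaseModel

# Hypothetical response schema for the agent's output.
class MathAnswer(BaseModel):
    answer: float
    is_final: bool

# Hypothetical termination condition: receives a response_model instance
# and breaks the chat loop once the agent marks its answer as final.
def stop_when_final(response: MathAnswer) -> bool:
    return response.is_final
```

These would then be passed to the agent as `response_model=MathAnswer`, `termination_condition=stop_when_final`, and, say, `chat_loop=3`.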
> **Warning:** Note that `tools` is simply a list of functions. These functions should have docstrings and type hints. Beehive will throw an error if either of these is missing.
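For example, a function that satisfies these requirements might look like the following (the function itself is hypothetical):

```python
# Type hints plus a docstring let Beehive convert this function into an
# LLM-compatible tool definition.
def multiply(a: float, b: float) -> float:
    """Multiply two numbers.

    Args:
        a: The first factor.
        b: The second factor.
    """
    return a * b
```

Such a function could then be passed to an agent via `tools=[multiply]`.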
```python
from beehive.invokable.agent import BeehiveAgent
from beehive.models.openai_model import OpenAIModel

math_agent = BeehiveAgent(
    name="MathAgent",
    backstory="You are a helpful AI assistant. You specialize in performing complex calculations.",
    model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    tools=[],
    history=True,
    feedback=True,
)
math_agent.invoke("What's 2+2?")
```
## BeehiveLangchainAgent

`BeehiveLangchainAgent`s are similar to `BeehiveAgent`s, except they use Langchain-native types internally.

Here are the additional fields supported by the `BeehiveLangchainAgent` class.
| Argument | Type | Description |
| --- | --- | --- |
| `temperature` | `int` | Temperature setting for the agent's chat model. |
| `tools` | `list[Callable[..., Any]]` | Functions that this agent can use to answer questions. These functions are converted to tools that can be interpreted and executed by LLMs. Note that the language model must support tool calling for these tools to be properly invoked. |
| `docstring_format` | `DocstringFormat \| None` | Docstring format in functions. Beehive uses these docstrings to convert functions into LLM-compatible tools. If `None`, then Beehive will autodetect the docstring format and parse the arg descriptions. Default is `None`. |
| `config` | `RunnableConfig \| None` | Langchain `Runnable` configuration. This is used inside the `ChatModel`'s `invoke` method. Default is `None`. |
| `stop` | `list[str]` | List of strings on which the model should stop generating. |
| `**model_kwargs` |  | Extra keyword arguments for invoking the Langchain chat model. |
```python
from beehive.invokable.langchain_agent import BeehiveLangchainAgent
from langchain_openai.chat_models import ChatOpenAI

math_agent = BeehiveLangchainAgent(
    name="MathAgent",
    backstory="You are a helpful AI assistant. You specialize in performing complex calculations.",
    model=ChatOpenAI(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    tools=[],
    history=True,
    feedback=True,
)
math_agent.invoke("What's 2+2?")
```
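The Langchain-specific arguments plug in the same way as the other fields. Here is a hedged sketch that passes a `RunnableConfig` and stop sequences through to the underlying `ChatModel`; the tag and stop string are purely illustrative:

```python
from langchain_core.runnables import RunnableConfig
from langchain_openai.chat_models import ChatOpenAI

from beehive.invokable.langchain_agent import BeehiveLangchainAgent

configured_agent = BeehiveLangchainAgent(
    name="MathAgentWithConfig",
    backstory="You are a helpful AI assistant.",
    model=ChatOpenAI(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    tools=[],
    config=RunnableConfig(tags=["math"]),  # forwarded to the ChatModel's invoke method
    stop=["\nFinal answer:"],              # stop generating at this string
)
```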
## BeehiveEnsemble

In a `BeehiveEnsemble`, *n* agents are given the same task and produce *n* different responses. These responses are then synthesized together to produce a final answer.

Beehive currently supports two different synthesis methods: an LLM agent or a similarity function. In the former, Beehive creates a new LLM agent whose task is to combine all *n* responses into a better, final response. In the latter, Beehive computes the similarity between all pairs of responses and returns the answer with the highest cumulative similarity.

Here are the additional fields supported by the `BeehiveEnsemble` class.
| Argument | Type | Description |
| --- | --- | --- |
| `temperature` | `int` | Temperature setting for the agent's chat model. |
| `tools` | `list[Callable[..., Any]]` | Functions that this agent can use to answer questions. These functions are converted to tools that can be interpreted and executed by LLMs. Note that the language model must support tool calling for these tools to be properly invoked. |
| `docstring_format` | `DocstringFormat \| None` | Docstring format in functions. Beehive uses these docstrings to convert functions into LLM-compatible tools. If `None`, then Beehive will autodetect the docstring format and parse the arg descriptions. Default is `None`. |
| `response_model` | `type[BaseModel] \| None` | Pydantic `BaseModel` defining the desired schema for the agent's output. When specified, Beehive will prompt the agent to make sure that its responses fit the model's schema. Default is `None`. |
| `termination_condition` | `Callable[..., bool] \| None` | Condition which, if met, breaks the agent out of the chat loop. This should be a function that takes a `response_model` instance as input. Default is `None`. |
| `chat_loop` | `int` | Number of times the model should loop when responding to a task. Usually, this will be 1, but certain prompting patterns (e.g., CoT, reflection) may require more loops. This should always be used with a `response_model` and a `termination_condition`. |
| `num_members` | `int` | Number of members on the team. |
| `final_answer_method` | `Literal['llm', 'similarity']` | Method used to obtain the final answer from the agents. Either `llm` or `similarity`. If `llm`, then Beehive will create an agent with the provided `synthesizer_model` and use that to synthesize the responses from the agents and generate a single, final response. If `similarity`, then Beehive will choose the answer that has the highest cumulative similarity to the other agents' answers. |
| `synthesizer_model` | `BHChatModel \| BaseChatModel \| None` | Model used to synthesize responses from agents and generate a final response. Only necessary if `final_answer_method='llm'`. This class must match the `model` class. |
| `similarity_score_func` | `Callable[[str, str], float] \| None` | Function used to compute the similarity score. Only necessary if `final_answer_method='similarity'`. The function must take two string arguments and return a float. If the callable is not specified, then Beehive defaults to the BLEU score from Papineni et al., 2002. Default is `None`. |
| `**agent_kwargs` |  | Extra keyword arguments for agent instantiation. This is ONLY used for Langchain agents, and it applies to both the member agents and the synthesizer agent. |
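If you'd rather not use the default BLEU score, any `Callable[[str, str], float]` can serve as `similarity_score_func`. A minimal sketch, using Python's built-in `difflib` as an illustrative stand-in:

```python
from difflib import SequenceMatcher

# Hypothetical similarity function: maps two response strings to a float.
def char_overlap_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()
```

This would be passed via `similarity_score_func=char_overlap_similarity` alongside `final_answer_method="similarity"`.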
This was inspired by the work of Li et al.
```python
from beehive.invokable.ensemble import BeehiveEnsemble
from beehive.models.openai_model import OpenAIModel

# Using similarity scores
ensemble_similarity = BeehiveEnsemble(
    name="TestEnsembleSimilarity",
    backstory="You are an expert software engineer.",
    model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    num_members=4,
    history=True,
    final_answer_method="similarity",
)
ensemble_similarity.invoke("Write a script that downloads data from S3.")

# Using a synthesizer model
ensemble_synthesizer = BeehiveEnsemble(
    name="TestEnsembleSynthesizer",
    backstory="You are an expert software engineer.",
    model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    num_members=4,
    history=True,
    final_answer_method="llm",
    synthesizer_model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
)
ensemble_synthesizer.invoke("Write a script that uploads data to S3.")
```
## BeehiveDebateTeam

In a `BeehiveDebateTeam`, *n* agents are initially given the same task and produce *n* different responses. The agents then "debate" with one another, i.e., each agent looks at the output of the other *n - 1* agents and updates its own response. This happens over several rounds. Finally, a "judge" (another LLM agent) evaluates all of the responses and chooses the one that answers the initial query best.
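Conceptually, the procedure looks something like the sketch below. This is a simplified illustration of the control flow, not Beehive's actual implementation; the `answer`, `revise`, and `judge` callables are hypothetical stand-ins for LLM calls.

```python
from typing import Callable

def debate(
    task: str,
    answer: Callable[[str], str],
    revise: Callable[[str, list[str]], str],
    judge: Callable[[str, list[str]], str],
    num_members: int,
    num_rounds: int,
) -> str:
    # Round 0: every member answers the task independently.
    responses = [answer(task) for _ in range(num_members)]
    # In each debate round, every member sees the other n-1 responses
    # and revises its own answer.
    for _ in range(num_rounds):
        responses = [
            revise(task, responses[:i] + responses[i + 1:])
            for i in range(num_members)
        ]
    # Finally, the judge picks the response that answers the task best.
    return judge(task, responses)
```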
Here are the additional fields supported by the `BeehiveDebateTeam` class.
| Argument | Type | Description |
| --- | --- | --- |
| `temperature` | `int` | Temperature setting for the agent's chat model. |
| `tools` | `list[Callable[..., Any]]` | Functions that this agent can use to answer questions. These functions are converted to tools that can be interpreted and executed by LLMs. Note that the language model must support tool calling for these tools to be properly invoked. |
| `response_model` | `type[BaseModel] \| None` | Pydantic `BaseModel` defining the desired schema for the agent's output. When specified, Beehive will prompt the agent to make sure that its responses fit the model's schema. Default is `None`. |
| `termination_condition` | `Callable[..., bool] \| None` | Condition which, if met, breaks the agent out of the chat loop. This should be a function that takes a `response_model` instance as input. Default is `None`. |
| `chat_loop` | `int` | Number of times the model should loop when responding to a task. Usually, this will be 1, but certain prompting patterns (e.g., CoT, reflection) may require more loops. This should always be used with a `response_model` and a `termination_condition`. |
| `docstring_format` | `DocstringFormat \| None` | Docstring format in functions. Beehive uses these docstrings to convert functions into LLM-compatible tools. If `None`, then Beehive will autodetect the docstring format and parse the arg descriptions. Default is `None`. |
| `num_members` | `int` | Number of members on the team. |
| `num_rounds` | `int` | Number of debate rounds. |
| `judge_model` | `BHChatModel \| BaseChatModel` | Model used to power the judge agent. |
| `**agent_kwargs` |  | Extra keyword arguments for agent instantiation. This is ONLY used for Langchain agents, and it applies to both the member agents and the judge agent. |
This was inspired by the work of Du et al.
```python
from beehive.invokable.debate import BeehiveDebateTeam
from beehive.models.openai_model import OpenAIModel

debaters = BeehiveDebateTeam(
    name="TestDebateTeam",
    backstory="You are a helpful AI assistant.",
    model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    num_members=2,
    num_rounds=2,
    judge_model=OpenAIModel(
        model="gpt-3.5-turbo-0125",
        api_key="<your_api_key>",
    ),
    history=True,
    feedback=True,
)
debaters.invoke("A treasure hunter found a buried treasure chest filled with gems. There were 175 diamonds, 35 fewer rubies than diamonds, and twice the number of emeralds than the rubies. How many of the gems were there in the chest?")
```
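For reference, the correct answer is 595 gems: 175 diamonds, 140 rubies (35 fewer than the diamonds), and 280 emeralds (twice the number of rubies).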