Call Agent

Call an Agent. The Agent will run on the Humanloop runtime and return a completed Agent Log. If the Agent requires a tool call that cannot be run by Humanloop, execution will halt. To continue, pass the ID of the incomplete Log and the result of the required tool call to the /agents/continue endpoint. The Agent will run for the maximum number of iterations, or until it encounters a stop condition, according to its configuration.

You can use the query parameters `version_id` or `environment` to target an existing version of the Agent. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Agent details in the request body. A new version is created if the details do not match any existing version. This is helpful when you store or derive your Agent details in code.
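The call-and-continue flow described above can be sketched as follows. The Agent path, inputs, and the payload-building helper are illustrative assumptions, not part of this reference; only the /agents/continue endpoint name comes from the text above.

```python
# Sketch of building a Call Agent request body. The field names used here
# ("path", "messages", "stream", "inputs") are documented in the Request
# section; the Agent path and inputs values are hypothetical.
def build_call_payload(path, messages, stream=False, **options):
    """Build the JSON body for a Call Agent request, targeting an Agent by path."""
    payload = {"path": path, "messages": messages, "stream": stream}
    payload.update(options)  # e.g. inputs, metadata, tool_choice
    return payload

call_body = build_call_payload(
    "support/triage-agent",  # hypothetical Agent path
    [{"role": "user", "content": "Where is my order?"}],
    inputs={"customer_id": "c_123"},  # hypothetical template inputs
)
```

If the returned Agent Log is incomplete (see `log_status` in the Response section), you would POST the tool call results together with the Log's ID to /agents/continue.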

Authentication

X-API-KEY string
API Key authentication via header

Query parameters

version_id string Optional
A specific Version ID of the Agent to log to.
environment string Optional
Name of the Environment identifying a deployed version to log to.

Request

This endpoint expects an object.
stream false Required

If true, Agent events and tokens will be sent as data-only server-sent events.

path string Optional

Path of the Agent, including the name. This locates the Agent in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id string Optional
ID for an existing Agent.
messages list of objects Optional
The messages passed to the provider chat endpoint.
tool_choice "none" or "auto" or "required" or object Optional
Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
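For illustration, the four accepted forms side by side; the tool name is hypothetical:

```python
# The three string forms of tool_choice plus the object form that forces
# one named tool. "lookup_order" is a hypothetical tool name.
tool_choices = [
    "none",      # never call a tool; just generate a message
    "auto",      # model decides (default when tools are provided)
    "required",  # model must call at least one tool
    {"type": "function", "function": {"name": "lookup_order"}},
]
forced = tool_choices[-1]
```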
agent object or string Optional
The Agent configuration to use. Two formats are supported:
- An object representing the details of the Agent configuration
- A string representing the raw contents of a .agent file
A new Agent version will be created if the provided details do not match any existing version.
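As an illustrative sketch only: the object form carries configuration details, while the string form is the raw contents of a .agent file. The field names inside the object form below ("model", "template") and the file contents are assumptions, not taken from this reference.

```python
# Two hypothetical ways to pass the `agent` field in the request body.
agent_as_object = {
    "model": "gpt-4o",  # assumed field name, for illustration only
    "template": [{"role": "system", "content": "You answer support questions."}],
}
agent_as_string = "model: gpt-4o\n"  # raw .agent file contents (shape assumed)

request_body = {"path": "support/triage-agent", "agent": agent_as_object}
```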
inputs map from strings to any Optional
The inputs passed to the prompt template.
source string Optional
Identifies where the model was called from.
metadata map from strings to any Optional
Any additional metadata to record.
start_time datetime Optional
When the logged event started.
end_time datetime Optional
When the logged event ended.
source_datapoint_id string Optional

Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id string Optional
The ID of the parent Log to nest this Log under in a Trace.
user string Optional

End-user ID related to the Log.

environment string Optional
The name of the Environment the Log is associated with.
save boolean Optional Defaults to true

Whether the request/response payloads will be stored on Humanloop.

log_id string Optional
This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
provider_api_keys object Optional
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
return_inputs boolean Optional Defaults to true
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response.
include_trace_children boolean Optional Defaults to false

If true, populate trace_children for the returned Agent Log. Only applies when not streaming.
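When `stream` is true, Agent events and tokens arrive as data-only server-sent events. A minimal parser sketch follows; the framing (`data:` lines separated by blank lines) follows the SSE specification, but the JSON shape of each event payload is an assumption, not documented here.

```python
import json

def parse_sse_events(raw: str) -> list:
    """Parse data-only server-sent events into a list of JSON payloads.

    Each event is one or more `data:` lines terminated by a blank line,
    per SSE framing rules. The {"output": ...} payload shape below is a
    hypothetical example, not the documented event schema.
    """
    events = []
    for block in raw.split("\n\n"):
        data_lines = [
            line[5:].lstrip() for line in block.split("\n") if line.startswith("data:")
        ]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events

sample = 'data: {"output": "Hel"}\n\ndata: {"output": "lo"}\n\n'
chunks = parse_sse_events(sample)
```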

Response

agent object
Agent that generated the Log.
id string
Unique identifier for the Log.
evaluator_logs list of objects
List of Evaluator Logs associated with the Log. These contain Evaluator judgments on the Log.
output_message object
The message returned by the provider.
prompt_tokens integer
Number of tokens in the prompt used to generate the output.
reasoning_tokens integer
Number of reasoning tokens used to generate the output.
output_tokens integer
Number of tokens in the output generated by the model.
prompt_cost double
Cost in dollars associated with the tokens in the prompt.
output_cost double
Cost in dollars associated with the tokens in the output.
finish_reason string
Reason the generation finished.
messages list of objects
The messages passed to the provider chat endpoint.
tool_choice "none" or "auto" or "required" or object
Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
start_time datetime
When the logged event started.
end_time datetime
When the logged event ended.
output string

Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at datetime
User-defined timestamp for when the log was created.
error string
Error message if the log is an error.
provider_latency double
Duration of the logged event in seconds.
stdout string
Captured log and debug statements.
provider_request map from strings to any
Raw request sent to the provider.
provider_response map from strings to any
Raw response received from the provider.
inputs map from strings to any
The inputs passed to the prompt template.
source string
Identifies where the model was called from.
metadata map from strings to any
Any additional metadata to record.
log_status enum

Status of the Agent Log. If incomplete, the Agent turn was suspended due to a tool call and can be continued by calling /agents/continue with responses to the Agent’s last message (which should contain tool calls). See the previous_agent_message field for easy access to the Agent’s last message.

Allowed values:
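The incomplete-then-continue handling can be sketched as below. The Log fields used (`id`, `log_status`, `previous_agent_message`) appear in this reference; the `tool_calls` message shape, the local tool dispatch, and the field names of the /agents/continue body are assumptions for illustration.

```python
def run_tool(tool_call: dict) -> str:
    # Hypothetical local tool execution; replace with your own dispatch.
    return f"ran {tool_call['function']['name']}"

def handle_agent_log(log: dict):
    """If the Agent Log is incomplete (suspended on tool calls), build a body
    for /agents/continue; otherwise return None.

    The continue body's field names are assumptions; the docs above only say
    to pass the incomplete Log's ID plus responses to the Agent's last message.
    """
    if log.get("log_status") != "incomplete":
        return None
    tool_calls = log["previous_agent_message"].get("tool_calls", [])
    results = [
        {"role": "tool", "tool_call_id": tc["id"], "content": run_tool(tc)}
        for tc in tool_calls
    ]
    return {"log_id": log["id"], "messages": results}

example_log = {
    "id": "log_123",  # hypothetical Log ID
    "log_status": "incomplete",
    "previous_agent_message": {
        "role": "assistant",
        "tool_calls": [{"id": "tc_1", "function": {"name": "lookup_order"}}],
    },
}
continue_body = handle_agent_log(example_log)
```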
source_datapoint_id string

Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id string
The ID of the parent Log to nest this Log under in a Trace.
batches list of strings
Array of Batch IDs that this Log is part of. Batches are used to group Logs together for offline Evaluations.
user string

End-user ID related to the Log.

environment string
The name of the Environment the Log is associated with.
save boolean Defaults to true

Whether the request/response payloads will be stored on Humanloop.

log_id string
This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
trace_flow_id string
Identifier for the Flow that the Trace belongs to.
trace_id string
Identifier for the Trace that the Log belongs to.
trace_children list of objects
Logs nested under this Log in the Trace.
previous_agent_message object

The Agent’s last message, which should contain tool calls. Only populated if the Log is incomplete due to a suspended Agent turn with tool calls. This is useful for continuing the Agent call by calling /agents/continue.

Errors

422
Agents Call Request Unprocessable Entity Error