Continue Agent Call

Continue an incomplete Agent call.

This endpoint continues an existing incomplete Agent call by providing the results of the tool calls requested by the Agent. The Agent will resume processing from where it left off.

The messages in the request will be appended to the original messages in the Log. You do not have to provide the previous conversation history.

The original log must be in an incomplete state to be continued.
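
As an illustration, here is a minimal sketch of a continuation request made over HTTP with Python's requests library. The base URL is an assumption, the /agents/continue path comes from the field descriptions below, and the Log ID, tool call ID, and tool result are placeholders.

```python
import os

import requests

BASE_URL = "https://api.humanloop.com/v5"  # assumed base URL

response = requests.post(
    f"{BASE_URL}/agents/continue",
    headers={"X-API-KEY": os.environ["HUMANLOOP_API_KEY"]},
    json={
        "log_id": "log_...",  # placeholder: the incomplete Agent Log to continue
        "stream": False,
        "messages": [
            {
                # Tool message answering one of the Agent's pending tool calls
                "role": "tool",
                "tool_call_id": "call_abc123",  # placeholder tool call ID
                "content": '{"temperature_c": 21}',
            }
        ],
    },
    timeout=60,
)
response.raise_for_status()
agent_log = response.json()
print(agent_log.get("log_status"))
```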

Headers

X-API-KEY (string, Required)

Request

This endpoint expects an object.

log_id (string, Required)

This identifies the Agent Log to continue.

messages (list of objects, Required)

The additional messages with which to continue the Agent Log. Often, these should start with Tool messages containing the results of the previous Assistant message’s tool calls.
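
As a sketch of the expected shape, a continuation that answers two pending tool calls might pass messages like the following; the role/tool_call_id structure follows the common chat-message convention, and the IDs and contents are placeholders.

```python
messages = [
    # One Tool message per tool call in the previous Assistant message
    {"role": "tool", "tool_call_id": "call_weather_1", "content": '{"temperature_c": 21}'},
    {"role": "tool", "tool_call_id": "call_search_2", "content": "No relevant results found."},
]
```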

stream (false, Required)

If true, packets will be sent as data-only server-sent events.
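
Assuming the endpoint honors stream=true as described, a client could read the data-only server-sent events line by line. A rough sketch with requests; the packet schema is not specified here, so each event is parsed as opaque JSON.

```python
import json
import os

import requests

with requests.post(
    "https://api.humanloop.com/v5/agents/continue",  # assumed base URL
    headers={"X-API-KEY": os.environ["HUMANLOOP_API_KEY"]},
    json={
        "log_id": "log_...",  # placeholder
        "stream": True,
        "messages": [{"role": "tool", "tool_call_id": "call_abc123", "content": "..."}],
    },
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # Data-only server-sent events arrive as lines of the form "data: {...}"
        if line.startswith(b"data: "):
            packet = json.loads(line[len(b"data: "):])
            print(packet)
```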

provider_api_keys (object, Optional)

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
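
The object is keyed by provider name. Purely as an illustration of the shape (the provider names below are assumptions, not confirmed by this reference):

```python
import os

provider_api_keys = {
    # Provider names here are illustrative assumptions
    "openai": os.environ["OPENAI_API_KEY"],
    "anthropic": os.environ["ANTHROPIC_API_KEY"],
}
```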

include_trace_children (boolean, Optional, defaults to false)

If true, populate trace_children for the returned Agent Log. Defaults to false.

Response

agent (object)

Agent that generated the Log.

id (string)

Unique identifier for the Log.

evaluator_logs (list of objects)

List of Evaluator Logs associated with the Log. These contain Evaluator judgments on the Log.

output_message (object, Optional)

The message returned by the provider.

prompt_tokens (integer, Optional)

Number of tokens in the prompt used to generate the output.

reasoning_tokens (integer, Optional)

Number of reasoning tokens used to generate the output.

output_tokens (integer, Optional)

Number of tokens in the output generated by the model.

prompt_cost (double, Optional)

Cost in dollars associated with the tokens in the prompt.

output_cost (double, Optional)

Cost in dollars associated with the tokens in the output.

finish_reason (string, Optional)

Reason the generation finished.

messages (list of objects, Optional)

The messages passed to the provider chat endpoint.

tool_choice"none" or "auto" or "required" or objectOptional

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {name': <TOOL_NAME>}} forces the model to use the named function.
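
For example, forcing the model to call a specific tool uses the object form from the last bullet; the tool name below is a placeholder.

```python
tool_choice = {"type": "function", "function": {"name": "get_weather"}}
```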

start_time (datetime, Optional)

When the logged event started.

end_time (datetime, Optional)

When the logged event ended.

output (string, Optional)

Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at (datetime, Optional)

User-defined timestamp for when the log was created.

error (string, Optional)

Error message if the log is an error.

provider_latency (double, Optional)

Duration of the logged event in seconds.

stdout (string, Optional)

Captured log and debug statements.

provider_request (map from strings to any, Optional)

Raw request sent to provider.

provider_response (map from strings to any, Optional)

Raw response received from the provider.

inputs (map from strings to any, Optional)

The inputs passed to the prompt template.

source (string, Optional)

Identifies where the model was called from.

metadata (map from strings to any, Optional)

Any additional metadata to record.

log_status (enum, Optional)

Status of the Agent Log. If incomplete, the Agent turn was suspended due to a tool call and can be continued by calling /agents/continue with responses to the Agent’s last message (which should contain tool calls). See the previous_agent_message field for easy access to the Agent’s last message.

Allowed values:

source_datapoint_id (string, Optional)

Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id (string, Optional)

The ID of the parent Log to nest this Log under in a Trace.

batches (list of strings, Optional)

Array of Batch IDs that this Log is part of. Batches are used to group Logs together for offline Evaluations.

user (string, Optional)

End-user ID related to the Log.

environment (string, Optional)

The name of the Environment the Log is associated with.

save (boolean, Optional, defaults to true)

Whether the request/response payloads will be stored on Humanloop.

log_id (string, Optional)

This will identify a Log. If you don’t provide a Log ID, Humanloop will generate one for you.

trace_flow_id (string, Optional)

Identifier for the Flow that the Trace belongs to.

trace_id (string, Optional)

Identifier for the Trace that the Log belongs to.

trace_children (list of objects, Optional)

Logs nested under this Log in the Trace.

previous_agent_message (object, Optional)

The Agent’s last message, which should contain tool calls. Only populated if the Log is incomplete due to a suspended Agent turn with tool calls. This is useful for continuing the Agent call by calling /agents/continue.
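
Putting log_status and previous_agent_message together, a continuation loop might look roughly like the sketch below. The base URL, the tool_calls shape (OpenAI-style id/function fields), and run_tool are assumptions or placeholders; only the /agents/continue path and the field names described above come from this reference.

```python
import os

import requests

BASE_URL = "https://api.humanloop.com/v5"  # assumed base URL
HEADERS = {"X-API-KEY": os.environ["HUMANLOOP_API_KEY"]}


def run_tool(name: str, arguments: str) -> str:
    """Placeholder: execute the named tool and return its result as a string."""
    return "tool result"


def continue_until_complete(agent_log: dict) -> dict:
    # Keep answering tool calls until the Agent Log is no longer incomplete.
    while agent_log.get("log_status") == "incomplete":
        tool_calls = agent_log["previous_agent_message"].get("tool_calls", [])
        tool_messages = [
            {
                "role": "tool",
                "tool_call_id": call["id"],
                "content": run_tool(call["function"]["name"], call["function"]["arguments"]),
            }
            for call in tool_calls
        ]
        resp = requests.post(
            f"{BASE_URL}/agents/continue",
            headers=HEADERS,
            json={"log_id": agent_log["id"], "stream": False, "messages": tool_messages},
            timeout=60,
        )
        resp.raise_for_status()
        agent_log = resp.json()
    return agent_log
```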

Errors