Store Prompts in code

How to store Prompts and Agents in your codebase and use them in the Humanloop SDK

Humanloop allows you to store Prompt and Agent definitions in your local filesystem and in version control, while still leveraging Humanloop’s prompt management and evaluation capabilities. This guide will walk you through using local Files in your development workflow.

Prerequisites

  • A Humanloop account with at least one Prompt or Agent created
  • Humanloop SDK installed (which includes the CLI functionality)

To follow along with the examples in this guide, create a Prompt named welcome-email in the root directory of your Humanloop workspace.

We’ll create a Prompt that generates a welcome email for new customers. The Prompt uses the Humanloop .prompt format, which has two main parts:

  1. A configuration section (between --- marks) that specifies the model settings
  2. Message templates: a system message setting the context, and a user message with personalization variables

Toggle the Prompt File editor by hitting Cmd + Shift + E (or Ctrl + Shift + E on Windows) and replace the existing content with the following:

welcome-email.prompt
---
model: gpt-4o
temperature: 0.7
max_tokens: 500
top_p: 1.0
presence_penalty: 0.0
frequency_penalty: 0.0
provider: openai
endpoint: chat
tools: []
---

<system>
You are an email assistant that creates friendly welcome emails for new customers.
Keep your tone warm and conversational while clearly explaining the key features of our product.
</system>

<user>
Write a welcome email for {{customer_name}} who just signed up for our {{product_name}} service.
</user>
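The {{customer_name}} and {{product_name}} placeholders are filled from the inputs you pass at call time. As a rough illustration of how such double-brace templates render (this sketch is not Humanloop's actual renderer, which runs server-side):

```python
import re

def render_template(template: str, inputs: dict) -> str:
    """Replace {{variable}} placeholders with values from `inputs`.

    Illustrative only -- unknown variables are left untouched.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(inputs.get(m.group(1), m.group(0))),
        template,
    )

user_message = (
    "Write a welcome email for {{customer_name}} who just signed up "
    "for our {{product_name}} service."
)
print(render_template(
    user_message,
    {"customer_name": "Alex", "product_name": "Humanloop AI Platform"},
))
# → Write a welcome email for Alex who just signed up for our Humanloop AI Platform service.
```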

Deploying the welcome-email Prompt

Editing the welcome-email Prompt in the File editor

Save this new version of the Prompt: press Manage, then Deploy…, select your default environment (typically production), and follow the steps to deploy.

The Humanloop SDK includes both the programming interface and CLI tools you’ll need for this guide.

$ pip install humanloop

You’ll also need your Humanloop API key, which you can find on the API Keys page in your Organization settings.

$ # Create a .env file in your project root
$ echo "HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_API_KEY>" > .env
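In Python code you would typically load this file with a package such as python-dotenv; a minimal stdlib-only loader looks roughly like the following sketch (not a full .env parser):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader sketch; python-dotenv is the robust choice in practice."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        # Skip blank lines and comments; keep KEY=VALUE pairs
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env()
api_key = os.environ.get("HUMANLOOP_API_KEY")
```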

Step 1: Pull Files from Humanloop

After creating your Prompt in Humanloop (like the example Prompt we set up earlier), you’ll see it in your workspace:

Example Prompt and other Files in the Humanloop workspace

Now you can pull this Prompt from Humanloop to your local environment by running:

$ humanloop pull

This command clones your Humanloop workspace into a humanloop directory in your project root, maintaining the same folder structure as your remote workspace.

Pulling Files from Humanloop

humanloop pull will pull all Files deployed to your default environment (typically production) and store them in the humanloop directory. To pull only specific Files, use a different environment, or change the local destination directory, check out the options in the help menu:

$ humanloop pull --help

Step 2: Use local Files in your code

Now that you have your Prompt locally, you can use it in your code by configuring the SDK to use local Files.

from humanloop import Humanloop

# Initialize the client with local File support
humanloop = Humanloop(
    api_key="YOUR_HUMANLOOP_API_KEY",
    # Enable using Files from the local filesystem
    use_local_files=True,
)

# Call a local Prompt
response = humanloop.prompts.call(
    path="welcome-email",  # Looks for humanloop/welcome-email.prompt
    inputs={"customer_name": "Alex", "product_name": "Humanloop AI Platform"},
)

print(response.logs[0].output)

You can also log results from your own provider calls by referencing local Files:

# After making your own provider call, log the results with a local Prompt File
humanloop.prompts.log(
    path="welcome-email",  # References the local File
    messages=[
        {"role": "system", "content": "You are an email assistant..."},
        {"role": "user", "content": "Write a welcome email..."},
    ],
    output="The generated email content...",
    inputs={"customer_name": "Alex", "product_name": "Humanloop AI Platform"},
)

Important: When local Files are enabled and you reference a File using its path, the SDK will:

  • By default, look for Files in the humanloop directory (this can be customized with local_files_directory/localFilesDirectory)
  • Only look for Files in your local directory (there is no fallback to your remote workspace)
  • Use the exact local definition, even if newer versions exist in your remote workspace
  • Make API calls to Humanloop to execute Prompts and log results
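Conceptually, that path-to-file lookup behaves like the following sketch (illustrative only; the SDK's actual internals may differ):

```python
from pathlib import Path

def resolve_local_file(path: str, local_files_directory: str = "humanloop") -> Path:
    """Map a Humanloop path like 'welcome-email' to a local File on disk.

    Illustrative sketch of the lookup behavior described above, not SDK code.
    """
    for extension in (".prompt", ".agent"):
        candidate = Path(local_files_directory) / f"{path}{extension}"
        if candidate.exists():
            return candidate
    # No fallback to the remote workspace: a missing local File is an error
    raise FileNotFoundError(f"No local File found for path '{path}'")
```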

Step 3: Verify local File usage

To verify that your code is actually using the local Prompt, you can make a small modification and observe the change in output.

  1. Open humanloop/welcome-email.prompt in a text editor and modify the system message.
humanloop/welcome-email.prompt
<system>
-You are an email assistant that creates friendly welcome emails for new customers. Keep your tone warm and conversational while clearly explaining the key features of our product.
+You are an EXTREMELY ENTHUSIASTIC email assistant that creates friendly welcome emails for new customers. Use LOTS of emojis and exclamation points!! Keep your tone warm and conversational while clearly explaining the key features of our product.
</system>
  2. Call the modified Prompt from your code:
response = humanloop.prompts.call(
    path="welcome-email",
    inputs={"customer_name": "Alex", "product_name": "Humanloop AI Platform"},
)

print(response.logs[0].output)
# The response should now include enthusiastic language and emojis!

You’ll notice in the UI that a log was created with the new version of the Prompt.

Calling a local Prompt

Log of the modified Prompt in the Humanloop UI

When you call a modified local File, Humanloop uses content hashing to check for changes. If the content differs from an existing version, a new version is automatically created.
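The idea behind that change detection can be sketched with a plain content hash (illustrative; this is not Humanloop's documented hashing scheme):

```python
import hashlib

def content_hash(file_content: str) -> str:
    """Hash a File's content so identical definitions map to the same version."""
    return hashlib.sha256(file_content.encode("utf-8")).hexdigest()

original = "You are an email assistant that creates friendly welcome emails..."
modified = "You are an EXTREMELY ENTHUSIASTIC email assistant..."

# Same content -> same hash -> the existing version is reused
assert content_hash(original) == content_hash(original)
# Different content -> different hash -> a new version is created
assert content_hash(original) != content_hash(modified)
```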

Step 4: Version control

Now that your Prompt is a local File, you can add it to your version control system.

$ # Add the humanloop directory to your git repository
$ git add humanloop/
$ git commit -m "Add Humanloop Prompt"

Once your Files are committed, you can track changes from Humanloop too. After modifying a Prompt in the Humanloop UI and deploying it, run humanloop pull to get the latest version, then use git diff to see what changed.

For example, if you update the model in your welcome-email Prompt from gpt-4o to gpt-4o-mini in the Humanloop UI and deploy it, after pulling the changes you would see:

Git diff showing Prompt changes

By integrating your Prompts and Agents with version control, you create a more robust development workflow:

  • Single source of truth for both code and AI components
  • Change history with full audit trail of who changed what and when
  • Local experimentation through editing Files directly in your IDE without needing the Humanloop UI
  • Deployment safety via pull requests and code reviews for Prompt changes

Using with Agents

Everything you’ve learned about working with Prompt Files also applies to Agent Files. The process is identical: pull from Humanloop, reference Files with the path parameter in agents.call and agents.log, and manage them through version control.

# Call a local Agent
agent_response = humanloop.agents.call(
    path="customer-service/customer-support",  # Looks for humanloop/customer-service/customer-support.agent
    messages=[
        {
            "role": "user",
            "content": "I need help with my order: 1234567890",
        }
    ],
)

# Access agent output
print(agent_response.previous_agent_message)

Troubleshooting

If you encounter any issues while working with local Files, check these common solutions:

  • API key not detected? Ensure it’s stored in a .env file at the top level and that you run commands from your project root.
  • Changes not reflecting? Ensure you’ve deployed your Prompt in the Humanloop UI after making changes.
  • SDK not finding files? Ensure the directory used when pulling matches the one in your SDK initialization (if you used --local-files-directory custom-dir when pulling, set local_files_directory/localFilesDirectory to the same path in your SDK configuration).

Next Steps

By integrating Humanloop Files into your development workflow, you’ve bridged the gap between AI development and software engineering practices, making it easier to build robust AI applications.

Now that you’re using local Files, you might want to:

  1. Set up continuous integration to automatically test your Prompts and Agents when they change
  2. Learn more about the File formats to understand how to manually edit them if needed
  3. Explore environment labels to manage different versions for development, staging, and production
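As a starting point for the continuous-integration idea above, a lightweight check might simply validate that every .prompt file has a well-formed config section (a sketch under that assumption; real CI tests would typically call the Humanloop API or run evaluations):

```python
from pathlib import Path

def check_prompt_file(path: Path) -> bool:
    """Return True if a .prompt file starts with a '---'-delimited config block."""
    lines = [line.strip() for line in path.read_text().splitlines()]
    return bool(lines) and lines[0] == "---" and "---" in lines[1:]

def check_all(directory: str = "humanloop") -> list:
    """Collect .prompt files that fail the basic frontmatter check."""
    return [p for p in Path(directory).rglob("*.prompt") if not check_prompt_file(p)]
```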