Prompt templates#
In previous recipes, a prompt was just a simple Python string. We already encountered a situation where we needed to use a variable in the prompt. For example, let’s say we want to create a pun generator that creates a pun based on a general topic. Every time we prompt the model, only the topic part of the prompt will change. So what is an efficient, convenient way to handle this?
from langchain_dartmouth.llms import ChatDartmouth
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())
True
Building prompts with basic Python strings#
As we have done before, we could create a simple string prompt and add the topic to it through string concatenation. First, we define the part of the prompt that does not change:
prompt = "You are a pun generator. Your task is to generate a pun based on the following topic: "
Then, we add the missing piece when we prompt the model:
llm = ChatDartmouth(model_name="meta.llama-3.2-11b-vision-instruct")
response = llm.invoke(prompt + "computer programming")
print(response.content)
I've got a "byte"-sized pun for you: Why do programmers prefer dark mode? Because light attracts bugs.
That works, but it is a little clunky. The main issue is that we have to design the prompt so that all the variable parts come at the end. For a short prompt like this one, that might be acceptable, but it greatly limits our design space when we are dealing with longer instructions. What if we want more than one variable part, with constant parts in between?
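For example (a quick sketch, extending the pun prompt with a hypothetical second variable for the model’s mood), plain concatenation forces us to stitch every constant piece back in by hand at each call site:
topic = "computer programming"
mood = "cheerful"

# Every constant fragment has to be repeated, in the right order,
# wherever we assemble the prompt:
prompt = (
    "You are a pun generator. Your task is to generate a pun based on the "
    "following topic: " + topic + ". Your current mood is " + mood + "."
)
The prompt’s structure is now scattered across the code that assembles it. Prompt templates solve exactly this problem.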
Prompt templates#
Prompt templates (e.g., the PromptTemplate class) are components in the LangChain ecosystem that allow you to define your prompts more flexibly by using placeholders and then filling them with actual values when needed.
Let’s create the same pun generator as above using a PromptTemplate:
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate(
    template="You are a pun generator. Your task is to generate a pun based on the following topic: {topic}"
)
Notice the special substring {topic}! This is how we define a location and a name for a placeholder in the prompt!
Note
Prompt templates are similar to Python’s f-strings or format strings, but offer some additional convenience when using them with other LangChain components, as we will see in some later recipes. Most importantly, they do not require the placeholders to be filled when the string is first defined, but can defer this to a later time when they are invoked (see below).
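A quick sketch of the difference (the variable names here are just for illustration): an f-string needs the value of its placeholder at the moment the string is created, while a PromptTemplate can be defined first and filled in later:
topic = "computer science"
f_string = f"Generate a pun about {topic}"  # topic must already be defined here

template = PromptTemplate(template="Generate a pun about {topic}")
# No value is needed yet; we can fill it in whenever we are ready:
template.format(topic="soccer")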
We can fill in the placeholder using the PromptTemplate component’s invoke method to fully specify the prompt:
print(prompt.invoke("computer science"))
text='You are a pun generator. Your task is to generate a pun based on the following topic: computer science'
We can pass the now complete prompt directly to our LLM:
response = llm.invoke(prompt.invoke("computer science"))
print(response.content)
Here's a pun for you:
"Why did the programmer quit his job? Because he didn't get arrays of fulfillment, and his code was always in a loop of dissatisfaction - it was a byte-sized problem!"
So if we want to run this repeatedly for different topics, we only need to change the prompt template’s argument:
topics = ["college", "soccer", "cooking"]
for topic in topics:
    response = llm.invoke(prompt.invoke(topic))
    print(response.content)
    print("-" * 10)
I'm "degree"-ly excited to generate a pun for you. Here's one:
"Why did the college student bring a ladder to class? They wanted to take their education to a higher level."
----------
Here's one: "Why did the soccer ball go to the doctor? It was feeling deflated."
----------
Here's one that's "well-done":
"Why was the pizza in a bad mood? It was feeling crusty!"
----------
We could also extend this technique to multiple placeholders. Here is what the prompt template would look like in that case:
prompt = PromptTemplate(
    template="You are a pun generator. Your task is to generate a pun based on the following topic: {topic}. Your current mood is {mood}."
)
Now that we have more than one placeholder, we cannot simply pass a single argument to the invoke method, though, because the prompt would not know which placeholder to map it to. Instead, we pass in a dictionary, using the placeholder names as keys and the desired text to fill in as values:
placeholder_fillers = {"topic": "computer science", "mood": "exhilarated"}
print(prompt.invoke(placeholder_fillers))
text='You are a pun generator. Your task is to generate a pun based on the following topic: computer science. Your current mood is exhilarated.'
Now we can iterate through two lists of topics and moods to generate a pun for each pair:
moods = ["giggly", "dramatic", "whimsical"]
for topic, mood in zip(topics, moods):
    response = llm.invoke(prompt.invoke({"topic": topic, "mood": mood}))
    print(response.content)
    print("-" * 10)
I'm feeling "degree" of silliness today. Here's a pun for you: Why did the college student bring a ladder to class? Because they wanted to reach their full potential!
----------
*Sigh* Oh, the weight of expectation is crushing me. I must produce a pun worthy of the beautiful game. Here's my attempt:
"In soccer, it's not just a penalty to miss, it's a red-iculous mistake!"
*Pounds chest dramatically* Ah, the pressure is still on, but I think I've scored a goal... of sorts.
----------
Let's get saucy with this one! Here's a whimsical cooking pun:
"Why did the chef go to the party? Because he heard it was a recipe for fun, and he wanted to egg-xperiment with some new flavors!"
Hope that made you crack a smile!
----------
Since PromptTemplate objects are more than just strings, they offer a few methods and fields that can be useful in the right circumstances. For example, you can learn the names of the required placeholders from the input_variables field:
prompt.input_variables
['mood', 'topic']
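Another useful method is partial, which pre-fills some of the placeholders and returns a new template that only requires the remaining ones. A minimal sketch (the name giggly_prompt is just for illustration):
# Fix the mood ahead of time; only the topic is left to fill in
giggly_prompt = prompt.partial(mood="giggly")
print(giggly_prompt.input_variables)  # ['topic']
print(giggly_prompt.invoke({"topic": "soccer"}))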
Chat prompt templates#
You can also create and use templates for chat prompts with a sequence of messages of different types:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate(
    [("system", "You are a {animal}."), ("human", "Tell us about {topic}.")]
)
prompt.invoke({"animal": "dog", "topic": "your day"})
ChatPromptValue(messages=[SystemMessage(content='You are a dog.', additional_kwargs={}, response_metadata={}), HumanMessage(content='Tell us about your day.', additional_kwargs={}, response_metadata={})])
response = llm.invoke(prompt.invoke({"animal": "dog", "topic": "your day"}))
print(response.content)
*ears perked up* Oh boy, it's been a great day so far! I woke up early, stretched my paws, and yawned loudly. My human took me outside for a walk, and I got to sniff all the interesting smells. I chased a squirrel or two, but they were too fast for me. *panting*
After our walk, we came back home, and I got some yummy breakfast. My favorite! It's always exciting to see what treats my human has prepared for me. Today, it was kibble with a tasty bit of bacon on top. *drooling*
Later, we went for a playtime session. I love playing fetch with my favorite ball. My human throws it, and I run after it, catching it mid-air. It's such a thrill! Sometimes, I like to bring it back and drop it at their feet, just to show off how good I am at catching. *wagging tail*
Now, I'm just lounging around, enjoying the sunshine streaming through the windows. Life as a dog is pretty great, if you ask me! *yawn* Zzz...
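If you want a chat prompt to include a whole list of messages, e.g., a running conversation history, you can reserve a slot for it with MessagesPlaceholder. Here is a minimal sketch (the placeholder name history is arbitrary):
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.messages import AIMessage, HumanMessage

prompt = ChatPromptTemplate(
    [
        ("system", "You are a {animal}."),
        MessagesPlaceholder("history"),  # replaced by a list of messages at invoke time
        ("human", "Tell us about {topic}."),
    ]
)

prompt.invoke(
    {
        "animal": "dog",
        "history": [HumanMessage("Sit!"), AIMessage("*sits and wags tail*")],
        "topic": "your day",
    }
)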
Summary#
Prompt templates allow us to create a consistent structure for our prompts and make them more reusable across different applications or tasks. This makes it easier to generate the right kind of input for an AI model, while also making the code cleaner and more readable.