Prompt templates#
In previous recipes, a prompt was just a simple Python string. We have already encountered situations where we needed to use a variable in the prompt. For example, say we want to create a pun generator that produces a pun based on a given topic. Every time we prompt the model, only the topic part of the prompt changes. So what is an efficient, convenient way to handle this?
from langchain_dartmouth.llms import ChatDartmouth
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())
True
Building prompts with basic Python strings#
As we have done before, we could create a simple string prompt and add the topic to it through string concatenation. First, we define the part of the prompt that does not change:
prompt = "You are a pun generator. Your task is to generate a pun based on the following topic: "
Then, we add the missing piece when we prompt the model:
llm = ChatDartmouth(model_name="llama-3-1-8b-instruct")
response = llm.invoke(prompt + "computer programming")
print(response.content)
Why was the computer programmer sad? Because he was feeling a little "buggy"
That works, but it is a little clunky. The main issue is that we have to design the prompt so that all the variable parts come at the end. For a short prompt like this one, that might be acceptable, but it greatly limits our design space when we are dealing with longer instructions. What if we want more than one variable part, with a constant part in between?
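To see the limitation, here is a hypothetical sketch of stitching together a prompt with a variable part in the middle using plain strings (the helper build_prompt and the string pieces are made up for illustration):

```python
# Hypothetical sketch: with plain strings, a variable part in the
# middle of the prompt forces manual stitching of the pieces.
prefix = "You are a pun generator. Your current mood is "
middle = ". Generate a pun based on the following topic: "


def build_prompt(mood: str, topic: str) -> str:
    # Concatenate constant and variable parts by hand
    return prefix + mood + middle + topic


print(build_prompt("cheerful", "computer programming"))
```

This works, but every new variable part means more string fragments to manage, which is exactly what prompt templates avoid.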
Prompt templates#
Prompt templates (e.g., the PromptTemplate class) are components in the LangChain ecosystem that allow you to define your prompts more flexibly by using placeholders and then filling them with actual values when needed.
Let’s create the same pun generator as above using a PromptTemplate:
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate(
    template="You are a pun generator. Your task is to generate a pun based on the following topic: {topic}"
)
Notice the special substring {topic}! This is how we define a location and a name for a placeholder in the prompt.
Note
Prompt templates are similar to Python’s f-strings or format strings, but offer some additional convenience when using them with other LangChain components, as we will see in some later recipes. Most importantly, they do not require the placeholders to be filled when the string is first defined, but can defer this to a later time when they are invoked (see below).
We can fill in the placeholder using the PromptTemplate component’s invoke method to fully specify the prompt:
print(prompt.invoke("computer science"))
text='You are a pun generator. Your task is to generate a pun based on the following topic: computer science'
We can pass the now complete prompt directly to our LLM:
response = llm.invoke(prompt.invoke("computer science"))
print(response.content)
Why was the computer cold? It left its heat in the code.
So if we want to run this repeatedly for different topics, we only need to change the prompt template’s argument:
topics = ["college", "soccer", "cooking"]
for topic in topics:
    response = llm.invoke(prompt.invoke(topic))
    print(response.content)
    print("-" * 10)
Here's a college pun for you:
"Why did the student bring a ladder to their college math class? Because they wanted to elevate their grades!"
I hope that one made you grade A material for puns!
----------
Here's a soccer pun for you:
"Why did the soccer ball go to the doctor? It was feeling a little deflated!"
I hope that one kicked your interest!
----------
I'm ready to "stir" up some puns. Here's one:
"Why did the chef bring a ladder to the kitchen? He wanted to take his cooking to a 'higher' level."
Hope that one "whisked" you away! Do you want to hear another?
----------
We could also extend this technique to multiple placeholders. Here is what the prompt template would look like in that case:
prompt = PromptTemplate(
    template="You are a pun generator. Your task is to generate a pun based on the following topic: {topic}. Your current mood is {mood}."
)
Now that we have more than one placeholder, we cannot simply pass a single argument to the invoke method, because the prompt would not know which placeholder to map it to. Instead, we pass in a dictionary, using the placeholder names as keys and the desired fill-in text as values:
placeholder_fillers = {"topic": "computer science", "mood": "exhilarated"}
print(prompt.invoke(placeholder_fillers))
text='You are a pun generator. Your task is to generate a pun based on the following topic: computer science. Your current mood is exhilarated.'
Now we can iterate through two lists of topics and moods to generate a pun for each pair:
moods = ["giggly", "dramatic", "whimsical"]
for topic, mood in zip(topics, moods):
    response = llm.invoke(prompt.invoke({"topic": topic, "mood": mood}))
    print(response.content)
    print("-" * 10)
Oh man, I'm so egg-cited to be doing this. Here's a pun for you: Why did the college student bring a ladder to class? Because they wanted to reach their full potential.
----------
(Deep, dramatic sigh) The agony of defeat, the ecstasy of victory... it's a match made in heaven for puns. Here's one that strikes a chord:
"Why did the soccer player bring a ladder to the game? Because he wanted to take his game to a higher level... and also to kick his opponents to the curb, but that's just a foul move!"
----------
I'm a "whisk"-er to be in the kitchen with you. Let's "knead" to stir up some puns. Here's one: Why did the baker go to the bank? Because he needed dough!
----------
Since PromptTemplate objects are more than just strings, they have a few methods and fields that can be useful in the right circumstances. For example, you can learn the names of the required placeholders from the input_variables field:
prompt.input_variables
['mood', 'topic']
Chat prompt templates#
You can also create and use templates for chat prompts with a sequence of messages of different types:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate(
    [("system", "You are a {animal}."), ("human", "Tell us about {topic}.")]
)
prompt.invoke({"animal": "dog", "topic": "your day"})
ChatPromptValue(messages=[SystemMessage(content='You are a dog.', additional_kwargs={}, response_metadata={}), HumanMessage(content='Tell us about your day.', additional_kwargs={}, response_metadata={})])
response = llm.invoke(prompt.invoke({"animal": "dog", "topic": "your day"}))
print(response.content)
*wags tail* Oh boy, I'm so excited to tell you all about my day. I woke up early this morning to the sound of my human getting out of bed. They gave me a big ol' belly rub, and I knew it was going to be a great day.
First things first, I needed to go outside. My human took me to the backyard for a quick potty break. I love sniffing all the interesting smells and exploring the yard. After that, we went for a walk around the block. I love walking with my human because I get to explore new smells and mark my territory (very important dog business).
When we got back home, I had a yummy breakfast of kibble, and my human gave me a treat for being such a good boy. Then, we played a game of fetch in the living room. I love chasing after the ball and bringing it back to my human. It's so much fun!
After all that exercise, I was feeling pretty tired, so I curled up on my favorite cushion for a nap. My human sat next to me and petted me for a while, which was very soothing.
Later in the day, my human decided to take me on a longer walk to the park. We met some other dogs there, and I got to play with them for a bit. I love making new doggy friends. When we got back home, my human gave me another treat and some more belly rubs. I'm a very happy dog.
Summary#
Prompt templates allow us to create a consistent structure for our prompts and make them more reusable across different applications or tasks. This makes it easier to generate the right kind of input for an AI model, while also keeping the code cleaner and more readable.