Streaming LLM output#
Large Language Models produce output token by token, and each token takes some time to generate. If you wait for the model to finish the entire response before showing any output, those per-token delays add up quickly!
An alternative is streaming: similar to video streaming, where you don't wait for the entire video to download before playing it, you can consume the output of an LLM as it is generated. In this recipe, we will explore how to do that with langchain_dartmouth!
Note
Many LLMs in the LangChain ecosystem support streaming, not just the ones in langchain_dartmouth! You could replace the model in this notebook with, e.g., ChatOpenAI from langchain_openai, and it would work exactly the same!
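For example, a minimal sketch of that swap might look like the following (this assumes you have an OPENAI_API_KEY set in your environment; the model name gpt-4o-mini is only an illustration):
from langchain_openai import ChatOpenAI

# Any LangChain chat model with streaming support works the same way
llm = ChatOpenAI(model="gpt-4o-mini", streaming=True)
The rest of the recipe would then run unchanged, since stream is part of LangChain's shared chat model interface.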
Importing and instantiating a model#
Just as we saw in the previous recipe, we will import a chat model and then instantiate it. This time, however, we will use the streaming parameter to tell the model that we want it to stream its output!
from langchain_dartmouth.llms import ChatDartmouth
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())
True
llm = ChatDartmouth(model_name="llama-3-2-11b-vision-instruct", streaming=True)
Streaming the output#
We could use the invoke method as we did before, but that would still require us to wait for the entire response to be generated before returning. Since we have set our model's streaming parameter to True, we can instead call the stream method. This will return a generator object. We can then iterate through this generator and print each returned chunk as it is generated:
for chunk in llm.stream("Write a haiku about Dartmouth College"):
    print(chunk.content)
Snow
-covered
Han
over
Lib
erty
Walk
's
gentle
slope
D
art
mouth
's
quiet
pride
We can see that the chunks came in one by one, because the print function breaks the line after every chunk. We can use the end parameter to avoid that:
for chunk in llm.stream("Write a haiku about Dartmouth College"):
    print(chunk.content, end="")
Hanover's quiet
Dartmouth's traditions sleep
Academy dreams
That looks better! Let’s try a longer response to show the benefit of streaming:
for chunk in llm.stream("Write five haiku about Dartmouth College"):
    print(chunk.content, end="")
Snow falls on the Green
Dartmouth's historic
charm
Winter's gentle hush
Academy's pride
Lakeside campus, serene
Knowledge's gentle flow
River's
gentle
lap
Hanover's quiet, peaceful
Evening's calm sleep
Gold and green shine bright
Dartmouth's spirit,
strong and
Pride in every step
Maple leaves unfold
Autumn's vibrant, colorful
Dartmouth's beauty shines
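To make that benefit concrete, here is a minimal sketch that measures how quickly the first chunk arrives compared to how long the full response takes (it reuses the llm object from above and only Python's built-in time module; the exact numbers will vary from run to run):
import time

start = time.perf_counter()
for i, chunk in enumerate(llm.stream("Write five haiku about Dartmouth College")):
    if i == 0:
        # The first chunk typically arrives long before the full response is done
        print(f"First chunk after {time.perf_counter() - start:.2f} s")
print(f"Full response after {time.perf_counter() - start:.2f} s")
The gap between the two numbers is the time a user would otherwise spend waiting with no output at all.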
Summary#
This recipe showed how to stream output from an LLM using the stream method. Streaming long responses makes for a better user experience and a more efficient use of time when working with an LLM interactively.
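If you also need the complete text after streaming it, one option is simply to collect the chunks as you print them. A minimal sketch (plain string concatenation, nothing specific to langchain_dartmouth) might look like this:
full_response = ""
for chunk in llm.stream("Write a haiku about Dartmouth College"):
    print(chunk.content, end="")
    # Keep building the complete text as it streams in
    full_response += chunk.content
After the loop, full_response holds the same text that was printed to the screen.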