Large Language Models#
A number of Large Language Models (LLMs) are available in langchain_dartmouth.
LLMs in this library generally come in three flavors:
Baseline completion models:
These models are trained to simply continue the given prompt by adding the next token.
Instruction-tuned chat models:
These models are built on baseline completion models, but further trained using a specific prompt format to allow a conversational back-and-forth.
Commercial third-party chat models:
Dartmouth offers limited access to various third-party commercial models, e.g., OpenAI’s GPT-4o or Anthropic’s Claude. Daily token limits per user apply.
Each of these flavors is supported by langchain_dartmouth through a separate component. You can find all available models using the list() method of the respective class, as we will see below.
Let’s explore these components! But before we get started, we need to load our Dartmouth API key and Dartmouth Chat API key from the .env file:
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())
True
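For reference, the .env file simply maps environment variable names to your keys. It might look something like this (placeholder values; DARTMOUTH_CHAT_API_KEY is the variable name used later in this recipe, while DARTMOUTH_API_KEY is an assumption, so check the library’s documentation for the exact name):
```
# .env -- placeholder values, not real keys
# (DARTMOUTH_API_KEY is an assumed variable name; DARTMOUTH_CHAT_API_KEY
# is the name referenced later in this recipe)
DARTMOUTH_API_KEY=your-dartmouth-api-key
DARTMOUTH_CHAT_API_KEY=your-dartmouth-chat-api-key
```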
Baseline Completion Models#
Baseline completion models are trained to simply continue the given prompt by adding the next token. The continued prompt is then considered the next input to the model, which extends it by another token. This continues until a specified maximum number of tokens have been added, or until a special token called a stop token is generated.
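To make this loop concrete, here is a toy sketch of completion-style generation. The lookup-table “model” below is purely hypothetical and stands in for the learned next-token predictions of a real LLM; only the control flow (predict, append, stop) mirrors the real process:
```python
# Toy sketch of the completion loop. The dictionary stands in for a real
# model's next-token prediction; the control flow is what matters here.
toy_model = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
STOP = None  # a real model emits a special stop token instead

def complete(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()  # a real tokenizer produces subword tokens
    while len(tokens) < max_tokens:
        next_token = toy_model.get(tokens[-1], STOP)
        if next_token == STOP:  # stop token generated: end of output
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(complete("the"))  # the cat sat on the cat sat on the cat
```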
A popular use-case for completion models is to generate code. Let’s try an example and have the LLM generate a function based on its signature!
All baseline completion models are available through the component DartmouthLLM in the submodule langchain_dartmouth.llms, so we first need to import that class:
from langchain_dartmouth.llms import DartmouthLLM
We can find out which models are available by using the static method list():
Note
A static method is a function that is defined on the class itself, not on an instance of the class. It’s essentially just a regular function, but tied to a class for grouping purposes. In practice, that means that we can call a static method without instantiating an object of the class first. That is why there are no parentheses after the class name when we call list() below!
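A minimal illustration of the difference, using a generic class that has nothing to do with langchain_dartmouth:
```python
class Greeter:
    @staticmethod
    def ping():       # static method: no self, no instance required
        return "pong"

    def greet(self):  # regular method: requires an instance
        return "hello"

print(Greeter.ping())     # called directly on the class: no parentheses after Greeter
print(Greeter().greet())  # regular methods need an instance first
```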
DartmouthLLM.list()
[{'name': 'llama-3-8b-instruct',
'provider': 'meta',
'display_name': 'Llama 3 8B Instruct',
'tokenizer': 'meta-llama/Meta-Llama-3-8B-Instruct',
'type': 'llm',
'capabilities': ['chat'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 8192}},
{'name': 'llama-3-1-8b-instruct',
'provider': 'meta',
'display_name': 'Llama 3.1 8B Instruct',
'tokenizer': 'meta-llama/Llama-3.1-8B-Instruct',
'type': 'llm',
'capabilities': ['chat'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 8192}},
{'name': 'llama-3-2-11b-vision-instruct',
'provider': 'meta',
'display_name': 'Llama 3.2 11B Vision Instruct',
'tokenizer': 'meta-llama/Llama-3.2-11B-Vision-Instruct',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 127999}},
{'name': 'codellama-13b-instruct-hf',
'provider': 'meta',
'display_name': 'CodeLlama 13B Instruct HF',
'tokenizer': 'meta-llama/CodeLlama-13b-Instruct-hf',
'type': 'llm',
'capabilities': ['chat'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 6144}},
{'name': 'codellama-13b-python-hf',
'provider': 'meta',
'display_name': 'CodeLlama 13B Python HF',
'tokenizer': 'meta-llama/CodeLlama-13b-Python-hf',
'type': 'llm',
'capabilities': [],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 2048}}]
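Since list() returns plain dictionaries, we can also filter the listing programmatically. For example, based on the output above, a quick heuristic (an assumption, not an official flag) for finding pure completion models is to look for entries without the 'chat' capability:
```python
# Collect model names whose capabilities do not include "chat"
# (a heuristic based on the listing above, not an official flag)
completion_models = [
    model["name"]
    for model in DartmouthLLM.list()
    if "chat" not in model["capabilities"]
]
print(completion_models)  # e.g., ['codellama-13b-python-hf']
```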
We can now instantiate a specific LLM by specifying its name as it appears in the listing. Since the model will generate the continuation of our prompt, it usually makes sense to repeat our prompt in the response, which we can request by setting the parameter return_full_text to True:
llm = DartmouthLLM(model_name="codellama-13b-python-hf", return_full_text=True)
We can now send a prompt to the model and receive its response by using the invoke() method:
response = llm.invoke("def remove_digits(s: str) -> str:")
print(response)
def remove_digits(s: str) -> str:
    """
    Write a function that removes all digits from a given string.
    """
    s = "".join([char for char in s if char not in "0123456789"])
    return s


def run() -> None:
    print(remove_digits("ab12c5"))


if __name__ == "__main__":
    run()
Since they are only trained to continue the given prompt, completion models are not great at responding to chat-like prompts:
response = llm.invoke("How can I define a class in Python?")
print(response)
How can I define a class in Python?
A class defines the way the object is represented. It can have attributes and methods to manipulate the data in the object. You can also override methods from the parent class in the child class.
The syntax to create a class is: class class_name: pass.
By convention, the first character of class names is uppercase, and all other characters are lowercase.
A class is a blueprint for a particular object. An object is an instantiation of a class.
An object is defined as an instance of a class. It is a particular instance of the class.
Python allows you to define your own classes. These are called user defined classes.
You can override the methods from the parent class in the child class.
An object can also be defined as a class instance. It is a particular instance of the class.
The syntax for creating a class is: class class_name: pass.
A class definition can be split into three parts: the class name, the method definitions, and the class body. The class name is the name of the class, which should be in CamelCase (like a variable). The method definitions are where you define the methods and their arguments. The class body is the indented code block where you define the class variables and methods.
A class is a blueprint for a particular object.
An object is an instance of a class. It is a particular instance of the class.
Python allows you to define your own classes.
A class definition can be split into three parts: the class name, the method definitions, and the class body.
You can override the methods from the parent class in the child class.
An object can also be defined as a class instance. It is a particular instance of the class.
You can use the super() method to call the parent class constructor.
You can use the super() method to call the parent class constructor. You can use the super() method to call the parent class constructor.
You can use the super() method to call the parent class constructor. You can use the super() method to call the parent class constructor.
You can use the super() method to call the parent class constructor. You can use the super() method to call the parent class constructor. You can use the super() method to call the parent class constructor.
You can use the super() method to call the parent class constructor. You can use the super() method to call the parent class constructor.
As we can see, the model just continues the prompt in a way that is similar to what it has seen during its training. If we want to use it in a conversational way, we need to use an instruction-tuned chat model.
Instruction-Tuned Chat Models#
Instruction-tuned chat models are trained to follow the instructions given in a prompt. These models can be used in conversational scenarios, where the user asks the model questions and the model replies with answers. The model will not just continue the prompt but also take into account the context of the conversation preceding it. To achieve this, baseline completion models are fine-tuned (i.e., further trained) on conversational text material that is formatted following a particular template. That is why we often see multiple variants of an LLM: the base model and the instruct version (see, e.g., CodeLlama).
Let’s see what happens if we ask an instruction-tuned model our question from the previous section:
llm = DartmouthLLM(model_name="codellama-13b-instruct-hf")
response = llm.invoke("How can I define a class in Python?")
print(response)
\begin{code}
class Person:
    def __init__(self, name):
        self.name = name
\end{code}
Answer: In Python, classes are defined using the [`class`](https://docs.python.org/3/reference/compound_stmts.html#class) statement. For example:
\begin{code}
class Person:
    def __init__(self, name):
        self.name = name
\end{code}
The `__init__` method is the constructor. It will be called automatically whenever an instance of the class is created. For example:
\begin{code}
p = Person("Alice")
\end{code}
The above line is the same as:
\begin{code}
p = Person()
p.__init__("Alice")
\end{code}
The `__init__` method can be omitted if there is nothing to initialize.
\section{See also}
\begin{itemize}
\item [Python tutorial: Classes](https://docs.python.org/3/tutorial/classes.html)
\item [Python tutorial: Defining Functions](https://docs.python.org/3/tutorial/controlflow.html#defining-functions)
\end{itemize}
Answer: \begin{code}
class Person:
    def __init__(self, name):
        self.name = name

    def say(self, phrase):
        print(self.name + ' says ' + phrase)
\end{code}
You can then create a new instance of the class and call its methods:
\begin{code}
person = Person('John')
person.say('Hello world')
\end{code}
Answer: The best way to learn is by doing, so I created a full code example with some additional details on how to use classes.
\begin{code}
class Person:
    # constructor
    def __init__(self, name):
        self.name = name

    # instance method
    def say(self, phrase):
        print(self.name + ' says ' + phrase)

    # instance method
    def set_name(self, name):
        self
Well, that does not seem very helpful… What went wrong here?
The problem is that the prompt we use during inference (when we invoke the model) needs to follow the same format that was used during the instruction-tuning. This format is not the same for every model! Let’s try our prompt again using CodeLlama’s instruction format:
response = llm.invoke("<s>[INST] How can I define a class in Python? [/INST] ")
print(response)
In Python, you can define a class using the `class` keyword followed by the name of the class, and the class body. The class body consists of methods and attributes that are associated with the class.
Here is an example of a simple class definition:
```
class MyClass:
    pass
```
This defines a class called `MyClass` with no methods or attributes.
You can also define methods and attributes inside the class body, like this:
```
class MyClass:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, {self.name}!")

    def farewell(self):
        print(f"Goodbye, {self.name}!")
```
This defines a class called `MyClass` with two methods, `__init__` and `greet`, and one attribute, `name`. The `__init__` method is called when an instance of the class is created, and it assigns the `name` attribute to the name passed in as an argument. The `greet` method simply prints a greeting message to the console. The `farewell` method is not called in this example.
You can create an instance of the class and call its methods like this:
```
my_object = MyClass("Alice")
my_object.greet()
# Output: Hello, Alice!
my_object.farewell()
# Output: Goodbye, Alice!
```
Note that the `__init__` method is called automatically when the class is instantiated, and it sets the `name` attribute to the value passed in as an argument.
You can also define a class with a constructor function and a destructor function in Python, like this:
```
class MyClass:
    def __init__(self, name):
        self.name = name

    def __del__(self):
        print(f"{self.name} is being deleted!")

    def greet(self):
        print(f"Hello, {self.name}!")

    def farewell(self):
        print(f"Goodbye, {self.name}!")
```
This defines a class called `MyClass` with a constructor method called `__init__` and a destructor method
That looks a lot better!
Note
You may notice that the last sentence gets cut off. This is due to the default value for the maximum number of generated tokens, which may be too low. You can set a higher limit when you instantiate the DartmouthLLM object. Check the API reference for more information.
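For example, a sketch along these lines should raise the limit. Note that the parameter name max_new_tokens is an assumption based on the text-generation-inference backend; consult the API reference for the exact name:
```python
# Allow longer responses at instantiation time. The parameter name
# max_new_tokens is assumed (common for text-generation-inference
# backends); check the API reference for the exact name.
llm = DartmouthLLM(
    model_name="codellama-13b-instruct-hf",
    max_new_tokens=1024,
)
```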
Managing the prompt format can quickly get tedious, especially if you want to switch between different models. Fortunately, the ChatDartmouth component handles the prompt formatting “under the hood”, and we can just pass the actual message when we invoke it:
from langchain_dartmouth.llms import ChatDartmouth
llm = ChatDartmouth(model_name="llama-3-2-11b-vision-instruct")
response = llm.invoke("How can I define a class in Python?")
print(response.content)
**Defining a Class in Python**
=====================================
In Python, you can define a class using the `class` keyword followed by the name of the class. Here is a basic example of how to define a class:
```python
class MyClass:
    pass
```
In this example, `MyClass` is the name of the class and the `pass` statement is a placeholder that does nothing.
**Class Structure**
--------------------
A class typically consists of the following components:
* **Class Definition**: The `class` keyword followed by the name of the class.
* **Class Body**: The code that defines the class, including methods, attributes, and other functionality.
* **Constructor**: A special method called `__init__` that is called when an object is created from the class.
* **Methods**: Functions that belong to the class and can be called on objects of the class.
* **Attributes**: Data members of the class that are stored in the object.
**Example Class**
-----------------
Here is an example of a class named `Rectangle` that has two attributes (`width` and `height`) and two methods (`area` and `perimeter`):
```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def perimeter(self):
        return 2 * (self.width + self.height)
```
**Creating an Object from the Class**
--------------------------------------
To create an object from the class, you can use the `()` operator:
```python
rectangle = Rectangle(4, 5)
```
This creates a new object `rectangle` of type `Rectangle` with `width` 4 and `height` 5.
**Accessing Attributes and Methods**
--------------------------------------
You can access the attributes and methods of an object using dot notation:
```python
print(rectangle.width) # Accessing the width attribute
print(rectangle.area()) # Calling the area method
```
**Inheritance**
----------------
In Python, a class can inherit attributes and methods from another class using the `class Child(ClassParent)` syntax:
```python
class Shape:
    def area(self):
        pass

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height
```
In this example
That looks a lot better!
Note
ChatDartmouth returns more than just a raw string: it returns an AIMessage object, which you can learn more about in LangChain’s API reference. We will see more of these message objects in the recipe on prompts!
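For a quick look at what the message object carries beyond the text (these are standard LangChain message fields):
```python
# The AIMessage bundles the generated text with metadata
print(type(response))              # langchain_core.messages.ai.AIMessage
print(response.content[:60])       # the generated text itself
print(response.response_metadata)  # provider metadata, e.g., token usage
```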
By the way, just like with DartmouthLLM, we can get a list of the available chat models using the static method list():
ChatDartmouth.list()
[{'name': 'llama-3-8b-instruct',
'provider': 'meta',
'display_name': 'Llama 3 8B Instruct',
'tokenizer': 'meta-llama/Meta-Llama-3-8B-Instruct',
'type': 'llm',
'capabilities': ['chat'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 8192}},
{'name': 'llama-3-1-8b-instruct',
'provider': 'meta',
'display_name': 'Llama 3.1 8B Instruct',
'tokenizer': 'meta-llama/Llama-3.1-8B-Instruct',
'type': 'llm',
'capabilities': ['chat'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 8192}},
{'name': 'llama-3-2-11b-vision-instruct',
'provider': 'meta',
'display_name': 'Llama 3.2 11B Vision Instruct',
'tokenizer': 'meta-llama/Llama-3.2-11B-Vision-Instruct',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 127999}},
{'name': 'codellama-13b-instruct-hf',
'provider': 'meta',
'display_name': 'CodeLlama 13B Instruct HF',
'tokenizer': 'meta-llama/CodeLlama-13b-Instruct-hf',
'type': 'llm',
'capabilities': ['chat'],
'server': 'text-generation-inference',
'parameters': {'max_input_tokens': 6144}}]
Third-Party Chat Models#
In addition to the locally deployed, open-source models, Dartmouth also offers access to various third-party chat models. These models are available through the ChatDartmouthCloud class.
Note
Remember: You need a separate API key for ChatDartmouthCloud. Follow the instructions to get yours, and then store it in an environment variable called DARTMOUTH_CHAT_API_KEY.
from langchain_dartmouth.llms import ChatDartmouthCloud
As with the other classes, we can list the available models using the static method list():
ChatDartmouthCloud.list()
[{'name': 'anthropic.claude-3-5-haiku-20241022',
'provider': 'anthropic',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'anthropic.claude-3-7-sonnet-20250219',
'provider': 'anthropic',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'anthropic.claude-sonnet-4-20250514',
'provider': 'anthropic',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'openai.gpt-4.1-2025-04-14',
'provider': 'openai',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'openai.gpt-4.1-mini-2025-04-14',
'provider': 'openai',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'openai.o4-mini-2025-04-16',
'provider': 'openai',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'google_genai.gemini-2.0-flash-001',
'provider': 'google_genai',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'google_genai.gemini-1.5-pro-002',
'provider': 'google_genai',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'mistral.mistral-medium-2505',
'provider': 'mistral',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}},
{'name': 'mistral.pixtral-large-2411',
'provider': 'mistral',
'type': 'llm',
'capabilities': ['chat', 'vision'],
'server': 'dartmouth-chat',
'parameters': {}}]
Using the class works just like with the other two varieties:
llm = ChatDartmouthCloud(model_name="openai.gpt-4.1-mini-2025-04-14")
response = llm.invoke("Who are you?")
print(response.content)
Hello! I'm ChatGPT, an AI language model created by OpenAI. I'm here to help answer your questions, provide information, assist with writing, and much more. How can I assist you today?
Warning
The models available through ChatDartmouthCloud are commercial, third-party models. This means that your data will be sent to the model provider to be processed. If you have privacy concerns, please reach out to Research Computing to obtain a copy of the terms of use for the model you are interested in.
Note
Dartmouth pays for a significant daily token allotment per user, but eventually you may hit a limit. If you need a larger volume of tokens for your project, please reach out!
Summary#
In this recipe, we have learned how to use the DartmouthLLM, ChatDartmouth, and ChatDartmouthCloud components. Which one to use depends on whether you are working with a baseline completion model, a local instruction-tuned chat model, or a cloud-hosted third-party chat model:
Baseline completion models can only be used with DartmouthLLM. Local instruction-tuned chat models should be used with ChatDartmouth so the correct prompt format is applied automatically. For commercial third-party chat models, use ChatDartmouthCloud.
You can also use DartmouthLLM with an instruction-tuned model if you want full control over the exact string that is sent to the model. In that case, however, you might see unexpected responses if the prompt format is not correct.
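As a quick recap, here are all three components side by side. The model names are taken from the listings above and may change as deployments are updated:
```python
from langchain_dartmouth.llms import DartmouthLLM, ChatDartmouth, ChatDartmouthCloud

# Baseline completion model: continues the raw prompt string
completion_llm = DartmouthLLM(model_name="codellama-13b-python-hf")

# Local instruction-tuned chat model: prompt formatting handled automatically
chat_llm = ChatDartmouth(model_name="llama-3-1-8b-instruct")

# Commercial third-party chat model: requires DARTMOUTH_CHAT_API_KEY
cloud_llm = ChatDartmouthCloud(model_name="openai.gpt-4.1-mini-2025-04-14")
```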