
prompt

gen(user=None, system='', messages=None, append=True, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, loud=True, **kwargs)

Generate a response from Claude. Returns the text content (str) of Claude's response. If you want the Message object instead, use gen_msg.

Parameters:

  • user (Optional[str]): The user's message content. Defaults to None.
  • system (str): The system message for Claude. Defaults to "".
  • messages (Optional[List[MessageParam]]): A list of anthropic.types.MessageParam. Defaults to None.
  • append (bool): Whether to append the generated response (as an anthropic.types.MessageParam) to messages. Defaults to True.
  • model (str): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
  • api_key (Optional[str]): The API key to use for authentication. Defaults to None (if None, uses os.environ["ANTHROPIC_API_KEY"]).
  • max_tokens (int): The maximum number of tokens to generate in the response. Defaults to 1024.
  • temperature (float): The temperature value for controlling the randomness of the generated response. Defaults to 1.0.
  • loud (bool): Whether to print verbose output. Defaults to True.
  • **kwargs (Any): Additional keyword arguments to pass to the underlying generation function.

Raises:

  • ValueError: If no prompt is provided (both user and messages are None).
  • ValueError: If the last message in messages is from the user and user is also provided.
  • ValueError: If Claude does not provide a response.

Returns:

  • str: The text content of Claude's generated response.

Notes
  • If messages is None, the user parameter must be provided as a string.
  • If user is provided and messages is not None, the user message is appended to the messages list.
  • The function raises a ValueError if the roles in the messages list are not alternating (e.g., user, assistant, user).
  • If append is True and the last message in messages is from the assistant, the generated response is appended to the existing assistant's content.
  • The function uses the gen_msg function internally to generate Claude's response.
Example

>>> user_message = "Hello, Claude!"
>>> response = gen(user=user_message)
>>> print(response)
"Hello! How can I assist you today?"
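The example above covers single-turn use. For the multi-turn behavior the Notes describe, here is a minimal sketch (assuming the package is installed and gen is importable from alana.prompt, per the source path below, and that ANTHROPIC_API_KEY is set):

from alana.prompt import gen

messages = []  # conversation history; gen mutates this list in place
gen(user="Name three prime numbers.", messages=messages)
# messages now holds the user turn plus the appended assistant reply
gen(user="Now double each of them.", messages=messages)
# Passing `user` while the last entry is already a user turn raises ValueError.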

Source code in alana/prompt.py
def gen(user: Optional[str] = None, system: str = "", messages: Optional[List[MessageParam]] = None, append: bool = True, model: str = globals.DEFAULT_MODEL, api_key: Optional[str] = None, max_tokens: int = 1024, temperature: float = 1.0, loud: bool = True, **kwargs: Any) -> str:
    """Generate a response from Claude. Returns the text content (`str`) of Claude's response. If you want the Message object instead, use `gen_msg`.

    Args:
        user (Optional[str], optional): The user's message content. Defaults to None.
        system (str, optional): The system message for Claude. Defaults to "".
        messages (Optional[List[MessageParam]], optional): A list of `anthropic.types.MessageParam`. Defaults to None.
        append (bool, optional): Whether to append the generated response (as an `anthropic.types.MessageParam`) to `messages`. Defaults to True.
        model (str, optional): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
        api_key (Optional[str], optional): The API key to use for authentication. Defaults to None (if None, uses os.environ["ANTHROPIC_API_KEY"]).
        max_tokens (int, optional): The maximum number of tokens to generate in the response. Defaults to 1024.
        temperature (float, optional): The temperature value for controlling the randomness of the generated response.
        loud (bool, optional): Whether to print verbose output. Defaults to True.
        **kwargs: Additional keyword arguments to pass to the underlying generation function.

    Raises:
        ValueError: If no prompt is provided (both `user` and `messages` are None).
        ValueError: If the last message in `messages` is from the user and `user` is also provided.
        ValueError: If Claude does not provide a response.

    Returns:
        str: The text content of Claude's generated response.

    Notes:
        - If `messages` is None, the `user` parameter must be provided as a string.
        - If `user` is provided and `messages` is not None, the `user` message is appended to the `messages` list.
        - The function raises a ValueError if the roles in the `messages` list are not alternating (e.g., user, assistant, user).
        - If `append` is True and the last message in `messages` is from the assistant, the generated response is appended to the existing assistant's content.
        - The function uses the `gen_msg` function internally to generate Claude's response.

    Example:
        >>> user_message = "Hello, Claude!"
        >>> response = gen(user=user_message)
        >>> print(response)
        "Hello! How can I assist you today?"
    """
    if user is None and messages is None:
        raise ValueError("No prompt provided! `user` and `messages` are both None.")

    if messages is None:
        assert user is not None  # To be stricter, type(user) == str
        messages=[
            MessageParam(role="user", content=user),
        ]
    elif user is not None:
        assert messages is not None  # To be stricter, messages is List[MessageParam]
        if len(messages) >= 1 and messages[-1]["role"] == "user":
            raise ValueError("`gen`: Bad request! Roles must be alternating. Last message in `messages` is from user, but `user` provided.")
        messages.append(
            MessageParam(role="user", content=user)  # TODO: Check that non-alternating roles are ok (e.g. user, assistant, assistant)
        )

    output: Message = gen_msg(system=system, messages=messages, model=model, api_key=api_key, max_tokens=max_tokens, loud=loud, temperature=temperature, **kwargs)

    if len(output.content) == 0:
        raise ValueError(f"Claude did not provide a response. Stop reason: {output.stop_reason}. Full API response: {output}")

    if append:
        if messages[-1]["role"] == "assistant":  # NOTE: Anthropic API does not allow non-alternating roles (raises Err400). Let's enforce this.
            # NOTE: messages[-1]["content"] is assistant output, so should be `str`, since Anthropic API (as of Apr 16 2024) only supports text output!
            existing_assistant_content: str = messages[-1]["content"] # type: ignore
            assistant_content: str = existing_assistant_content + output.content[0].text
            messages.pop()
        else:
            assistant_content: str = output.content[0].text

        messages.append(
            MessageParam(
                role="assistant",
                content=assistant_content
            )
        )
    return output.content[0].text

gen_examples(instruction, n_examples=5, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, **kwargs)

Generate a formatted string containing few-shot examples for a given natural language instruction. Uses gen_examples_list.

Parameters:

  • instruction (str): The natural language instruction for which to generate examples. Required.
  • n_examples (int): The number of examples to generate. Defaults to 5.
  • model (str): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
  • api_key (Optional[str]): The API key to use for authentication. Defaults to None.
  • max_tokens (int): The maximum number of tokens to generate in the response. Defaults to 1024.
  • temperature (float): The temperature value for controlling the randomness of the generated response. Defaults to 1.0.
  • **kwargs (Any): Additional keyword arguments to pass to the gen_examples_list function (passed to the Anthropic API).

Returns:

  • str: A formatted string containing the generated few-shot examples, enclosed in XML-like tags.

Notes
  • The function calls the gen_examples_list function to generate a list of few-shot examples based on the provided instruction, n_examples, model, api_key, max_tokens, temperature, and any additional keyword arguments.
  • The generated examples are then formatted into a string, with each example enclosed in <example/> tags.
  • The formatted string starts with an opening <examples> tag and ends with a closing </examples> tag (note plural).
Example

instruction = "Write a short story about a magical adventure." examples_str = gen_examples(instruction, n_examples=3) print(examples_str) Once upon a time, in a land far away, there was a young girl named Lily who discovered a mysterious portal in her backyard... In a world where magic was a part of everyday life, a brave knight named Eldric embarked on a quest to retrieve a powerful artifact... Deep in the enchanted forest, a group of talking animals gathered around a wise old oak tree to discuss a pressing matter...

Source code in alana/prompt.py
def gen_examples(instruction: str, n_examples: int = 5, model: str = globals.DEFAULT_MODEL, api_key: Optional[str] = None, max_tokens: int = 1024, temperature: float = 1.0, **kwargs: Any) -> str:
    """Generate a formatted string containing few-shot examples for a given natural language instruction. Uses `gen_examples_list`.

    Args:
        instruction (str): The natural language instruction for which to generate examples.
        n_examples (int, optional): The number of examples to generate. Defaults to 5.
        model (str, optional): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
        api_key (Optional[str], optional): The API key to use for authentication. Defaults to None.
        max_tokens (int, optional): The maximum number of tokens to generate in the response. Defaults to 1024.
        temperature (float, optional): The temperature value for controlling the randomness of the generated response.
        **kwargs: Additional keyword arguments to pass to the `gen_examples_list` function (passed to Anthropic API).

    Returns:
        str: A formatted string containing the generated few-shot examples, enclosed in XML-like tags.

    Notes:
        - The function calls the `gen_examples_list` function to generate a list of few-shot examples based on the provided `instruction`, `n_examples`, `model`, `api_key`, `max_tokens`, `temperature`, and any additional keyword arguments.
        - The generated examples are then formatted into a string, with each example enclosed in `<example/>` tags.
        - The formatted string starts with an opening `<examples>` tag and ends with a closing `</examples>` tag (note plural).

    Example:
        >>> instruction = "Write a short story about a magical adventure."
        >>> examples_str = gen_examples(instruction, n_examples=3)
        >>> print(examples_str)
        <examples>
        <example>Once upon a time, in a land far away, there was a young girl named Lily who discovered a mysterious portal in her backyard...</example>
        <example>In a world where magic was a part of everyday life, a brave knight named Eldric embarked on a quest to retrieve a powerful artifact...</example>
        <example>Deep in the enchanted forest, a group of talking animals gathered around a wise old oak tree to discuss a pressing matter...</example>
        </examples>
    """
    examples: List[str] = gen_examples_list(instruction=instruction, n_examples=n_examples, model=model, api_key=api_key, max_tokens=max_tokens, temperature=temperature, **kwargs)
    formatted_examples: str = "\n<examples>\n<example>" + '</example>\n<example>'.join(examples) + "</example>\n</examples>"
    return formatted_examples

gen_examples_list(instruction, n_examples=5, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, **kwargs)

Uses Claude to generate a Python list of few-shot examples for a given natural language instruction.

Parameters:

  • instruction (str): The natural language instruction for which to generate examples. Required.
  • n_examples (int): The number of examples to ask Claude to generate. Defaults to 5.
  • model (str): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
  • api_key (Optional[str]): The API key to use for authentication. Defaults to None.
  • max_tokens (int): The maximum number of tokens to generate in the response. Defaults to 1024.
  • temperature (float): The temperature value for controlling the randomness of the generated response. Defaults to 1.0.
  • **kwargs (Any): Additional keyword arguments to pass to the gen function (gen passes kwargs to the Anthropic API).

Returns:

  • List[str]: A Python list of generated few-shot examples.

Notes
  • The function constructs a system message using the globals.SYSTEM["few_shot"] template and the provided n_examples.
  • The function constructs a user message using the globals.USER["few_shot"] template and the provided instruction.
  • If n_examples is less than 1, the function prints a warning message using the red function but continues execution.
  • The function calls the gen function to generate the model's output based on the constructed system and user messages, along with the specified model, api_key, max_tokens, temperature, and any additional keyword arguments.
  • The generated model output is expected to be in XML format, with each example enclosed in <example/> tags.
  • The function uses the get_xml function to extract the content within the <example/> tags and returns it as a Python list of strings.
Example

instruction = "Write a short story about a magical adventure." examples = gen_examples_list(instruction, n_examples=3) print(examples) [ "Once upon a time, in a land far away, there was a young girl named Lily who discovered a mysterious portal in her backyard...", "In a world where magic was a part of everyday life, a brave knight named Eldric embarked on a quest to retrieve a powerful artifact...", "Deep in the enchanted forest, a group of talking animals gathered around a wise old oak tree to discuss a pressing matter..." ]

Source code in alana/prompt.py
def gen_examples_list(instruction: str, n_examples: int = 5, model: str = globals.DEFAULT_MODEL, api_key: Optional[str] = None, max_tokens: int = 1024, temperature: float = 1.0, **kwargs: Any) -> List[str]:
    """Uses Claude to generate a Python list of few-shot examples for a given natural language instruction.

    Args:
        instruction (str): The natural language instruction for which to generate examples.
        n_examples (int, optional): The number of examples to ask Claude to generate. Defaults to 5.
        model (str, optional): The name of the model to use. Defaults to `globals.DEFAULT_MODEL`.
        api_key (Optional[str], optional): The API key to use for authentication. Defaults to None.
        max_tokens (int, optional): The maximum number of tokens to generate in the response. Defaults to 1024.
        temperature (float, optional): The temperature value for controlling the randomness of the generated response.
        **kwargs: Additional keyword arguments to pass to the `gen` function (`gen` passes kwargs to the Anthropic API).

    Returns:
        List[str]: A Python list of generated few-shot examples.

    Notes:
        - The function constructs a system message using the `globals.SYSTEM["few_shot"]` template and the provided `n_examples`.
        - The function constructs a user message using the `globals.USER["few_shot"]` template and the provided `instruction`.
        - If `n_examples` is less than 1, the function prints a warning message using the `red` function but continues execution.
        - The function calls the `gen` function to generate the model's output based on the constructed system and user messages, along with the specified `model`, `api_key`, `max_tokens`, `temperature`, and any additional keyword arguments.
        - The generated model output is expected to be in XML format, with each example enclosed in `<example/>` tags.
        - The function uses the `get_xml` function to extract the content within the `<example/>` tags and returns it as a Python list of strings.

    Example:
        >>> instruction = "Write a short story about a magical adventure."
        >>> examples = gen_examples_list(instruction, n_examples=3)
        >>> print(examples)
        [
            "Once upon a time, in a land far away, there was a young girl named Lily who discovered a mysterious portal in her backyard...",
            "In a world where magic was a part of everyday life, a brave knight named Eldric embarked on a quest to retrieve a powerful artifact...",
            "Deep in the enchanted forest, a group of talking animals gathered around a wise old oak tree to discuss a pressing matter..."
        ]
    """
    system: str = globals.SYSTEM["few_shot"].format(n_examples=n_examples)
    user: str = globals.USER["few_shot"].format(instruction=instruction)
    if n_examples < 1:
        red(var="Too few examples requested! Trying anyway...")

    model_output: str = gen(user=user, system=system, model=model, api_key=api_key, max_tokens=max_tokens, temperature=temperature, **kwargs)
    return get_xml(tag='example', content=model_output)

gen_msg(messages, system='', model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, loud=True, **kwargs)

Generate a response from Claude using the Anthropic API.

Parameters:

  • messages (List[MessageParam]): A list of anthropic.types.MessageParam objects representing the conversation history. Required.
  • system (str): The system message to set the context for Claude. Defaults to "".
  • model (str): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
  • api_key (Optional[str]): The API key to use for authentication. Defaults to None.
  • max_tokens (int): The maximum number of tokens to generate in the response. Defaults to 1024.
  • temperature (float): The temperature value for controlling the randomness of the generated response. Defaults to 1.0.
  • loud (bool): Whether to print verbose output. Defaults to True.
  • **kwargs (Any): Additional keyword arguments to pass to the Anthropic API.

Returns:

  • Message: The Message object produced by the Anthropic API, containing the generated response.

Notes
  • If the model parameter is not recognized, the function reverts to using the default model specified in globals.DEFAULT_MODEL.
  • If api_key is None, the function attempts to retrieve the API key from the environment variable "ANTHROPIC_API_KEY".
  • The function creates an instance of the Anthropic client using the provided api_key.
  • Stream not supported yet! If the stream keyword argument is provided, the function disables streaming and sets stream to False. (TODO: Support stream)
  • The function uses the messages.create method of the Anthropic client to generate Claude's response.
  • If loud is True, the generated message is printed using the yellow function for verbose output.
Example

>>> messages = [
...     MessageParam(role="user", content="What is the capital of France?")
... ]
>>> response = gen_msg(messages, system="You are a helpful assistant.")
>>> print(response.content[0].text)
The capital of France is Paris.
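Unlike gen, the return value here is the full anthropic.types.Message, so the stop reason and token usage are also available. A minimal sketch (assuming ANTHROPIC_API_KEY is set in the environment):

from anthropic.types import MessageParam
from alana.prompt import gen_msg

history = [MessageParam(role="user", content="Give me a haiku about rain.")]
msg = gen_msg(messages=history, loud=False)
print(msg.content[0].text)  # the generated text
print(msg.stop_reason)      # e.g. "end_turn"
print(msg.usage)            # input/output token counts from the API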

Source code in alana/prompt.py
def gen_msg(messages: List[MessageParam], system: str = "", model: str = globals.DEFAULT_MODEL, api_key: Optional[str] = None, max_tokens: int = 1024, temperature: float = 1.0, loud: bool = True, **kwargs: Any) -> Message:
    """Generate a response from Claude using the Anthropic API.

    Args:
        messages (List[MessageParam]): A list of `anthropic.types.MessageParam`s representing the conversation history.
        system (str, optional): The system message to set the context for Claude. Defaults to "".
        model (str, optional): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
        api_key (Optional[str], optional): The API key to use for authentication. Defaults to None.
        max_tokens (int, optional): The maximum number of tokens to generate in the response. Defaults to 1024.
        temperature (float, optional): The temperature value for controlling the randomness of the generated response.
        loud (bool, optional): Whether to print verbose output. Defaults to True.
        **kwargs: Additional keyword arguments to pass to the Anthropic API.

    Returns:
        Message: The Message object produced by the Anthropic API, containing the generated response.

    Notes:
        - If the `model` parameter is not recognized, the function reverts to using the default model specified in `globals.DEFAULT_MODEL`.
        - If `api_key` is None, the function attempts to retrieve the API key from the environment variable "ANTHROPIC_API_KEY".
        - The function creates an instance of the Anthropic client using the provided `api_key`.
        - Stream not supported yet! If the `stream` keyword argument is provided, the function disables streaming and sets `stream` to False. (TODO: Support stream)
        - The function uses the `messages.create` method of the Anthropic client to generate Claude's response.
        - If `loud` is True, the generated message is printed using the `yellow` function for verbose output.

    Example:
        >>> messages = [
        ...     MessageParam(role="user", content="What is the capital of France?")
        ... ]
        >>> response = gen_msg(messages, system="You are a helpful assistant.")
        >>> print(response.content[0].text)
        The capital of France is Paris.
    """
    backend: str = globals.MODELS[globals.DEFAULT_MODEL]
    if model in globals.MODELS:
        backend = globals.MODELS[model]
    else:
        red(var=f"gen() -- Caution! model string not recognized; reverting to {globals.DEFAULT_MODEL=}.") # TODO: C'mon we can do better error logging than this

    if api_key is None:
        api_key = os.environ.get("ANTHROPIC_API_KEY")
    client = Anthropic(
        api_key=api_key,
    )

    if 'stream' in kwargs:
        red(var="Streaming not supported! Disabling...")
        kwargs['stream'] = False

    message: Message = client.messages.create(  # TODO: Enable streaming support
        max_tokens=max_tokens,
        messages=messages,
        system=system,
        model=backend,
        temperature=temperature,
        **kwargs
    )
    if loud:
        yellow(var=message)

    return message

gen_prompt(instruction, messages=None, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, **kwargs)

Meta-prompter! Generate a prompt given an arbitrary instruction.

Parameters:

  • instruction (str): The arbitrary instruction for which to generate a prompt. Required.
  • messages (Optional[List[MessageParam]]): Experimental! A list that will receive the two-turn prompt-generation thread. Strongly recommended to be empty. Defaults to None.
  • model (str): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
  • api_key (Optional[str]): The API key to use for authentication. Defaults to None.
  • max_tokens (int): The maximum number of tokens to generate in the response. Defaults to 1024.
  • temperature (float): The temperature value for controlling the randomness of the generated response. Defaults to 1.0.
  • **kwargs (Any): Additional keyword arguments to pass to the gen function.

Returns:

  • Dict[Literal["system", "user", "full"], Union[str, List]]: A dictionary containing the generated prompts.
    - "system" (Union[str, List[str]]): The generated system prompt(s).
    - "user" (Union[str, List[str]]): The generated user prompt(s).
    - "full" (str): The full generated output, including both system and user prompts.

Notes
  • The function constructs a meta-system prompt using the globals.SYSTEM["gen_prompt"] template.
  • The function constructs a meta-prompt using the globals.USER["gen_prompt"] template and the provided instruction.
  • The function calls the gen function to generate the full output based on the meta-system prompt, meta-prompt, model, api_key, max_tokens, temperature, and any additional keyword arguments (which are passed to the Anthropic API).
  • The function uses the get_xml function to extract the content within the <system_prompt/> and <user_prompt/> tags from the full output.
  • The function returns a dictionary containing the generated system prompt(s), user prompt(s), and the full output.
  • Things can get janky if the model tries to provide multiple system prompts or multiple user prompts. I make some wild guess about what you might want to get in that case (right now, it would return the first system prompt, but all the user prompts in a list).
Example

instruction = "Write a story about a robot learning to love." prompts = gen_prompt(instruction) print(prompts["system"]) You are a creative story writer. Write a short story based on the given prompt, focusing on character development and emotional depth. print(prompts["user"]) Write a story about a robot learning to love. print(prompts["full"]) You are a creative story writer. Write a short story based on the given prompt, focusing on character development and emotional depth.

Write a story about a robot learning to love.
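The generated prompts can be fed straight back into gen. A sketch that also guards against the multiple-prompt case described in the Notes (the isinstance handling is hypothetical glue code, not part of the library):

from alana.prompt import gen, gen_prompt

prompts = gen_prompt("Write a limerick about type checkers.")
# "system" and "user" may each be a str or a list of str (see Notes above)
system = prompts["system"] if isinstance(prompts["system"], str) else prompts["system"][0]
user = prompts["user"] if isinstance(prompts["user"], str) else prompts["user"][0]
limerick = gen(user=user, system=system)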

Source code in alana/prompt.py
def gen_prompt(instruction: str, messages: Optional[List[MessageParam]] = None, model: str = globals.DEFAULT_MODEL, api_key: Optional[str] = None, max_tokens: int = 1024, temperature: float = 1.0, **kwargs: Any) -> Dict[Literal["system", "user", "full"], Union[str, List]]:
    """Meta-prompter! Generate a prompt given an arbitrary instruction.

    Args:
        instruction (str): The arbitrary instruction for which to generate a prompt.
        messages (Optional[List[MessageParam]]): !!!!EXPERIMENTAL!!!! A list wherein to receive a 2-turn prompt generation thread! STRONGLY RECOMMEND TO BE EMPTY.
        model (str, optional): The name of the model to use. Defaults to globals.DEFAULT_MODEL.
        api_key (Optional[str], optional): The API key to use for authentication. Defaults to None.
        max_tokens (int, optional): The maximum number of tokens to generate in the response. Defaults to 1024.
        temperature (float, optional): The temperature value for controlling the randomness of the generated response.
        **kwargs: Additional keyword arguments to pass to the `gen` function.

    Returns:
        Dict[Literal["system", "user", "full"], Union[str, List]]: A dictionary containing the generated prompts.
            - "system" (Union[str, List[str]]): The generated system prompt(s).
            - "user" (Union[str, List[str]]): The generated user prompt(s).
            - "full" (str): The full generated output, including both system and user prompts.

    Notes:
        - The function constructs a meta-system prompt using the `globals.SYSTEM["gen_prompt"]` template.
        - The function constructs a meta-prompt using the `globals.USER["gen_prompt"]` template and the provided `instruction`.
        - The function calls the `gen` function to generate the full output based on the meta-system prompt, meta-prompt, `model`, `api_key`, `max_tokens`, `temperature`, and any additional keyword arguments (which are passed to the Anthropic API).
        - The function uses the `get_xml` function to extract the content within the `<system_prompt/>` and `<user_prompt/>` tags from the full output.
        - The function returns a dictionary containing the generated system prompt(s), user prompt(s), and the full output.
        - Things can get janky if the model tries to provide multiple system prompts or multiple user prompts. I make some wild guess about what you might want to get in that case (right now, it would return the first system prompt, but all the user prompts in a list).

    Example:
        >>> instruction = "Write a story about a robot learning to love."
        >>> prompts = gen_prompt(instruction)
        >>> print(prompts["system"])
        You are a creative story writer. Write a short story based on the given prompt, focusing on character development and emotional depth.
        >>> print(prompts["user"])
        Write a story about a robot learning to love.
        >>> print(prompts["full"])
        <system_prompt>
        You are a creative story writer. Write a short story based on the given prompt, focusing on character development and emotional depth.
        </system_prompt>

        <user_prompt>
        Write a story about a robot learning to love.
        </user_prompt>
    """
    meta_system_prompt: str = globals.SYSTEM["gen_prompt"]
    meta_prompt: str = globals.USER["gen_prompt"].format(instruction=instruction)

    if messages is not None:
        yellow(var="`alana.prompt.gen_prompt`: Please note that `messages` support in `gen_prompt` is experimental!")
    if messages is not None and len(messages) > 0:
        red("`alana.prompt.gen_prompt`: Non-empty `messages` received! In `gen_prompt`, it's STRONGLY recommended to pass in an empty list for `messages`.")

    full_output: str = gen(user=meta_prompt, messages=messages, system=meta_system_prompt, model=model, api_key=api_key, max_tokens=max_tokens, temperature=temperature, **kwargs)
    system_prompt: Union[List[str], str] = get_xml(tag="system_prompt", content=full_output)
    if len(system_prompt) >= 1:
        system_prompt = system_prompt[0]
    user_prompt: Union[List[str], str] = get_xml(tag="user_prompt", content=full_output)
    if len(user_prompt) == 1:  # TODO: Find a saner way to handle this. E.g. delegate to a formatter model.
        user_prompt = user_prompt[0]
    return {"system": system_prompt, "user": user_prompt, "full": full_output}

get_xml(tag, content)

Return contents of <tag/> XML tags.
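Based on the source below, matching is non-greedy, spans newlines (re.DOTALL), and returns every occurrence:

>>> from alana.prompt import get_xml
>>> get_xml(tag="answer", content="<answer>42</answer> and <answer>43</answer>")
['42', '43']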

Source code in alana/prompt.py
def get_xml(tag: str, content: str) -> List[str]:
    """Return contents of <tag/> XML tags."""
    pattern: str = get_xml_pattern(tag=tag)
    matches: List[Any] = re.findall(pattern=pattern, string=content, flags=re.DOTALL)
    return matches

get_xml_pattern(tag)

Return regex pattern for getting contents of <tag/> XML tags.
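For reference, the pattern is a plain non-greedy capture between literal tags:

>>> from alana.prompt import get_xml_pattern
>>> get_xml_pattern(tag="answer")
'<answer>(.*?)</answer>'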

Source code in alana/prompt.py
def get_xml_pattern(tag: str) -> str:
    """Return regex pattern for getting contents of <tag/> XML tags."""
    if tag.count('<') > 0 or tag.count('>') > 0:
        raise ValueError("No '>' or '<' allowed in get_xml tag name!")
    return rf"<{tag}>(.*?)</{tag}>"

pretty_print(var, loud=True, model='sonnet', **kwargs)

Pretty-print an arbitrary variable. By default, uses Sonnet (not globals.DEFAULT_MODEL).

Parameters:

  • var (Any): The variable to pretty-print. Required.
  • loud (bool): Whether to print the pretty-printed output. Defaults to True.
  • model (str): The name of the model to use. Defaults to "sonnet".
  • **kwargs (Any): Additional keyword arguments passed through to gen.

Returns:

  • str: The pretty-printed representation of the variable.

Raises:

  • ValueError: If no <pretty/> tags are found in the generated output.

Notes
  • The function constructs a system prompt using the globals.SYSTEM["pretty_print"] template.
  • The function constructs a user prompt using the globals.USER["pretty_print"] template and the provided var.
  • The function calls the gen function to generate the pretty-printed output based on the system prompt, user prompt, and specified model.
  • The function uses the get_xml function to extract the content within the <pretty> tags from the generated output.
  • If no <pretty/> tags are found in the model output, the function raises a ValueError.
  • If multiple <pretty> tags are found in the model output, the function uses the last one as the pretty-printed output.
  • The function returns the pretty-printed output as a string.
Example

>>> my_var = {"name": "John", "age": 30, "city": "New York"}
>>> pretty_output = pretty_print(my_var)
{
    "name": "John",
    "age": 30,
    "city": "New York"
}
>>> print(pretty_output)
{
    "name": "John",
    "age": 30,
    "city": "New York"
}

Source code in alana/prompt.py
def pretty_print(var: Any, loud: bool = True, model: str = "sonnet", **kwargs) -> str:
    """Pretty-print an arbitrary variable. By default, uses Sonnet (not globals.DEFAULT_MODEL).

    Args:
        var (Any): The variable to pretty-print.
        loud (bool, optional): Whether to print the pretty-printed output. Defaults to True.
        model (str, optional): The name of the model to use. Defaults to "sonnet".

    Returns:
        str: The pretty-printed representation of the variable.

    Raises:
        ValueError: If no <pretty/> tags are found in the generated output.

    Notes:
        - The function constructs a system prompt using the `globals.SYSTEM["pretty_print"]` template.
        - The function constructs a user prompt using the `globals.USER["pretty_print"]` template and the provided `var`.
        - The function calls the `gen` function to generate the pretty-printed output based on the system prompt, user prompt, and specified `model`.
        - The function uses the `get_xml` function to extract the content within the `<pretty>` tags from the generated output.
        - If no `<pretty/>` tags are found in the model output, the function raises a `ValueError`.
        - If multiple `<pretty>` tags are found in the model output, the function uses the last one as the pretty-printed output.
        - The function returns the pretty-printed output as a string.

    Example:
        >>> my_var = {"name": "John", "age": 30, "city": "New York"}
        >>> pretty_output = pretty_print(my_var)
        {
            "name": "John",
            "age": 30,
            "city": "New York"
        }
        >>> print(pretty_output)
        {
            "name": "John",
            "age": 30,
            "city": "New York"
        }
    """
    system = globals.SYSTEM["pretty_print"]
    user = globals.USER["pretty_print"].format(var=f'{var}')

    string: str = gen(user=user, system=system, model=model, loud=False, **kwargs) # NOTE: We just don't log pretty print model outputs
    pretty: Union[List[str], str] = get_xml(tag="pretty", content=string)
    if len(pretty) == 0:
        raise ValueError("`pretty_print`: XML parsing error! Number of <pretty/> tags is 0.")
    else:
        pretty = pretty[-1]
    if loud:
        print(pretty)
    return pretty

remove_xml(tag='reasoning', content='', repl='')

Return a copy of content with <tag/> XML elements (both content and tag) replaced with repl (default "").
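Based on the source below, both the tags and everything between them are removed:

>>> from alana.prompt import remove_xml
>>> remove_xml(tag="reasoning", content="<reasoning>chain of thought</reasoning>Final answer: 4")
'Final answer: 4'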

Source code in alana/prompt.py
def remove_xml(tag: str = "reasoning", content: str = "", repl: str="") -> str:
    """Return a copy of `content` with <tag/> XML elements (both content and tag) replaced with `repl` (default "")."""
    if tag.count('<') > 0 or tag.count('>') > 0:
        raise ValueError("No '>' or '<' allowed in get_xml tag name!")
    if content == "":
        red(var="`remove_xml`: Empty string provided as `content`.") # TODO: Improve error logging
    pattern: str = rf"<{tag}>.*?</{tag}>" # NOTE: Removed group matching, so can't use `get_xml_pattern`
    output: str = re.sub(pattern=pattern, repl=repl, string=content, flags=re.DOTALL)
    return output

respond(content, messages=None, role='user')

Append a message to a messages list (by default, a user message).

Parameters:

  • content (str): The newest message content. Required.
  • messages (Optional[List[MessageParam]]): A list of anthropic.types.MessageParam objects. The last MessageParam should be from the assistant. If messages is None, a new list is created containing exactly one MessageParam built from content. Defaults to None.
  • role (Literal["user", "assistant"]): The role to attribute the message to. Defaults to "user".

Returns:

  • List[MessageParam]: The updated messages list.
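respond pairs naturally with gen for building a conversation by hand. A minimal sketch (assuming both are importable from alana.prompt):

from alana.prompt import gen, respond

messages = respond(content="What is 2 + 2?")  # creates the list with one user turn
gen(messages=messages)                         # appends the assistant reply
messages = respond(content="And times 10?", messages=messages)
gen(messages=messages)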

Source code in alana/prompt.py
def respond(content: str, messages: Optional[List[MessageParam]] = None, role: Literal["user", "assistant"] = "user") -> List[MessageParam]:
    """Append a user message to messages list.

    Args:
        content (str): The newest message content.
        messages (Optional[List[MessageParam]]): A list of `anthropic.types.MessageParam` objects. The last MessageParam should be from assistant. If `messages` is None, we will populate it with exactly one MessageParam based on `user`.
        role (Literal["user", "assistant"]): Corresponding source for the message!

    Returns:
        List[MessageParam]
    """
    if messages is None:
        messages = []
    messages.append(MessageParam(
        role=role,
        content=content,
    ))
    return messages