prompt
gen(user=None, system='', messages=None, append=True, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, loud=True, **kwargs)
Generate a response from Claude. Returns the text content (`str`) of Claude's response. If you want the Message object instead, use `gen_msg`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
user | Optional[str] | The user's message content. Defaults to None. | None |
system | str | The system message for Claude. Defaults to "". | '' |
messages | Optional[List[MessageParam]] | A list of MessageParam objects representing the conversation so far. Defaults to None. | None |
append | bool | Whether to append the generated response (as an assistant MessageParam) to messages. Defaults to True. | True |
model | str | The name of the model to use. Defaults to globals.DEFAULT_MODEL. | DEFAULT_MODEL |
api_key | Optional[str] | The API key to use for authentication. Defaults to None (if None, uses os.environ["ANTHROPIC_API_KEY"]). | None |
max_tokens | int | The maximum number of tokens to generate in the response. Defaults to 1024. | 1024 |
temperature | float | The temperature value for controlling the randomness of the generated response. | 1.0 |
loud | bool | Whether to print verbose output. Defaults to True. | True |
**kwargs | Any | Additional keyword arguments to pass to the underlying generation function. | {} |
Raises:
Type | Description |
---|---|
ValueError | If no prompt is provided (both user and messages are None). |
ValueError | If the last message in messages has an unexpected role (roles must alternate between user and assistant). |
ValueError | If Claude does not provide a response. |
Returns:
Name | Type | Description |
---|---|---|
str | str | The text content of Claude's generated response. |
Notes
- If `messages` is None, the `user` parameter must be provided as a string.
- If `user` is provided and `messages` is not None, the `user` message is appended to the `messages` list.
- The function raises a ValueError if the roles in the `messages` list do not alternate (user, assistant, user, ...).
- If `append` is True and the last message in `messages` is from the assistant, the generated response is appended to that assistant message's content.
- The function uses `gen_msg` internally to generate Claude's response.
Example

    >>> user_message = "Hello, Claude!"
    >>> response = gen(user=user_message)
    >>> print(response)
    "Hello! How can I assist you today?"
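To illustrate the multi-turn behavior described in the Notes, here is a minimal sketch. It assumes `gen` is importable from `alana.prompt` as documented on this page, and uses plain dicts for `MessageParam` (the Anthropic SDK's `MessageParam` is a TypedDict with "role" and "content" keys); treat it as an illustration, not a spec.

```python
# Multi-turn sketch, under the assumptions stated above.
from alana.prompt import gen

messages = [{"role": "user", "content": "Name a prime number."}]

# Per the Notes: with append=True (the default), the generated reply is
# added to `messages` as an assistant turn, keeping roles alternating.
first = gen(messages=messages)
print(first)

# Per the Notes: passing `user` alongside a non-None `messages` appends
# the new user turn to `messages` before generating.
second = gen(user="Now name a larger one.", messages=messages)
print(second)
```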
Source code in alana/prompt.py, lines 50-122.
gen_examples(instruction, n_examples=5, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, **kwargs)
Generate a formatted string containing few-shot examples for a given natural language instruction. Uses `gen_examples_list`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
instruction | str | The natural language instruction for which to generate examples. | required |
n_examples | int | The number of examples to generate. Defaults to 5. | 5 |
model | str | The name of the model to use. Defaults to globals.DEFAULT_MODEL. | DEFAULT_MODEL |
api_key | Optional[str] | The API key to use for authentication. Defaults to None. | None |
max_tokens | int | The maximum number of tokens to generate in the response. Defaults to 1024. | 1024 |
temperature | float | The temperature value for controlling the randomness of the generated response. | 1.0 |
**kwargs | Any | Additional keyword arguments to pass to the gen_examples_list function. | {} |
Returns:
Name | Type | Description |
---|---|---|
str | str | A formatted string containing the generated few-shot examples, enclosed in XML-like tags. |
Notes
- The function calls `gen_examples_list` to generate a list of few-shot examples based on the provided `instruction`, `n_examples`, `model`, `api_key`, `max_tokens`, `temperature`, and any additional keyword arguments.
- The generated examples are then formatted into a string, with each example enclosed in `<example/>` tags.
- The formatted string starts with an opening `<examples>` tag and ends with a closing `</examples>` tag (note plural).
Example

    >>> instruction = "Write a short story about a magical adventure."
    >>> examples_str = gen_examples(instruction, n_examples=3)
    >>> print(examples_str)
    <examples>
    <example>Once upon a time, in a land far away, there was a young girl named Lily who discovered a mysterious portal in her backyard...</example>
    <example>In a world where magic was a part of everyday life, a brave knight named Eldric embarked on a quest to retrieve a powerful artifact...</example>
    <example>Deep in the enchanted forest, a group of talking animals gathered around a wise old oak tree to discuss a pressing matter...</example>
    </examples>
Source code in alana/prompt.py, lines 226-258.
gen_examples_list(instruction, n_examples=5, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, **kwargs)
Uses Claude to generate a Python list of few-shot examples for a given natural language instruction.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
instruction | str | The natural language instruction for which to generate examples. | required |
n_examples | int | The number of examples to ask Claude to generate. Defaults to 5. | 5 |
model | str | The name of the model to use. Defaults to globals.DEFAULT_MODEL. | DEFAULT_MODEL |
api_key | Optional[str] | The API key to use for authentication. Defaults to None. | None |
max_tokens | int | The maximum number of tokens to generate in the response. Defaults to 1024. | 1024 |
temperature | float | The temperature value for controlling the randomness of the generated response. | 1.0 |
**kwargs | Any | Additional keyword arguments to pass to the gen function. | {} |
Returns:
Type | Description |
---|---|
List[str] | A Python list of generated few-shot examples. |
Notes
- The function constructs a system message using the `globals.SYSTEM["few_shot"]` template and the provided `n_examples`.
- The function constructs a user message using the `globals.USER["few_shot"]` template and the provided `instruction`.
- If `n_examples` is less than 1, the function prints a warning message using the `red` function but continues execution.
- The function calls `gen` to generate the model's output based on the constructed system and user messages, along with the specified `model`, `api_key`, `max_tokens`, `temperature`, and any additional keyword arguments.
- The generated model output is expected to be in XML format, with each example enclosed in `<example/>` tags.
- The function uses `get_xml` to extract the content within the `<example/>` tags and returns it as a Python list of strings.
Example

    >>> instruction = "Write a short story about a magical adventure."
    >>> examples = gen_examples_list(instruction, n_examples=3)
    >>> print(examples)
    ["Once upon a time, in a land far away, there was a young girl named Lily who discovered a mysterious portal in her backyard...",
     "In a world where magic was a part of everyday life, a brave knight named Eldric embarked on a quest to retrieve a powerful artifact...",
     "Deep in the enchanted forest, a group of talking animals gathered around a wise old oak tree to discuss a pressing matter..."]
Source code in alana/prompt.py, lines 185-224.
gen_msg(messages, system='', model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, loud=True, **kwargs)
Generate a response from Claude using the Anthropic API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
messages | List[MessageParam] | A list of MessageParam objects representing the conversation so far. | required |
system | str | The system message to set the context for Claude. Defaults to "". | '' |
model | str | The name of the model to use. Defaults to globals.DEFAULT_MODEL. | DEFAULT_MODEL |
api_key | Optional[str] | The API key to use for authentication. Defaults to None. | None |
max_tokens | int | The maximum number of tokens to generate in the response. Defaults to 1024. | 1024 |
temperature | float | The temperature value for controlling the randomness of the generated response. | 1.0 |
loud | bool | Whether to print verbose output. Defaults to True. | True |
**kwargs | Any | Additional keyword arguments to pass to the Anthropic API. | {} |
Returns:
Name | Type | Description |
---|---|---|
Message | Message | The Message object produced by the Anthropic API, containing the generated response. |
Notes
- If the `model` parameter is not recognized, the function reverts to the default model specified in `globals.DEFAULT_MODEL`.
- If `api_key` is None, the function attempts to retrieve the API key from the environment variable "ANTHROPIC_API_KEY".
- The function creates an instance of the Anthropic client using the provided `api_key`.
- Stream not supported yet! If the `stream` keyword argument is provided, the function disables streaming and sets `stream` to False. (TODO: Support stream)
- The function uses the `messages.create` method of the Anthropic client to generate Claude's response.
- If `loud` is True, the generated message is printed using the `yellow` function for verbose output.
Example

    >>> messages = [
    ...     MessageParam(role="user", content="What is the capital of France?")
    ... ]
    >>> response = gen_msg(messages, system="You are a helpful assistant.")
    >>> print(response.content[0].text)
    The capital of France is Paris.
Source code in alana/prompt.py, lines 124-183.
gen_prompt(instruction, messages=None, model=globals.DEFAULT_MODEL, api_key=None, max_tokens=1024, temperature=1.0, **kwargs)
Meta-prompter! Generate a prompt given an arbitrary instruction.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
instruction | str | The arbitrary instruction for which to generate a prompt. | required |
messages | Optional[List[MessageParam]] | EXPERIMENTAL: a list that will receive the two-turn prompt-generation thread. Strongly recommended to leave empty. Defaults to None. | None |
model | str | The name of the model to use. Defaults to globals.DEFAULT_MODEL. | DEFAULT_MODEL |
api_key | Optional[str] | The API key to use for authentication. Defaults to None. | None |
max_tokens | int | The maximum number of tokens to generate in the response. Defaults to 1024. | 1024 |
temperature | float | The temperature value for controlling the randomness of the generated response. | 1.0 |
**kwargs | Any | Additional keyword arguments to pass to the gen function (forwarded to the Anthropic API). | {} |
Returns:
Type | Description |
---|---|
Dict[Literal['system', 'user', 'full'], Union[str, List]] | A dictionary containing the generated prompts: "system" (Union[str, List[str]]) - the generated system prompt(s); "user" (Union[str, List[str]]) - the generated user prompt(s); "full" (str) - the full generated output, including both system and user prompts. |
Notes
- The function constructs a meta-system prompt using the `globals.SYSTEM["gen_prompt"]` template.
- The function constructs a meta-prompt using the `globals.USER["gen_prompt"]` template and the provided `instruction`.
- The function calls `gen` to generate the full output based on the meta-system prompt, meta-prompt, `model`, `api_key`, `max_tokens`, `temperature`, and any additional keyword arguments (which are passed to the Anthropic API).
- The function uses `get_xml` to extract the content within the `<system_prompt/>` and `<user_prompt/>` tags from the full output.
- The function returns a dictionary containing the generated system prompt(s), user prompt(s), and the full output.
- Things can get janky if the model provides multiple system prompts or multiple user prompts. In that case the function makes a best guess about what you want (currently, it returns the first system prompt but all the user prompts in a list).
Example

    >>> instruction = "Write a story about a robot learning to love."
    >>> prompts = gen_prompt(instruction)
    >>> print(prompts["system"])
    You are a creative story writer. Write a short story based on the given prompt, focusing on character development and emotional depth.
    >>> print(prompts["user"])
    Write a story about a robot learning to love.
    >>> print(prompts["full"])
    <system_prompt>You are a creative story writer. Write a short story based on the given prompt, focusing on character development and emotional depth.</system_prompt>
    <user_prompt>Write a story about a robot learning to love.</user_prompt>
Source code in alana/prompt.py, lines 260-317.
get_xml(tag, content)
Return the contents of all `<tag>...</tag>` blocks found in `content`.
Source code in alana/prompt.py, lines 15-19.
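No example accompanies `get_xml` in the source docs; here is a hedged sketch. The return type (a list of the matched tag contents) is inferred from how `gen_examples_list` is documented to use `get_xml` above, not from the source itself.

```python
# Sketch only: return type assumed to be a list of strings, per the
# gen_examples_list Notes ("returns it as a Python list of strings").
from alana.prompt import get_xml

content = "<example>first</example> some text <example>second</example>"
matches = get_xml(tag="example", content=content)
print(matches)  # expected, under the stated assumption: ['first', 'second']
```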
get_xml_pattern(tag)
Return a regex pattern for getting the contents of `<tag>...</tag>` blocks.
Source code in alana/prompt.py, lines 9-13.
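The pattern itself is not reproduced in these docs. The sketch below shows a plausible shape for such a pattern and how it would be used with `re`; the actual regex in alana/prompt.py (flags, grouping, handling of attributes or self-closing tags) may differ.

```python
import re

# Hypothetical stand-in for get_xml_pattern(tag); not the library's
# actual implementation.
def example_pattern(tag: str) -> str:
    return rf"<{tag}>(.*?)</{tag}>"

text = "<example>a</example><example>b</example>"
print(re.findall(example_pattern("example"), text, flags=re.DOTALL))
# ['a', 'b']
```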
pretty_print(var, loud=True, model='sonnet', **kwargs)
Pretty-print an arbitrary variable. By default, uses Sonnet (not globals.DEFAULT_MODEL).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
var | Any | The variable to pretty-print. | required |
loud | bool | Whether to print the pretty-printed output. Defaults to True. | True |
model | str | The name of the model to use. Defaults to "sonnet". | 'sonnet' |
Returns:
Name | Type | Description |
---|---|---|
str | str | The pretty-printed representation of the variable. |
Raises:
Type | Description |
---|---|
ValueError | If no `<pretty/>` tags are found in the model output. |
Notes
- The function constructs a system prompt using the `globals.SYSTEM["pretty_print"]` template.
- The function constructs a user prompt using the `globals.USER["pretty_print"]` template and the provided `var`.
- The function calls `gen` to generate the pretty-printed output based on the system prompt, user prompt, and specified `model`.
- The function uses `get_xml` to extract the content within the `<pretty>` tags from the generated output.
- If no `<pretty/>` tags are found in the model output, the function raises a ValueError.
- If multiple `<pretty>` tags are found in the model output, the function uses the last one as the pretty-printed output.
- The function returns the pretty-printed output as a string.
Example

    >>> my_var = {"name": "John", "age": 30, "city": "New York"}
    >>> pretty_output = pretty_print(my_var)
    {
        "name": "John",
        "age": 30,
        "city": "New York"
    }
    >>> print(pretty_output)
    {
        "name": "John",
        "age": 30,
        "city": "New York"
    }
Source code in alana/prompt.py, lines 319-368.
remove_xml(tag='reasoning', content='', repl='')
Return a copy of `content` with all `<tag>...</tag>` blocks replaced by `repl` (default "").
Source code in alana/prompt.py, lines 21-29.
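A usage sketch under the reading above; the exact whitespace handling around removed blocks is an assumption.

```python
from alana.prompt import remove_xml

text = "<reasoning>chain-of-thought scratchwork</reasoning>The answer is 42."
# tag defaults to "reasoning" and repl defaults to "", so this should strip
# the reasoning block and leave the remaining text intact.
print(remove_xml(content=text))  # expected: "The answer is 42."
```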
respond(content, messages=None, role='user')
Append a message with the given role (default "user") to the messages list.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
content | str | The newest message content. | required |
messages | Optional[List[MessageParam]] | A list of MessageParam objects to append to. Defaults to None. | None |
role | Literal['user', 'assistant'] | The role for the new message. Defaults to "user". | 'user' |
Returns:
Type | Description |
---|---|
List[MessageParam] | The messages list with the new message appended. |
Source code in alana/prompt.py, lines 31-48.
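A sketch of building up a conversation with `respond`, assuming it returns the (possibly newly created) messages list with the new entry appended; the dict shape follows the Anthropic SDK's MessageParam TypedDict.

```python
from alana.prompt import respond

# With messages=None, respond presumably creates and returns a fresh list.
messages = respond("What is 2 + 2?")  # appends a user turn
messages = respond("2 + 2 = 4.", messages=messages, role="assistant")
print(messages)
# expected shape, under the stated assumptions:
# [{'role': 'user', 'content': 'What is 2 + 2?'},
#  {'role': 'assistant', 'content': '2 + 2 = 4.'}]
```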