
Commit aa59558

Allow tools to return rg.Stop in addition to raise it. Some other docs cleanup. (#197)
1 parent 883dc77 commit aa59558

4 files changed: +37 -20 lines


docs/api/tools.mdx

Lines changed: 3 additions & 0 deletions
```diff
@@ -185,6 +185,9 @@ async def handle_tool_call(  # noqa: PLR0912
             result = self.fn(**kwargs)  # type: ignore [call-arg]
             if inspect.isawaitable(result):
                 result = await result
+
+            if isinstance(result, Stop):
+                raise result  # noqa: TRY301
         except Stop as e:
             result = f"<{TOOL_STOP_TAG}>{e.message}</{TOOL_STOP_TAG}>"
             span.set_attribute("stop", True)
```
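
For readers skimming the diff: the new `isinstance` check normalizes a *returned* `Stop` into a *raised* one, so the existing `except Stop` branch formats both the same way. Below is a minimal runnable sketch of that pattern, using simplified stand-ins for rigging's `Stop`, `TOOL_STOP_TAG`, and `handle_tool_call` (the names match the diff; everything else is illustrative):

```python
import asyncio
import inspect


class Stop(Exception):
    """Stand-in for rigging.Stop: carries a message for the model."""

    def __init__(self, message: str) -> None:
        super().__init__(message)
        self.message = message


TOOL_STOP_TAG = "rg-stop"  # stand-in for rigging's actual tag constant


async def handle_tool_call(fn, **kwargs) -> str:
    """Simplified version of the handler shown in the diff."""
    try:
        result = fn(**kwargs)
        if inspect.isawaitable(result):
            result = await result

        # The new check: a *returned* Stop is normalized into a raised one,
        # so both styles flow through the same except-branch below.
        if isinstance(result, Stop):
            raise result
    except Stop as e:
        result = f"<{TOOL_STOP_TAG}>{e.message}</{TOOL_STOP_TAG}>"
    return str(result)


def finish_task(code: str) -> Stop:
    return Stop("Task finished")  # no raise needed


print(asyncio.run(handle_tool_call(finish_task, code="...")))
# -> <rg-stop>Task finished</rg-stop>
```

This keeps a single code path for producing the stop tag, regardless of which style the tool author chose.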

docs/topics/generators.mdx

Lines changed: 25 additions & 18 deletions
````diff
@@ -8,38 +8,45 @@ Underlying LLMs (or any function which completes text) is represented as a gener
 
 ## Identifiers
 
-Much like database connection strings, Rigging generators can be represented as strings which define what provider, model, API key, generation params, etc. should be used. They are formatted as follows:
+Much like database connection strings, Rigging generators can be represented as strings which define what provider, model, API key, generation params, etc. should be used.
+
+<Note>
+Throughout our code, we frequently use these generator identifiers as CLI arguments, environment variables, and API parameters. They are convenient for passing around complex configurations without having to represent model configurations in multiple places. They are also used to serialize generators to storage when chats are stored, so you can save and load them easily without having to reconfigure the generator each time.
+</Note>
+
+Here are some examples of valid identifiers:
+
+```text
+gpt-4.1
+openai/o3-mini
+gemini/gemini-2.5-pro
+claude-4-sonnet-latest
+vllm_hosted/meta-llama/Llama-3.1-8B-Instruct
+ollama/qwen3
+
+openai/gpt-4,api_key=sk-1234
+anthropic/claude-3-7-haiku-latest,stop=output:;---,seed=1337
+together_ai/meta-llama/Llama-3-70b-chat-hf
+openai/google/gemma-7b,api_base=https://integrate.api.nvidia.com/v1
+```
+
+Identifiers are formally defined as follows:
 
 ```
 <provider>!<model>,<**kwargs>
 ```
 
-- `provider` maps to a particular subclass of `Generator`.
+- `provider` maps to a particular subclass of `Generator` (optional).
 - `model` is any `str` value, typically used by the provider to indicate a specific LLM to target.
 - `kwargs` are used to carry:
   1. API key (`,api_key=...`) or the base URL (`,api_base=...`) for the model provider.
   1. Serialized `GenerateParams` fields like temperature, stop tokens, etc.
   1. Additional provider-specific attributes to set on the constructed generator class. For instance, you
      can set the `LiteLLMGenerator.max_connections` property by passing `,max_connections=` in the identifier string.
 
-The provider is optional and Rigging will fall back to [`litellm`](https://github.com/BerriAI/litellm)/`LiteLLMGenerator` by default.
+The provider is optional and Rigging will fall back to [`litellm`](https://github.com/BerriAI/litellm)/`LiteLLMGenerator` by default.
 You can view the [LiteLLM docs](https://docs.litellm.ai/docs/) for more information about supported model providers and parameters.
 
-<Note>
-Throughout our code, we frequently use these generator identifiers as CLI arguments, environment variables, and API parameters. They work like database connection strings and are super convenient for passing around complex configurations. They are also used to serialize generators to storage, so you can save and load them easily.
-</Note>
-
-Here are some examples of valid identifiers:
-
-```text
-gpt-3.5-turbo,temperature=0.5
-openai/gpt-4,api_key=sk-1234
-litellm!claude-3-sonnet-2024022
-anthropic/claude-2.1,stop=output:;---,seed=1337
-together_ai/meta-llama/Llama-3-70b-chat-hf
-openai/google/gemma-7b,api_base=https://integrate.api.nvidia.com/v1
-```
-
 Building generators from string identifiers is optional, but a convenient way to represent complex LLM configurations.
 
 <Tip>
````
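
To ground the identifier format, here is a short usage sketch, assuming rigging's `get_generator` entry point and `to_identifier` as its serialization inverse; the identifier strings are taken from the examples in the diff:

```python
import rigging as rg

# No provider prefix: Rigging falls back to LiteLLM routing.
generator = rg.get_generator("gpt-4.1")

# Provider, model, and serialized GenerateParams in one string.
generator = rg.get_generator(
    "anthropic/claude-3-7-haiku-latest,stop=output:;---,seed=1337"
)

# Identifiers round-trip, which is what makes them handy as CLI
# arguments, environment variables, and storage keys for chats.
print(generator.to_identifier())
```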

docs/topics/tools.mdx

Lines changed: 6 additions & 2 deletions
````diff
@@ -229,7 +229,7 @@ The `max_depth` parameter limits how many levels deep tool calls can go. If a to
 
 ### Stopping Tool Calls
 
-You may want to use a particular tool, or catch a condition inside a tool, and indicate to any pipelines that it should stop going back to the model for more calls. You can do this by raising a `rigging.Stop` exception with a message to the model.
+You may want to use a particular tool, or catch a condition inside a tool, and indicate to any pipelines that it should stop going back to the model for more calls. You can do this by raising or returning a `rigging.Stop` exception with a message to be passed back to the model for context.
 
 ```python
 import rigging as rg
@@ -240,7 +240,7 @@ def execute_code(code: str) -> str:
     ...
 
     if "<flag>" in output: # Stop the model from executing more code
-        raise rg.Stop(f"Task finished")
+        return rg.Stop(f"Task finished") # or `raise rg.Stop("Task finished")`
 
     return output
 
@@ -252,6 +252,10 @@ chat = (
 )
 ```
 
+<Tip>
+Returning the `rg.Stop` exception instead of raising it is helpful if you don't want any surrounding code (decorators that wrap the tool function) to catch the exception, alter it, or behave as if a typical exception occurred.
+</Tip>
+
 <Note>
 This stop indication won't completely halt the pipeline, but it will let it continue to any additional parsing mechanics or custom callbacks which follow tool calling.
 </Note>
````
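
The new Tip is easiest to see with a wrapper in play. Here is a hedged sketch of the failure mode it describes; the `retry` decorator and `run_sandboxed` helper are hypothetical, not part of rigging. A typical decorator that catches `Exception` would intercept a *raised* `rg.Stop`, while a *returned* one passes through untouched:

```python
import functools

import rigging as rg


def retry(times: int = 3):
    """Hypothetical wrapper that treats any exception as a transient failure."""

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc: Exception | None = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # would also swallow a *raised* rg.Stop
                    last_exc = exc
            raise last_exc

        return wrapper

    return decorator


def run_sandboxed(code: str) -> str:
    """Hypothetical stand-in for real sandboxed execution."""
    return "<flag>demo</flag>"


@retry(times=3)
def execute_code(code: str) -> str | rg.Stop:
    output = run_sandboxed(code)
    if "<flag>" in output:
        # Returned, not raised: the retry wrapper never sees it, and rigging
        # normalizes it into a raised Stop internally (see handle_tool_call).
        return rg.Stop("Task finished")
    return output
```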

rigging/tools/base.py

Lines changed: 3 additions & 0 deletions
```diff
@@ -383,6 +383,9 @@ async def handle_tool_call(  # noqa: PLR0912
             result = self.fn(**kwargs)  # type: ignore [call-arg]
             if inspect.isawaitable(result):
                 result = await result
+
+            if isinstance(result, Stop):
+                raise result  # noqa: TRY301
         except Stop as e:
             result = f"<{TOOL_STOP_TAG}>{e.message}</{TOOL_STOP_TAG}>"
             span.set_attribute("stop", True)
```
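
End to end, both stop styles now produce the same tagged message for the model. A hedged sketch of how this might look in a pipeline, assuming rigging's documented `get_generator` / `chat` / `using` / `run` flow and the `@rg.tool` decorator; the model id and prompt are placeholders:

```python
import asyncio

import rigging as rg


@rg.tool
def check_answer(answer: str) -> str | rg.Stop:
    """Toy tool: stop the tool-calling loop once the expected answer appears."""
    if answer.strip() == "42":
        return rg.Stop("Correct answer found")  # returned, not raised
    return "Incorrect, try again"


async def main() -> None:
    chat = (
        await rg.get_generator("gpt-4.1")
        .chat("What is 6 * 7? Verify with the check_answer tool.")
        .using(check_answer)
        .run()
    )
    # Whichever style the tool used, the final tool message carries the
    # stop tag constructed in handle_tool_call above.
    print(chat.last.content)


asyncio.run(main())
```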
