@devin-ai-integration
Contributor

feat: Add OpenAI Responses API Integration

Summary

Implements native support for OpenAI's Responses API (/v1/responses) as a new LLM provider in CrewAI, addressing feature request #4152.

The Responses API offers several advantages for agent workflows, including a simpler input format, built-in conversation management via previous_response_id, and native support for o-series reasoning models.

Usage:

# Option 1: Using provider parameter
llm = LLM(model="gpt-4o", provider="openai_responses")

# Option 2: Using model prefix
llm = LLM(model="openai_responses/gpt-4o")

# With o-series reasoning models
llm = LLM(model="o3-mini", provider="openai_responses", reasoning_effort="high")
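The stateful-conversation pattern mentioned above can be sketched with a stub client: each follow-up call passes the id of the prior response instead of replaying the full message history. The class below is a stand-in for illustration, not the real OpenAI SDK or CrewAI internals.

```python
# Illustrative stub of the previous_response_id chaining pattern.
class StubResponsesClient:
    def __init__(self):
        self._counter = 0
        self.received_previous_ids = []

    def create(self, model, input, previous_response_id=None):
        # Record what the server would see, then mint a new response id.
        self.received_previous_ids.append(previous_response_id)
        self._counter += 1
        return {"id": f"resp_{self._counter}", "output_text": "ok"}

client = StubResponsesClient()
first = client.create(model="gpt-4o", input="Hello")
second = client.create(
    model="gpt-4o",
    input="And a follow-up question?",
    previous_response_id=first["id"],  # chains onto the first turn
)
```

Because the server holds the conversation state, only the new turn travels over the wire on each call.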

Key implementation details:

  • New OpenAIResponsesCompletion class extending BaseLLM (~900 lines)
  • Message conversion: system messages → instructions param, other messages → input param
  • Tool calling support with Responses API format (sets strict: True by default)
  • Streaming support (sync and async)
  • Structured output via Pydantic models
  • Support for stateful conversations via previous_response_id
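The system-message split in the second bullet can be sketched as follows; the function name and exact merge behavior are illustrative, not the actual CrewAI implementation.

```python
# Hypothetical sketch: system messages -> `instructions` string,
# everything else -> `input` list, per the conversion described above.
def convert_messages(messages):
    instructions_parts = []
    input_items = []
    for msg in messages:
        if msg["role"] == "system":
            instructions_parts.append(msg["content"])
        else:
            input_items.append({"role": msg["role"], "content": msg["content"]})
    # Multiple system messages are merged; None when there are none.
    instructions = "\n\n".join(instructions_parts) or None
    return instructions, input_items
```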

Review & Testing Checklist for Human

  • Verify message conversion logic: system messages become instructions, user/assistant messages become the input array. This differs fundamentally from Chat Completions; confirm it matches the Responses API's expectations
  • Test with real OpenAI API: All tests use mocks. Recommend testing basic call, streaming, and tool calling against actual API
  • Verify tool format: The _convert_tools_for_responses method sets strict: True by default on all tools - verify this doesn't break tool schemas that aren't strict-compatible
  • Test o-series model handling: Verify reasoning_effort parameter works correctly with o1/o3/o4 models
  • Check streaming event types: Events like response.output_text.delta and response.function_call_arguments.delta need verification against actual streaming responses
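The strict-mode concern in the tool-format item can be made concrete with a hedged sketch (the real _convert_tools_for_responses may differ). Strict mode requires additionalProperties: false and every property listed in required, which is why schemas with genuinely optional parameters may break when strict: True is forced.

```python
# Illustrative sketch of a strict-by-default tool conversion.
def convert_tool_for_responses(tool, strict=True):
    params = dict(tool["parameters"])
    if strict:
        # Force the schema into strict-compatible shape: no extra keys,
        # and every declared property becomes required.
        params["additionalProperties"] = False
        params["required"] = sorted(params.get("properties", {}))
    return {
        "type": "function",
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": params,
        "strict": strict,
    }
```

Note that forcing every property into required silently changes the contract of tools that relied on optional parameters, which is exactly what the checklist item asks a reviewer to verify.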

Recommended test plan:

  1. Create a simple agent with provider="openai_responses" and verify basic completion works
  2. Test with a tool-using agent to verify function calling
  3. Test streaming with stream=True
  4. Test with an o-series model (o3-mini) with reasoning_effort="high"
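Step 3's streaming check amounts to accumulating the two event types named in the checklist. The event dict shapes below are assumptions for illustration; verify them against a real stream.

```python
# Sketch of accumulating text and tool-argument deltas from a
# Responses API event stream (event shapes assumed).
def accumulate_stream(events):
    text_parts, arg_parts = [], []
    for event in events:
        if event["type"] == "response.output_text.delta":
            text_parts.append(event["delta"])
        elif event["type"] == "response.function_call_arguments.delta":
            arg_parts.append(event["delta"])
    return "".join(text_parts), "".join(arg_parts)
```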

Notes

  • There's an unused import (ResponseOutputMessage) that could be cleaned up
  • The context window size lookup uses ordered list matching with startswith, so more specific prefixes must come before shorter ones (e.g., gpt-4o before gpt-4) for correct matching
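The ordered-prefix lookup described in the second note can be sketched as below; the table entries and sizes are illustrative, not the actual values in the PR.

```python
# Ordered prefix table: more specific prefixes must come first,
# otherwise "gpt-4o" would match the shorter "gpt-4" entry.
CONTEXT_WINDOWS = [
    ("gpt-4o", 128_000),
    ("gpt-4", 8_192),
    ("o3", 200_000),
]

def context_window(model, default=8_192):
    for prefix, size in CONTEXT_WINDOWS:
        if model.startswith(prefix):
            return size
    return default
```

Swapping the first two entries would make every gpt-4o model report the gpt-4 window, which is the ordering bug the note warns about.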

Link to Devin run: https://app.devin.ai/sessions/5344f7b180844f958605133c3772c492
Requested by: João (joao@crewai.com)

Key features:
- New OpenAIResponsesCompletion class extending BaseLLM
- Support for both explicit provider parameter and model prefix routing
- Message conversion from CrewAI format to Responses API format
- Tool/function calling support
- Streaming support (sync and async)
- Structured output via Pydantic models
- Token usage tracking
- Support for o-series reasoning models with reasoning_effort parameter
- Support for stateful conversations via previous_response_id

Includes comprehensive test coverage for:
- Provider routing
- Message conversion
- Tool conversion
- API calls
- Parameter preparation
- Context window sizes
- Feature support methods
- Token usage extraction
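A minimal sketch of the provider-routing rule those tests cover, supporting both the explicit provider parameter and the "openai_responses/" model prefix. The function name and the default fallback are illustrative, not CrewAI's actual internals.

```python
# Hypothetical routing rule: explicit provider wins, then a model
# prefix of the form "<provider>/<model>", then a default.
def resolve_provider(model, provider=None):
    if provider is not None:
        return provider, model
    if "/" in model:
        prefix, _, rest = model.partition("/")
        return prefix, rest
    return "openai", model  # assumed default provider
```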

Co-Authored-By: João <joao@crewai.com>
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@devin-ai-integration
Contributor Author

Closing due to inactivity for more than 7 days.
