Conversation

@strangecreator

Summary:

This PR adds an end-to-end “manual mode” (human-in-the-loop) workflow: OpenEvolve can enqueue LLM requests as tasks, and the visualizer now exposes a dedicated /manual page to review prompts and submit answers back to the runner.

What changed:

Visualizer:

  • Added a manual-tasks page at GET /manual (Monaco-based prompt viewer and answer editor).
  • Added manual-mode API endpoints (usage example below):
    • GET /manual/api/tasks (pending tasks only)
    • GET /manual/api/tasks/<id>
    • POST /manual/api/tasks/<id>/answer
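
A minimal sketch of exercising these endpoints from a script instead of the UI (the payload shape and the "id"/"answer" field names are assumptions, not confirmed by this PR):

    import requests

    BASE = "http://localhost:8080"  # assumed visualizer host/port

    # List the pending manual tasks
    for task in requests.get(f"{BASE}/manual/api/tasks").json():
        task_id = task["id"]  # field name is an assumption

        # Fetch the full prompt for review
        detail = requests.get(f"{BASE}/manual/api/tasks/{task_id}").json()
        print(detail)

        # Submit an answer back to the runner (payload field name is an assumption)
        requests.post(
            f"{BASE}/manual/api/tasks/{task_id}/answer",
            json={"answer": "response text goes here"},
        )

In normal use the /manual page does all of this interactively; the script form is mainly useful for poking at the endpoints.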

Manual-mode LLM integration:

  • When llm.manual_mode=true, OpenAILLM writes <task_id>.json into <openevolve_output>/manual_tasks_queue and waits for <task_id>.answer.json (see the sketch after this list).
  • The controller creates/clears <openevolve_output>/manual_tasks_queue at run start, so stale tasks from previous runs do not appear in the UI.
  • The queue path is injected via the runtime-only private config field _manual_queue_dir and propagated to all model configs, so it also works in process workers.
  • Manual mode activates only when manual_mode is explicitly True, to avoid accidental enablement in mock tests.
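
Roughly, the enqueue-and-wait handshake looks like the sketch below; this is illustrative only (the function name, task file fields, and polling interval are assumptions, not the actual OpenAILLM implementation):

    import json
    import time
    import uuid
    from pathlib import Path

    def generate_via_manual_queue(prompt: str, queue_dir: Path, poll_interval: float = 2.0) -> str:
        """Illustrative file-based handshake between the runner and the /manual page."""
        task_id = uuid.uuid4().hex
        task_file = queue_dir / f"{task_id}.json"
        answer_file = queue_dir / f"{task_id}.answer.json"

        # Publish the task so the visualizer can list it as pending
        task_file.write_text(json.dumps({"id": task_id, "prompt": prompt}))

        # Block until a human submits an answer through the UI (or the API)
        while not answer_file.exists():
            time.sleep(poll_interval)

        return json.loads(answer_file.read_text())["answer"]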

Why:

  • Provides a clear, convenient way to study prompts and experiment with responses.

How to use:

  1. Run OpenEvolve with llm.manual_mode: true.
  2. Start the visualizer, pointing it at the run's output directory (openevolve_output).
  3. Open http://<host>:<port>/manual and answer tasks; evolution continues once answers are submitted.

Pictures:

(screenshots attached to the PR)

CLAassistant commented Jan 6, 2026

CLA assistant check
All committers have signed the CLA.
