@TaeBbong TaeBbong commented Jan 28, 2026

Summary

This PR adds support for configuring base_url separately for llm_engine and llm_engine_fixed, enabling users to use local LLM servers (e.g., vLLM) for all components.

Problem

Previously, the base_url parameter was only passed to llm_engine, while llm_engine_fixed always used the default API endpoint. This caused APIConnectionError when:

  1. Running with a local vLLM server without external API keys
  2. Using model_engine = ["trainable", "trainable", "trainable", "trainable"] configuration

The error occurred because:

  • Planner.llm_engine_fixed and Verifier.llm_engine_fixed were created without base_url
  • Base_Generator_Tool also lacked base_url parameter support
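The failure mode above can be illustrated with a minimal sketch (the `make_engine` factory and the hosted default endpoint are hypothetical, not the project's actual code):

```python
# Hypothetical illustration of the bug: an engine constructed without
# an explicit base_url falls back to a hosted provider endpoint,
# so llm_engine_fixed never reaches the local vLLM server.
def make_engine(model_name, base_url=None):
    endpoint = base_url or "https://api.openai.com/v1"  # assumed default
    return {"model": model_name, "endpoint": endpoint}

# Before this PR: base_url reached llm_engine but not llm_engine_fixed.
llm_engine = make_engine("vllm-Qwen/Qwen2.5-7B-Instruct",
                         base_url="http://localhost:8000/v1")
llm_engine_fixed = make_engine("vllm-Qwen/Qwen2.5-7B-Instruct")

print(llm_engine["endpoint"])        # local vLLM server
print(llm_engine_fixed["endpoint"])  # hosted endpoint -> APIConnectionError
```

With no external API key configured, the second engine's request to the hosted endpoint fails, which matches the APIConnectionError reported above.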

Solution

Added new parameters to construct_solver():

  • base_url_fixed: for llm_engine_fixed (falls back to base_url if not set)
  • base_url_tool: for tool engines such as Base_Generator_Tool (passed through the initializer)
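The fallback behavior can be sketched as follows (`resolve_base_urls` is a hypothetical helper; the real logic lives inside `construct_solver()`):

```python
def resolve_base_urls(base_url=None, base_url_fixed=None):
    """Sketch of the fallback: base_url_fixed defaults to base_url
    when it is not set explicitly (assumption based on the PR summary)."""
    return base_url, base_url_fixed if base_url_fixed is not None else base_url

# With only base_url set, both engines point at the same local server.
print(resolve_base_urls(base_url="http://localhost:8000/v1"))
```

This keeps existing configurations working: users who never set base_url_fixed get the same endpoint for both engines.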

Modified Files

  • solver.py: Added base_url_fixed and base_url_tool parameters
  • planner.py: Added base_url_fixed parameter for llm_engine_fixed
  • verifier.py: Added base_url_fixed parameter for llm_engine_fixed
  • initializer.py: Pass base_url to tool instances
  • tools/base_generator/tool.py: Added base_url parameter support
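The tool-side change follows the same pattern; here is a minimal sketch (class and attribute names are illustrative, not the repository's actual API):

```python
class BaseGeneratorTool:
    """Illustrative tool that now accepts an optional base_url,
    mirroring the change described for tools/base_generator/tool.py."""
    def __init__(self, model_name, base_url=None):
        self.model_name = model_name
        # None means the provider's default endpoint is used.
        self.base_url = base_url

# The initializer can now thread base_url through to tool instances.
tool = BaseGeneratorTool("vllm-Qwen/Qwen2.5-7B-Instruct",
                         base_url="http://localhost:8000/v1")
print(tool.base_url)
```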

Usage

VLLM_BASE_URL = "http://localhost:8000/v1"

solver = construct_solver(
    llm_engine_name="vllm-Qwen/Qwen2.5-7B-Instruct",
    model_engine=["trainable", "trainable", "trainable", "trainable"],
    enabled_tools=["Base_Generator_Tool", "Python_Coder_Tool"],
    tool_engine=["self", "Default"],
    base_url=VLLM_BASE_URL,           # For llm_engine
    base_url_fixed=VLLM_BASE_URL,     # For llm_engine_fixed
)

- Add field `base_url_fixed` to set up `base_url` for `llm_engine_fixed`
- Use field `base_url_fixed` in each flow module