LLMs¶
Nirvana provides a unified, provider-agnostic interface for LLMs (OpenAI, DeepSeek, Qwen).
LLMArguments¶
nirvana.executors.llm_backbone.LLMArguments¶
Bases: BaseModel
Attributes¶
max_tokens: int = Field(default=512, ge=1, le=16384, description='The maximum number of tokens to generate.')
temperature: float = Field(default=0.1, ge=0.0, le=1.0, description='The sampling temperature.')
max_timeouts: int = Field(default=3, ge=1, le=10, description='The maximum number of timeouts.')
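The field bounds above can be illustrated with a small sketch. This mirrors the documented defaults and `ge`/`le` constraints using only the standard library; the real `LLMArguments` is a pydantic `BaseModel`, and `LLMArgumentsSketch` here is a hypothetical stand-in, not Nirvana's API.

```python
from dataclasses import dataclass


@dataclass
class LLMArgumentsSketch:
    """Hedged sketch of LLMArguments: same fields, defaults, and bounds."""

    max_tokens: int = 512       # ge=1, le=16384
    temperature: float = 0.1    # ge=0.0, le=1.0
    max_timeouts: int = 3       # ge=1, le=10

    def __post_init__(self) -> None:
        # Reproduce the Field(ge=..., le=...) checks that pydantic enforces.
        if not 1 <= self.max_tokens <= 16384:
            raise ValueError("max_tokens must be in [1, 16384]")
        if not 0.0 <= self.temperature <= 1.0:
            raise ValueError("temperature must be in [0.0, 1.0]")
        if not 1 <= self.max_timeouts <= 10:
            raise ValueError("max_timeouts must be in [1, 10]")


args = LLMArgumentsSketch()
```

Constructing with an out-of-range value (e.g. `temperature=2.0`) raises a `ValueError` in this sketch, just as the pydantic model would raise a validation error.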
LLMClient¶
nirvana.executors.llm_backbone.LLMClient¶
Attributes¶
default_model: str | None = None
client: AsyncOpenAI = None
config: LLMArguments = LLMArguments()
Functions¶
configure(model_name: str, api_key: str | Path | None = None, base_url: str | None = None, **kwargs) classmethod
Configure the shared LLM client.
The provider (OpenAI / DeepSeek / Qwen) is inferred from model_name,
and appropriate defaults for base_url and api_key are applied.
Users can still override both api_key and base_url explicitly.
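One plausible shape of the provider inference described above is sketched below. The prefix rules and default base URLs here are assumptions for illustration (they are the providers' public OpenAI-compatible endpoints), not Nirvana's documented internals; `infer_provider` is a hypothetical helper, not part of the `LLMClient` API.

```python
def infer_provider(model_name: str) -> tuple[str, str]:
    """Guess (provider, default_base_url) from a model name.

    Assumption: provider is inferred from the model-name prefix, and an
    explicit base_url passed to configure() would override the default.
    """
    name = model_name.lower()
    if name.startswith("deepseek"):
        return "deepseek", "https://api.deepseek.com"
    if name.startswith("qwen"):
        return "qwen", "https://dashscope.aliyuncs.com/compatible-mode/v1"
    # Fall back to OpenAI for gpt-* and any unrecognized name.
    return "openai", "https://api.openai.com/v1"
```

Under this sketch, `configure("deepseek-chat", api_key=...)` would select the DeepSeek endpoint by default, while `configure("deepseek-chat", base_url="https://my-proxy/v1")` would keep the inferred provider but route through the explicit URL.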