Custom Model
In the current Revornix product, models are no longer a single chat-only setting. Different workflows now consume different default models.
The two most important default model slots today are:
- Default Revornix AI model
- Default document-reading / summary model
Those roles are now clearly separated.
1. What the current model slots actually do
Default Revornix AI model
This model is mainly used for Revornix AI interaction itself.
When you chat with Revornix AI in the product, this is the default model path.
Default document-reading / summary model
This model now does much more than generate a summary.
In the current workflow it can participate in:
- Document summarization
- Title and description generation
- Section content processing
- Knowledge-graph-related extraction
- Supporting text understanding inside some engine-driven flows
So the document-reading or summary model is now better understood as a general content-understanding model, not just a summary generator.
2. Protocol support
Custom models are still centered around OpenAI-compatible APIs.
If a third-party model can be exposed through an OpenAI-style API, it can be integrated into Revornix as a candidate model.
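To make the "OpenAI-compatible" requirement concrete, here is a minimal sketch of what such a call looks like, built with only the standard library. The endpoint path and payload shape follow the OpenAI chat completions convention; the URL and key are placeholders, and the helper name is illustrative, not part of Revornix:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, user_message: str):
    """Assemble an OpenAI-compatible chat completion request.

    base_url, api_key, and model map directly onto the Revornix
    provider/model fields of the same names.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",   # credential from the provider
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,                          # the real upstream model name
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://api.example.com/v1", "sk-demo", "gpt-4o-mini", "Hello"
)
```

Any upstream that accepts a request of this shape can, in principle, be registered as a Revornix model provider.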
3. Current configuration flow
The model configuration flow is currently split into three layers:
- Create a model provider
- Add models under that provider
- Bind a model to a default usage slot
In practice:
- Provider name and description are mostly for your own organization
- The model name must match the real upstream model name
- After adding models, you still need to assign one in the default model selectors
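The three layers above can be sketched as a small in-memory registry. All class and method names here (`Provider`, `ModelRegistry`, `bind_default`) are hypothetical illustrations of the flow, not Revornix's real classes:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str       # organizational label only
    api_key: str    # affects runtime calls
    base_url: str   # affects runtime calls

@dataclass
class Model:
    name: str       # must match the real upstream model name
    provider: Provider

class ModelRegistry:
    def __init__(self):
        self.models: dict[str, Model] = {}
        self.defaults: dict[str, str] = {}  # usage slot -> model name

    def add_model(self, provider: Provider, model_name: str) -> Model:
        # Layer 2: register a model under a provider.
        m = Model(name=model_name, provider=provider)
        self.models[model_name] = m
        return m

    def bind_default(self, slot: str, model_name: str) -> None:
        # Layer 3: adding a model is not enough; it must also be
        # assigned in a default model selector.
        if model_name not in self.models:
            raise KeyError(f"{model_name} was never added under a provider")
        self.defaults[slot] = model_name

reg = ModelRegistry()
gateway = Provider("My Gateway", api_key="sk-demo",
                   base_url="https://api.example.com/v1")
reg.add_model(gateway, "gpt-4o-mini")
reg.bind_default("revornix-ai", "gpt-4o-mini")
```

The point of the sketch is the last step: a model that is added but never bound to a slot is configured yet unused.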
4. What each model-related config field means
The fields users directly fill in are mainly split into two layers:
Model provider fields
These are the fields you enter when creating a model provider.
- `name`: The provider label used for your own organization, such as OpenAI Official, OpenRouter, or My Gateway.
- `description`: A human-readable note about what this provider is for. It does not directly change runtime behavior.
- `api_key`: The credential used to authenticate against that provider. In practice this is usually required if the upstream does not allow anonymous access.
- `base_url`: The provider root URL. Revornix currently consumes models through an OpenAI-compatible API style, so this is usually the root of a compatible endpoint.
- `is_public`: Whether the provider should be published to the community. Other users may discover and fork it, but they do not directly inherit your private credential values.
Among these fields, the ones that directly affect runtime calls are:
- `api_key`
- `base_url`
Model fields
These are the fields you enter when adding a model under a provider.
- `name`: The real upstream model name, which becomes the `model` value in runtime requests. Examples include `gpt-5.4`, `gpt-4o-mini`, `claude-3-7-sonnet`, or `qwen-max`.
- `description`: A human-friendly note to help you distinguish model roles, such as “best for chat,” “cheap test model,” or “long-document summarization.”
- `required_plan_level`: The plan level required to access this model. This is mainly used for access checks and default-selection eligibility.
- `is_official_hosted`: Whether the model is officially hosted by the platform. This matters more for platform operations and billing than for the upstream request itself.
- `compute_point_multiplier`: The usage multiplier used for platform-side compute-point accounting. It does not change the upstream model behavior.
Among these fields, the one that directly affects the upstream request is:
- `name`
A practical mental model is: provider fields decide where the request goes and how it authenticates; model fields decide which exact upstream model is called.
5. The three model fields users most often care about
If you are integrating a standard OpenAI-compatible model source, the three fields you will care about most are:
- `api_key`: The credential proving you are allowed to call the upstream.
- `base_url`: The root URL deciding where the request is sent.
- `model_name` / model name: The exact model identifier that decides which model actually runs.
In plain language:
- `api_key` decides “are you allowed to use it”
- `base_url` decides “where the request goes”
- `model_name` decides “which model actually runs”
6. Current plan and accessibility rules
Models now have more than visibility rules. They also carry access-level constraints.
In the current code:
- A model can declare `required_plan_level`
- Updating default models performs an access check before saving
- If the user’s current plan does not satisfy that requirement, the model can appear in lists but still be disabled for actual selection or use
That means:
- You can browse public models from the model community
- But default selection and real runtime usage still depend on current plan access
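The visibility-versus-eligibility split described above can be sketched in a few lines. The assumption here, which the source does not spell out, is that plan levels are ordered values where a higher level grants more access; the function and field names are illustrative:

```python
def can_use_model(user_plan_level: int, required_plan_level: int) -> bool:
    # Assumption: plan levels are ordered integers; a user satisfies a
    # requirement when their level is at least the model's requirement.
    return user_plan_level >= required_plan_level

models = [
    {"name": "cheap-model", "required_plan_level": 0},
    {"name": "premium-model", "required_plan_level": 2},
]
user_plan = 1

# Every model stays visible in the listing; ineligible ones are merely
# flagged as disabled rather than hidden.
listing = [
    {**m, "disabled": not can_use_model(user_plan, m["required_plan_level"])}
    for m in models
]
```

This mirrors the behavior above: browsing is open, but selection and runtime use are gated by the current plan.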
7. Official hosted models
The codebase now explicitly supports the concept of official hosted models.
Important implications:
- A model can be marked with `is_official_hosted`
- Official hosted models can carry a `compute_point_multiplier`
- In official deployments, the system seeds a built-in Revornix-operated model provider
The current seeded official provider is:
Revornix
The current seeded official model entry is:
gpt-5.4
But this should be read as the current default seeded model entry in the repository, not as a promise that all platform traffic is permanently hard-routed to one single upstream model.
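As a rough sketch of how a `compute_point_multiplier` would factor into platform-side accounting (the billing formula itself is an assumption, not taken from the Revornix code):

```python
def compute_point_cost(base_points: float, compute_point_multiplier: float) -> float:
    # Assumption: the platform charges a base compute-point cost per call
    # and scales it by the model's multiplier. The multiplier affects only
    # this accounting, never the upstream request or model behavior.
    return base_points * compute_point_multiplier

# A model with multiplier 1.0 costs exactly its base points;
# a pricier official hosted model might carry a higher multiplier.
standard_cost = compute_point_cost(10.0, 1.0)
premium_cost = compute_point_cost(10.0, 1.5)
```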
8. Model community and forks
The model community is still part of the product:
- You can publish your own model provider
- Other users can discover public model providers
- To actually use someone else’s provider in your own settings, you still need to fork it first
Forking makes that provider part of your own available resource set. Without that step, the default model selectors will not treat it as one of your own usable resources.
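A minimal sketch of what forking implies, based on the two facts above (the fork joins your own resource set, and private credentials are not inherited). The function shape and dictionary fields are illustrative, not Revornix's real API:

```python
import copy

def fork_provider(public_provider: dict, my_providers: list) -> dict:
    """Copy a public provider into the caller's own resource set.

    The fork keeps the routing configuration (name, base_url) but drops
    the owner's private credential, which the forker must supply anew.
    """
    fork = copy.deepcopy(public_provider)
    fork["api_key"] = ""        # credentials are never inherited
    fork["is_public"] = False   # the fork starts as your private copy
    my_providers.append(fork)   # now eligible for the default selectors
    return fork

mine: list = []
forked = fork_provider(
    {
        "name": "OpenRouter",
        "api_key": "owner-secret",
        "base_url": "https://openrouter.ai/api/v1",
        "is_public": True,
    },
    mine,
)
```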
9. Boundary between models and engines
In the current product, models and engines should not be treated as the same thing:
- Models are mainly about LLM capabilities: reasoning, reading, and chat
- Engines are mainly about specialized capabilities such as website parsing, file parsing, podcast synthesis, transcription, and image generation
For many product features, you now need both a default model and one or more default engines instead of only configuring one side.