🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
# cortex engines
This command allows you to manage various engines available within Cortex.
Usage:
You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
macOS/Linux:

```sh
# Stable
cortex engines [options] [subcommand]

# Beta
cortex-beta engines [options] [subcommand]

# Nightly
cortex-nightly engines [options] [subcommand]
```

Windows:

```sh
# Stable
cortex.exe engines [options] [subcommand]

# Beta
cortex-beta.exe engines [options] [subcommand]

# Nightly
cortex-nightly.exe engines [options] [subcommand]
```
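For instance, combining the `--verbose` flag format above with the `list` subcommand covered later on this page:

```sh
# Show detailed internal logs while listing engines
cortex --verbose engines list
```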
Options:
| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
## cortex engines get
This CLI command calls the corresponding API endpoint on the Cortex server.
This command returns the details of the engine specified by `engine_name`.
Usage:
You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
macOS/Linux:

```sh
# Stable
cortex engines get <engine_name>

# Beta
cortex-beta engines get <engine_name>

# Nightly
cortex-nightly engines get <engine_name>
```

Windows:

```sh
# Stable
cortex.exe engines get <engine_name>

# Beta
cortex-beta.exe engines get <engine_name>

# Nightly
cortex-nightly.exe engines get <engine_name>
```
For example, it returns the following:

```
┌─────────────┬────────────────────────────────────────────────────────────────────────────┐
│   (index)   │                                   Values                                   │
├─────────────┼────────────────────────────────────────────────────────────────────────────┤
│    name     │                                   'onnx'                                   │
│ description │ 'This extension enables chat completion API calls using the Cortex engine' │
│   version   │                                  '0.0.1'                                   │
│ productName │                         'Cortex Inference Engine'                          │
└─────────────┴────────────────────────────────────────────────────────────────────────────┘
```
To get an engine name, run the `engines list` command first.
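If you prefer to query the server directly, the equivalent HTTP call is a GET against the engines endpoint. A minimal sketch, assuming the endpoint lives at `/v1/engines/<engine_name>` and the API server listens on its default local address (adjust host and port to your configuration):

```sh
# Fetch details for the llama-cpp engine from the local Cortex server
# (endpoint path and port are assumptions; verify against your server setup)
curl http://127.0.0.1:39281/v1/engines/llama-cpp
```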
Options:
| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `engine_name` | The name of the engine that you want to retrieve. | Yes | - | `llama-cpp` |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
## cortex engines list
This CLI command calls the corresponding API endpoint on the Cortex server.
This command lists all of Cortex's engines.
Usage:
You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
macOS/Linux:

```sh
# Stable
cortex engines list [options]

# Beta
cortex-beta engines list [options]

# Nightly
cortex-nightly engines list [options]
```

Windows:

```sh
# Stable
cortex.exe engines list [options]

# Beta
cortex-beta.exe engines list [options]

# Nightly
cortex-nightly.exe engines list [options]
```
For example, it returns the following:

```
+---+--------------+-------------------+---------+----------------------------+---------------+
| # | Name         | Supported Formats | Version | Variant                    | Status        |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 1 | onnxruntime  | ONNX              |         |                            | Incompatible  |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 2 | llama-cpp    | GGUF              | 0.1.34  | linux-amd64-avx2-cuda-12-0 | Ready         |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 3 | tensorrt-llm | TensorRT Engines  |         |                            | Not Installed |
+---+--------------+-------------------+---------+----------------------------+---------------+
```
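The same information is available over HTTP. A minimal sketch, assuming the engines collection is exposed at `/v1/engines` on the server's default local address (an assumption; verify against your server configuration):

```sh
# List all engines known to the local Cortex server
curl http://127.0.0.1:39281/v1/engines
```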
Options:
| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
## cortex engines install
This CLI command calls the corresponding API endpoint on the Cortex server.
This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines:

- `llama-cpp`
- `onnxruntime`
- `tensorrt-llm`
Usage:
You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
macOS/Linux:

```sh
# Stable
cortex engines install [options] <engine_name>

# Beta
cortex-beta engines install [options] <engine_name>

# Nightly
cortex-nightly engines install [options] <engine_name>
```

Windows:

```sh
# Stable
cortex.exe engines install [options] <engine_name>

# Beta
cortex-beta.exe engines install [options] <engine_name>

# Nightly
cortex-nightly.exe engines install [options] <engine_name>
```
For example:

```sh
## Llama.cpp engine
cortex engines install llama-cpp

## ONNX engine
cortex engines install onnxruntime

## Tensorrt-LLM engine
cortex engines install tensorrt-llm
```
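Since installation downloads dependencies and can take a while, it can be useful to confirm the result afterwards. A minimal sketch using only the commands documented on this page:

```sh
# Install the llama-cpp engine, then check its details to confirm the install
cortex engines install llama-cpp
cortex engines get llama-cpp
```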
Options:
| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `engine_name` | The name of the engine you want to install. | Yes | - | - |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
## cortex engines uninstall
This command uninstalls the specified engine from Cortex.
Usage:
You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
macOS/Linux:

```sh
# Stable
cortex engines uninstall [options] <engine_name>

# Beta
cortex-beta engines uninstall [options] <engine_name>

# Nightly
cortex-nightly engines uninstall [options] <engine_name>
```

Windows:

```sh
# Stable
cortex.exe engines uninstall [options] <engine_name>

# Beta
cortex-beta.exe engines uninstall [options] <engine_name>

# Nightly
cortex-nightly.exe engines uninstall [options] <engine_name>
```
For example:

```sh
## Llama.cpp engine
cortex engines uninstall llama-cpp

## ONNX engine
cortex engines uninstall onnxruntime

## Tensorrt-LLM engine
cortex engines uninstall tensorrt-llm
```
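As with installation, you can confirm the result afterwards. A minimal sketch:

```sh
# Remove the llama-cpp engine, then list engines to confirm it is gone
cortex engines uninstall llama-cpp
cortex engines list
```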
Options:
| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `engine_name` | The name of the engine you want to uninstall. | Yes | - | - |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |