
Load ONNX Model

Loads the specified ONNX model file, starts an independent inference process, and returns the model's unique identifier (onnxId).

💡 Hardware Acceleration

This API automatically detects whether the host machine supports CUDA. If CUDA is not available, it falls back to CPU inference; no manual configuration is needed.

API Description

API Type

loadOnnxModel

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| file | string | Yes | Full path to the ONNX model file |

Return Value

Returns the model ID (onnxId) as a string, in the format `onnx_` + random string.

```javascript
// Return example
"onnx_8k3j2h9s1d4f6"
```
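Because the exact character set of the random suffix is not documented, a loose sanity check on the returned ID can be sketched as follows; the regex below is an assumption based on the example above.

```javascript
// Loose sanity check for an onnxId string.
// The exact alphabet of the random suffix is undocumented; this pattern
// is an assumption based on the example "onnx_8k3j2h9s1d4f6".
function isLikelyOnnxId(id) {
    return typeof id === 'string' && /^onnx_[A-Za-z0-9]+$/.test(id);
}

console.log(isLikelyOnnxId('onnx_8k3j2h9s1d4f6')); // true
console.log(isLikelyOnnxId('model_123'));          // false
```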

Basic Usage

```javascript
// Load an ONNX model
const onnxId = await apiInvoke('loadOnnxModel', {
    file: 'C:\\models\\yolov5s.onnx'
});

console.log('Model loaded successfully, ID:', onnxId);
// Output: Model loaded successfully, ID: onnx_8k3j2h9s1d4f6
```
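Loading can fail (missing file, corrupt model), so it is prudent to wrap the call in try/catch. The sketch below shows the pattern; the `apiInvoke` function here is a mock stand-in for the real runtime so the example is self-contained, and its behavior is an assumption for illustration only.

```javascript
// Mock stand-in for the real apiInvoke runtime function (illustration only).
// It resolves with a fake ID for .onnx paths and rejects otherwise.
async function apiInvoke(api, params) {
    if (api === 'loadOnnxModel' && params.file.endsWith('.onnx')) {
        return 'onnx_8k3j2h9s1d4f6';
    }
    throw new Error('Failed to load model: ' + params.file);
}

async function loadModelSafely(file) {
    try {
        const onnxId = await apiInvoke('loadOnnxModel', { file });
        console.log('Model loaded successfully, ID:', onnxId);
        return onnxId;
    } catch (err) {
        // Surface a readable message instead of an unhandled rejection.
        console.error('Model load failed:', err.message);
        return null;
    }
}
```

With the real runtime, only `loadModelSafely` is needed; drop the mock and call it with the actual model path.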

Performance Recommendations

  1. Unload as needed - For models that are used infrequently, unload them promptly once their tasks complete to free the inference process
  2. Allocate sensibly - Capacity is limited by host memory and compute, so avoid keeping too many models loaded at once
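The "unload as needed" recommendation can be sketched as a load/use/release wrapper. Note that this page does not document an unload API: the name `unloadOnnxModel` below is hypothetical, and `apiInvoke` is mocked so the example runs standalone; check the runtime's actual documentation for the real call.

```javascript
// Tracks which model IDs the mock runtime currently holds.
const loaded = new Set();

// Mock stand-in for the real apiInvoke (illustration only).
// 'unloadOnnxModel' is a HYPOTHETICAL API name, not documented here.
async function apiInvoke(api, params) {
    if (api === 'loadOnnxModel') {
        const id = 'onnx_demo1';
        loaded.add(id);
        return id;
    }
    if (api === 'unloadOnnxModel') { // hypothetical unload call
        loaded.delete(params.onnxId);
        return true;
    }
}

// Load a model, run a task with it, and always release it afterwards.
async function withModel(file, task) {
    const onnxId = await apiInvoke('loadOnnxModel', { file });
    try {
        return await task(onnxId);
    } finally {
        // Release the inference process even if the task throws.
        await apiInvoke('unloadOnnxModel', { onnxId });
    }
}
```

The try/finally ensures the inference process is released on both success and failure, which keeps host memory usage bounded when many models rotate through.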

Business inquiries: try.catch@foxmail.com