# Load ONNX Model

Loads the specified ONNX model file, starts an independent inference process, and returns the model's unique identifier (onnxId).
> 💡 **Hardware Acceleration**
>
> This API automatically detects whether the host machine supports CUDA. If CUDA is not available, it falls back to CPU inference; no manual configuration is needed.
## API Description

**API Type:** `loadOnnxModel`

## Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| file | string | Yes | Full path to the ONNX model file |
## Return Value

Returns a model ID (onnxId) of type `string`, in the format `onnx_` + random string.

```javascript
// Return example
"onnx_8k3j2h9s1d4f6"
```

## Basic Usage
```javascript
// Load the ONNX model
const onnxId = await apiInvoke('loadOnnxModel', {
  file: 'C:\\models\\yolov5s.onnx'
});
console.log('Model loaded successfully, ID:', onnxId);
// Output: Model loaded successfully, ID: onnx_8k3j2h9s1d4f6
```

## Performance Recommendations
- **Unload as needed**: for models that are not used frequently, unload them promptly after their tasks complete to free host resources
- **Allocate sensibly**: host performance is limited, so avoid loading too many models at the same time
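The first recommendation can be sketched as a load → use → unload lifecycle with error handling. This is a minimal sketch, not part of the documented API surface: it assumes a companion `unloadOnnxModel` API that accepts `{ onnxId }` (hypothetical; check your host's API list), and it stubs `apiInvoke` with a mock, since the real function is provided by the host environment.

```javascript
// Mock apiInvoke for illustration only -- the real one is injected by the host.
const apiInvoke = async (name, params) => {
  if (name === 'loadOnnxModel') return 'onnx_mock123';
  if (name === 'unloadOnnxModel') return true; // hypothetical unload API
  throw new Error(`Unknown API: ${name}`);
};

async function runInference(modelPath) {
  let onnxId = null;
  try {
    // Load the model and obtain its ID.
    onnxId = await apiInvoke('loadOnnxModel', { file: modelPath });
    console.log('Loaded:', onnxId);
    // ... run inference with onnxId here ...
    return onnxId;
  } catch (err) {
    console.error('Model load or inference failed:', err.message);
    throw err;
  } finally {
    // Unload promptly so the host is not left holding an unused model.
    if (onnxId !== null) {
      await apiInvoke('unloadOnnxModel', { onnxId });
    }
  }
}

runInference('C:\\models\\yolov5s.onnx');
```

Wrapping the unload in `finally` guarantees the model is released even when inference throws, which keeps the number of loaded models bounded on a resource-constrained host.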