Predict
Run inference on models deployed on the CX platform. You can run inference on models you uploaded yourself, or publicly available ones hosted by CX.
Command Line Interface
Run predictions directly from the command line with the `cx predict` command.
`cx predict` arguments:

- `--app`: the name of the deployed app
- `--data`: the payload that the model is expecting
- `--is-public`: a boolean (default `False`). Set this to `True` if you are accessing a model publicly hosted by CX, and `False` if your org is hosting it.
- `--is-serverless`: a boolean (default `False`). Set this to `True` if the app was deployed as serverless.
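Putting the flags above together, an invocation might look like the following. The app name and payload file here are hypothetical placeholders; substitute the values for your own deployment.

```shell
# Hypothetical example: predict against a publicly hosted CX model.
# "my-app" and payload.json are placeholders, not real resources.
cx predict \
  --app my-app \
  --data payload.json \
  --is-public True
```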
Python
To call the API using Python, follow the examples in
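As a rough sketch of what such a call involves, the snippet below builds (without sending) a JSON prediction request mirroring the CLI flags. The endpoint URL, field names, and `build_predict_request` helper are assumptions for illustration, not the documented CX API; consult the linked examples for the real client.

```python
import json
from urllib.request import Request

# Hypothetical endpoint -- replace with your deployment's real URL.
CX_API_URL = "https://api.example-cx.com/v1/predict"

def build_predict_request(app, data, is_public=False, is_serverless=False):
    """Build (but do not send) a prediction request for a deployed CX app.

    The JSON field names mirror the cx predict CLI flags; the actual
    wire format is an assumption here.
    """
    body = json.dumps({
        "app": app,
        "data": data,
        "is_public": is_public,
        "is_serverless": is_serverless,
    }).encode("utf-8")
    return Request(
        CX_API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_predict_request("my-app", {"text": "hello"})
# Sending it would be: urllib.request.urlopen(req)
```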
cURL Request
Alternatively, you can run inference through a cURL request:
Update the payload in `-F` to match your desired inference configuration.
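A cURL request of this shape might look like the sketch below. The endpoint URL, token variable, and form field names are assumptions for illustration; only the `-F` multipart flags themselves come from standard cURL.

```shell
# Hypothetical example -- substitute your deployment's URL and credentials.
curl -X POST "https://api.example-cx.com/v1/predict" \
  -H "Authorization: Bearer $CX_API_TOKEN" \
  -F "app=my-app" \
  -F "data=@payload.json" \
  -F "is_public=False" \
  -F "is_serverless=False"
```

Each `-F` flag adds one multipart form field, so the payload fields parallel the CLI arguments.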