Has anyone tried to deploy ludwig serve to a Google Vertex model endpoint? #1728
-
Hi folks, I've been following a guide on deploying custom models for serving on Vertex: "Training, Tuning and Deploying a PyTorch Text Classification Model on Vertex AI", in particular the section at the end, "Invoking the Endpoint with deployed Model using Vertex SDK to make predictions". So far I've managed to:
From here I can make requests to the endpoint, but I always get the following response:
This happens locally if the request is not formatted correctly. For example, the expected format for my model is:
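For context, `ludwig serve` takes its inputs as form fields rather than a JSON body, so a working local request looks roughly like the sketch below (the port, endpoint, and feature name are assumptions for illustration, not taken from the original post):

```shell
# Hypothetical example: ludwig serve listens on port 8000 by default,
# and /predict expects each input feature as a form field.
curl http://localhost:8000/predict -X POST \
  -F 'review_text=this product was great'
```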
Vertex expects a bit of additional JSON and provides the following example for plain text:
Source: "Get online predictions from custom-trained models". The `instances` key is required and is used to structure multiple inputs in a request. I have tried including the feature name as follows:
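To illustrate the shape being described: a Vertex prediction request wraps each input row in a top-level `instances` array, and with named features each row becomes an object keyed by feature name. A minimal sketch (the feature name below is a placeholder, not from the original post):

```python
import json

# Each element of "instances" is one input row. For a model with named
# features, each row is an object keyed by feature name.
payload = {
    "instances": [
        {"review_text": "this product was great"},
        {"review_text": "terrible, would not buy again"},
    ]
}

# This JSON string is what gets POSTed to the Vertex endpoint with
# Content-Type: application/json.
body = json.dumps(payload)
print(body)
```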
However, I still get an error response with "entry must contain all input features". I'm not sure how to progress from here. I've also created an issue in the Google Cloud Platform GitHub: "Custom model deployed with a docker container but requests are not working as expected". Any ideas?
Replies: 1 comment 1 reply
-
Hi, the default `ludwig serve` endpoint expects inputs to be provided as form fields. This is not a supported interface for Vertex AI prediction requests, which require `Content-Type: application/json`. We are working on updating the `serve` capability in Ludwig to be compatible with the KServe v2 Prediction Protocol, which is a supported format with Vertex AI and will remove the need to build a custom container to host Ludwig models with Vertex AI. In the meantime you could explore using TorchServe to deploy a PyTorch model to Vertex AI. See the following code to save the traced TorchScript model, which will include pre-processing for numeric and category fields:
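The exact code the comment refers to isn't preserved here. As a rough sketch of the general pattern (not Ludwig's actual export code), tracing a module that bakes simple numeric scaling and a category lookup into the saved graph might look like:

```python
import torch
import torch.nn as nn

class PreprocessAndPredict(nn.Module):
    """Toy stand-in for a traced model: normalizes a numeric feature and
    embeds a category index before a linear head. Illustrative only, not
    Ludwig's real export path."""

    def __init__(self) -> None:
        super().__init__()
        # Pretend these statistics came from training-set preprocessing.
        self.register_buffer("num_mean", torch.tensor(3.0))
        self.register_buffer("num_std", torch.tensor(1.5))
        self.embed = nn.Embedding(num_embeddings=4, embedding_dim=2)
        self.head = nn.Linear(3, 1)

    def forward(self, numeric: torch.Tensor, category: torch.Tensor) -> torch.Tensor:
        scaled = (numeric - self.num_mean) / self.num_std  # numeric preprocessing
        emb = self.embed(category)                         # category preprocessing
        features = torch.cat([scaled.unsqueeze(-1), emb], dim=-1)
        return self.head(features)

model = PreprocessAndPredict().eval()
example = (torch.tensor([2.0, 4.5]), torch.tensor([1, 3]))

# Tracing bakes the preprocessing into the serialized graph, so the saved
# file can be served without the original Python class.
traced = torch.jit.trace(model, example)
traced.save("model.pt")  # serialized file to hand to torch-model-archiver

reloaded = torch.jit.load("model.pt")
assert torch.allclose(reloaded(*example), model(*example))
```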
You can then use the torch-model-archiver to package your model along with the handler required for Vertex AI.
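Packaging then follows the standard `torch-model-archiver` invocation. A sketch, assuming a serialized TorchScript file `model.pt` and a custom `handler.py` that implements the Vertex AI request/response handling (both file names are placeholders):

```shell
# Bundle the serialized TorchScript model and custom handler into a .mar
# archive that TorchServe (and thus a Vertex AI custom container) can load.
torch-model-archiver \
  --model-name my_ludwig_model \
  --version 1.0 \
  --serialized-file model.pt \
  --handler handler.py \
  --export-path model_store
```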