Custom model format breaks Models UI #75

Open
ghost opened this issue Oct 17, 2023 · 3 comments
Comments

@ghost

ghost commented Oct 17, 2023

Currently, if a user specifies a model format with a custom name in the InferenceService manifest and deploys it, the Models UI in Kubeflow becomes dysfunctional and does not show any model deployed under the same namespace. This was mentioned in #46, but there has been no update.

For example,

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: dummy-model
  namespace: dummy-namespace
spec:
  predictor:
    model:
      image: dummy-image
      modelFormat:
        name: my-model
      name: 'kserve-container'

with a ClusterServingRuntime

apiVersion: serving.kserve.io/v1alpha1
kind: ClusterServingRuntime
metadata:
  name: kserve-my-model
spec:
  containers:
  - image: dummy-image
    name: kserve-container
  supportedModelFormats:
  - autoSelect: true
    name: my-model

This is because PredictorType is currently hardcoded as an Enum (see here), but the getPredictorType function casts the model format name to the Enum directly instead of first checking whether it is a member of the Enum. TypeScript does not raise an error by default when this happens; the cast simply yields undefined, which breaks all the downstream tasks, and the web-app logs show nothing related to this.

Suggestion:
Check whether the model format is a member of the PredictorType Enum and return PredictorType.Custom if it is not (similar to what we do with old predictor formats before KServe 0.7), as in the sketch below. I believe #47 has already attempted to fix this problem, but I can also make a PR for it.
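
A minimal sketch of that check (the enum members and helper name here are illustrative assumptions, not the actual web-app code):

// Illustrative enum; the real web-app enum lists the built-in KServe formats.
enum PredictorType {
  Sklearn = 'sklearn',
  Tensorflow = 'tensorflow',
  Triton = 'triton',
  Custom = 'custom',
}

// Suggested behaviour: fall back to Custom when the model format is not a
// known enum member, instead of casting the raw string and getting undefined.
function getPredictorType(modelFormatName: string): PredictorType {
  const known = Object.values(PredictorType) as string[];
  if (known.includes(modelFormatName)) {
    return modelFormatName as PredictorType;
  }
  return PredictorType.Custom;
}

With this in place, getPredictorType('my-model') returns PredictorType.Custom and the UI keeps rendering, instead of silently propagating undefined.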

Question:
How should we make PredictorType more flexible for custom model types? Say a user has custom model formats "A", "B" and "C". Even with the above suggestion, the Models UI will still show Custom for all three.
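
One possible direction (purely an illustrative sketch building on the getPredictorType snippet above, not something that exists in the web-app today) is to carry the raw modelFormat.name alongside the resolved PredictorType, so custom formats stay distinguishable in the UI:

// Hypothetical shape: keep the original format name next to the enum value.
interface ResolvedPredictor {
  type: PredictorType;   // Custom for anything not in the enum
  displayName: string;   // e.g. 'A', 'B', 'C', or a built-in format name
}

function resolvePredictor(modelFormatName: string): ResolvedPredictor {
  return {
    type: getPredictorType(modelFormatName),
    displayName: modelFormatName,
  };
}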

@Sayed-Imran

The same thing is happening with the sample mlflow modelFormat.

The model gets deployed, and I am even able to run inference against it, yet it does not show up in the UI.

@supertetelman

supertetelman commented Apr 23, 2024

I am encountering this same issue when I launch my custom-defined ServingRuntime and InferenceService (https://github.com/supertetelman/nim-kserve/blob/main/nim-models/llama-2-7b_1-a100_24.01.yaml) on Kubeflow v1.8.0.

When I deploy this InferenceService, I have to check the developer console to get the endpoint URL, and then things work. But nothing is displayed in the UI.

@capoolebugchat

Raising this issue again. Everything else works great, but this undermines the project, as seeing deployed models for debugging (without needing to read the Pod's logs or run any additional services) is a crucial part of the workflow for small ML teams.
