Current - RAG UX Enhancements | Model Directory | API Odds and Ends

Users should be able to:

  • [RAG] View annotations in UI to see sources when using RAG for "chat with your docs"
  • [RAG] Upload new document types, including Word, PowerPoint, and Excel (images ignored) DONE

MUST HAVE (Non-negotiable product needs that are mandatory for the team)

  • [RAG] UI lets users upload Word, PowerPoint, and Excel files (images are ignored) DONE
    • The RAG system chunks data appropriately according to file type DONE
  • [RAG] Messages include file_citation and/or file_path annotations (see the annotation sketch after this list)
  • [RAG] Reword the file_citation response so it is clearer that the referenced files were 'used' even if they were not relevant to the original prompt
  • [MD] Models are no longer baked into runtime containers; instead, they are managed in and deployed to a model registry that the containers pull from at runtime (sketched below).
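
For the annotation items above, a minimal sketch of how a client might surface file_citation sources, assuming the OpenAI-style message annotation shape (field names such as start_index, end_index, and file_citation.file_id come from that spec; LeapfrogAI's actual payload may differ):

```python
# Render file_citation/file_path annotations as numbered footnotes.
message = {
    "content": "Llamas are vegetarian【0†source】.",
    "annotations": [
        {
            "type": "file_citation",
            "text": "【0†source】",
            "start_index": 21,
            "end_index": 31,
            "file_citation": {"file_id": "file-abc123", "quote": "Llamas eat grasses."},
        }
    ],
}

def render_with_sources(msg: dict) -> str:
    """Replace inline citation markers with [n] footnotes and list the files used."""
    text, footnotes = msg["content"], []
    for i, ann in enumerate(msg["annotations"]):
        if ann["type"] in ("file_citation", "file_path"):
            text = text.replace(ann["text"], f"[{i}]")
            file_id = ann.get(ann["type"], {}).get("file_id", "unknown")
            footnotes.append(f"[{i}] {file_id}")
    return text + "\n" + "\n".join(footnotes)

print(render_with_sources(message))
# Llamas are vegetarian[0].
# [0] file-abc123
```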

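For the model-registry item, a rough sketch of a container entrypoint that pulls weights at startup rather than baking them into the image; the environment variables, cache path, and flat-file registry layout are all hypothetical:

```python
import os
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("/models")

def ensure_model(name: str) -> pathlib.Path:
    """Download the named model from the registry unless it is already cached."""
    target = CACHE_DIR / name
    if target.exists():
        return target  # warm cache: container restarts skip the download
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    url = f"{os.environ['MODEL_REGISTRY_URL']}/{name}"
    urllib.request.urlretrieve(url, target)
    return target

if __name__ == "__main__":
    model_path = ensure_model(os.environ.get("MODEL_NAME", "llama-2-7b.gguf"))
    print(f"serving model from {model_path}")
```
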
SHOULD HAVE (Important initiatives that are not vital, but add significant value)

  • [RAG] UI shows users which document(s) were used in a RAG response
    • Optional (if easy): Users can select an annotation to view the passage that was used
    • Optional (if easy): Users can select an annotation to view/download the original file
  • [RAG] Text-embedding creation for uploaded files is queued (see the queue sketch after this list)
  • [RAG] UI displays the status of uploaded files within the queue (e.g., queued, processing, ready)
  • [MD] Deploy multiple models as separate containers
  • [MD] UI lets users select from currently active chat LLMs when creating/editing an Assistant
  • [API] Users can request long-lived API keys via the LeapfrogAI API (see the key-issuance sketch after this list)
  • [API] The transcriptions and translations endpoints are implemented according to the OpenAI API spec DONE
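
A minimal sketch of the queued-embedding flow with the per-file statuses named above, assuming a single in-process worker; a real deployment would presumably use a durable queue rather than queue.Queue, and the names here are illustrative only:

```python
import queue
import threading

statuses: dict[str, str] = {}          # file_id -> queued | processing | ready
jobs: queue.Queue[str] = queue.Queue()

def submit(file_id: str) -> None:
    statuses[file_id] = "queued"
    jobs.put(file_id)

def worker() -> None:
    while True:
        file_id = jobs.get()
        statuses[file_id] = "processing"
        # ... chunk the file and create text embeddings here ...
        statuses[file_id] = "ready"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
submit("file-abc123")
jobs.join()                            # block until the queue drains
print(statuses)                        # {'file-abc123': 'ready'}
```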

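For the long-lived API key item, a sketch of one common issuance pattern (return the secret once, persist only a hash plus expiry); the lfai_ prefix, 30-day default, and in-memory store are assumptions, not the actual LeapfrogAI design:

```python
import hashlib
import secrets
import time

key_store: dict[str, float] = {}  # sha256(key) -> expiry, epoch seconds

def issue_key(ttl_days: int = 30) -> str:
    key = "lfai_" + secrets.token_urlsafe(32)       # shown to the user once
    digest = hashlib.sha256(key.encode()).hexdigest()
    key_store[digest] = time.time() + ttl_days * 86400
    return key

def validate(key: str) -> bool:
    digest = hashlib.sha256(key.encode()).hexdigest()
    return key_store.get(digest, 0) > time.time()

api_key = issue_key()
assert validate(api_key)
```
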
COULD HAVE (Nice to have initiatives that will have a small impact if left out)

  • [RAG] Initial set of model evals (see the scoring sketch after this list)
    • Establish a list of models to evaluate, for both LLMs and embeddings
    • Create a testing dataset for RAG and question/answer/ground-truth data DONE
    • Formalize a set of metrics for evaluation DONE
    • Evaluate a subset of models and present for mission hero interpretation
  • [RAG] Integrate RAG eval tools into our repository & connect with LeapfrogAI
  • [RAG] Implement OCR/image analysis to extract data from images embedded in the file types listed above, e.g., PowerPoint (see the OCR sketch after this list)
  • [API] UI lets users generate long-lived API keys (dependent on above work) DONE
  • [Other] UI renders code in generated output as a formatted "code block" rather than regular text DONE
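
For the eval items, a sketch of scoring answers against the question/answer/ground-truth dataset with exact match and token-level F1; the dataset shape and metric choice are illustrative, not the formalized metric set mentioned above:

```python
from collections import Counter

def token_f1(prediction: str, truth: str) -> float:
    """SQuAD-style token overlap F1 between a model answer and the ground truth."""
    pred, gold = prediction.lower().split(), truth.lower().split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

dataset = [  # (question, model answer, ground truth)
    ("What do llamas eat?", "Llamas mostly eat grasses.", "Llamas eat grasses."),
]
for question, answer, truth in dataset:
    exact = float(answer.strip().lower() == truth.strip().lower())
    print(f"{question!r}: exact={exact:.0f}, f1={token_f1(answer, truth):.2f}")
```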

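For the OCR item, a sketch of extracting and OCR'ing images embedded in a PowerPoint file using python-pptx and pytesseract (both third-party; the Tesseract binary must be installed separately); the function name is hypothetical:

```python
import io

from PIL import Image
from pptx import Presentation
from pptx.enum.shapes import MSO_SHAPE_TYPE
import pytesseract

def ocr_pptx_images(path: str) -> list[str]:
    """Return OCR'd text for every picture shape in the deck."""
    texts = []
    for slide in Presentation(path).slides:
        for shape in slide.shapes:
            if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
                image = Image.open(io.BytesIO(shape.image.blob))
                texts.append(pytesseract.image_to_string(image))
    return texts

# Each extracted string would then be chunked and embedded like regular text.
```
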
WILL NOT HAVE (Initiatives that are not a priority for this specific time frame)

  • UI implements workflow for transcription/translation/summarization
  • UI lets users create an Assistant and select a model that is in ModelRegistry (and not currently running) so that the model spins up on demand.