Willingness to Pay

An interactive web application that helps users analyze call transcripts to assess customer willingness to pay.

Features

  • Analyzes call transcripts to assess customer willingness to pay
  • Customizable system prompts and analysis criteria
  • Dark/light theme support
  • Interactive results table with detailed modal view
  • Keyboard navigation support
  • Secure authentication via LLM Foundry API
  • Batch analysis of multiple transcripts in one run
  • JSON schema validation for consistent responses (a response-shape sketch follows this list)
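
The schema-validated responses are what make the results table predictable. As a rough illustration only, the sketch below shows one plausible per-transcript result schema; the field names and structure are assumptions, not the schema the app actually ships in script.js.

```js
// Hypothetical JSON schema for a single transcript's analysis result.
// Field names are illustrative; the real schema is defined in script.js.
const resultSchema = {
  type: "object",
  properties: {
    // One entry per analysis criterion, e.g. "Mentions budget approval".
    criteria: {
      type: "array",
      items: {
        type: "object",
        properties: {
          criterion: { type: "string" }, // the criterion text
          met: { type: "boolean" },      // rendered as ✅ / ❌ in the table
          evidence: { type: "string" },  // supporting quote from the transcript
        },
        required: ["criterion", "met"],
      },
    },
    summary: { type: "string" },         // shown in the detail modal
  },
  required: ["criteria"],
};
```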

Usage

  1. Log in using your LLM Foundry credentials
  2. Either use the default transcripts or enter your own:
    • Add transcripts separated by ========== (a parsing sketch follows this list)
    • Customize the system prompt if needed
    • Modify the analysis criteria (one per line)
  3. Click "Analyze" to process the transcripts
  4. View results in the interactive table:
    • ✅ indicates positive responses
    • ❌ indicates negative responses
    • Click any row to view detailed analysis
    • Use ↑/↓ keys to navigate between results
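
A minimal sketch of how the inputs described above could be parsed, assuming the transcripts and criteria live in two textareas; the element IDs here are hypothetical, not taken from index.html.

```js
// Hypothetical element IDs; the real ones are defined in index.html.
const transcripts = document.querySelector("#transcripts").value
  .split("==========")   // transcripts are separated by a line of = signs
  .map((t) => t.trim())
  .filter(Boolean);

const criteria = document.querySelector("#criteria").value
  .split("\n")           // one analysis criterion per line
  .map((c) => c.trim())
  .filter(Boolean);

console.log(`${transcripts.length} transcripts, ${criteria.length} criteria`);
```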

Screenshot


Installation

Prerequisites

  • A modern web browser with JavaScript (ES modules) enabled
  • An LLM Foundry account for authentication
  • Any static web server for local use (for example, Python's http.server)

Local Setup

  1. Clone this repository:
git clone https://github.com/gramener/willingnesstopay.git
cd willingnesstopay
  2. Serve the files using any static web server. For example, using Python:
python -m http.server
  3. Open http://localhost:8000 in your web browser

Deployment

On Cloudflare DNS, proxy CNAME willingnesstopay.straive.app to gramener.github.io.

In this repository's GitHub Pages settings, set:

  • Source: Deploy from a branch
  • Branch: main
  • Folder: /

Technical Details

Architecture

The application follows a simple single-page architecture:

  • Frontend-only implementation using vanilla JavaScript and ESM modules
  • Streaming LLM responses for real-time analysis feedback
  • Bootstrap for responsive UI components
  • lit-html for efficient DOM updates (a rendering sketch follows this list)
  • JSON schema validation for API responses
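
As an illustration of the lit-html pattern, the sketch below renders the ✅/❌ results table and can be re-run as streamed data arrives; the CDN import path, data shape, and container id are assumptions rather than the app's actual code.

```js
// Sketch of rendering the results table with lit-html.
// The CDN import path and data shape are assumptions, not the app's exact code.
import { html, render } from "https://cdn.jsdelivr.net/npm/lit-html@3/+esm";

const resultsTemplate = (results) => html`
  <table class="table">
    <tbody>
      ${results.map(
        (r) => html`
          <tr>
            <td>${r.criterion}</td>
            <td>${r.met ? "✅" : "❌"}</td>
          </tr>
        `
      )}
    </tbody>
  </table>
`;

// Re-rendering the same template into the same container only patches the
// DOM nodes that changed, which keeps streaming updates cheap.
render(
  resultsTemplate([{ criterion: "Mentions a budget", met: true }]),
  document.querySelector("#results") // hypothetical container id
);
```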

Dependencies

The application uses the LLM Foundry API for:

  • Authentication via token-based access (a request sketch follows this list)
  • GPT-4-powered transcript analysis
  • Streaming response handling
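
A hedged sketch of what such a token-authenticated, streaming call could look like from an ES module; the endpoint URL, header format, and token storage key are assumptions, since the LLM Foundry API details are not documented here.

```js
// Hypothetical authenticated streaming request; endpoint, header format,
// and token storage key are assumptions, not documented API details.
const token = localStorage.getItem("llmfoundry_token"); // saved after login

const response = await fetch("https://llmfoundry.example/api/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`, // token-based access
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4",   // GPT-4 powered analysis
    stream: true,     // streamed back for real-time feedback
    messages: [{ role: "user", content: "Assess willingness to pay in: …" }],
  }),
});

// Read the streamed body chunk by chunk as it arrives.
const reader = response.body.getReader();
const decoder = new TextDecoder();
let text = "";
for (;;) {
  const { value, done } = await reader.read();
  if (done) break;
  text += decoder.decode(value, { stream: true });
}
```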

Development

Project Structure

├── index.html # Main HTML file
├── script.js # Main application logic
├── style.css # Styling
└── README.md # Documentation

License

MIT