John Snow Labs LangTest 2.0.0: Comprehensive Model Benchmarking, Added Support for LM Studio, CLI Integration for Embedding Benchmarks, Enhanced Toxicity Tests, Multi-Dataset Comparison, and Elevated User Experience with Various Bug Fixes and Enhancements. #984
📢 Highlights
🌟 LangTest 2.0.0 Release by John Snow Labs
We're thrilled to announce the latest release of LangTest, introducing remarkable features that elevate its capabilities and user-friendliness. This update brings a host of enhancements:
🔬 Model Benchmarking: Conducted tests on diverse models across datasets for insights into performance.
🔌 Integration: LM Studio with LangTest: Offline utilization of Hugging Face quantized models for local NLP tests.
🚀 Text Embedding Benchmark Pipelines: Streamlined process for evaluating text embedding models via CLI.
📊 Compare Models Across Multiple Benchmark Datasets: Simultaneous evaluation of model efficacy across diverse datasets.
🤬 Custom Toxicity Checks: Tailor evaluations to focus on specific types of toxicity, offering detailed analysis in targeted areas of concern, such as obscenity, insult, threat, identity attack, and targeting based on sexual orientation, while maintaining broader toxicity detection capabilities.
Implemented LRU caching within the run method to optimize model prediction retrieval for duplicate records, enhancing runtime efficiency.
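For illustration, the idea behind this caching is shown in the minimal, hypothetical sketch below; it is not LangTest's actual implementation, but it shows how duplicate records can reuse a cached prediction instead of triggering a second model call:

```python
from functools import lru_cache

def slow_model_call(prompt: str) -> str:
    # Stand-in for an expensive LLM prediction call.
    return prompt.upper()

@lru_cache(maxsize=128)  # keep the most recently used 128 distinct prompts
def cached_predict(prompt: str) -> str:
    return slow_model_call(prompt)

# Duplicate records reuse the cached result rather than re-invoking the model.
predictions = [cached_predict(p) for p in ["Who wrote Hamlet?", "Who wrote Hamlet?", "2+2?"]]
```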
🔥 Key Enhancements:
🚀 Model Benchmarking: Exploring Insights into Model Performance
As part of our ongoing Model Benchmarking initiative, we're excited to share the results of our comprehensive tests on a diverse range of models across various datasets, focusing on evaluating their performance in terms of accuracy and robustness.
Key Highlights:
Comprehensive Evaluation: Our rigorous testing methodology covered a wide array of models, providing a holistic view of their performance across diverse datasets and tasks.
Insights into Model Behavior: Through this initiative, we've gained valuable insights into the strengths and weaknesses of different models, uncovering areas where even large language models exhibit limitations.
Go to: Leaderboard
Models covered in these benchmarks include: Deci/DeciLM-7B-instruct, TheBloke/Llama-2-7B-chat-GGUF, TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF, TheBloke/neural-chat-7B-v3-1-GGUF, TheBloke/openchat_3.5-GGUF, TheBloke/phi-2-GGUF, TheBloke/zephyr-7B-beta-GGUF, mlabonne/NeuralBeagle14-7B-GGUF, google/flan-t5-xxl, gpt-3.5-turbo-instruct, gpt-4-1106-preview, mistralai/Mistral-7B-Instruct-v0.1, and mistralai/Mixtral-8x7B-Instruct-v0.1.
⚡Integration: LM Studio with LangTest
The integration of LM Studio with LangTest enables offline utilization of Hugging Face quantized models, offering users a seamless experience for conducting various NLP tests locally.
Key Benefits:
Offline Accessibility: With this integration, users can now leverage Hugging Face quantized models for NLP tasks like Question Answering, Summarization, Fill Mask, and Text Generation directly within LangTest, even without an internet connection.
Enhanced Control: LM Studio's user-friendly interface provides users with enhanced control over their testing environment, allowing for greater customization and optimization of test parameters.
How it Works:
Simply integrate LM Studio with LangTest to unlock offline use of Hugging Face quantized models for your NLP testing needs; the demo video below walks through the setup.
Demo video: LM-Studio-Demo.ipynb.-.langtest.-.Visual.Studio.Code.2024-01-31.19-00-22_1.mp4
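As a minimal sketch of such a run (the hub identifier "lm-studio", the local endpoint URL, and the dataset below are assumptions; LM Studio's local inference server must be running, and the exact parameters should be checked against the LangTest docs):

```python
from langtest import Harness

# Illustrative: point LangTest at the quantized model served locally by LM Studio.
harness = Harness(
    task="question-answering",
    model={"model": "http://localhost:1234/v1/chat/completions", "hub": "lm-studio"},
    data={"data_source": "NQ-open", "split": "test-tiny"},
)

harness.generate().run().report()  # build test cases, run them against the local model, summarize
```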
🚀Text Embedding Benchmark Pipelines with CLI (LangTest + LlamaIndex)
Text embedding benchmarks play a pivotal role in assessing the performance of text embedding models across various tasks, and they are crucial for evaluating the quality of text embeddings used in Natural Language Processing (NLP) applications.
The LangTest CLI for Text Embedding Benchmark Pipelines makes it easy to evaluate Hugging Face embedding models on a retrieval task over the Paul Graham essay dataset. It starts by initializing each embedding model and creating a context for vector operations. Then, it sets up a vector store index for efficient similarity searches. Next, it configures a query engine and a retriever, retrieving the top similar items based on a predefined parameter. Evaluation is then conducted using Mean Reciprocal Rank (MRR) and Hit Rate metrics, measuring the retriever's performance. Finally, perturbations such as typos and word swaps are applied to test the retriever's robustness.
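For intuition, the two metrics can be computed from ranked retrieval results as in this small, self-contained sketch (illustrative only, not the pipeline's actual code):

```python
def hit_rate(ranked_ids: list[str], relevant_id: str) -> float:
    # 1.0 if the relevant document appears anywhere in the retrieved top-k list.
    return 1.0 if relevant_id in ranked_ids else 0.0

def reciprocal_rank(ranked_ids: list[str], relevant_id: str) -> float:
    # 1/rank of the relevant document, or 0.0 if it was not retrieved.
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# MRR and Hit Rate are the averages of these per-query scores over the whole benchmark.
queries = [(["d3", "d7", "d1"], "d7"), (["d2", "d5", "d9"], "d4")]
mrr = sum(reciprocal_rank(r, g) for r, g in queries) / len(queries)
hit = sum(hit_rate(r, g) for r, g in queries) / len(queries)
```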
Key Features:
Simplified Benchmarking: Run text embedding benchmark pipelines effortlessly through our CLI, eliminating the need for complex setup or manual intervention.
Versatile Model Evaluation: Evaluate the performance of text embedding models across diverse tasks, empowering users to assess the quality and effectiveness of different models for their specific use cases.
How it Works:
python -m langtest benchmark embeddings --model TaylorAI/bge-micro --hub huggingface
python -m langtest benchmark embeddings --model "TaylorAI/bge-micro,TaylorAI/gte-tiny,intfloat/e5-small" --hub huggingface
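The first command benchmarks a single embedding model; the second accepts a comma-separated list of models, so several can be evaluated in one pipeline run.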
📊 Compare Models Across Multiple Benchmark Datasets
Previously, when testing your model, you were limited to evaluating its performance on one dataset at a time. With this update, we've introduced the flexibility to assess your model's efficacy across diverse benchmark datasets simultaneously, empowering you to gain deeper insights into its performance under various conditions and data distributions.
Key Benefits:
Comprehensive Model Evaluation: Evaluate your model's performance across multiple benchmark datasets in a single run, allowing for a more comprehensive assessment of its capabilities and generalization across different data domains.
Time Efficiency: Streamline your testing process by eliminating the need to conduct separate evaluations for each dataset, saving valuable time and resources.
Enhanced Flexibility: Choose from a range of benchmark datasets to test your model against, catering to specific use cases and ensuring robust performance evaluation across diverse scenarios.
How it Works:
To leverage this new feature and compare models across different benchmark datasets, simply pass multiple datasets as input, each specified as a dict with its own data_source, as in the sketch below:
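A minimal sketch of a multi-dataset run follows; the dataset names, splits, and exact schema for supplying several datasets are assumptions here, so consult the LangTest documentation for the authoritative format:

```python
from langtest import Harness

# Illustrative: evaluate one model against two benchmark datasets in a single run.
harness = Harness(
    task="question-answering",
    model={"model": "gpt-3.5-turbo-instruct", "hub": "openai"},
    data=[
        {"data_source": "BoolQ", "split": "test-tiny"},
        {"data_source": "NQ-open", "split": "test-tiny"},
    ],
)

harness.generate().run().report()  # one report covering every benchmark dataset
```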
🤬 Enhanced Toxicity Testing
In this latest release, we've enhanced our toxicity testing framework by introducing a series of comprehensive assessments. Evaluations can be tailored to focus on specific types of toxicity, allowing for detailed analysis in targeted areas of concern. These enhancements are designed to evaluate various facets of toxicity, including obscenity, insult, threat, identity attack, and targeting based on sexual orientation. By expanding the scope of our assessments, we aim to better identify and address instances of negative or hateful comments, harmful stereotypes, and expressions of intent to cause harm or danger.
Supported Tests:
general_toxicity: This metric focuses on detecting general toxicity, covering a wide range of categories such as obscenity, insult, threat, identity attack, sexual orientation, psychiatric or mental illness related content, and more.
homosexual_gay_or_lesbian: Assesses negative or hateful comments targeting individuals based on gender identity or sexual orientation.
insult: Evaluates the insult score of completions, particularly instances where vulgar language is used with the primary intent to demean or belittle the recipient.
identity_attack: Assesses whether completions perpetuate harmful stereotypes or attack individuals or groups based on intrinsic characteristics such as race, ethnicity, religion, etc.
obscene: Evaluates the presence of obscene content within completions, including vulgar language, explicit sexual references, or any form of communication deemed offensive or inappropriate.
threat: Identifies expressions of intent to cause harm or danger within completions.
How to Execute:
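A minimal sketch of how such a run might be configured is shown below; the dataset name, pass-rate thresholds, and exact config keys are assumptions, so check the LangTest docs for the authoritative schema:

```python
from langtest import Harness

# Illustrative toxicity run: evaluate an OpenAI model on a toxicity prompt dataset.
harness = Harness(
    task="toxicity",
    model={"model": "gpt-3.5-turbo-instruct", "hub": "openai"},
    data={"data_source": "Toxicity", "split": "test"},
)

# Restrict the evaluation to specific toxicity categories (names match the tests above);
# the min_pass_rate values here are placeholders.
harness.configure({
    "tests": {
        "defaults": {"min_pass_rate": 1.0},
        "toxicity": {
            "obscene": {"min_pass_rate": 0.7},
            "insult": {"min_pass_rate": 0.7},
            "threat": {"min_pass_rate": 0.7},
        },
    }
})

harness.generate().run().report()
```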
Example Test Cases:
📒 New Notebooks
🐛 Fixes
⚡ Enhancements
What's Changed
Full Changelog: 1.10.0...2.0.0