Langfun v0.1.0
Major release covering the accumulated changes since 2023/09.
Key features:
- An object-in-object-out interface for LLMs with `lf.query`
  - More extensions: `lf.complete`, `lf.parse`, `lf.describe`
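
  A minimal sketch of the object-in-object-out pattern; the `Answer` class is a hypothetical output schema and the model class name is an assumption:

  ```python
  import langfun as lf
  import pyglove as pg

  class Answer(pg.Object):
    """Hypothetical output schema for illustration."""
    result: int

  answer = lf.query(
      'Compute {{x}} + {{y}}.',   # Jinja2-templated prompt
      Answer,                     # desired output type
      lm=lf.llms.Gpt4(),          # assumption: any supported model class
      x=1,
      y=2,
  )
  print(answer.result)            # expected: 3
  ```
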
- LLMs under a single interface
  - Supports various models: Gemini, GPT, Claude, Groq, etc.
  - Supports prompt/response caching
  - Supports rate-limit control and automatic backoff
  - Supports usage tracking with the `lf.track_usages` context manager
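
  A sketch of swapping models behind the same interface and tracking usage; the model class names and the exact `lf.track_usages` calling convention are assumptions:

  ```python
  import langfun as lf

  # Any supported model plugs into the same calls (class names are assumptions).
  lm = lf.llms.GeminiPro()   # or lf.llms.Gpt4(), lf.llms.Claude3Opus(), ...

  # Track token usage for calls issued inside the context manager
  # (assumption: model instances can be passed in directly).
  with lf.track_usages(lm) as usages:
    print(lf.query('What is 1 + 1?', int, lm=lm))

  print(usages)  # per-model token counts
  ```
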
- Flexible prompt formatting with `lf.Template`
  - Jinja2 syntax
  - Supports drop-in of multi-modality objects: image, video, PDF, etc.
  - Supports extension through both composition and inheritance
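
  A small sketch of Jinja2-based templating with a multi-modal variable, assuming `lf.Template` takes the template string positionally and `lf.Image.from_uri` is the image helper:

  ```python
  import langfun as lf

  prompt = lf.Template(
      'Describe what you see in {{image}} using {{num_words}} words.',
      image=lf.Image.from_uri('https://example.com/cat.png'),  # assumption: helper name
      num_words=10,
  )
  print(prompt.render())
  ```
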
- Batch processing with `lf.concurrent_map`
  - Supports both ordered and unordered output
  - Optional progress reporting for console/Jupyter Notebook
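
  A sketch of batch mapping over plain Python work, assuming `lf.concurrent_map` yields `(input, output, error)` tuples:

  ```python
  import langfun as lf

  def word_count(text: str) -> int:
    return len(text.split())

  # Run the function over inputs with a worker pool; errors are surfaced
  # per item instead of aborting the whole batch.
  for text, count, error in lf.concurrent_map(
      word_count,
      ['hello world', 'a b c', 'one'],
      max_workers=2,
  ):
    print(text, count, error)
  ```
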
- Toolkit for code generation through `lf.coding`
  - Sandbox for code execution via `lf.coding.run`
  - `lf.PythonCode` for structured code generation
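
  A sketch of structured code generation; the model class and the `source` attribute name are assumptions:

  ```python
  import langfun as lf

  code = lf.query(
      'Write a Python function `fib(n)` that returns the n-th Fibonacci number.',
      lf.PythonCode,
      lm=lf.llms.Gpt4(),   # assumption: model class name
  )
  print(code.source)       # assumption: field holding the generated source
  ```
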
- FunEval: Langfun's evaluation framework (`lf.eval`)
  - Supports dimension combination with PyGlove's `pg.oneof`
  - Supports HTML summaries for both metrics and individual cases
  - Model-aware parallel execution across multiple subsets
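
  The dimension-combination idea, sketched with plain PyGlove; in `lf.eval`, fields declared with `pg.oneof` fan out into one evaluation subset per combination (the field names below are hypothetical):

  ```python
  import pyglove as pg

  # A hypothetical sweep over two models and two temperatures.
  sweep = pg.Dict(
      model=pg.oneof(['gpt-4', 'gemini-pro']),
      temperature=pg.oneof([0.0, 0.7]),
  )

  # pg.iter enumerates every combination (4 in total), which is how an
  # evaluation carrying pg.oneof values expands into multiple subsets.
  for combo in pg.iter(sweep):
    print(combo)
  ```
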