LLM Factory using Dependency Injection provided by Injector
It enables streamlined instantiation of an assortment of LLMs. Leveraging Injector, configuration is separated from the model implementations, providing a uniform interface for instantiation. Using dependency injection also makes downstream code easier to test.
LLM Factory has the following goals:
- Streamlined instantiation
- Configuration management
- An opinionated set of provided LLM services
Models supported:
- GPT4All
- OpenAI