Updating provider documentation and small fixes in providers (#2469)
* refactor(g4f/Provider/Airforce.py): improve model handling and filtering

- Add hidden_models set to exclude specific models
- Add evil alias for uncensored model handling
- Extend filtering for model-specific response tokens
- Add response buffering for streamed content
- Update model fetching with error handling
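
A minimal sketch of the filtering and buffering described above, assuming illustrative stop tokens and helper names (the real logic lives in g4f/Provider/Airforce.py):

```python
hidden_models = {"Flux-1.1-Pro"}            # excluded from the public model list
model_aliases = {"evil": "any-uncensored"}  # alias for the uncensored model

def filter_visible_models(models: list[str]) -> list[str]:
    """Drop models that should not be user-selectable."""
    return [m for m in models if m not in hidden_models]

async def buffer_and_clean(chunks, stop_tokens=("</s>", "<|im_end|>")):
    """Buffer streamed chunks, then strip model-specific response tokens
    before yielding the cleaned text (the token list is an assumption)."""
    buffer = ""
    async for chunk in chunks:
        buffer += chunk
    for token in stop_tokens:
        buffer = buffer.replace(token, "")
    yield buffer
```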

* refactor(g4f/Provider/Blackbox.py): improve caching and model handling

- Add caching system for validated values with file-based storage
- Rename 'flux' model to 'ImageGeneration' and update references
- Add temperature, top_p and max_tokens parameters to generator
- Simplify HTTP headers and remove redundant options
- Add model alias mapping for ImageGeneration
- Add file system utilities for cache management
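
A sketch of the file-based cache for validated values, assuming a JSON file under the provider cache directory referenced later in this commit (the file name and key are illustrative):

```python
import json
from pathlib import Path

CACHE_FILE = Path("g4f/Provider/.cache") / "blackbox.json"  # assumed path

def load_validated_value() -> str | None:
    """Return the cached value, or None if missing or unreadable."""
    if CACHE_FILE.exists():
        try:
            return json.loads(CACHE_FILE.read_text()).get("validated_value")
        except (json.JSONDecodeError, OSError):
            return None
    return None

def store_validated_value(value: str) -> None:
    """Persist the value so later runs can skip re-validation."""
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    CACHE_FILE.write_text(json.dumps({"validated_value": value}))
```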

* feat(g4f/Provider/RobocodersAPI.py): add caching and error handling

- Add file-based caching system for access tokens and sessions
- Add robust error handling with specific error messages
- Add automatic dialog continuation on resource limits
- Add HTML parsing with BeautifulSoup for token extraction
- Add debug logging for error tracking
- Add timeout configuration for API requests
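
A hedged sketch of the token extraction and timeout handling; the URL, the `<pre>` selector, and the error messages are assumptions, not the provider's actual values:

```python
import aiohttp
from bs4 import BeautifulSoup

async def fetch_access_token(auth_url: str) -> str:
    """Fetch the auth page and parse the access token out of the HTML."""
    timeout = aiohttp.ClientTimeout(total=30)  # timeout configuration
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(auth_url) as response:
            if response.status != 200:
                raise RuntimeError(f"Auth request failed with status {response.status}")
            html = await response.text()
    token_element = BeautifulSoup(html, "html.parser").find("pre")  # assumed container
    if token_element is None:
        raise RuntimeError("Could not extract access token from response")
    return token_element.get_text(strip=True)
```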

* refactor(g4f/Provider/DarkAI.py): update DarkAI default model and aliases

- Change default model from llama-3-405b to llama-3-70b
- Remove llama-3-405b from supported models list
- Remove llama-3.1-405b from model aliases

* feat(g4f/Provider/Blackbox2.py): add image generation support

- Add image model 'flux' with dedicated API endpoint
- Refactor generator to support both text and image outputs
- Extract headers into reusable static method
- Add type hints for AsyncGenerator return type
- Split generation logic into _generate_text and _generate_image methods
- Add ImageResponse handling for image generation results

BREAKING CHANGE: create_async_generator now returns AsyncGenerator instead of AsyncResult
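
A sketch of the split dispatch and the new AsyncGenerator return type; the payloads are placeholders for the real API calls:

```python
from typing import Any, AsyncGenerator

class Blackbox2:
    image_models = ["flux"]

    @classmethod
    async def create_async_generator(
        cls, model: str, messages: list, **kwargs
    ) -> AsyncGenerator[Any, None]:
        """Route to the image or text generator based on the model."""
        generate = (cls._generate_image if model in cls.image_models
                    else cls._generate_text)
        async for item in generate(model, messages, **kwargs):
            yield item

    @classmethod
    async def _generate_text(cls, model, messages, **kwargs):
        yield "text chunk"  # placeholder for the streamed text response

    @classmethod
    async def _generate_image(cls, model, messages, **kwargs):
        yield {"image_url": "https://example.com/img.png"}  # placeholder for ImageResponse
```

Callers that previously typed the result as AsyncResult need updating, hence the breaking-change note.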

* refactor(g4f/Provider/ChatGptEs.py): update ChatGptEs model configuration

- Update models list to include gpt-3.5-turbo
- Remove chatgpt-4o-latest from supported models
- Remove model_aliases mapping for gpt-4o

* feat(g4f/Provider/DeepInfraChat.py): add Accept-Language header support

- Add Accept-Language header for internationalization
- Maintain existing header configuration
- Improve request compatibility with language preferences
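
For illustration, the kind of header block this yields; the exact language value is an assumption:

```python
headers = {
    "Content-Type": "application/json",
    "Accept-Language": "en-US,en;q=0.9",  # added for internationalization
}
```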

* refactor(g4f/Provider/needs_auth/Gemini.py): add ProviderModelMixin inheritance

- Add ProviderModelMixin to class inheritance
- Import ProviderModelMixin from base_provider
- Move BaseConversation import to base_provider imports
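
A reduced stand-in showing what the mixin provides: alias-aware model resolution shared across providers (class bodies are simplified):

```python
class AsyncGeneratorProvider:
    """Stand-in for g4f's async base provider."""
    working: bool = True

class ProviderModelMixin:
    """Stand-in: resolves model aliases to concrete model names."""
    default_model: str = ""
    model_aliases: dict = {}

    @classmethod
    def get_model(cls, model: str) -> str:
        return cls.model_aliases.get(model, model or cls.default_model)

class Gemini(AsyncGeneratorProvider, ProviderModelMixin):
    default_model = "gemini"
    model_aliases = {"gemini-flash": "gemini-1.5-flash"}

print(Gemini.get_model("gemini-flash"))  # -> gemini-1.5-flash
```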

* refactor(g4f/Provider/Liaobots.py): update model details and aliases

- Add version suffix to o1 model IDs
- Update model aliases for o1-preview and o1-mini
- Standardize version format across model definitions

* refactor(g4f/Provider/PollinationsAI.py): enhance model support and generation

- Split generation logic into dedicated image/text methods
- Add additional text models including sur and claude
- Add width/height parameters for image generation
- Add model existence validation
- Add hasattr checks for model lists initialization
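
A sketch of the lazy list initialization with hasattr guards and the model-existence check; the fetches are stubbed here (the real code queries the Pollinations model endpoints, as the diff below shows):

```python
class PollinationsAI:
    @classmethod
    def get_models(cls) -> list:
        # hasattr guards: the lists are created lazily on first use
        if not hasattr(cls, "image_models"):
            cls.image_models = []
        if not cls.image_models:
            cls.image_models = ["flux", "midjourney"]  # fetched remotely in the real code
        if not hasattr(cls, "models"):
            cls.models = []
        if not cls.models:
            cls.models = ["openai", *cls.image_models]  # fetched remotely in the real code
        return cls.models

    @classmethod
    def get_model(cls, model: str) -> str:
        # model existence validation
        if model not in cls.get_models():
            raise ValueError(f"Model is not supported: {model}")
        return model
```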

* chore(gitignore): add provider cache directory

- Add g4f/Provider/.cache to gitignore patterns

* refactor(g4f/Provider/ReplicateHome.py): update model configuration

- Update default model to gemma-2b-it
- Add default_image_model configuration
- Remove llava-13b from supported models
- Simplify request headers

* feat(g4f/models.py): expand provider and model support

- Add new providers DarkAI and PollinationsAI
- Add new models for Mistral, Flux and image generation
- Update provider lists for existing models
- Add P1 and Evil models with experimental providers

BREAKING CHANGE: Remove llava-13b model support

* refactor(g4f/Provider/Airforce.py): Update type hint for split_message return

- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with import.
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.
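
The resulting signature, with an illustrative body (the real split_message lives in Airforce.py):

```python
from typing import List

def split_message(message: str, max_length: int = 1000) -> List[str]:
    """Split a message into chunks of at most max_length characters."""
    return [message[i:i + max_length]
            for i in range(0, len(message), max_length)]
```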

* feat(g4f/Provider/RobocodersAPI.py): Add support for optional BeautifulSoup dependency

- Introduce a check for the BeautifulSoup library and handle its absence gracefully.
- Raise a clear error if BeautifulSoup is not installed, prompting the user to install it.
- Remove direct import of BeautifulSoup to avoid import errors when the library is missing.
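
A sketch of the guarded optional import; the flag name and error message are illustrative:

```python
try:
    from bs4 import BeautifulSoup
    HAS_BEAUTIFULSOUP = True
except ImportError:
    HAS_BEAUTIFULSOUP = False

def require_beautifulsoup() -> None:
    """Fail with an actionable message when bs4 is missing."""
    if not HAS_BEAUTIFULSOUP:
        raise RuntimeError('Install the "beautifulsoup4" package to use this provider')
```

Deferring the import keeps the provider importable when the dependency is absent.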

* fix: Updating provider documentation and small fixes in providers

* Disabled the provider (RobocodersAPI)

* Fix: Conflicting file g4f/models.py

* Update g4f/models.py g4f/Provider/Airforce.py

* Update docs/providers-and-models.md g4f/models.py g4f/Provider/Airforce.py g4f/Provider/PollinationsAI.py

* Update docs/providers-and-models.md

* Update .gitignore

* Update g4f/models.py

* Update g4f/Provider/PollinationsAI.py

---------

Co-authored-by: kqlio67 <>
kqlio67 authored Dec 9, 2024
1 parent 76c3683 commit bb9132b
Showing 33 changed files with 311 additions and 392 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -66,4 +66,3 @@ bench.py
to-reverse.txt
g4f/Provider/OpenaiChat2.py
generated_images/
-g4f/Provider/.cache
272 changes: 120 additions & 152 deletions docs/providers-and-models.md

Large diffs are not rendered by default.

6 changes: 4 additions & 2 deletions g4f/Provider/Airforce.py
@@ -42,22 +42,24 @@ class Airforce(AsyncGeneratorProvider, ProviderModelMixin):

+hidden_models = {"Flux-1.1-Pro"}

-additional_models_imagine = ["flux-1.1-pro", "dall-e-3"]
+additional_models_imagine = ["flux-1.1-pro", "midjourney", "dall-e-3"]

model_aliases = {
# Alias mappings for models
"gpt-4": "gpt-4o",
"openchat-3.5": "openchat-3.5-0106",
"deepseek-coder": "deepseek-coder-6.7b-instruct",
"hermes-2-dpo": "Nous-Hermes-2-Mixtral-8x7B-DPO",
"hermes-2-pro": "hermes-2-pro-mistral-7b",
"openhermes-2.5": "openhermes-2.5-mistral-7b",
"lfm-40b": "lfm-40b-moe",
"discolm-german-7b": "discolm-german-7b-v1",
"german-7b": "discolm-german-7b-v1",
"llama-2-7b": "llama-2-7b-chat-int8",
"llama-3.1-70b": "llama-3.1-70b-turbo",
"neural-7b": "neural-chat-7b-v3-1",
"zephyr-7b": "zephyr-7b-beta",
"evil": "any-uncensored",
"sdxl": "stable-diffusion-xl-lightning",
"sdxl": "stable-diffusion-xl-base",
"flux-pro": "flux-1.1-pro",
"llama-3.1-8b": "llama-3.1-8b-chat"
2 changes: 0 additions & 2 deletions g4f/Provider/AmigoChat.py
@@ -108,7 +108,6 @@ class AmigoChat(AsyncGeneratorProvider, ProviderModelMixin):
"mythomax-13b": "Gryphe/MythoMax-L2-13b",

"mixtral-7b": "mistralai/Mistral-7B-Instruct-v0.3",
"mistral-tiny": "mistralai/mistral-tiny",
"mistral-nemo": "mistralai/mistral-nemo",

"deepseek-chat": "deepseek-ai/deepseek-llm-67b-chat",
@@ -127,7 +126,6 @@ class AmigoChat(AsyncGeneratorProvider, ProviderModelMixin):


### image ###
"flux-realism": "flux-realism",
"flux-dev": "flux/dev",
}

6 changes: 3 additions & 3 deletions g4f/Provider/Blackbox.py
@@ -98,12 +98,12 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
models = list(dict.fromkeys([default_model, *userSelectedModel, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))

model_aliases = {
"gpt-4": "blackboxai",
### chat ###
"gpt-4": "gpt-4o",
"gpt-4o-mini": "gpt-4o",
"gpt-3.5-turbo": "blackboxai",
"gemini-flash": "gemini-1.5-flash",
"claude-3.5-sonnet": "claude-sonnet-3.5",

### image ###
"flux": "ImageGeneration",
}

2 changes: 1 addition & 1 deletion g4f/Provider/ChatGptEs.py
@@ -19,7 +19,7 @@ class ChatGptEs(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True

default_model = 'gpt-4o'
-models = ['gpt-3.5-turbo', 'gpt-4o', 'gpt-4o-mini']
+models = ['gpt-4', 'gpt-4o', 'gpt-4o-mini']

@classmethod
def get_model(cls, model: str) -> str:
1 change: 1 addition & 0 deletions g4f/Provider/DDG.py
@@ -30,6 +30,7 @@ def __init__(self, model: str):
self.model = model

class DDG(AsyncGeneratorProvider, ProviderModelMixin):
label = "DuckDuckGo AI Chat"
url = "https://duckduckgo.com/aichat"
api_endpoint = "https://duckduckgo.com/duckchat/v1/chat"
working = True
1 change: 1 addition & 0 deletions g4f/Provider/DeepInfraChat.py
@@ -9,6 +9,7 @@
class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://deepinfra.com/chat"
api_endpoint = "https://api.deepinfra.com/v1/openai/chat/completions"

working = True
supports_stream = True
supports_system_message = True
5 changes: 3 additions & 2 deletions g4f/Provider/Flux.py
@@ -8,13 +8,14 @@
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin

class Flux(AsyncGeneratorProvider, ProviderModelMixin):
label = "Flux Provider"
label = "HuggingSpace (black-forest-labs-flux-1-dev)"
url = "https://black-forest-labs-flux-1-dev.hf.space"
api_endpoint = "/gradio_api/call/infer"
working = True
default_model = 'flux-dev'
models = [default_model]
image_models = [default_model]
model_aliases = {"flux-dev": "flux-1-dev"}

@classmethod
async def create_async_generator(
@@ -55,4 +56,4 @@ async def create_async_generator(
yield ImagePreview(url, prompt)
else:
yield ImageResponse(url, prompt)
-break
\ No newline at end of file
+break
2 changes: 2 additions & 0 deletions g4f/Provider/FreeGpt.py
@@ -21,9 +21,11 @@

class FreeGpt(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://freegptsnav.aifree.site"

working = True
supports_message_history = True
supports_system_message = True

default_model = 'gemini-pro'

@classmethod
2 changes: 1 addition & 1 deletion g4f/Provider/GizAI.py
@@ -10,14 +10,14 @@
class GizAI(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://app.giz.ai/assistant"
api_endpoint = "https://app.giz.ai/api/data/users/inferenceServer.infer"

working = True
supports_stream = False
supports_system_message = True
supports_message_history = True

default_model = 'chat-gemini-flash'
models = [default_model]

model_aliases = {"gemini-flash": "chat-gemini-flash",}

@classmethod
Expand Down
2 changes: 1 addition & 1 deletion g4f/Provider/Liaobots.py
@@ -143,9 +143,9 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
working = True
supports_message_history = True
supports_system_message = True

default_model = "gpt-4o-2024-08-06"
models = list(models.keys())

model_aliases = {
"gpt-4o-mini": "gpt-4o-mini-free",
"gpt-4o": "gpt-4o-2024-08-06",
5 changes: 3 additions & 2 deletions g4f/Provider/PerplexityLabs.py
@@ -29,6 +29,7 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
"sonar-online": "sonar-small-128k-online",
"sonar-chat": "llama-3.1-sonar-large-128k-chat",
"sonar-chat": "llama-3.1-sonar-small-128k-chat",
"llama-3.3-70b": "llama-3.3-70b-instruct",
"llama-3.1-8b": "llama-3.1-8b-instruct",
"llama-3.1-70b": "llama-3.1-70b-instruct",
"lfm-40b": "/models/LiquidCloud",
@@ -78,9 +79,9 @@ async def create_async_generator(
assert(await ws.receive_str())
assert(await ws.receive_str() == "6")
message_data = {
"version": "2.5",
"version": "2.13",
"source": "default",
"model": cls.get_model(model),
"model": model,
"messages": messages
}
await ws.send_str("42" + json.dumps(["perplexity_labs", message_data]))
16 changes: 9 additions & 7 deletions g4f/Provider/PollinationsAI.py
@@ -13,7 +13,7 @@
from .helper import format_prompt

class PollinationsAI(OpenaiAPI):
label = "Pollinations.AI"
label = "Pollinations AI"
url = "https://pollinations.ai"

working = True
@@ -22,36 +22,38 @@ class PollinationsAI(OpenaiAPI):

default_model = "openai"

-additional_models_image = ["unity", "midijourney", "rtist"]
+additional_models_image = ["midjourney", "dall-e-3"]
additional_models_text = ["sur", "sur-mistral", "claude"]

model_aliases = {
"gpt-4o": "openai",
"mistral-nemo": "mistral",
"llama-3.1-70b": "llama", #
"gpt-3.5-turbo": "searchgpt",
"gpt-4": "searchgpt",
"gpt-3.5-turbo": "claude",
"gpt-4": "claude",
"qwen-2.5-coder-32b": "qwen-coder",
"claude-3.5-sonnet": "sur",
}

headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
}

@classmethod
def get_models(cls):
if not hasattr(cls, 'image_models'):
cls.image_models = []
if not cls.image_models:
url = "https://image.pollinations.ai/models"
-response = requests.get(url)
+response = requests.get(url, headers=cls.headers)
raise_for_status(response)
cls.image_models = response.json()
cls.image_models.extend(cls.additional_models_image)
if not hasattr(cls, 'models'):
cls.models = []
if not cls.models:
url = "https://text.pollinations.ai/models"
-response = requests.get(url)
+response = requests.get(url, headers=cls.headers)
raise_for_status(response)
cls.models = [model.get("name") for model in response.json()]
cls.models.extend(cls.image_models)
@@ -94,7 +96,7 @@ async def _generate_image(cls, model: str, messages: Messages, prompt: str = Non
@classmethod
async def _generate_text(cls, model: str, messages: Messages, api_base: str, api_key: str = None, proxy: str = None, **kwargs):
if api_key is None:
-async with ClientSession(connector=get_connector(proxy=proxy)) as session:
+async with ClientSession(connector=get_connector(proxy=proxy), headers=cls.headers) as session:
prompt = format_prompt(messages)
async with session.get(f"https://text.pollinations.ai/{quote(prompt)}?model={quote(model)}") as response:
await raise_for_status(response)
3 changes: 2 additions & 1 deletion g4f/Provider/RubiksAI.py
@@ -16,6 +16,7 @@ class RubiksAI(AsyncGeneratorProvider, ProviderModelMixin):
label = "Rubiks AI"
url = "https://rubiks.ai"
api_endpoint = "https://rubiks.ai/search/api/"

working = True
supports_stream = True
supports_system_message = True
@@ -127,4 +128,4 @@ async def create_async_generator(
yield content

if web_search and sources:
-yield Sources(sources)
\ No newline at end of file
+yield Sources(sources)
10 changes: 3 additions & 7 deletions g4f/Provider/__init__.py
@@ -22,25 +22,21 @@
from .DarkAI import DarkAI
from .DDG import DDG
from .DeepInfraChat import DeepInfraChat
from .Flux import Flux
from .Free2GPT import Free2GPT
from .FreeGpt import FreeGpt
from .GizAI import GizAI
from .Liaobots import Liaobots
from .MagickPen import MagickPen
from .Mhystical import Mhystical
from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Pizzagpt import Pizzagpt
from .PollinationsAI import PollinationsAI
from .Prodia import Prodia
from .Reka import Reka
from .ReplicateHome import ReplicateHome
from .RobocodersAPI import RobocodersAPI
from .RubiksAI import RubiksAI
from .TeachAnything import TeachAnything
from .Upstage import Upstage
from .You import You
from .Mhystical import Mhystical
from .Flux import Flux

import sys

@@ -61,4 +57,4 @@
])

class ProviderUtils:
-convert: dict[str, ProviderType] = __map__
\ No newline at end of file
+convert: dict[str, ProviderType] = __map__
8 changes: 8 additions & 0 deletions g4f/Provider/needs_auth/Gemini.py
@@ -51,14 +51,22 @@
}

class Gemini(AsyncGeneratorProvider, ProviderModelMixin):
label = "Google Gemini"
url = "https://gemini.google.com"

needs_auth = True
working = True

default_model = 'gemini'
image_models = ["gemini"]
default_vision_model = "gemini"
models = ["gemini", "gemini-1.5-flash", "gemini-1.5-pro"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-pro": "gemini-1.5-pro",
}
synthesize_content_type = "audio/vnd.wav"

_cookies: Cookies = None
_snlm0e: str = None
_sid: str = None
10 changes: 8 additions & 2 deletions g4f/Provider/needs_auth/GeminiPro.py
@@ -11,14 +11,20 @@
from ..helper import get_connector

class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
label = "Gemini API"
label = "Google Gemini API"
url = "https://ai.google.dev"

working = True
supports_message_history = True
needs_auth = True

default_model = "gemini-1.5-pro"
default_vision_model = default_model
models = [default_model, "gemini-pro", "gemini-1.5-flash", "gemini-1.5-flash-8b"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-flash": "gemini-1.5-flash-8b",
}

@classmethod
async def create_async_generator(
@@ -108,4 +114,4 @@ async def create_async_generator(
if candidate["finishReason"] == "STOP":
yield candidate["content"]["parts"][0]["text"]
else:
-yield candidate["finishReason"] + ' ' + candidate["safetyRatings"]
\ No newline at end of file
+yield candidate["finishReason"] + ' ' + candidate["safetyRatings"]
6 changes: 4 additions & 2 deletions g4f/Provider/needs_auth/GithubCopilot.py
@@ -16,10 +16,12 @@ def __init__(self, conversation_id: str):
self.conversation_id = conversation_id

class GithubCopilot(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://copilot.microsoft.com"
url = "https://github.com/copilot"

working = True
needs_auth = True
+supports_stream = True

default_model = "gpt-4o"
models = [default_model, "o1-mini", "o1-preview", "claude-3.5-sonnet"]

@@ -90,4 +92,4 @@ async def create_async_generator(
if line.startswith(b"data: "):
data = json.loads(line[6:])
if data.get("type") == "content":
-yield data.get("body")
\ No newline at end of file
+yield data.get("body")