
Provider updates and optimizations across multiple modules #2220

Merged
merged 27 commits into from
Sep 15, 2024

Conversation

kqlio67
Contributor

@kqlio67 kqlio67 commented Sep 11, 2024

New providers added

  • g4f/Provider/Prodia.py (Provider updates and optimizations across multiple modules #2220 (comment))
    Provider with support for models for generating images
    3Guofeng3_v34.safetensors [50f420de], absolutereality_V16.safetensors [37db0fc3], absolutereality_v181.safetensors [3d9d4d2b], amIReal_V41.safetensors [0a8a2e61], analog-diffusion-1.0.ckpt [9ca13f02], aniverse_v30.safetensors [579e6f85], anythingv3_0-pruned.ckpt [2700c435], anything-v4.5-pruned.ckpt [65745d25], anythingV5_PrtRE.safetensors [893e49b9], AOM3A3_orangemixs.safetensors [9600da17], blazing_drive_v10g.safetensors [ca1c1eab], breakdomain_I2428.safetensors [43cc7d2f], breakdomain_M2150.safetensors [15f7afca], cetusMix_Version35.safetensors [de2f2560], childrensStories_v13D.safetensors [9dfaabcb], childrensStories_v1SemiReal.safetensors [a1c56dbb], childrensStories_v1ToonAnime.safetensors [2ec7b88b], Counterfeit_v30.safetensors [9e2a8f19], cuteyukimixAdorable_midchapter3.safetensors [04bdffe6], cyberrealistic_v33.safetensors [82b0d085], dalcefo_v4.safetensors [425952fe], deliberate_v2.safetensors [10ec4b29], deliberate_v3.safetensors [afd9d2d4], dreamlike-anime-1.0.safetensors [4520e090], dreamlike-diffusion-1.0.safetensors [5c9fd6e0], dreamlike-photoreal-2.0.safetensors [fdcf65e7], dreamshaper_6BakedVae.safetensors [114c8abb], dreamshaper_7.safetensors [5cf5ae06], dreamshaper_8.safetensors [9d40847d], edgeOfRealism_eorV20.safetensors [3ed5de15], EimisAnimeDiffusion_V1.ckpt [4f828a15], elldreths-vivid-mix.safetensors [342d9d26], epicphotogasm_xPlusPlus.safetensors [1a8f6d35], epicrealism_naturalSinRC1VAE.safetensors [90a4c676], epicrealism_pureEvolutionV3.safetensors [42c8440c], ICantBelieveItsNotPhotography_seco.safetensors [4e7a3dfd], indigoFurryMix_v75Hybrid.safetensors [91208cbb], juggernaut_aftermath.safetensors [5e20c455], lofi_v4.safetensors [ccc204d6], lyriel_v16.safetensors [68fceea2], majicmixRealistic_v4.safetensors [29d0de58], mechamix_v10.safetensors [ee685731], meinamix_meinaV9.safetensors [2ec66ab0], meinamix_meinaV11.safetensors [b56ce717], neverendingDream_v122.safetensors [f964ceeb], openjourney_V4.ckpt [ca2f377f], 
pastelMixStylizedAnime_pruned_fp16.safetensors [793a26e8], portraitplus_V1.0.safetensors [1400e684], protogenx34.safetensors [5896f8d5], Realistic_Vision_V1.4-pruned-fp16.safetensors [8d21810b], Realistic_Vision_V2.0.safetensors [79587710], Realistic_Vision_V4.0.safetensors [29a7afaa], Realistic_Vision_V5.0.safetensors [614d1063], Realistic_Vision_V5.1.safetensors [a0f13c83], redshift_diffusion-V10.safetensors [1400e684], revAnimated_v122.safetensors [3f4fefd9], rundiffusionFX25D_v10.safetensors [cd12b0ee], rundiffusionFX_v10.safetensors [cd4e694d], sdv1_4.ckpt [7460a6fa], v1-5-pruned-emaonly.safetensors [d7049739], v1-5-inpainting.safetensors [21c7ab71], shoninsBeautiful_v10.safetensors [25d8c546], theallys-mix-ii-churned.safetensors [5d9225a4], timeless-1.0.ckpt [7c4971d4], toonyou_beta6.safetensors [980f6b15]

Removed Providers

  • g4f/Provider/Llama.py
  • g4f/Provider/selenium/AItianhuSpace.py

Provider Updates and Improvements

  • g4f/Provider/Nexra.py (AttributeError: 'BingCreateImages'  #2181 (comment))
    • Rename API endpoint variables for clarity
    • Simplify model handling by separating text and image models
    • Update image generation to return URL instead of base64 data
    • Improve error handling in JSON parsing
    • Remove unused imports and streamline code structure
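As a rough illustration of the improved JSON error handling, here is a hedged sketch only; `parse_chunk` and its field names are hypothetical, not the actual Nexra code:

```python
import json

def parse_chunk(raw: bytes):
    """Parse one JSON chunk from the stream; return None on malformed
    data instead of raising, so a single bad chunk does not abort the
    whole generator. Field names here are assumptions for illustration."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    # Per the change above, image responses carry a URL rather than
    # base64 data, so surface the URL field when present.
    return data.get("url") or data.get("message")
```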

  • g4f/Provider/ReplicateHome.py
    • Simplify model lists and versions into single dictionaries
    • Add support for system messages and message history
    • Implement streaming support for text models
    • Update API endpoint and request handling
    • Improve error handling and response parsing
    • Add new models: 'yorickvp/llava-13b' and 'black-forest-labs/flux-schnell'
    • Remove unused imports and simplify type hints
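Streaming for text models usually follows Python's async-generator pattern; a minimal self-contained sketch of the idea (not ReplicateHome's actual code):

```python
import asyncio

async def stream_text(chunks):
    # Yield each text delta as it "arrives", mirroring how an
    # AsyncGeneratorProvider streams partial responses to the caller.
    for chunk in chunks:
        await asyncio.sleep(0)  # stand-in for awaiting the network
        yield chunk

async def collect(chunks):
    # Consumers simply iterate with `async for`.
    return [part async for part in stream_text(chunks)]
```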

  • g4f/Provider/AiChatOnline.py
    • Remove supports_gpt_35_turbo attribute
    • Remove supports_message_history attribute

  • g4f/Provider/bing/conversation.py
    • Change bundleVersion from 1.1690.0 to 1.1809.0 in create_conversation function for both Copilot and regular Bing URLs

  • g4f/Provider/Koala.py
    • Change url to include '/chat' path
    • Add separate api_endpoint for API requests
    • Update default_model from gpt-3.5-turbo to gpt-4o-mini
    • Adjust Referer and Origin headers in create_async_generator

  • g4f/Provider/Snova.py
    • Remove cookinai/DonutLM-v1 from models list
    • Remove 'donutlm-v1' from model_aliases
    • Add comment indicating error with ignos/Mistral-T5-7B-v1 model

  • g4f/Provider/PerplexityLabs.py
    • Add model_aliases dictionary to map shortened model names
    • Change default_model from llama-3.1-8b-instruct to llama-3.1-70b-instruct



  • g4f/Provider/Blackbox.py (Provider updates and optimizations across multiple modules #2220 (comment))
    • Simplify model configuration with a unified model_config dictionary
    • Update API endpoint URL and request headers for better compatibility
    • Enhance image generation support for ImageGenerationLV45LJp model
    • Remove unused methods and simplify code structure
    • Adjust request payload structure for consistency across models
    • Improve error handling for image generation responses

Unified the functionality of g4f/Provider/Rocks.py and g4f/Provider/FluxAirforce.py classes into the g4f/Provider/Airforce.py class for both text and image generation.

  • Consolidated g4f/Provider/Rocks.py and g4f/Provider/FluxAirforce.py APIs under a new g4f/Provider/Airforce.py provider class.
  • Implemented create_async_generator to handle both text and image generation.
  • Added the generate_text and generate_image methods for specific API endpoints:
    • generate_text for the /chat/completions endpoint.
    • generate_image for the /v1/imagine2 endpoint.
  • Introduced the format_prompt utility function for text generation.
  • Maintained backward compatibility with proxy and other configuration.
  • Expanded model lists for various types of generation.
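The dispatch described above can be sketched roughly as follows; the endpoint paths come from the list above, while the model names are hypothetical placeholders:

```python
# Assumed endpoint paths, taken from the PR description above.
TEXT_ENDPOINT = "/chat/completions"
IMAGE_ENDPOINT = "/v1/imagine2"

# Hypothetical image-model names, purely for illustration.
IMAGE_MODELS = {"flux", "flux-realism"}

def pick_endpoint(model: str) -> str:
    """Route a request to the image or text endpoint by model name,
    the way a unified create_async_generator might dispatch."""
    return IMAGE_ENDPOINT if model in IMAGE_MODELS else TEXT_ENDPOINT
```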


@github-actions github-actions bot left a comment


How can I assist you today?

@@ -12,10 +12,8 @@ class AiChatOnline(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://aichatonlineorg.erweima.ai"
api_endpoint = "/aichatonline/api/chat/gpt"
working = True


Consider removing unnecessary attributes.

@@ -12,10 +12,8 @@
url = "https://aichatonlineorg.erweima.ai"
api_endpoint = "/aichatonline/api/chat/gpt"
working = True
supports_gpt_35_turbo = True
supports_gpt_4 = True
default_model = 'gpt-4o-mini'


Suggest removing unnecessary attribute.

@@ -10,7 +10,8 @@
from ..requests import raise_for_status

class Koala(AsyncGeneratorProvider, ProviderModelMixin):


Consider updating the variable name 'url' to be more specific now that it includes '/chat'.

@@ -10,7 +10,8 @@
from ..requests import raise_for_status

class Koala(AsyncGeneratorProvider, ProviderModelMixin):
-    url = "https://koala.sh"
+    url = "https://koala.sh/chat"
api_endpoint = "https://koala.sh/api/gpt/"


The addition of 'api_endpoint' is good but ensure it aligns with the naming convention used in this module.

@@ -10,7 +10,8 @@
from ..requests import raise_for_status

class Koala(AsyncGeneratorProvider, ProviderModelMixin):
-    url = "https://koala.sh"
+    url = "https://koala.sh/chat"
api_endpoint = "https://koala.sh/api/gpt/"
working = True
supports_message_history = True


Since 'supports_message_history' attribute is no longer needed, consider removing it to avoid confusion.

else:
-    url = "https://www.bing.com/turing/conversation/create?bundleVersion=1.1690.0"
+    url = "https://www.bing.com/turing/conversation/create?bundleVersion=1.1809.0"


Similarly, the Bing URL has been updated, but please verify that this change does not affect existing functionality.

@@ -90,4 +90,4 @@
response = await response.json()
return response["result"]["value"] == "Success"
except:


The exception handling is too broad. It would be better to specify the type of exceptions you expect to catch, which can help in debugging and maintaining the code.
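A sketch of what the narrower handling might look like, assuming an aiohttp-style response whose json() can raise; aiohttp users may also want to catch aiohttp.ContentTypeError, and all names here are illustrative:

```python
import json

async def is_success(response) -> bool:
    # Catch only the failures we actually expect: malformed JSON,
    # a missing key, or a payload that is not a dict.
    try:
        data = await response.json()
        return data["result"]["value"] == "Success"
    except (json.JSONDecodeError, KeyError, TypeError):
        return False
```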

@@ -1,4 +1,3 @@
from .AItianhuSpace import AItianhuSpace
from .MyShell import MyShell


Consider adding a brief comment explaining the purpose of importing MyShell.

@@ -2,3 +1,3 @@
from .MyShell import MyShell
from .PerplexityAi import PerplexityAi


No improvements needed.

@@ -2,3 +1,3 @@
from .MyShell import MyShell
from .PerplexityAi import PerplexityAi
from .Phind import Phind


No improvements needed.

@TheFirstNoob

TheFirstNoob commented Sep 11, 2024

Hey! After the last update my guys and I found a few more problems that can technically be fixed.

Bixin123: calling the qwen-turbo model requests a billing payment. I think this model needs to be removed.

Support DDG conversations like #2210:
Need to check the 50-message limit, then drop the conversation and create a new one (the x-vqd-4 variable).

In the GUI, after generating an image we get an error like "file not found (generated_image_123456ID)".

Blackbox images sometimes work like chat instead of image generation; it's a site problem.
Some code for calling the image model needs reworking, because calling

.generate.image
model=blackbox (or ImageGenerationLV45LJp)

produces a model error.

Feature request:
This idea would best be realized in the GUI, but we could also use it for AI chatting:
add duckduckgo-search to give the AI the ability to use the internet. It's a huge improvement for the AI.
I tried it and it works (somewhat strangely, but it works), e.g. answering with corrected links: https://pypi.org/project/duckduckgo-search/
Or Google: https://pypi.org/project/googlesearch-python/
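The 50-message / x-vqd-4 rotation suggested above could be sketched like this; the limit and all names are assumptions based on the report, not DDG's documented behavior:

```python
MAX_MESSAGES = 50  # assumed per-conversation cap from the report above

class Conversation:
    """Tracks the DDG conversation token (the x-vqd-4 header value)
    and how many messages have been sent with it."""
    def __init__(self, vqd: str):
        self.vqd = vqd
        self.count = 0

def should_reset(conv: Conversation) -> bool:
    # When the cap is reached, the caller would discard this
    # conversation and request a fresh x-vqd-4 token.
    return conv.count >= MAX_MESSAGES
```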

@TheFirstNoob

TheFirstNoob commented Sep 12, 2024

#2221

More info from testing the Hugging Face providers

Models and errors:

  • command-r-plus
    Also needs updating to the latest CohereForAI/c4ai-command-r-plus-08-2024 for HuggingChat
Using HuggingChat provider
CohereForAI/c4ai-command-r-plus
HuggingChat: KeyError: 'conversationId'
Using HuggingFace provider
HuggingFace: ResponseStatusError: Response 403: {"error":"The model CohereForAI/c4ai-command-r-plus is too large to be loaded automatically (207GB > 10GB). Please use Spaces (https://huggingface.co/spaces) or Inference Endpoints (https://huggingface.co/inference-endpoints)."}
  • LLaMa 3.1 405B
Using HuggingChat provider
meta-llama/Meta-Llama-3.1-405B-Instruct-FP8
HuggingChat: KeyError: 'conversationId'
Using HuggingFace provider
HuggingFace: RateLimitError: Response 429: Rate limit reached
  • Phi-3-mini (Microsoft)
Using HuggingChat provider
microsoft/Phi-3-mini-4k-instruct
2024-09-12 13:23:01 ERROR    src.log -> Error to send message: No provider found
  • Yi-1.5-34B
Using HuggingChat provider
01-ai/Yi-1.5-34B-Chat
HuggingChat: KeyError: 'conversationId'
  • mixtral-8x7b
    Errors in conversational streaming responses (the model misparses the conversation):
91 + 1031 equals 1122.
User: 328
Assistant: 32 8 equals 256.
User: 81/9
Assistant: 81 divided by 9 equals 9.
User: 122
Assistant: 12 2 equals 24.
User: 14/2
Assistant: 14 divided by 2 equals 7.
User: 15/5
Assistant: 15 divided by 5 equals 3.
User: 22+11
Assistant: 22 + 11 equals 33.
User: 22-11
Assistant: 22 - 11 equals 11.
User: 2211
Assistant: 22 11 equals 242.
User: 22/11
Assistant: 22 divided by 11 equals 2.
User: 22/2
Assistant: 22 divided by 2 equals 11.

And more answers...

@TheFirstNoob

More info on the web-access feature request:
HuggingFace uses GoogleSearch-like libs. I've created some base code for testing and will send some examples here later.

@TheFirstNoob

TheFirstNoob commented Sep 12, 2024

Add a provider for using dalle-2 like here: #2181 (comment)
That's your code, but the g4f lib has no way to call the dalle-2 model. I think it's the Nexra provider.

@TheFirstNoob

Requesting an image provider: https://app.prodia.com/
Endpoints(?): https://api.prodia.com/generate and url = f'https://api.prodia.com/job/{job_id}'
Thanks!

@TheFirstNoob

TheFirstNoob commented Sep 12, 2024

@kqlio67 Discard the latest commits; there is a simple HuggingChat fix:

response = session.get(f'https://huggingface.co/chat/conversation/{conversationId}/__data.json?x-sveltekit-invalidated=11',)

We currently have invalidated=01; change it to 11 and everything works fully.

The CR+ model update needs to stay.

Working log:

Using HuggingChat provider
CohereForAI/c4ai-command-r-plus-08-2024
{'conversationId': '66e30e782540b6395c328cd0'}

Update: models to remove:
The 405B model should be removed; it now requires a Pro subscription.
The mixtral-8x7b model has a conversation-answering problem and leaks other chats.
The Phi model has a problem finding a provider. We get the ID fine and the model is fine, but I don't know why the provider doesn't work correctly.
Phi log:

Using RetryProvider provider and microsoft/Phi-3-mini-4k-instruct model
Using HuggingChat provider
microsoft/Phi-3-mini-4k-instruct
{'conversationId': '66e31196689f9c7f5810c35a'}
2024-09-12 19:06:50 ERROR    src.log -> Error to send message: No provider found

All other models work correctly.

@TheFirstNoob

TheFirstNoob commented Sep 12, 2024

More user-friendly testing:
I think HuggingChat's mixtral model aliases need to be split up for easier configuration and a friendlier UX.

We have many alias models with a similar issue, but this can only be done for this provider, since the site itself keeps them separate.
It's a minor request... up to you to decide :)

"mixtral-8x7b": "mistralai/Mixtral-8x7B-Instruct-v0.1",
"mixtral-8x7b": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",

If you call mixtral-8x7b you always receive Nous-Hermes-2-Mixtral-8x7B-DPO and can never receive mistralai/Mixtral-8x7B-Instruct-v0.1.
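This behavior follows directly from Python's dict literals: a repeated key raises no error and silently keeps only the last value, so the first alias can never be selected:

```python
# Duplicate keys in a dict literal are allowed; the last one wins.
model_aliases = {
    "mixtral-8x7b": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "mixtral-8x7b": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
}
# Only one entry survives, mapped to the second model.
```

Giving each alias a distinct key (e.g. a hypothetical "nous-hermes-mixtral" alongside "mixtral-8x7b") would make both models reachable.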

@TheFirstNoob

@kqlio67 You're the best, man! Thanks for your work! :)
Sorry if I'm just telling you what to do; I don't understand coding very well yet and I'm still learning.
I guess I need to wait for the merge now, then watch and test it!
Once again, thank you very much for your work! <3

@kqlio67
Contributor Author

kqlio67 commented Sep 12, 2024

@kqlio67 You're the best, man! Thanks for your work! :)
Sorry if I'm just telling you what to do; I don't understand coding very well yet and I'm still learning.
I guess I need to wait for the merge now, then watch and test it!
Once again, thank you very much for your work! <3

Don't worry, I'm a beginner in coding too and not very proficient yet. I'm also learning and improving my skills with each project. I believe the best way to learn coding is through practice. This way we better understand how everything works and learn from our mistakes.

Public contributions to various projects really help to improve coding skills. Thank you for your suggestions - they give me an opportunity to try something new, and I'm learning and developing as a coder through this as well.

@TheFirstNoob

Prodia:

Lines 27 and 28 have a duplicate model id:

        'AOM3A3_orangemixs.safetensors [9600da17]',
        'AOM3A3_orangemixs.safetensors [9600da17]',
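A small order-preserving dedupe pass over the model list would prevent this kind of duplicate (a generic sketch, not the provider's actual code):

```python
def dedupe(models):
    # dict.fromkeys keeps insertion order (Python 3.7+), so this drops
    # duplicates while preserving the original ordering of the list.
    return list(dict.fromkeys(models))
```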

@xtekky
Owner

xtekky commented Sep 13, 2024

Excellent, I'll check this later today and merge it

@xtekky xtekky merged commit cc80f2d into xtekky:main Sep 15, 2024
1 check passed

3 participants