| author | Tekky <98614666+xtekky@users.noreply.github.com> | 2024-10-30 09:54:54 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-10-30 09:54:54 +0100 |
| commit | 1c8061af5550a150b1319d04275771698b226cec (patch) | |
| tree | 489d5d84e812bd9ba3e2c88c2322b45f016c766d /docs/async_client.md | |
| parent | re-include `AsyncClient` for backwards compatibility, with deprecation warning. Use `Client` instead | |
| parent | Update (docs/async_client.md docs/client.md docs/interference-api.md g4f/client/client.py) | |
Diffstat (limited to 'docs/async_client.md')
| -rw-r--r-- | docs/async_client.md | 29 |

1 file changed, 26 insertions(+), 3 deletions(-)
````diff
diff --git a/docs/async_client.md b/docs/async_client.md
index 0c296c09..0719a463 100644
--- a/docs/async_client.md
+++ b/docs/async_client.md
@@ -10,6 +10,7 @@ The G4F async client API is designed to be compatible with the OpenAI API, makin
 - [Key Features](#key-features)
 - [Getting Started](#getting-started)
   - [Initializing the Client](#initializing-the-client)
+  - [Creating Chat Completions](#creating-chat-completions)
   - [Configuration](#configuration)
 - [Usage Examples](#usage-examples)
   - [Text Completions](#text-completions)
@@ -51,6 +52,28 @@ client = Client(
 )
 ```
 
+## Creating Chat Completions
+**Here’s an example of creating a chat completion:**
+```python
+response = await async_client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test"
+        }
+    ]
+    # Add other parameters as needed
+)
+```
+
+**This example:**
+  - Sends a single user message: `Say this is a test`
+  - Leaves optional parameters such as `temperature` and `max_tokens` unset; add them for finer control over the output
+  - Omits `stream`, so the complete response is returned in one piece
+
+You can adjust these parameters based on your specific needs.
 
 ### Configuration
@@ -164,7 +187,7 @@ async def main():
 
     response = await client.images.async_generate(
         prompt="a white siamese cat",
-        model="dall-e-3"
+        model="flux"
     )
 
     image_url = response.data[0].url
@@ -185,7 +208,7 @@ async def main():
 
     response = await client.images.async_generate(
         prompt="a white siamese cat",
-        model="dall-e-3",
+        model="flux",
         response_format="b64_json"
     )
@@ -217,7 +240,7 @@ async def main():
     )
 
     task2 = client.images.async_generate(
-        model="dall-e-3",
+        model="flux",
         prompt="a white siamese cat"
     )
````
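The `await client.chat.completions.create(...)` call shape that the patch documents can be exercised without network access by stubbing the client. The `Stub*` classes below are hypothetical stand-ins for g4f's real `AsyncClient`, kept only so the attribute path (`response.choices[0].message.content`) matches the documented API:

```python
import asyncio

# Hypothetical stand-ins mirroring the response shape used in the docs.
# A real g4f AsyncClient would contact a provider instead.
class StubMessage:
    def __init__(self, content):
        self.content = content

class StubChoice:
    def __init__(self, content):
        self.message = StubMessage(content)

class StubResponse:
    def __init__(self, content):
        self.choices = [StubChoice(content)]

class StubCompletions:
    async def create(self, model, messages, **kwargs):
        # Echo the last user message back as the "completion".
        return StubResponse(messages[-1]["content"])

class StubChat:
    def __init__(self):
        self.completions = StubCompletions()

class StubAsyncClient:
    def __init__(self):
        self.chat = StubChat()

async def main():
    client = StubAsyncClient()
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say this is a test"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())  # prints: Say this is a test
```

Swapping `StubAsyncClient()` for the real client should leave the surrounding code unchanged, which is the point of the OpenAI-compatible call shape.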
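The final hunk's `task1`/`task2` pattern depends on `asyncio.gather` to run several generations concurrently. A minimal sketch of that pattern, with a hypothetical `async_generate` coroutine standing in for g4f's real `client.images.async_generate` (the second prompt and the fake URL scheme are illustrative, not from the source):

```python
import asyncio

# Hypothetical stub: simulates an image-generation request with a short
# sleep, then returns a fabricated URL for the given model and prompt.
async def async_generate(model: str, prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulated network latency
    return f"https://example.invalid/{model}/{prompt.replace(' ', '-')}.png"

async def main() -> list[str]:
    # Mirror the docs' pattern: create both coroutines, await them together
    # so the two requests overlap instead of running back to back.
    task1 = async_generate(model="flux", prompt="a white siamese cat")
    task2 = async_generate(model="flux", prompt="a black siamese cat")
    return await asyncio.gather(task1, task2)

urls = asyncio.run(main())
print(urls)  # results arrive in the same order the tasks were passed
```

`asyncio.gather` preserves argument order in its result list, so `urls[0]` always corresponds to `task1` regardless of which request finished first.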