path: root/docs/interference.md
author    Tekky <98614666+xtekky@users.noreply.github.com>  2024-10-22 23:32:27 +0200
committer GitHub <noreply@github.com>  2024-10-22 23:32:27 +0200
commit    a63c18de796bd4f3e818ff170b6ff595304f95e0 (patch)
tree      844dbb9a8d3526a8b60564b78f7a19a4e0f605d9 /docs/interference.md
parent    Merge pull request #2282 from Karasiq/patch-1 (diff)
parent    Updated docs/providers-and-models.md g4f/models.py g4f/Provider/Upstage.py (diff)
Diffstat (limited to '')
-rw-r--r--  docs/interference.md  69
1 file changed, 0 insertions, 69 deletions
diff --git a/docs/interference.md b/docs/interference.md
deleted file mode 100644
index 1b4f0c11..00000000
--- a/docs/interference.md
+++ /dev/null
@@ -1,69 +0,0 @@
-### Interference OpenAI-compatible proxy API
-
-#### Run the interference API from the PyPI package
-
-```python
-from g4f.api import run_api
-
-run_api()
-```
-
-#### Run the interference API from the repository
-
-Start the server:
-
-```sh
-g4f api
-```
-
-or
-
-```sh
-python -m g4f.api.run
-```
-
-Then point the official `openai` Python client at the local server:
-
-```python
-from openai import OpenAI
-
-client = OpenAI(
-    api_key="",
-    # Change the API base URL to the local interference API
-    base_url="http://localhost:1337/v1"
-)
-
-response = client.chat.completions.create(
-    model="gpt-3.5-turbo",
-    messages=[{"role": "user", "content": "write a poem about a tree"}],
-    stream=True,
-)
-
-if isinstance(response, dict):
-    # Not streaming
-    print(response.choices[0].message.content)
-else:
-    # Streaming
-    for token in response:
-        content = token.choices[0].delta.content
-        if content is not None:
-            print(content, end="", flush=True)
-```
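The streaming loop above can be factored into a small helper that accumulates the delta contents into a single string. A minimal sketch: `collect_stream` is an illustrative name, not part of g4f or the `openai` package, and the `SimpleNamespace` objects only mimic the chunk shape used above.

```python
from types import SimpleNamespace as NS

def collect_stream(chunks):
    """Concatenate the delta contents of a streamed chat completion."""
    parts = []
    for chunk in chunks:
        content = chunk.choices[0].delta.content
        if content is not None:  # final chunks may carry no content
            parts.append(content)
    return "".join(parts)

# Stand-in objects mimicking the chunk shape from the example above:
fake_stream = [
    NS(choices=[NS(delta=NS(content="Hello"))]),
    NS(choices=[NS(delta=NS(content=", world"))]),
    NS(choices=[NS(delta=NS(content=None))]),
]
print(collect_stream(fake_stream))  # Hello, world
```

In real use, the iterable returned by `client.chat.completions.create(..., stream=True)` can be passed to the helper directly.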
-
-#### API usage (POST)
-Send a POST request to `/v1/chat/completions` with a JSON body containing the `model` field. This example uses Python with the `requests` library:
-```python
-import requests
-url = "http://localhost:1337/v1/chat/completions"
-body = {
- "model": "gpt-3.5-turbo",
- "stream": False,
- "messages": [
-        {"role": "user", "content": "What can you do?"}
- ]
-}
-json_response = requests.post(url, json=body).json().get('choices', [])
-
-for choice in json_response:
- print(choice.get('message', {}).get('content', ''))
-```
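Because the response JSON follows the OpenAI schema, extracting the message text from each choice can be isolated in a tiny helper. A sketch under that assumption: `extract_contents` is an illustrative name, not a g4f API.

```python
def extract_contents(response_json):
    """Return the message content of every choice in an
    OpenAI-style /v1/chat/completions JSON response."""
    return [
        choice.get("message", {}).get("content", "")
        for choice in response_json.get("choices", [])
    ]

# Same shape as the JSON the requests example above receives:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "I can answer questions."}}
    ]
}
print(extract_contents(sample))  # ['I can answer questions.']
```

The `.get(..., default)` chain keeps the helper safe against error responses that lack a `choices` list.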
-
-[Return to Home](/)