
[BUG] CORS cross-origin problem: the front end cannot make cross-origin requests #121

Open

Cordy27 opened this issue Feb 18, 2024 · 4 comments

Cordy27 commented Feb 18, 2024

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • I have searched FAQ

Current Behavior

Everything works fine with a local model, but as soon as I call the OpenAI API for question answering, the request fails with:
Access to fetch at 'http://8.130.51.181:8777/api/local_doc_qa/local_doc_chat' from origin 'http://8.130.51.181:5052' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
:8777/api/local_doc_qa/local_doc_chat:1

   Failed to load resource: net::ERR_FAILED
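The two addresses in the error share a host but differ in port (:5052 for the page, :8777 for the API), and the browser treats any difference in scheme, host, or port as a different origin, so the fetch requires CORS headers from the backend. A minimal sketch of that comparison, using Node's standard URL class:

```typescript
// The browser compares scheme + host + port when deciding whether a request
// is cross-origin. Same IP, different port => different origin.
const api = new URL('http://8.130.51.181:8777/api/local_doc_qa/local_doc_chat');
const page = new URL('http://8.130.51.181:5052/');

console.log(page.origin); // "http://8.130.51.181:5052"
console.log(api.origin);  // "http://8.130.51.181:8777"
console.log(api.origin === page.origin); // false -> the fetch is cross-origin
```

Because the origins differ, the response must carry an Access-Control-Allow-Origin header for the browser to hand it to the page; the error says that header is missing.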

Expected Behavior

Runs correctly and answers the question.

Environment

- OS: Ubuntu 22.04
- NVIDIA Driver:
- CUDA:
- Docker Compose:
- NVIDIA GPU Memory:

QAnything logs

UPLOAD_ROOT_PATH: /workspace/qanything_local/QANY_DB/content
llm_api_serve_port: 7802
rerank_port: 9001
embed_port: 9001
<Logger debug_logger (INFO)> <Logger qa_logger (INFO)>
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
[2024-02-19 00:22:54 +0800] [110] [INFO] Sanic v23.6.0
INFO:sanic.root:Sanic v23.6.0
[2024-02-19 00:22:54 +0800] [110] [INFO] Goin' Fast @ http://0.0.0.0:8777
INFO:sanic.root:Goin' Fast @ http://0.0.0.0:8777
[2024-02-19 00:22:54 +0800] [110] [INFO] mode: production, w/ 10 workers
INFO:sanic.root:mode: production, w/ 10 workers
[2024-02-19 00:22:54 +0800] [110] [INFO] server: sanic, HTTP/1.1
INFO:sanic.root:server: sanic, HTTP/1.1
[2024-02-19 00:22:54 +0800] [110] [INFO] python: 3.10.12
INFO:sanic.root:python: 3.10.12
[2024-02-19 00:22:54 +0800] [110] [INFO] platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35
INFO:sanic.root:platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35
[2024-02-19 00:22:54 +0800] [110] [INFO] packages: sanic-routing==23.12.0, sanic-ext==23.6.0
INFO:sanic.root:packages: sanic-routing==23.12.0, sanic-ext==23.6.0
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
UPLOAD_ROOT_PATH: /workspace/qanything_local/QANY_DB/content
llm_api_serve_port: 7802
rerank_port: 9001
embed_port: 9001
<Logger debug_logger (INFO)> <Logger qa_logger (INFO)>
[2024-02-19 00:23:14 +0800] [489] [INFO] Sanic Extensions:
INFO:sanic.root:Sanic Extensions:
[2024-02-19 00:23:14 +0800] [489] [INFO] > injection [0 dependencies; 0 constants]
INFO:sanic.root: > injection [0 dependencies; 0 constants]
[2024-02-19 00:23:14 +0800] [489] [INFO] > openapi [http://0.0.0.0:8777/docs]
INFO:sanic.root: > openapi [http://0.0.0.0:8777/docs]
[2024-02-19 00:23:14 +0800] [489] [INFO] > http
INFO:sanic.root: > http
[2024-02-19 00:23:14 +0800] [489] [INFO] > templating [jinja2==3.1.3]
INFO:sanic.root: > templating [jinja2==3.1.3]
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:debug_logger:[SUCCESS] 数据库qanything检查通过
INFO:debug_logger:ADD COLUMN timestamp
INFO:debug_logger:1060 (42S21): Duplicate column name 'timestamp'
INFO:debug_logger:[SUCCESS] 数据库qanything连接成功
init local_doc_qa in online
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
INFO:root:OPENAI_API_BASE = https://key.wenwen-ai.com
INFO:root:OPENAI_API_MODEL_NAME = GPT3.5-4k
[2024-02-19 00:23:15 +0800] [489] [INFO] Starting worker [489]
INFO:sanic.server:Starting worker [489]
UPLOAD_ROOT_PATH: /workspace/qanything_local/QANY_DB/content
llm_api_serve_port: 7802
rerank_port: 9001
embed_port: 9001

Steps To Reproduce

Call the OpenAI API endpoint.

Anything else?

No response

EliaukTM commented Mar 8, 2024

Bro, did you ever solve this?

pan003 commented Mar 16, 2024

Just start the front-end code in development mode:
1. docker ps to find the freeren/qanything:v1.1.1 container
2. docker exec -it <your container> bash
3. cd into the /workspace/qanything_local/front_end directory
4. Edit the proxy section of vite.config.ts and delete env.VITE_APP_API_PROXY (yes, this is a bug: it adds an extra path segment to the backend service URL)
5. Kill the node process inside the container and run npm run dev directly (use nohup to keep it running in the background)
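Step 4 would look roughly like the sketch below. This is a hypothetical reconstruction, not the repository's actual vite.config.ts (the surrounding options and port are assumptions); the key point is that the proxy target no longer has env.VITE_APP_API_PROXY prepended, so /api/... maps straight onto the backend path:

```typescript
// vite.config.ts -- hypothetical sketch of the proxy section after the fix.
// Option names follow Vite's server.proxy API; exact values are assumptions.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    port: 5052,
    proxy: {
      // The dev server forwards /api/* to the backend itself, so the browser
      // only ever talks to its own origin and CORS never comes into play.
      '/api': {
        target: 'http://8.130.51.181:8777', // backend from the error message
        changeOrigin: true,
        // no extra env.VITE_APP_API_PROXY segment in the target or rewrite
      },
    },
  },
});
```

With a proxy like this, the page and the API requests share one origin and the blocked-by-CORS error above should disappear.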

@c123853648

I couldn't find VITE_APP_API_PROXY in that file, only VITE_APP_API_PREFIX.

@successren (Collaborator)

This issue has been resolved; please see #188.
