
Commit 3073935

committed
Update README; push version 3.53
1 parent ef6631b commit 3073935

File tree

3 files changed: +62 additions, −13 deletions


README.md

Lines changed: 11 additions & 11 deletions
@@ -149,25 +149,25 @@ python main.py
 
 ### Installation Method II: Use Docker
 
+0. Deploy the project's full capabilities (a large image that bundles CUDA and LaTeX; not recommended if your network is slow, your disk is small, or you have no GPU — use Option 1 instead) (requires familiarity with the [Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian) runtime)
+
 [![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
 
-1. ChatGPT only (recommended for most users; equivalent to docker-compose Option 1)
+``` sh
+# Edit docker-compose.yml: keep Option 0 and delete the other options, then adjust Option 0's settings following the comments in the file
+docker-compose up
+```
+
+1. ChatGPT only (recommended for most users)
 [![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
 [![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
 [![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
 
-
 ``` sh
-git clone --depth=1 https://github.com/binary-husky/gpt_academic.git  # download the project
-cd gpt_academic                 # enter the directory
-nano config.py                  # edit config.py with any text editor to set "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic .  # build the image
-
-# (last step, on Linux) using `--net=host` is simpler and faster
-docker run --rm -it --net=host gpt-academic
-# (last step, on macOS/Windows) only the -p option can expose a container port (e.g. 50923) to the host
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
+# Edit docker-compose.yml: keep Option 1 and delete the other options, then adjust Option 1's settings following the comments in the file
+docker-compose up
 ```
+
 P.S. If you need the LaTeX-dependent plugin features, see the Wiki. Alternatively, you can get the LaTeX features directly via docker-compose (edit docker-compose.yml: keep Option 4 and delete the other options).
 
 2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + Tongyi Qianwen (requires familiarity with the [Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian) runtime)

docker-compose.yml

Lines changed: 49 additions & 0 deletions
@@ -1,5 +1,54 @@
 # [Delete this line after you finish editing the parameters] Choose one of the options below, delete the others, then run docker-compose up | Please choose from one of these options below, delete other options as well as This Line
 
+## ===================================================
+## [Option 0] Deploy the project's full capabilities (a large image that bundles CUDA and LaTeX; not recommended if your network is slow, your disk is small, or you have no GPU)
+## ===================================================
+version: '3'
+services:
+  gpt_academic_full_capability:
+    image: ghcr.io/binary-husky/gpt_academic_with_all_capacity:master
+    environment:
+      # See `config.py` or the GitHub wiki for all configuration options
+      API_KEY: ' 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 '
+      # USE_PROXY: ' True '
+      # proxies: ' { "http": "http://localhost:10881", "https": "http://localhost:10881", } '
+      LLM_MODEL: ' gpt-3.5-turbo '
+      AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "gpt-4", "qianfan", "sparkv2", "spark", "chatglm"] '
+      BAIDU_CLOUD_API_KEY: ' bTUtwEAveBrQipEowUvDwYWq '
+      BAIDU_CLOUD_SECRET_KEY: ' jqXtLvXiVw6UNdjliATTS61rllG8Iuni '
+      XFYUN_APPID: ' 53a8d816 '
+      XFYUN_API_SECRET: ' MjMxNDQ4NDE4MzM0OSNlNjQ2NTlhMTkx '
+      XFYUN_API_KEY: ' 95ccdec285364869d17b33e75ee96447 '
+      ENABLE_AUDIO: ' False '
+      DEFAULT_WORKER_NUM: ' 20 '
+      WEB_PORT: ' 12345 '
+      ADD_WAIFU: ' False '
+      ALIYUN_APPKEY: ' RxPlZrM88DnAFkZK '
+      THEME: ' Chuanhu-Small-and-Beautiful '
+      ALIYUN_ACCESSKEY: ' LTAI5t6BrFUzxRXVGUWnekh1 '
+      ALIYUN_SECRET: ' eHmI20SVWIwQZxCiTD2bGQVspP9i68 '
+      # LOCAL_MODEL_DEVICE: ' cuda '
+
+    # Load the Nvidia GPU runtime
+    # runtime: nvidia
+    # deploy:
+    #   resources:
+    #     reservations:
+    #       devices:
+    #         - driver: nvidia
+    #           count: 1
+    #           capabilities: [gpu]
+
+    # Merge with the host's network
+    network_mode: "host"
+
+    # Pull the latest code without going through a proxy network
+    command: >
+      bash -c "python3 -u main.py"
+
+
+
 ## ===================================================
 ## [Option 1] If you do not need to run local models (online LLM services only: chatgpt, azure, Spark, Qianfan, claude, etc.)
 ## ===================================================
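The padded values in the `environment:` block above (e.g. `' 12345 '`) arrive inside the container as ordinary environment variables, so the application has to strip them before use. A minimal sketch of how such values might be consumed (the `read_env` helper and its stripping behavior are illustrative assumptions, not the project's actual `config.py` logic):

```python
import os

def read_env(name: str, default: str = "") -> str:
    """Read a docker-compose style value and strip the padding spaces."""
    return os.environ.get(name, default).strip()

# Simulate what docker-compose would inject for Option 0
os.environ["WEB_PORT"] = " 12345 "
os.environ["ENABLE_AUDIO"] = " False "

web_port = int(read_env("WEB_PORT"))                # numeric port
enable_audio = read_env("ENABLE_AUDIO") == "True"   # boolean flag

print(web_port, enable_audio)  # -> 12345 False
```

Without the `strip()`, `int(" 12345 ")` would still work, but the string comparison for `ENABLE_AUDIO` would silently fail on the padding, which is why normalizing first is the safer pattern.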

version

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 {
-  "version": 3.52,
+  "version": 3.53,
   "show_feature": true,
-  "new_feature": "Improved stability & fixed multi-user conflicts <-> Plugin categories and more UI skins <-> Dispatch any plugin in natural language (Void Terminal)! <-> Improved UI with new themes <-> High-accuracy PDF translation via GROBID <-> Integration with Baidu Qianfan and ERNIE Bot <-> Integration with Alibaba Tongyi Qianwen, iFlytek Spark, and Shanghai AI-Lab InternLM <-> Optimized one-click update <-> Faster and more reliable arxiv translation"
+  "new_feature": "Dynamically switchable UI themes <-> Improved stability & fixed multi-user conflicts <-> Plugin categories and more UI skins <-> Dispatch any plugin in natural language (Void Terminal)! <-> Improved UI with new themes <-> High-accuracy PDF translation via GROBID <-> Integration with Baidu Qianfan and ERNIE Bot <-> Integration with Alibaba Tongyi Qianwen, iFlytek Spark, and Shanghai AI-Lab InternLM <-> Optimized one-click update <-> Faster and more reliable arxiv translation"
 }
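The version bump above (3.52 → 3.53) is the kind of metadata a one-click update check would compare against. A hedged sketch of such a check (the `needs_update` function and its dotted-version comparison are illustrative assumptions, not the project's actual update code):

```python
import json

def needs_update(local: str, remote: str) -> bool:
    """Compare dotted version strings numerically, e.g. '3.52' vs '3.53'."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(remote) > to_tuple(local)

# Parse a version manifest shaped like the JSON in the diff above
remote_meta = json.loads('{"version": 3.53, "show_feature": true}')
remote_version = str(remote_meta["version"])

print(needs_update("3.52", remote_version))  # -> True
print(needs_update("3.53", remote_version))  # -> False
```

Comparing integer tuples rather than raw strings matters once a component passes a single digit: as strings, `"3.9" > "3.10"`, but as tuples `(3, 10) > (3, 9)` gives the intended result.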
