Commit 27e4980

💄 style: fix provider order (lobehub#6702)
1 parent 4011a1e commit 27e4980

3 files changed: +18 −21 lines

3 files changed

+18
-21
lines changed

README.md (+3 −3)
@@ -191,14 +191,14 @@ We have implemented support for the following model service providers:
  - **[Bedrock](https://lobechat.com/discover/provider/bedrock)**: Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
  - **[Google](https://lobechat.com/discover/provider/google)**: Google's Gemini series represents its most advanced, versatile AI models, developed by Google DeepMind, designed for multimodal capabilities, supporting seamless understanding and processing of text, code, images, audio, and video. Suitable for various environments from data centers to mobile devices, it significantly enhances the efficiency and applicability of AI models.
  - **[DeepSeek](https://lobechat.com/discover/provider/deepseek)**: DeepSeek is a company focused on AI technology research and application, with its latest model DeepSeek-V2.5 integrating general dialogue and code processing capabilities, achieving significant improvements in human preference alignment, writing tasks, and instruction following.
- - **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, Qwen etc.
  - **[HuggingFace](https://lobechat.com/discover/provider/huggingface)**: The HuggingFace Inference API provides a fast and free way for you to explore thousands of models for various tasks. Whether you are prototyping for a new application or experimenting with the capabilities of machine learning, this API gives you instant access to high-performance models across multiple domains.
  - **[OpenRouter](https://lobechat.com/discover/provider/openrouter)**: OpenRouter is a service platform providing access to various cutting-edge large model interfaces, supporting OpenAI, Anthropic, LLaMA, and more, suitable for diverse development and application needs. Users can flexibly choose the optimal model and pricing based on their requirements, enhancing the AI experience.
  - **[Cloudflare Workers AI](https://lobechat.com/discover/provider/cloudflare)**: Run serverless GPU-powered machine learning models on Cloudflare's global network.

  <details><summary><kbd>See more providers (+27)</kbd></summary>

  - **[GitHub](https://lobechat.com/discover/provider/github)**: With GitHub Models, developers can become AI engineers and leverage the industry's leading AI models.
+ - **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, Qwen etc.
  - **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI is a platform providing a variety of large language models and AI image generation API services, flexible, reliable, and cost-effective. It supports the latest open-source models like Llama3 and Mistral, offering a comprehensive, user-friendly, and auto-scaling API solution for generative AI application development, suitable for the rapid growth of AI startups.
  - **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI is dedicated to achieving leading performance through innovative AI models, offering extensive customization capabilities, including rapid scaling support and intuitive deployment processes to meet various enterprise needs.
  - **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI is a leading provider of advanced language model services, focusing on functional calling and multimodal processing. Its latest model, Firefunction V2, is based on Llama-3, optimized for function calling, conversation, and instruction following. The visual language model FireLLaVA-13B supports mixed input of images and text. Other notable models include the Llama series and Mixtral series, providing efficient multilingual instruction following and generation support.

@@ -570,7 +570,7 @@ $ mkdir lobe-chat-db && cd lobe-chat-db
  2. init the LobeChat infrastructure

  ```fish
- bash <(curl -fsSL https://lobe.li/setup.sh) -l zh_CN
+ bash <(curl -fsSL https://lobe.li/setup.sh)
  ```

  3. Start the LobeChat service

@@ -799,7 +799,7 @@ This project is [Apache 2.0](./LICENSE) licensed.
  [docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat-database?color=369eff&labelColor=black&style=flat-square&sort=semver
  [docs]: https://lobehub.com/docs/usage/start
  [docs-dev-guide]: https://github.com/lobehub/lobe-chat/wiki/index
- [docs-docker]: https://lobehub.com/docs/self-hosting/platform/docker
+ [docs-docker]: https://lobehub.com/docs/self-hosting/server-database/docker-compose
  [docs-env-var]: https://lobehub.com/docs/self-hosting/environment-variables
  [docs-feat-agent]: https://lobehub.com/docs/usage/features/agent-market
  [docs-feat-artifacts]: https://lobehub.com/docs/usage/features/artifacts

README.zh-CN.md (+14 −17)
@@ -191,14 +191,14 @@ LobeChat supports file upload and knowledge base features; you can upload files and images
  - **[Bedrock](https://lobechat.com/discover/provider/bedrock)**: Bedrock is a service provided by Amazon AWS, focused on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, covering options from lightweight to high performance, and supports text generation, conversation, image processing, and other tasks for enterprise applications of varying scales and needs.
  - **[Google](https://lobechat.com/discover/provider/google)**: Google's Gemini series is its most advanced, general-purpose AI model, built by Google DeepMind. Designed for multimodality, it supports seamless understanding and processing of text, code, images, audio, and video, and works in environments ranging from data centers to mobile devices, greatly improving the efficiency and broad applicability of AI models.
  - **[DeepSeek](https://lobechat.com/discover/provider/deepseek)**: DeepSeek is a company focused on AI technology research and application. Its latest model, DeepSeek-V3, surpasses open-source models such as Qwen2.5-72B and Llama-3.1-405B on multiple benchmarks, with performance on par with the leading closed-source models GPT-4o and Claude-3.5-Sonnet.
- - **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO provides stable and cost-effective open-source model API services, supporting the full DeepSeek series, Llama, Qwen, and other industry-leading large models.
  - **[HuggingFace](https://lobechat.com/discover/provider/huggingface)**: The HuggingFace Inference API provides a fast and free way to explore thousands of models for a variety of tasks. Whether you are prototyping a new application or experimenting with machine learning capabilities, this API gives you instant access to high-performance models across multiple domains.
  - **[OpenRouter](https://lobechat.com/discover/provider/openrouter)**: OpenRouter is a service platform offering access to a variety of cutting-edge large-model interfaces, supporting OpenAI, Anthropic, LLaMA, and more, suitable for diverse development and application needs. Users can flexibly choose the optimal model and price for their own requirements, enhancing the AI experience.
  - **[Cloudflare Workers AI](https://lobechat.com/discover/provider/cloudflare)**: Run serverless GPU-powered machine learning models on Cloudflare's global network.

  <details><summary><kbd>See more providers (+27)</kbd></summary>

  - **[GitHub](https://lobechat.com/discover/provider/github)**: With GitHub Models, developers can become AI engineers and build with the industry's leading AI models.
+ - **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO provides stable and cost-effective open-source model API services, supporting the full DeepSeek series, Llama, Qwen, and other industry-leading large models.
  - **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI is a platform offering API services for a variety of large language models and AI image generation; it is flexible, reliable, and cost-effective. It supports the latest open-source models such as Llama3 and Mistral and provides a comprehensive, user-friendly, auto-scaling API solution for generative AI application development, well suited to fast-growing AI startups.
  - **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI is dedicated to achieving leading performance through innovative AI models, offering extensive customization capabilities, including rapid scaling support and intuitive deployment processes, to meet various enterprise needs.
  - **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI is a leading provider of advanced language model services, focusing on function calling and multimodal processing. Its latest model, Firefunction V2, is based on Llama-3 and optimized for function calling, conversation, and instruction following. The vision-language model FireLLaVA-13B supports mixed image and text input. Other notable models include the Llama series and Mixtral series, providing efficient multilingual instruction following and generation support.

@@ -541,27 +541,24 @@ LobeChat provides a self-hosted version for Vercel and a [Docker image][docker-release
  [![][docker-size-shield]][docker-size-link]
  [![][docker-pulls-shield]][docker-pulls-link]

- We provide a Docker image so you can deploy the LobeChat service on your own private device. Use the following command to start the LobeChat service with one click:
+ We provide a Docker image for deploying the LobeChat service on your own private device. Use the following command to start the LobeChat service:
+
+ 1. create a folder to for storage files

  ```fish
- $ docker run -d -p 3210:3210 \
-   -e OPENAI_API_KEY=sk-xxxx \
-   -e ACCESS_CODE=lobe66 \
-   --name lobe-chat \
-   lobehub/lobe-chat
+ $ mkdir lobe-chat-db && cd lobe-chat-db
  ```

- > \[!TIP]
- >
- > If you need to use the OpenAI service through a proxy, you can configure the proxy address with the `OPENAI_PROXY_URL` environment variable:
+ 2. Run the one-click setup script
+
+ ```fish
+ bash <(curl -fsSL https://lobe.li/setup.sh) -l zh_CN
+ ```
+
+ 3. Start LobeChat

  ```fish
- $ docker run -d -p 3210:3210 \
-   -e OPENAI_API_KEY=sk-xxxx \
-   -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
-   -e ACCESS_CODE=lobe66 \
-   --name lobe-chat \
-   lobehub/lobe-chat
+ docker compose up -d
  ```

  > \[!NOTE]

@@ -822,7 +819,7 @@ This project is [Apache 2.0](./LICENSE) licensed.
  [docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat-database?color=369eff&labelColor=black&style=flat-square&sort=semver
  [docs]: https://lobehub.com/zh/docs/usage/start
  [docs-dev-guide]: https://github.com/lobehub/lobe-chat/wiki/index
- [docs-docker]: https://lobehub.com/docs/self-hosting/platform/docker
+ [docs-docker]: https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose
  [docs-env-var]: https://lobehub.com/docs/self-hosting/environment-variables
  [docs-feat-agent]: https://lobehub.com/docs/usage/features/agent-market
  [docs-feat-artifacts]: https://lobehub.com/docs/usage/features/artifacts

src/config/modelProviders/index.ts (+1 −1)
@@ -107,12 +107,12 @@ export const DEFAULT_MODEL_PROVIDER_LIST = [
  GoogleProvider,
  VertexAIProvider,
  DeepSeekProvider,
- PPIOProvider,
  HuggingFaceProvider,
  OpenRouterProvider,
  CloudflareProvider,
  GithubProvider,
  NovitaProvider,
+ PPIOProvider,
  NvidiaProvider,
  TogetherAIProvider,
  FireworksAIProvider,
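
The one-line move in `DEFAULT_MODEL_PROVIDER_LIST` is the substantive part of this commit: the array is ordered, so an entry's position is what decides where that provider appears in the default list, and the README edits simply mirror the new order. Below is a minimal TypeScript sketch of that pattern, assuming a simplified `ModelProviderCard` shape and a hypothetical `listProviderIds` helper; it is an illustration, not LobeChat's actual implementation.

```ts
// Minimal sketch: provider cards collected in an ordered array.
// The shape and helper below are assumptions for illustration only.
interface ModelProviderCard {
  id: string;
  name: string;
}

const GithubProvider: ModelProviderCard = { id: 'github', name: 'GitHub' };
const NovitaProvider: ModelProviderCard = { id: 'novita', name: 'Novita' };
const PPIOProvider: ModelProviderCard = { id: 'ppio', name: 'PPIO' };
const NvidiaProvider: ModelProviderCard = { id: 'nvidia', name: 'Nvidia' };

// Array position alone determines the display order, so moving an entry
// (as this commit does for PPIOProvider) is the whole change.
const DEFAULT_MODEL_PROVIDER_LIST: ModelProviderCard[] = [
  GithubProvider,
  NovitaProvider,
  PPIOProvider, // moved here from its previous slot right after DeepSeekProvider
  NvidiaProvider,
];

// Hypothetical helper to read back the resulting order.
const listProviderIds = (list: ModelProviderCard[]): string[] =>
  list.map((provider) => provider.id);

console.log(listProviderIds(DEFAULT_MODEL_PROVIDER_LIST));
// -> [ 'github', 'novita', 'ppio', 'nvidia' ]
```

Because the ordering lives in a single array, the two README provider lists only need to be re-sequenced to stay consistent with the code, which is exactly what the other two files in this commit do.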
