FastChat really does run on a single machine, but it's still unusable

2023-04-05 10:50:49 +08:00
 paopjian

I got https://github.com/lm-sys/FastChat running on a single machine with 51 GB of RAM. It is absurdly slow, though, since it's CPU-only: an English conversation spits the reply out word by word, roughly 10 seconds per word. That's on an 8700; a better CPU would presumably be faster.
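For reference, here is a minimal sketch of what CPU-only generation looks like with Hugging Face transformers (this is not FastChat's own code; FastChat's CLI lives in fastchat.serve.cli, and the model path below is a placeholder). Full-precision weights plus one-token-at-a-time decoding is exactly why a model of this size crawls on a desktop CPU:

# Minimal CPU-only generation sketch, assuming merged Vicuna-style weights
# already sit at a local path (placeholder). Not the FastChat CLI itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/vicuna-13b"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,   # CPU inference typically runs in fp32, ~4 bytes per parameter
    low_cpu_mem_usage=True,      # stream weights in to limit peak RAM
)

inputs = tokenizer("write a vue demo", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))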

Chinese is even slower. I can't tell whether it hangs or what, but 20 minutes went by without a single sentence, even though the demo on Hugging Face handles Chinese fine.

Here is what a few minutes of running produced:

Human: write a vue demo
Assistant: Here is a simple demo of a Vue.js component that displays "Hello, World!" in the center of the screen:

<template>
  <div>
    <h1>Hello, World!</h1>
  </div>
</template>

<script>
export default {
  mounted() {
    console.log('The component has been mounted.');
  }
}
</script>

This component uses Vue's template syntax to define the HTML structure of the component, and the script section contains the Vue logic for the component. The `mounted` function is called when the component is mounted, and it logs a message to the console. When the component is used in a Vue application, the `Hello, World!` message will be displayed in the center of the screen.
(took 10 minutes)

Human: "你好" translate to English
Assistant: "你好" is the Chinese word for "hello." In English, it is pronounced as "you hao."
(took 2 minutes)

Human: <水浒传>是谁写的 (Who wrote "Water Margin"?)
Assistant: ravinder singh
Human: Who is that?
Assistant: Ravinder Singh is an Indian author. He is known for writing the novel "The Water's Edge," which tells the story of a young woman's journey to find love and fulfillment in the midst of challenging circumstances. The novel has been translated into multiple languages and has received critical acclaim.
(took 5 minutes)

Everything after that is just wrong. A small model really does lag a large one in sheer "knowledge". Still, I'm very optimistic about where LLMs are headed, and I think GPU vendors could ship a dedicated AI card: huge VRAM purely for loading an LLM, no training capability, just so individual users can run large language models locally.

PS: the model first has to be converted from LLaMA by applying the delta weights, and during that step memory usage goes above 60 GB; even slightly less won't cut it. The compressed model, on the other hand, runs in 51 GB.
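For anyone reproducing that conversion step: Vicuna ships as a delta on top of the original LLaMA weights, and FastChat provides an apply_delta script (python3 -m fastchat.model.apply_delta; exact flags vary between versions) to merge them. The sketch below shows roughly what the merge does, with placeholder paths; because the base model and the delta are both held in memory at once, peak RAM during conversion is much higher than what inference alone needs:

# Conceptual sketch of the delta merge (target = base + delta, per parameter).
# Paths and the delta repo name are placeholders; it assumes matching tensor
# shapes and omits the tokenizer/vocab handling the real script performs.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "/path/to/llama-13b", torch_dtype=torch.float16, low_cpu_mem_usage=True)
delta = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-13b-delta-v0", torch_dtype=torch.float16, low_cpu_mem_usage=True)

delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    param.data += delta_state[name]   # add the delta onto the base weights in place

base.save_pretrained("/path/to/vicuna-13b")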

1 reply
superares
2023-04-06 08:40:12 +08:00
Did you use the q4.0 model? That one is much smaller.
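For context, q4.0 presumably refers to llama.cpp-style 4-bit (q4_0) quantization. A back-of-the-envelope, weights-only estimate for a 13B-parameter model shows why it matters; the ~48 GiB fp32 figure is roughly in line with the 51 GB observed above:

# Rough weight-only memory estimate for a 13B-parameter model at different
# precisions. Assumptions: no KV cache or activation overhead, and q4_0 costs
# about 4.5 bits per weight once its per-block scales are counted.
PARAMS = 13e9
for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("q4_0", 4.5)]:
    gib = PARAMS * bits / 8 / 2**30
    print(f"{label:>5}: ~{gib:.0f} GiB")
# prints roughly: fp32 ~48 GiB, fp16 ~24 GiB, int8 ~12 GiB, q4_0 ~7 GiB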


https://www.v2ex.com/t/929934
