
feat: Support DPO #983

Open · wants to merge 1 commit into main
Conversation

@Shengqiang-Li commented Feb 17, 2025

Adds a DPO pipeline. Verified to improve the emotion controllability of the instruction-finetuned model; accuracy on an emotion dataset is as follows (preliminary results, no hyperparameter tuning):

| model | angry | sad | happy |
| --- | --- | --- | --- |
| Before DPO | 57% | 41% | 45.5% |
| After DPO | 65.5% | 49.5% | 59.5% |
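For reference, the objective behind a DPO pipeline like this one can be sketched in a few lines. This is a minimal pure-Python illustration of the standard DPO loss, not the PR's actual implementation; the function and argument names are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) pair of samples.

    Each argument is the total sequence log-probability under the trained
    policy or the frozen reference model; beta scales the implicit rewards.
    """
    # Implicit rewards are log-prob margins relative to the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log sigmoid(margin): minimized when the chosen sample is preferred.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy example: the policy prefers the chosen sample more than the reference does.
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0, beta=0.1)
print(round(loss, 4))  # → 0.5544
```

In a TTS setting, the "chosen" and "rejected" samples would be token sequences for the same text whose synthesized audio scored better or worse on the target emotion.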

@aluminumbox (Collaborator)

Thank you very much. Please add aluminumbox (铝箱) as a friend in the DingTalk group so we can discuss the details further.

@ScottishFold007

> Adds a DPO pipeline. Verified to improve the emotion controllability of the instruction-finetuned model (results table above).

Is this targeting the first-generation Cosy model?

@Shengqiang-Li (Author)

> Is this targeting the first-generation Cosy model?

It's not that limited: both CosyVoice 1.0 and 2.0 can use it. I ran my experiments on CosyVoice 2.0; the DPO pipeline just hasn't been adapted to the streaming text input in CosyVoice 2.0 yet.

@LiuMingYy

> Adds a DPO pipeline. Verified to improve the emotion controllability of the instruction-finetuned model (results table above).

Hi, great work! May I ask which classification model was used to compute the accuracy here?

@Shengqiang-Li (Author)

> Hi, great work! May I ask which classification model was used to compute the accuracy here?

We run inference with the emotion recognition model emo2vec, then compute the emotion recognition accuracy.
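The evaluation step described here (recognizer labels vs. target emotions) can be sketched as follows. This is a hedged illustration assuming per-utterance labels are already produced by some emotion recognizer such as emo2vec; the helper name is hypothetical:

```python
from collections import defaultdict

def emotion_accuracy(predictions, targets):
    """Per-emotion accuracy: the fraction of utterances whose predicted
    label matches the target emotion, grouped by target emotion."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, target in zip(predictions, targets):
        total[target] += 1
        if pred == target:
            correct[target] += 1
    return {emo: correct[emo] / total[emo] for emo in total}

# Toy example with labels as an emotion recognizer might emit them.
preds   = ["angry", "sad", "angry", "happy", "sad", "happy"]
targets = ["angry", "sad", "sad",   "happy", "sad", "angry"]
print(emotion_accuracy(preds, targets))
# → {'angry': 0.5, 'sad': 0.666..., 'happy': 1.0}
```

Computing accuracy per target emotion, as in the PR's table, shows which emotions benefit most from DPO rather than averaging the effect away.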

@HaiFengZeng

Impressive! I'd like to ask: is there a comparison demo? Does naturalness change at all?

6 participants