Finisky Garden

NLP, Software Engineering, Product Design

2022 ended with ChatGPT taking the world by storm. Over the past year, the major players have released a number of impressive dialogue systems, and interestingly they have all converged on the same direction: away from model architecture and scale, toward practicality. How do you make a dialogue system more useful, safer, and better at understanding user intent?

The main improvements to dialogue systems over the past year come down to three things:

  • Large models: the foundation of a dialogue system; only at scale is there enough general-purpose representational capacity
  • Learning from human feedback (RLHF): humans rank alternative model outputs so that the model aligns better with user intent; even a much smaller model can then match a larger one (a minimal sketch of the idea follows this list)
  • Search APIs: ground replies in retrieved references, making them more concrete and useful and avoiding hallucination
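
As a rough illustration of the RLHF ingredient above, here is a minimal sketch of the reward-model training step: annotators compare pairs of model outputs, and a reward model learns to score the preferred output higher. This is a generic PyTorch sketch, not any particular system's implementation; `reward_model` is a hypothetical stand-in for a network that scores a (prompt, reply) pair.

```python
import torch.nn.functional as F

def reward_model_loss(reward_model, prompt, chosen, rejected):
    """Pairwise ranking loss for an RLHF reward model: push the score of
    the human-preferred reply above that of the rejected reply."""
    r_chosen = reward_model(prompt, chosen)      # scalar score tensor
    r_rejected = reward_model(prompt, rejected)  # scalar score tensor
    # -log sigmoid(r_chosen - r_rejected) is minimized when chosen outscores rejected
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The trained reward model then drives reinforcement learning of the dialogue policy, which is where the "smaller model, same quality" effect comes from.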
Read more »

Meta AI released its next-generation dialogue system, BlenderBot 3, in August 2022, hoping that a public demo like this would collect more real-world data with which to improve the system and make it safer and more useful.

BlenderBot 3: A 175B parameter, publicly available chatbot that improves its skills and safety over time

BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage

BlenderBot 3 (BB3) is only available to adults in the United States, and only converses in English:

We present BlenderBot 3 (BB3), an open-domain dialogue model that we have deployed as an English speaking conversational agent on a public website accessible by adults in the United States.

The main goal of this research is closest to Sparrow's, namely making conversations more responsible and useful:

The goal of this research program is then to explore how to construct models that continue to improve from such interactions both in terms of becoming more responsible and more useful.

The tech report covers the details of BB3's deployment, including the UI design; this post focuses on the model.

Read more »

Sparrow is a dialogue system released by DeepMind in late September this year, built around being "helpful, correct, and harmless". Overall the idea is again "alignment": make the chatbot's replies fit the user's intent more closely. Technically it likewise uses reinforcement learning from human feedback, defining a set of rules to steer the model toward the desired kind of conversation; in addition, for factual questions, it grounds its replies in retrieved search results.

Building safer dialogue agents

Improving alignment of dialogue agents via targeted human judgements
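
To make the rule mechanism concrete: Sparrow learns both a preference signal and a rule-violation signal from human judgements, and the RL policy is trained against a combination of the two. The sketch below is a hypothetical illustration of such a combination; the function names and the linear weighting are assumptions, not DeepMind's actual implementation.

```python
def combined_reward(preference_rm, rule_rm, dialogue, reply, rule_weight=1.0):
    """Hypothetical combination of two learned signals for RL training:
    - preference_rm(dialogue, reply): how strongly humans would prefer the reply
    - rule_rm(dialogue, reply): estimated probability the reply breaks a rule
    """
    preference = preference_rm(dialogue, reply)    # higher is better
    p_violation = rule_rm(dialogue, reply)         # in [0, 1]
    return preference - rule_weight * p_violation  # penalize likely rule breaking
```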

Read more »

ChatGPT has gone viral. Wouldn't it be great to hook it up to the backend of your own WeChat Official Account?

I'm surely not the only one with this idea. I looked into it last weekend; there are a few problems to solve.

The first is the ChatGPT API, the single most critical problem: OpenAI offers no official API. Fortunately, people on GitHub have long since reverse-engineered the API, with a Python implementation.
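
Leaving the reverse-engineered client itself aside, the WeChat side is a plain HTTP callback: the Official Account platform POSTs each user message to your server as XML, and you respond with XML. Below is a minimal sketch with Flask, where `get_bot_reply` is a hypothetical stand-in for whatever chatbot client you wire in (token verification and error handling omitted):

```python
import time
import xml.etree.ElementTree as ET

from flask import Flask, request

app = Flask(__name__)

def get_bot_reply(text: str) -> str:
    # Placeholder: call your chatbot client here.
    return "echo: " + text

@app.route("/wechat", methods=["POST"])
def wechat():
    # The Official Account platform pushes user messages as XML.
    msg = ET.fromstring(request.data)
    from_user = msg.findtext("FromUserName")
    to_user = msg.findtext("ToUserName")
    content = msg.findtext("Content") or ""
    reply = get_bot_reply(content)
    # The response is also XML, with From/To swapped relative to the request.
    return f"""<xml>
<ToUserName><![CDATA[{from_user}]]></ToUserName>
<FromUserName><![CDATA[{to_user}]]></FromUserName>
<CreateTime>{int(time.time())}</CreateTime>
<MsgType><![CDATA[text]]></MsgType>
<Content><![CDATA[{reply}]]></Content>
</xml>"""
```

One practical catch: the platform expects a fast response and retries only a few times before giving up, so a slow model call needs extra handling on top of this sketch.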

Read more »

WebGPT is OpenAI's approach to long-form question-answering (LFQA), released at the end of 2021, slightly before InstructGPT.

WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing

WebGPT: Browser-assisted question-answering with human feedback

What problem does WebGPT set out to solve? Making open-domain QA replies longer and more reliable.

A rising challenge in NLP is long-form question-answering (LFQA), in which a paragraph-length answer is generated in response to an open-ended question. LFQA systems have the potential to become one of the main ways people learn about the world, but currently lag behind human performance.
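
The core loop is straightforward to picture: search the web, collect supporting excerpts, then generate a long answer conditioned on them. A hypothetical sketch of that pipeline follows; `search` and `generate` are stand-ins, not WebGPT's actual browsing environment, which lets the model issue search and click commands step by step.

```python
def answer_with_browsing(question, search, generate, max_refs=4):
    """Sketch of browser-assisted LFQA: retrieve evidence first, then ask
    the language model for a paragraph-length answer that cites it."""
    results = search(question)[:max_refs]  # e.g. a list of (title, excerpt) pairs
    references = "\n".join(
        f"[{i + 1}] {title}: {excerpt}"
        for i, (title, excerpt) in enumerate(results)
    )
    prompt = (
        f"References:\n{references}\n\n"
        f"Question: {question}\n"
        "Write a paragraph-length answer, citing references like [1]."
    )
    return generate(prompt)
```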

Read more »

The whole community has recently been stunned by ChatGPT's conversational and coding abilities. I wrote a short post earlier analyzing how it works; the idea looks intuitive, but the gap between domestic dialogue systems and ChatGPT will not be closed in a day. The Zhihu answer below puts it well:

Why China Doesn't Have ChatGPT

ChatGPT: Optimizing Language Models for Dialogue

Since everyone is busy demonstrating what ChatGPT does well, let's go looking for its weaknesses instead.

Read more »

ChatGPT has gone viral lately, and a bunch of friends forwarded me all sorts of breathless articles asking what I think. ChatGPT uses the same model as InstructGPT; only the data collection differs. InstructGPT itself was proposed almost a year ago and has only now caught everyone's attention. In fact, quite a bit of this year's work already builds on InstructGPT's recipe for improving model quality: Diamante borrows the human-feedback idea but replaces the RL scheme with an extra loss function term, while WeLM uses human-written prompt templates to train a large language model.

Without further ado, let's see how the original InstructGPT beats much larger models. The paper is long, 68 pages, but the core idea is actually simple. (PS: these days, if you train a large model and don't write a 50+ page paper, you're not doing justice to all the money you burned!)

Training language models to follow instructions with human feedback

Aligning Language Models to Follow Instructions

InstructGPT points out that bigger models are not inherently better:

Making language models bigger does not inherently make them better at following a user’s intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user.

So InstructGPT uses human feedback to align the language model more closely with user intent:

We show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback.

The resulting 1.3B InstructGPT model is judged better than the 175B GPT-3 in human evaluations:

In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters.
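
Where does the human feedback actually enter? Per the paper's setup, after supervised fine-tuning (SFT) and reward-model training, the policy is optimized with PPO to maximize the learned reward minus a KL penalty that keeps it close to the SFT model. The snippet below sketches that per-sample reward; it illustrates the objective rather than OpenAI's implementation, and `beta` is just a tunable coefficient.

```python
def rlhf_reward(reward_model, prompt, reply, policy_logprob, sft_logprob, beta):
    """InstructGPT-style RL reward (sketch): learned reward minus a KL-style
    penalty, so the policy chases reward without drifting far from SFT."""
    r = reward_model(prompt, reply)          # scalar from the reward model
    kl_term = policy_logprob - sft_logprob   # log pi_RL(y|x) - log pi_SFT(y|x)
    return r - beta * kl_term
```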

Read more »

When I install elasticdump, the following error appears:

$ npm install elasticdump
...
npm WARN @1.0.0 No description
npm WARN @1.0.0 No repository field.
npm ERR! Linux 5.4.0-1091-azure
npm ERR! argv "/usr/bin/node" "/usr/bin/npm" "install" "elasticdump"
npm ERR! node v8.10.0
npm ERR! npm  v3.5.2
npm ERR! path /home/finisky/node_modules/.staging/@types/node-1f2b596d/package.json
npm ERR! code ENOTDIR
npm ERR! errno -20
npm ERR! syscall open

npm ERR! ENOTDIR: not a directory, open '/home/finisky/node_modules/.staging/@types/node-1f2b596d/package.json'
Read more »