According to a January 15, 2024 report by IT House, Google Research recently used its own BIG-Bench benchmark to build a dataset called "BIG-Bench Mistake" and ran a series of evaluations of how often popular large language models make reasoning errors and how well they can correct them.

The researchers explained that because no prior dataset could measure the "error probability" and "self-correction ability" of large language models, they created this dedicated benchmark. To build it, they first ran the PaLM language model on five tasks from BIG-Bench, then modified the generated chain-of-thought traces to inject logical errors, and finally fed the altered traces back to models, asking them to identify where the reasoning went wrong.

Google's researchers claim that the BIG-Bench Mistake dataset helps improve models' self-correction ability, and that models fine-tuned on the mistake-finding task "generally perform better than large models with zero-shot prompts."
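The evaluation described above boils down to mistake localization: each chain-of-thought trace is annotated with the index of its first erroneous step (or none, if the trace is correct), and a model is scored on whether it points to that exact step. A minimal sketch of such a scoring function is shown below; the function name and data layout are illustrative assumptions, not Google's actual API.

```python
# Hedged sketch of a mistake-location metric in the spirit of BIG-Bench Mistake.
# Each trace is labeled with the index of its first erroneous reasoning step,
# or None if the trace contains no mistake. A prediction counts as correct
# only if it names exactly the same step (or correctly says "no mistake").
# All names here are illustrative, not part of any official codebase.

from typing import Optional, Sequence


def mistake_location_accuracy(
    gold: Sequence[Optional[int]],
    predicted: Sequence[Optional[int]],
) -> float:
    """Fraction of traces where the predicted first-error step matches the
    annotated one (None meaning 'no mistake in this trace')."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must align one-to-one")
    if not gold:
        return 0.0
    hits = sum(g == p for g, p in zip(gold, predicted))
    return hits / len(gold)


# Toy example with four traces: two located correctly, one wrong step,
# one false "no mistake" verdict.
gold = [2, None, 0, 3]
predicted = [2, None, 1, None]
print(mistake_location_accuracy(gold, predicted))  # → 0.5
```

Exact-match on the step index is a deliberately strict criterion: a model that merely flags "something is wrong" without locating the faulty step gets no credit, which matches the article's framing of testing where the chain-of-thought trace contains errors.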
