According to a Webmaster's Home report on August 1, researchers from Huawei Cloud, the Chinese Academy of Sciences, and Peking University recently proposed a new framework called RRTF (Rank Responses to align Test & Teacher Feedback), which can effectively improve the code-generation performance of pre-trained large language models (LLMs). RRTF improves code-generating LLMs by combining natural-language LLM alignment techniques with ranking feedback. The research team also introduced the PanGu-Coder2 model, which achieved a strong 62.20% pass rate on the OpenAI HumanEval benchmark.

The study demonstrates the effectiveness of RRTF by applying the framework to StarCoder 15B, surpassing PanGu-Coder and achieving the best performance among all recorded code LLMs. A thorough analysis across three benchmarks (HumanEval, CoderEval, and LeetCode) suggests that code LLMs can outperform natural-language models of equal or larger scale on code-generation tasks. The research also highlights the value of high-quality data in improving a model's ability to follow instructions and write code.
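To make the "ranking feedback" idea concrete, below is a minimal sketch of what an RRTF-style training objective could look like, assuming it resembles the RRHF-like recipe the report describes: candidate responses to a prompt are ranked by test results and teacher feedback, and the model is pushed to assign higher length-normalized log-probability to higher-ranked responses, plus a supervised term on the best candidate. All function and variable names here are illustrative, not the authors' implementation.

```python
# Hedged sketch of an RRTF-style ranking objective (illustrative, not the
# official PanGu-Coder2 code). Assumes candidates are already ranked, with
# lower rank value = better (e.g. passed unit tests / preferred by teacher).
import torch
import torch.nn.functional as F


def sequence_log_prob(logits: torch.Tensor, tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Length-normalized log-probability of each candidate under the model.

    logits: (num_responses, seq_len, vocab)  model outputs for each candidate
    tokens: (num_responses, seq_len)         candidate token ids
    mask:   (num_responses, seq_len)         1 for response tokens, 0 for prompt/padding
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    return (token_log_probs * mask).sum(-1) / mask.sum(-1).clamp(min=1)


def rrtf_style_loss(logits, tokens, mask, ranks, ce_weight: float = 1.0) -> torch.Tensor:
    """Pairwise rank loss over candidates + cross-entropy on the top-ranked one."""
    scores = sequence_log_prob(logits, tokens, mask)  # (num_responses,)

    # Rank loss: penalize any pair where a worse-ranked response scores higher.
    rank_loss = torch.zeros((), device=scores.device)
    n = scores.size(0)
    for i in range(n):
        for j in range(n):
            if ranks[i] < ranks[j]:  # candidate i should beat candidate j
                rank_loss = rank_loss + F.relu(scores[j] - scores[i])

    # Supervised fine-tuning term on the best-ranked candidate.
    best = int(torch.argmin(ranks))
    ce = -sequence_log_prob(logits[best:best + 1], tokens[best:best + 1], mask[best:best + 1]).mean()

    return rank_loss + ce_weight * ce
```

In this sketch the ranking term only needs relative orderings, which is why test outcomes and teacher preferences can both feed into it without being converted to a calibrated reward.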
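The 62.20% figure on HumanEval is a pass rate of the pass@k family commonly used for code benchmarks. As a generic reference (not the PanGu-Coder2 evaluation harness itself), the standard unbiased estimator from the Codex paper (Chen et al., 2021) can be computed as follows:

```python
# pass@k: probability that at least one of k sampled completions passes the
# unit tests, estimated from n generated completions of which c passed.
import numpy as np


def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # too few failing samples to fill a k-sample draw with failures
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))


# Example: 200 samples per problem, 50 of them pass -> pass@1 ≈ 0.25
print(round(pass_at_k(n=200, c=50, k=1), 4))
```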