According to a report by Webmaster's Home (ChinaZ) on August 1, researchers from Huawei Cloud, the Chinese Academy of Sciences, and Peking University recently proposed a new framework called RRTF (Rank Responses to align Test&Teacher Feedback), which can effectively improve the performance of pre-trained large language models (LLMs) on code generation. RRTF combines natural-language LLM alignment techniques with ranking feedback to boost code-generating LLMs. The research team also introduced the PanGu-Coder2 model, which achieved a strong 62.20% pass rate on the OpenAI HumanEval benchmark.

The study demonstrates the effectiveness of RRTF by applying it to StarCoder 15B, surpassing PanGu-Coder and achieving the best performance among all recorded code LLMs. A thorough analysis across three benchmarks (HumanEval, CoderEval, and LeetCode) suggests that code LLMs may be able to outperform natural-language models of equal or larger scale on code generation tasks. The research also highlights the value of high-quality data in improving a model's ability to follow instructions and write code.
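To make "ranking feedback" concrete, here is a minimal sketch of an RRHF-style ranking loss (RRTF's name echoes RRHF, "Rank Responses to align Human Feedback"): the model samples several responses per prompt, each response gets a score from test execution and a teacher model, and training pushes higher-scored responses to be more likely than lower-scored ones. All function names and the exact loss form below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sequence_logprob(logits, labels, pad_id):
    """Length-normalized log-probability of each sampled response.

    logits: (batch, seq_len, vocab); labels: (batch, seq_len).
    Pad tokens are masked out before averaging.
    """
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    mask = (labels != pad_id).float()
    return (token_logp * mask).sum(-1) / mask.sum(-1).clamp(min=1)

def rrtf_style_loss(logprobs, scores):
    """Illustrative ranking loss over k sampled responses to one prompt.

    logprobs: (k,) model log-probs for each response.
    scores:   (k,) feedback scores (e.g., unit-test pass rate plus a
              teacher-model rating) -- this scoring scheme is assumed.
    Penalizes the model whenever a lower-scored response is more likely
    than a higher-scored one, and fits the best response directly.
    """
    rank_loss = 0.0
    k = scores.shape[0]
    for i in range(k):
        for j in range(k):
            if scores[i] > scores[j]:
                # hinge: the higher-ranked response should have
                # a higher log-probability under the model
                rank_loss = rank_loss + torch.relu(logprobs[j] - logprobs[i])
    # supervised term: maximize likelihood of the best-scored response
    best = torch.argmax(scores)
    sft_loss = -logprobs[best]
    return rank_loss + sft_loss

# toy usage: 3 sampled responses to one prompt, with feedback scores
logprobs = torch.tensor([-1.2, -0.8, -2.0], requires_grad=True)
scores = torch.tensor([0.5, 1.0, 0.0])  # e.g., fraction of tests passed
loss = rrtf_style_loss(logprobs, scores)
loss.backward()
```

The appeal of a rank-based objective over, say, PPO-style RLHF is that it needs no reward model or on-policy rollout loop: responses can be scored offline by running tests, then the ranking loss is computed like ordinary supervised fine-tuning.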