According to a report by QbitAI, researchers at Microsoft Research Asia (MSRA) have proposed a new large-model architecture, Retentive Network (RetNet), in the paper "Retentive Network: A Successor to Transformer for Large Language Models", positioning it as a successor to the Transformer for large language models. Experimental results on language modeling show that RetNet achieves perplexity comparable to the Transformer while delivering 8.4× faster inference and roughly 70% lower memory usage, and it scales well. Moreover, once the model grows beyond a certain size, RetNet outperforms the Transformer.
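The reported inference gains come from retention's dual form: it can be computed in parallel over the whole sequence during training, yet rewritten as a recurrence with a fixed-size state during decoding, so per-token cost and memory do not grow with context length. Below is a minimal single-head sketch in NumPy of these two equivalent forms. It is an illustration under simplifying assumptions, not the paper's implementation: it omits RetNet's rotation, gating, normalization, and multi-scale heads, and the function names, dimensions, and decay value `gamma=0.9` are chosen here for demonstration.

```python
# Minimal sketch of single-head "retention": parallel vs. recurrent form.
import numpy as np

def retention_parallel(Q, K, V, gamma):
    """Training-time parallel form: (Q K^T * D) V, where D[n, m] = gamma**(n-m)
    for n >= m and 0 otherwise (causal mask with exponential decay)."""
    idx = np.arange(Q.shape[0])
    D = np.tril(gamma ** (idx[:, None] - idx[None, :]))
    return (Q @ K.T * D) @ V

def retention_recurrent(Q, K, V, gamma):
    """Inference-time recurrent form: S_t = gamma * S_{t-1} + k_t^T v_t,
    out_t = q_t S_t. The state S is a fixed d x d matrix, so per-token
    compute and memory are O(1) in sequence length (no growing KV cache)."""
    S = np.zeros((K.shape[1], V.shape[1]))
    outs = []
    for q, k, v in zip(Q, K, V):
        S = gamma * S + np.outer(k, v)
        outs.append(q @ S)
    return np.stack(outs)

rng = np.random.default_rng(0)
T, d = 5, 4
Q, K, V = rng.normal(size=(3, T, d))
p = retention_parallel(Q, K, V, gamma=0.9)
r = retention_recurrent(Q, K, V, gamma=0.9)
print(np.allclose(p, r))  # True: both forms produce identical outputs
```

Because the recurrent state `S` stays a fixed-size matrix regardless of how many tokens have been generated, decoding avoids the Transformer's linearly growing key-value cache, which is consistent with the constant-memory, faster-inference claims cited above.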