Amazon's cloud launches AI agents feature, turning AI into an assistant rather than just a chatbot
Source: The Paper
Reporter Shao Wen
Amazon Bedrock's Agents feature will enable companies to build AI applications that can automate specific tasks, such as making restaurant reservations, rather than just getting recommendations on where to eat.
"A lot of people are so focused on these models and the size of the models, but I think what really matters is how to build applications with them, and that's a big reason why we're releasing the Agents feature today."
At the New York Summit, one of Amazon Web Services' (AWS) annual summits, several announcements centered on generative artificial intelligence. "This technology has reached a tipping point," said Swami Sivasubramanian, AWS vice president of databases, analytics and machine learning.
On July 26, Eastern Time, at the New York Summit, AWS launched the Agents feature of Amazon Bedrock, its generative AI service, to help foundation models complete complex tasks. "This will allow companies to build AI applications that can automate specific tasks, such as making restaurant reservations, rather than just getting recommendations on where to eat," Sivasubramanian said.
In addition, AWS launched new artificial intelligence tools, including the general availability of the programming assistant Amazon CodeWhisperer; Amazon HealthScribe, a new healthcare service that generates clinical notes after patient visits; and Amazon Entity Resolution, an analytics service. It also announced the general availability of Amazon EC2 P5 instances for accelerating generative AI and high-performance computing applications.
Vasi Philomin, vice president of generative AI at AWS, told The Paper that among all the releases, the one he is most invested in and proudest of is the Agents feature. "A lot of people focus so much on these models and the size of the models, but I think what really matters is how you build applications out of them, and that's a big reason why we're releasing the Agents feature today."
AI Agent Competition
Generative AI models like OpenAI's GPT-4 or Meta's Llama 2 are powerful, but without additional help, such as plugins, they can't actually automate tasks for the user.
Amazon Bedrock offers a way to build generative AI applications using pre-trained models from startups as well as from AWS itself, without investing in servers. The Agents feature lets companies use their own data to customize foundation models and then build applications on top to complete tasks. The developer chooses which foundation model to use, provides some instructions, and selects which data the model reads.
This is similar to OpenAI's recently introduced plugin system for GPT-4 and ChatGPT, which extends the capabilities of models by letting them leverage third-party APIs and databases. In fact, there has recently been a trend toward "personalized" generative models, with startups such as Contextual AI building tools to augment models with enterprise data.
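The plugin pattern described above is, at its core, a dispatch loop: the model emits a structured "call this tool" request, and the runtime routes it to a registered function and returns the result. A minimal illustration in plain Python (the tool names and the canned model output here are invented for this sketch, not any real plugin API):

```python
import json

# Registry mapping tool names to plain Python functions.
# Both tools are hypothetical examples for illustration.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(model_output):
    """Route a model's structured tool request to the matching function."""
    request = json.loads(model_output)  # e.g. {"tool": "add", "args": [2, 3]}
    func = TOOLS[request["tool"]]
    return func(*request["args"])

# Pretend the model asked to call the "add" tool.
print(dispatch('{"tool": "add", "args": [2, 3]}'))  # 5
```

Real plugin systems add schema validation and feed the tool's result back to the model for a final answer, but the routing step works the same way.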
For example, a travel company could use generative artificial intelligence to provide travel recommendations, then build one agent to take in the user's travel history and interests, another agent to find flight schedules, and a final agent to book the selected flight.
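The travel workflow above amounts to chaining agents so each one's output feeds the next. A sketch in plain Python: the "agents" here are stub functions with invented data standing in for foundation-model calls, not the actual Bedrock Agents API.

```python
# Hypothetical chained-agent pipeline: profile -> flight search -> booking.
# All data and function names are illustrative stubs.

def profile_agent(history):
    """Derive traveler preferences from past trips (stub)."""
    return {"home": "JFK", "likes": "beach"}

def flight_agent(profile):
    """Find candidate flights matching the profile (stub data)."""
    return [{"flight": "AA100", "origin": profile["home"], "dest": "CUN"}]

def booking_agent(flights):
    """Book the first candidate flight (stub)."""
    return {"status": "booked", "flight": flights[0]["flight"]}

def run_trip_pipeline(history):
    """Chain the agents in order, passing each output to the next."""
    profile = profile_agent(history)
    flights = flight_agent(profile)
    return booking_agent(flights)

print(run_trip_pipeline(["Miami 2022", "Cancun 2023"]))
```

In the real Bedrock feature, each step would be a model invocation with instructions and data sources configured by the developer, but the orchestration shape is the same.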
AWS is not alone in its enthusiasm for agents. In April, Meta CEO Mark Zuckerberg told investors that the company has an opportunity to bring artificial intelligence agents "to billions of people in a useful and meaningful way." In July, OpenAI CEO Sam Altman discussed AI agents, and how they might best be implemented, in an interview with The Atlantic.
Reuters reported in July that the race to build "autonomous" artificial intelligence agents is sweeping Silicon Valley. It cited one startup, Inflection AI, which raised $1.3 billion in funding at the end of June. According to its founders, speaking on a podcast, the company is developing a personal assistant that it says can act as a mentor or handle tasks such as securing flight credits and hotel rebookings after travel delays.
On July 26, Sivasubramanian told reporters that customers such as Sony, Ryanair and Sun Life have tried Amazon Bedrock, and that the service will be available to all customers "soon." He declined to say when, adding that the company aims to address cost allocation and corporate controls first.
The Amazon Bedrock service launched in April, offering Amazon Titan, AWS's own foundation model, as well as models from Stability AI, AI21 Labs and Anthropic.
At this New York Summit, AWS announced the addition of Cohere as a foundation-model provider, along with the latest models from Anthropic and Stability AI. Cohere's Command text-generation model is trained to follow user prompts and return summaries, transcripts and conversations, and it can also extract information and answer questions.
AWS instances tap Nvidia's H100 chip
At the New York Summit, AWS also unveiled Amazon EC2 P5 instances powered by Nvidia's H100 chip, a milestone in the more than decade-long collaboration between AWS and Nvidia.
One of the H100 GPU's notable features is its optimization for the Transformer, the key architecture underlying large language models. Amazon EC2 P5 instances offer eight NVIDIA H100 GPUs with 640 GB of high-bandwidth GPU memory, 3rd-generation AMD EPYC processors, 2 TB of system memory, and 30 TB of local NVMe storage to accelerate generative AI and high-performance computing applications.
Amazon EC2 P5 instances reduce training time by up to six-fold (from days to hours) compared with previous-generation GPU-based instances. According to AWS, this performance boost will cut training costs by 40% relative to the previous generation.
In fact, AWS was the first cloud vendor to develop its own chips, starting with the first Amazon Nitro chip in 2013, and it now has three self-developed product lines: networking chips, server chips, and AI/machine-learning chips. In early 2023, it released the purpose-built Amazon Inferentia2, which supports distributed inference over ultra-high-speed direct chip-to-chip connections and models with up to 175 billion parameters, making it a strong contender for large-scale model inference.
Asked whether he was worried that offering Nvidia's H100 chips would make AWS's self-developed chips less attractive, Philomin told The Paper: "We welcome competition. Hardware gets better every few years; that's the norm. A big problem right now is that generative AI is quite expensive, which is why no one is actually putting it into production workloads; everyone is still in the experimental stage. Once they really put it into production workloads, they will realize where 90% of the cost comes from. The best case is that you don't lose money on every call, you actually make money. To achieve this, I think we need to compete."