Today, we are announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B200 instances, accelerated by NVIDIA Blackwell GPUs, to address customer needs for high performance and scalability in artificial intelligence (AI), machine learning (ML), and high performance computing (HPC) applications.
EC2 P6-B200 instances accelerate a broad range of GPU-enabled workloads, but are especially well suited for large-scale distributed training and inference for foundation models (FMs) with reinforcement learning (RL) and distillation, multimodal training and inference, and HPC applications such as insurance risk modeling.
Combined with Elastic Fabric Adapter (EFAv4) networking, hyperscale clustering with EC2 UltraClusters, and advanced virtualization and security capabilities from the AWS Nitro System, you can train and serve FMs with greater speed, scale, and security. These instances also deliver up to two times the performance for AI training (time to train) and inference (tokens per second) compared to EC2 P5en instances.
You can accelerate time-to-market for FMs and deliver faster inference throughput, which lowers inference costs, helps drive adoption of generative AI applications, and increases processing performance for HPC applications.
EC2 P6-B200 instance highlights
The new EC2 P6-B200 instances provide eight NVIDIA Blackwell GPUs with 1440 GB of high-bandwidth GPU memory, 5th Generation Intel Xeon Scalable processors (Emerald Rapids), 2 TiB of system memory, and 30 TB of local NVMe storage.
Here are the specifications for EC2 P6-B200 instances:
| Instance size | GPUs (NVIDIA B200) | GPU memory (GB) | vCPUs | GPU peer-to-peer (GB/s) | Instance storage (TB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| p6-b200.48xlarge | 8 | 1440 HBM3e | 192 | 1800 | 8 x 3.84 NVMe SSD | 8 x 400 | 100 |
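As a quick sanity check, the aggregate figures quoted for the instance follow from the per-component numbers in the table (a small illustrative script; the variable names are mine, not AWS terminology):

```python
# Derive the aggregate p6-b200.48xlarge figures from the table's per-component values.
gpus = 8
total_gpu_memory_gb = 1440           # HBM3e, across all eight B200 GPUs
nvme_drives, drive_size_tb = 8, 3.84 # local instance storage
network_links, link_gbps = 8, 400    # aggregate network bandwidth

per_gpu_memory_gb = total_gpu_memory_gb // gpus   # 180 GB of HBM3e per GPU
total_storage_tb = nvme_drives * drive_size_tb    # ~30 TB of local NVMe
total_network_gbps = network_links * link_gbps    # 3,200 Gbps aggregate

print(per_gpu_memory_gb, total_storage_tb, total_network_gbps)
# prints: 180 30.72 3200
```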
These instances feature up to a 125 percent improvement in GPU TFLOPs, a 27 percent increase in GPU memory size, and a 60 percent increase in GPU memory bandwidth compared to P5en instances.
P6-B200 instances in action
You can use P6-B200 instances in the US West (Oregon) AWS Region through EC2 Capacity Blocks for ML. To reserve your EC2 Capacity Blocks, choose Capacity Reservations on the Amazon EC2 console.

Choose Purchase Capacity Blocks for ML, then choose your total capacity and specify how long you need the EC2 Capacity Block for p6-b200.48xlarge instances. The total number of days that you can reserve EC2 Capacity Blocks is 1–14 days, 21 days, 28 days, or multiples of 7 from 7 to 182 days. You can choose the earliest start date up to 8 weeks in advance.
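The reservation-length rule above can be sketched as a small validity check (a minimal sketch based on the durations stated in this post; the function name is my own, not an AWS API):

```python
def is_valid_capacity_block_days(days: int) -> bool:
    """Check whether a requested duration (in days) matches the allowed
    EC2 Capacity Block lengths: 1-14 days, or multiples of 7 up to 182
    days (which covers the 21- and 28-day options)."""
    if 1 <= days <= 14:
        return True
    return days % 7 == 0 and 7 <= days <= 182

print(is_valid_capacity_block_days(5))    # True  (within 1-14 days)
print(is_valid_capacity_block_days(21))   # True  (multiple of 7)
print(is_valid_capacity_block_days(15))   # False (not a multiple of 7)
print(is_valid_capacity_block_days(189))  # False (beyond 182 days)
```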
Your EC2 Capacity Block will now be scheduled successfully. The total price of an EC2 Capacity Block is charged up front, and the price doesn't change after purchase. The payment will be billed to your account within 12 hours after you purchase the EC2 Capacity Block. To learn more, visit Capacity Blocks for ML in the Amazon EC2 User Guide.
You can use AWS Deep Learning AMIs (DLAMI) to run EC2 P6-B200 instances. DLAMI provides ML practitioners and researchers with the infrastructure and tools to quickly build scalable, secure, distributed ML applications in preconfigured environments.
To launch instances, you can use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.
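As an illustration of launching via the AWS SDKs, the sketch below assembles the `RunInstances` parameters for targeting a purchased Capacity Block with boto3. This is a minimal sketch under stated assumptions: the AMI and reservation IDs are hypothetical placeholders, and the helper function is my own, not part of the AWS API.

```python
# Sketch: build RunInstances parameters for launching a P6-B200 instance
# into a purchased EC2 Capacity Block. "ami-EXAMPLE" and "cr-EXAMPLE" are
# hypothetical placeholders -- substitute your own values before calling AWS.

def build_run_instances_params(ami_id: str, capacity_reservation_id: str) -> dict:
    """Assemble keyword arguments for EC2 RunInstances that target a
    Capacity Block capacity reservation."""
    return {
        "ImageId": ami_id,  # for example, an AWS Deep Learning AMI
        "InstanceType": "p6-b200.48xlarge",
        "MinCount": 1,
        "MaxCount": 1,
        # Capacity Blocks use the "capacity-block" instance market type.
        "InstanceMarketOptions": {"MarketType": "capacity-block"},
        "CapacityReservationSpecification": {
            "CapacityReservationTarget": {
                "CapacityReservationId": capacity_reservation_id
            }
        },
    }

params = build_run_instances_params("ami-EXAMPLE", "cr-EXAMPLE")
print(sorted(params))
# With AWS credentials configured, you would then call:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```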
You can integrate EC2 P6-B200 instances seamlessly with various AWS managed services such as Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Simple Storage Service (Amazon S3), and Amazon FSx for Lustre. Support for Amazon SageMaker HyperPod is also coming soon.
Now available
Amazon EC2 P6-B200 instances are available today in the US West (Oregon) Region and can be purchased as EC2 Capacity Blocks for ML.
Give Amazon EC2 P6-B200 instances a try in the Amazon EC2 console. To learn more, refer to the Amazon EC2 P6 instance page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.
– Channy