dl2q.24xlarge vs g7e.24xlarge
| dl2q.24xlarge | g7e.24xlarge |
|---|---|
| Machine Learning ASIC | GPU instance |
| Deep learning / Machine Learning (ML) inference and training (dl – Deep Learning, 2 – generation, q – Qualcomm inference accelerator, 24xlarge – size) | Machine Learning (ML), 3D visualizations, graphics-heavy remote workstations, 3D rendering, application streaming, video encoding, and server-side graphics tasks (g – GPU, 7 – generation, e – extra storage or memory, 24xlarge – size) |
| 96 | 96 |
| 768 GiB | 1024 GiB |
| n/a | x86_64 |
| Intel Xeon Platinum 8259 (Cascade Lake) | Intel Xeon Scalable (Emerald Rapids) |
| 2.4 GHz | |
| 0 | 2 |
| no | yes |
| RTX PRO Server 6000 | |
| NVIDIA | |
| 8 | 4 |
| 96 GiB | |
| 384 GiB | |
| no | no |
| 128 GiB | 384 GiB |
| no | no |
| dl2q.24xlarge | g7e.12xlarge, g7e.24xlarge, g7e.2xlarge, g7e.48xlarge, g7e.4xlarge, g7e.8xlarge |
| g6e.24xlarge, g7e.24xlarge | |
| ml.g7e.24xlarge | |
| no | no |
| no | no |
| no | no |
| EBS only | 2 x 3800 GB NVMe SSD |
| default | |
| supported | |
| required | |
| 19000 Mbps | 50000 Mbps |
| 50000 Mbps | |
| 6250 Mbps | |
| 200000 IOPS | |
| 200000 IOPS | |
| 50000 Mbps | |
| 6250 Mbps | |
| 100 Gigabit | 800 Gigabit |
| yes | yes |
| 20 | |
| 2 | |
| 64 | |
| 64 | |
| no | yes |
| required | |
| no | yes |
| 2 | |
| no | yes |
| no | yes |
| 1.00x | 1.80x |
Regional Prices
| Geography | Region | dl2q.24xlarge ($/hr) | g7e.24xlarge ($/hr) |
|---|---|---|---|
| Europe | Europe (Frankfurt) (eu-central-1) | 11.5952 | n/a |
| US | US East (Ohio) (us-east-2) | n/a | 16.5722 |
| US | US East (Virginia) (us-east-1) | n/a | 16.5722 |
| US | US West (Oregon) (us-west-2) | 8.9194 | 16.5722 |
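Assuming the regional prices above are on-demand hourly USD rates (the usual presentation for EC2 pricing tables), a quick sketch of what they imply per month in us-west-2, using AWS's 730-hour billing month. The `monthly_cost` helper is illustrative, not an AWS API:

```python
HOURS_PER_MONTH = 730  # AWS's standard 730-hour month convention

# On-demand hourly prices in us-west-2, taken from the table above
prices = {
    "dl2q.24xlarge": 8.9194,
    "g7e.24xlarge": 16.5722,
}

def monthly_cost(instance: str) -> float:
    """Approximate monthly on-demand cost for one always-on instance."""
    return prices[instance] * HOURS_PER_MONTH

for name in prices:
    print(f"{name}: ${monthly_cost(name):,.2f}/month")

# Relative hourly price of g7e.24xlarge vs dl2q.24xlarge in us-west-2
ratio = prices["g7e.24xlarge"] / prices["dl2q.24xlarge"]
print(f"g7e.24xlarge costs {ratio:.2f}x as much per hour")
```

Reserved-instance or spot pricing would change these figures substantially; this only extrapolates the listed on-demand rates.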