dl2q.24xlarge vs i8ge.24xlarge
| Specification | dl2q.24xlarge | i8ge.24xlarge |
|---|---|---|
| Instance family | Machine Learning ASIC | Storage optimized |
| Typical workloads | Deep learning / machine learning (ML) inference and training | NoSQL databases (e.g. Cassandra, MongoDB, Redis), in-memory databases (e.g. Aerospike), scalable transactional databases, data warehousing, Elasticsearch, and analytics |
| Name breakdown | dl = Deep Learning, 2 = generation, q = Qualcomm inference accelerator, 24xlarge = size | i = IOPS / storage optimized, 8 = generation, g = AWS Graviton processor, e = extra storage or memory, 24xlarge = size |
| vCPUs | 96 | 96 |
| Memory | 768 GiB | 768 GiB |
| CPU architecture | n/a | arm64 |
| Physical processor | Intel Xeon Platinum 8259 (Cascade Lake) | AWS Graviton4 |
| Clock speed | 2.8 GHz | |
| Inference accelerators | 8 | |
| Accelerator memory | 128 GiB | |
| Instance sizes in family | dl2q.24xlarge | i8ge.large, i8ge.xlarge, i8ge.2xlarge, i8ge.3xlarge, i8ge.6xlarge, i8ge.12xlarge, i8ge.18xlarge, i8ge.24xlarge, i8ge.48xlarge, i8ge.metal-24xl, i8ge.metal-48xl |
| Similar instance | i8g.24xlarge | |
| Instance storage | EBS only | 8 x 7500 GB NVMe SSD |
| EBS optimization | default | |
| EBS bandwidth | 19,000 Mbps | 30,000 Mbps |
| Maximum EBS bandwidth | | 30,000 Mbps |
| Maximum EBS throughput | | 3,750 MB/s |
| Maximum EBS IOPS | | 120,000 |
| Baseline EBS IOPS | | 120,000 |
| Baseline EBS bandwidth | | 30,000 Mbps |
| Baseline EBS throughput | | 3,750 MB/s |
| Network performance | 100 Gigabit | 150 Gigabit |
| IPv4 addresses per interface | 50 | |
| IPv6 addresses per interface | 50 | |
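The EBS bandwidth and throughput figures above are two views of the same limit in different units; a quick sanity check in Python, assuming the standard conversion of 1 MB/s = 8 Mbps:

```python
def mbps_to_mbytes_per_s(mbps: float) -> float:
    """Convert megabits per second to megabytes per second (1 byte = 8 bits)."""
    return mbps / 8

# i8ge.24xlarge: 30,000 Mbps of EBS bandwidth equals 3,750 MB/s of throughput
print(mbps_to_mbytes_per_s(30000))  # 3750.0

# dl2q.24xlarge: 19,000 Mbps works out to 2,375 MB/s
print(mbps_to_mbytes_per_s(19000))  # 2375.0
```

The matching 30,000 Mbps / 3,750 MB/s pair confirms the bandwidth and throughput rows describe the same EBS limit.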
Regional Prices
| Geography | Region | dl2q.24xlarge | i8ge.24xlarge |
|---|---|---|---|
| Europe | Europe (Frankfurt) (eu-central-1) | 11.5952 | n/a |
| US | US East (Ohio) (us-east-2) | n/a | 11.3904 |
| US | US East (N. Virginia) (us-east-1) | n/a | 11.3904 |
| US | US West (Oregon) (us-west-2) | 8.9194 | 11.3904 |
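To put the regional rates in perspective, here is a minimal sketch that estimates monthly cost and the relative premium in us-west-2, assuming these are on-demand hourly USD rates and using a 730-hour month (both are assumptions, not stated in the table):

```python
HOURS_PER_MONTH = 730  # common approximation: 24 * 365 / 12

# Hourly rates for us-west-2 from the table above (USD assumed)
DL2Q_HOURLY = 8.9194
I8GE_HOURLY = 11.3904

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Estimated cost of running one instance continuously for a month."""
    return round(hourly_rate * hours, 2)

print(monthly_cost(DL2Q_HOURLY))  # 6511.16
print(monthly_cost(I8GE_HOURLY))  # 8314.99

# Relative premium of i8ge.24xlarge over dl2q.24xlarge in us-west-2
print(round(I8GE_HOURLY / DL2Q_HOURLY, 2))  # 1.28
```

Note that us-west-2 is the only region in the table where both instance types are available, so it is the only place a direct price comparison applies.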