m8id.24xlarge vs dl2q.24xlarge
| | m8id.24xlarge | dl2q.24xlarge |
|---|---|---|
| Family | General purpose | Machine Learning ASIC |
| Use cases and naming | Small to medium size databases, data processing, caching, backends. m = multi-purpose, 8 = generation, i = Intel processor, d = instance store volumes, 24xlarge = size | Deep learning / machine learning (ML) inference and training. dl = deep learning, 2 = generation, q = Qualcomm inference accelerator, 24xlarge = size |
| vCPUs | 96 | 96 |
| Memory | 384 GiB | 768 GiB |
| | x86_64 | n/a |
| Processor | Intel Xeon Scalable (Granite Rapids) | Intel Xeon Platinum 8259 (Cascade Lake) |
| Clock speed | 3.9 GHz | |
| | 2 | 0 |
| | no | no |
| | 8 | |
| | no | no |
| | 128 GiB | |
| | no | no |
| Other sizes in family | m8id.12xlarge, m8id.16xlarge, m8id.24xlarge, m8id.2xlarge, m8id.32xlarge, m8id.48xlarge, m8id.4xlarge, m8id.8xlarge, m8id.96xlarge, m8id.large, m8id.metal-48xl, m8id.metal-96xl, m8id.xlarge | dl2q.24xlarge |
| | m6id.24xlarge, m8id.24xlarge | |
| | m8a.24xlarge, m8azn.24xlarge, m8g.24xlarge, m8gb.24xlarge, m8gd.24xlarge, m8gn.24xlarge, m8i.24xlarge | |
| | no | no |
| | no | no |
| | no | no |
| Instance storage | 2 x 2850 GB NVMe SSD | EBS only |
| | default | |
| | supported | |
| | required | |
| | 30000 Mbps | 19000 Mbps |
| | 30000 Mbps | |
| | 3750 Mbps | |
| | 120000 IOPS | |
| | 120000 IOPS | |
| | 30000 Mbps | |
| | 3750 Mbps | |
| Network performance | 40 Gigabit | 100 Gigabit |
| | yes | yes |
| | 16 | |
| | 1 | |
| | 64 | |
| | 64 | |
| | yes | no |
| | required | |
| | no | no |
| | yes | no |
| | yes | no |
| | 1.00x | 1.49x |
Regional Prices
| Geography | Region | m8id.24xlarge | dl2q.24xlarge |
|---|---|---|---|
| Europe | Europe (Frankfurt) (eu-central-1) | n/a | 11.5952 |
| Hidden | Hidden (hidden-1) | 6.9854 | n/a |
| Hidden | Hidden (hidden-2) | 7.5398 | n/a |
| Hidden | Hidden (hidden-3) | 8.0942 | n/a |
| US | US East (Ohio) (us-east-2) | 6.2650 | n/a |
| US | US East (Virginia) (us-east-1) | 6.2650 | n/a |
| US | US West (Oregon) (us-west-2) | 6.2650 | 8.9194 |
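Since the two instance types are only co-listed in some regions, a small sketch can make the price comparison concrete. This is a minimal example using only the named regions and rates copied from the Regional Prices table above (the "Hidden" rows are omitted); the price unit is assumed to be USD per hour.

```python
# On-demand hourly rates copied from the Regional Prices table above.
# None marks a region where the table shows "n/a".
m8id_prices = {
    "eu-central-1": None,
    "us-east-2": 6.2650,
    "us-east-1": 6.2650,
    "us-west-2": 6.2650,
}
dl2q_prices = {
    "eu-central-1": 11.5952,
    "us-west-2": 8.9194,
}

# Compare the two types only in regions where both are listed.
common = {r for r, p in m8id_prices.items() if p is not None} & set(dl2q_prices)
for region in sorted(common):
    ratio = dl2q_prices[region] / m8id_prices[region]
    print(f"{region}: dl2q.24xlarge costs {ratio:.2f}x the m8id.24xlarge rate")
```

In us-west-2, the only region where both types are listed, dl2q.24xlarge works out to roughly 1.42x the m8id.24xlarge rate (8.9194 / 6.2650).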