inf2.24xlarge vs dl2q.24xlarge
| | inf2.24xlarge | dl2q.24xlarge |
|---|---|---|
| Instance family | Machine Learning ASIC | Machine Learning ASIC |
| Name breakdown | inf (AWS Inferentia) + 2 (generation) + 24xlarge (size); ML inference | dl (Deep Learning) + 2 (generation) + q (Qualcomm inference accelerator) + 24xlarge (size); DL/ML inference and training |
| vCPUs | 96 | 96 |
| Memory | 384 GiB | 768 GiB |
| CPU architecture | x86_64 | n/a |
| Physical processor | AMD EPYC 7R13 | Intel Xeon Platinum 8259 (Cascade Lake) |
| Clock speed | 3.6 GHz | |
| | 2 | 0 |
| | no | no |
| Accelerators | 6 | 8 |
| | yes | no |
| Accelerator type | Inferentia2 | |
| Accelerator vendor | AWS | |
| Accelerator count | 6 | |
| Total accelerator memory | 192 GiB | 128 GiB |
| | no | no |
| Sizes in family | inf2.24xlarge, inf2.48xlarge, inf2.8xlarge, inf2.xlarge | dl2q.24xlarge |
| | inf1.24xlarge, inf2.24xlarge | |
| SageMaker instance name | ml.inf2.24xlarge | |
| | no | no |
| | no | no |
| | no | no |
| Instance storage | EBS only | EBS only |
| EBS optimized | default | |
| | supported | |
| | required | |
| EBS bandwidth | 30720 Mbps | 19000 Mbps |
| Max EBS bandwidth | 30000 Mbps | |
| Max EBS throughput | 3750 MB/s | |
| Max EBS IOPS | 120000 IOPS | |
| Baseline EBS IOPS | 120000 IOPS | |
| Baseline EBS bandwidth | 30000 Mbps | |
| Baseline EBS throughput | 3750 MB/s | |
| Network performance | 50 Gigabit | 100 Gigabit |
| Enhanced networking | yes | yes |
| Max network interfaces | 15 | |
| Network cards | 1 | |
| IPv4 addresses per interface | 50 | |
| IPv6 addresses per interface | 50 | |
| | yes | no |
| | required | |
| | no | no |
| | yes | no |
| | no | no |
| | 1.00x | 1.21x |
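The accelerator-memory figures above can be cross-checked with simple arithmetic. A minimal sketch, assuming 32 GiB per Inferentia2 chip and 16 GiB per Qualcomm Cloud AI 100 accelerator (per-chip sizes come from public AWS/Qualcomm specs, not from this table):

```python
# Cross-check the "total accelerator memory" rows from per-chip figures.
# Per-chip sizes are assumptions based on public specs, not values from the table.
specs = {
    "inf2.24xlarge": {"chips": 6, "gib_per_chip": 32},  # AWS Inferentia2
    "dl2q.24xlarge": {"chips": 8, "gib_per_chip": 16},  # Qualcomm Cloud AI 100
}

# Total accelerator memory = number of chips x memory per chip.
totals = {name: s["chips"] * s["gib_per_chip"] for name, s in specs.items()}

for name, total in totals.items():
    print(f"{name}: {total} GiB total accelerator memory")
```

Both results (192 GiB and 128 GiB) agree with the table, which suggests the table reports the aggregate memory across all accelerators rather than per-chip memory.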
Regional Prices
| Geography | Region | inf2.24xlarge ($/hr) | dl2q.24xlarge ($/hr) |
|---|---|---|---|
| Europe | Europe (Frankfurt) (eu-central-1) | n/a | 11.5952 |
| US | US East (Ohio) (us-east-2) | 6.4906 | n/a |
| US | US East (Virginia) (us-east-1) | 6.4906 | n/a |
| US | US West (Oregon) (us-west-2) | 6.4906 | 8.9194 |
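US West (Oregon) is the only region in the table listing both instances, so it is the natural place to compare cost. A minimal sketch, assuming the table values are on-demand USD-per-hour rates and using AWS's common 730-hour-month approximation:

```python
# Compare on-demand cost in us-west-2, the only region listing both instances.
# Prices are taken from the table above; USD/hour is an assumption.
HOURS_PER_MONTH = 730  # common AWS approximation (24 * 365 / 12)

prices = {"inf2.24xlarge": 6.4906, "dl2q.24xlarge": 8.9194}

# Monthly cost and the relative price of dl2q over inf2.
monthly = {name: hourly * HOURS_PER_MONTH for name, hourly in prices.items()}
ratio = prices["dl2q.24xlarge"] / prices["inf2.24xlarge"]

for name in prices:
    print(f"{name}: ${prices[name]:.4f}/hr -> ${monthly[name]:,.2f}/month")
print(f"dl2q.24xlarge is {ratio:.2f}x the hourly price of inf2.24xlarge")
```

In us-west-2 the dl2q.24xlarge runs roughly 1.37x the hourly price of the inf2.24xlarge, so the trade-off is its larger host memory (768 GiB vs 384 GiB) and 100 Gigabit networking against the lower rate and wider regional availability of inf2.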