dl2q.24xlarge vs dl1.24xlarge

Instance Type    | dl2q.24xlarge                                                | dl1.24xlarge
Instance Family  | Machine Learning ASIC                                        | Machine Learning ASIC
Details          | Deep learning / Machine Learning (ML) inference and training | Deep learning / Machine Learning (ML) inference and training
Name breakdown   | dl = Deep Learning                                           | dl = Deep Learning
                 | 2 = generation                                               | 1 = generation
                 | q = Qualcomm inference accelerator                           |
                 | 24xlarge = size                                              | 24xlarge = size
vCPUs            | 96                                                           | 96
Memory           | 768 GiB                                                      | 768 GiB
CPU Architecture | n/a                                                          | x86_64
Processor        | Intel Xeon Platinum 8259 (Cascade Lake)                      | Intel Xeon Platinum 8275L
Clock Speed      | n/a                                                          | 3 GHz
Threads Per Core | 0                                                            | 2
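The name breakdown above (family, generation, capability suffix, size) follows EC2's regular naming pattern, so it can be parsed mechanically. A minimal sketch, assuming names always take the form family letters + generation digits + optional suffix letters + "." + size (`parse_instance_type` is a hypothetical helper, not an AWS API):

```python
import re

def parse_instance_type(name: str) -> dict:
    """Split an EC2 instance type like 'dl2q.24xlarge' into its parts.

    Assumed pattern: <family letters><generation digits><optional
    capability letters>.<size>, e.g. dl2q.24xlarge, dl1.24xlarge.
    """
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z-]*)\.(\w+)", name)
    if not m:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, suffix, size = m.groups()
    return {
        "family": family,          # 'dl' = Deep Learning
        "generation": int(generation),
        "suffix": suffix,          # 'q' = Qualcomm inference accelerator
        "size": size,              # '24xlarge'
    }

print(parse_instance_type("dl2q.24xlarge"))
# {'family': 'dl', 'generation': 2, 'suffix': 'q', 'size': '24xlarge'}
```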
Accelerators
GPU                   | no      | yes
GPU Name              | n/a     | Gaudi HL-205
GPU Manufacturer      | n/a     | Habana
GPUs                  | n/a     | 8
GPU Memory            | n/a     | 32 GiB
GPU Total Memory      | n/a     | 256 GiB
Inference Accelerator | no      | no
Accelerator Memory    | 128 GiB | n/a
FPGA                  | no      | no
Instances
Sizes            | dl2q.24xlarge | dl1.24xlarge
Free Tier        | no            | no
Burstable        | no            | no
Hibernation      | no            | no
Instance Storage | EBS only      | 4 x 1000 GB NVMe SSD
EBS (Elastic Block Store)
EBS Optimized                  | default    | default
Encryption                     | supported  | supported
NVMe                           | required   | required
Dedicated Throughput           | 19000 Mbps | 19000 Mbps
Optimized Baseline Bandwidth   | n/a        | 19000 Mbps
Optimized Baseline Throughput  | n/a        | 2375 MB/s
Optimized Baseline Performance | n/a        | 80000 IOPS
Optimized Max Performance      | n/a        | 80000 IOPS
Optimized Max Bandwidth        | n/a        | 19000 Mbps
Optimized Max Throughput       | n/a        | 2375 MB/s
Networking
Networking Performance        | 100 Gigabit | 400 Gigabit
Enhanced Networking           | yes         | no
Max Interfaces                | n/a         | 60
Max Cards                     | n/a         | 4
IPv4 Addresses per Interface  | n/a         | 50
IPv6 Addresses per Interface  | n/a         | 50
IPv6 Support                  | no          | yes
ENA Support                   | required    | required
EFA Support                   | no          | yes
EFA Max Interfaces            | n/a         | 4
Encryption in Transit Support | no          | yes
ENA SRD Support               | no          | no

Avg Price Diff                | 1.00x       | 1.28x

Regional Prices

Geography | Region                            | dl2q.24xlarge | dl1.24xlarge
Europe    | Europe (Frankfurt) (eu-central-1) | 11.5952       | n/a
US        | US East (Virginia) (us-east-1)    | n/a           | 13.1090
US        | US West (Oregon) (us-west-2)      | 8.9194        | 13.1090
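The Avg Price Diff figures above are consistent with averaging each instance's listed regional prices and normalizing to the cheaper instance; a minimal sketch, assuming that is how the ratio is computed:

```python
# Reproduce the "Avg Price Diff" row from the regional prices above:
# average each instance's listed prices, then divide by the lower average.
dl2q_prices = [11.5952, 8.9194]   # eu-central-1, us-west-2
dl1_prices = [13.1090, 13.1090]   # us-east-1, us-west-2

dl2q_avg = sum(dl2q_prices) / len(dl2q_prices)  # 10.2573
dl1_avg = sum(dl1_prices) / len(dl1_prices)     # 13.1090

print(f"dl2q.24xlarge: {dl2q_avg / dl2q_avg:.2f}x")  # 1.00x
print(f"dl1.24xlarge:  {dl1_avg / dl2q_avg:.2f}x")   # 1.28x
```

Note the ratio is taken between each instance's own regional average, not region by region; in us-west-2 alone, where both are offered, dl1.24xlarge costs about 1.47x as much as dl2q.24xlarge.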