c8ine.2xlarge vs trn1.2xlarge
| c8ine.2xlarge | trn1.2xlarge |
|---|---|
| Compute optimized | Machine Learning ASIC |
| High-performance web servers, scientific modeling, batch processing, distributed analytics, HPC, machine/deep learning inference, ad serving, scalable multiplayer gaming, and video encoding.<br>Name breakdown: c – Compute optimized; 8 – Generation; i – Intel processor; n – Network and EBS optimized; e – Extra storage or memory; 2xlarge – Size | Machine Learning (ML) training.<br>Name breakdown: trn – AWS Trainium; 1 – Generation; 2xlarge – Size |
| 8 | 8 |
| 16 GiB | 32 GiB |
| x86_64 | x86_64 |
| Intel Xeon Scalable (Granite Rapids) | Intel Xeon Scalable (Icelake) |
| 3.9 GHz | 3.5 GHz |
| 2 | 2 |
| no | no |
| | 1 |
| no | no |
| | 32 GiB |
| no | no |
| c8ine.12xlarge, c8ine.2xlarge, c8ine.4xlarge, c8ine.8xlarge, c8ine.large, c8ine.xlarge | trn1.2xlarge, trn1.32xlarge |
| c8a.2xlarge, c8g.2xlarge, c8gb.2xlarge, c8gd.2xlarge, c8gn.2xlarge, c8i-flex.2xlarge, c8i.2xlarge, c8ib.2xlarge, c8id.2xlarge, c8in.2xlarge | |
| | ml.trn1.2xlarge |
| no | no |
| no | no |
| no | no |
| EBS only | 1 x 475 NVMe SSD |
| default | default |
| supported | supported |
| required | required |
| up to 10000 Mbps | up to 20480 Mbps |
| 2500 Mbps | 5000 Mbps |
| 312.5 MB/s | 625 MB/s |
| 12000 IOPS | 16250 IOPS |
| 40000 IOPS | 65000 IOPS |
| 10000 Mbps | 20000 Mbps |
| 1250 MB/s | 2500 MB/s |
| 12.5 Gbps | 12.5 Gbps |
| yes | yes |
| 4 | 4 |
| 1 | 1 |
| 30 | 15 |
| 30 | 15 |
| yes | yes |
| required | required |
| no | no |
| yes | yes |
| no | no |
| 1.00x | 1.82x |
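The EBS throughput figures in the table (312.5 and 625, and likewise 1250 and 2500) are megabytes per second: each is the corresponding bandwidth figure in Mbps divided by 8. A quick sanity check in Python, using the bandwidth values from the table above:

```python
# Sanity check: EBS throughput (MB/s) = EBS bandwidth (Mbps) / 8.
# Bandwidth values are taken from the comparison table above.
specs = {
    "c8ine.2xlarge": {"baseline_mbps": 2500, "max_mbps": 10000},
    "trn1.2xlarge": {"baseline_mbps": 5000, "max_mbps": 20000},
}

for name, s in specs.items():
    baseline_mb_s = s["baseline_mbps"] / 8
    max_mb_s = s["max_mbps"] / 8
    print(f"{name}: baseline {baseline_mb_s} MB/s, max {max_mb_s} MB/s")
# c8ine.2xlarge: baseline 312.5 MB/s, max 1250.0 MB/s
# trn1.2xlarge: baseline 625.0 MB/s, max 2500.0 MB/s
```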
Regional Prices (On Demand, USD per hour)
| Geography | Region | c8ine.2xlarge | trn1.2xlarge |
|---|---|---|---|
| Asia Pacific | Asia Pacific (Tokyo) (ap-northeast-1) | 0.8568 | n/a |
| US | US East (Ohio) (us-east-2) | n/a | 1.3438 |
| US | US East (Virginia) (us-east-1) | 0.6804 | 1.3438 |
| US | US West (Oregon) (us-west-2) | 0.6804 | 1.3438 |
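Assuming the rates above are hourly On-Demand prices, a rough monthly estimate can be sketched using the 730-hours-per-month averaging convention common on AWS pricing pages (the us-east-1 figures below are from the table; actual bills depend on usage and purchasing options):

```python
# Rough monthly cost from the hourly On-Demand rates in the table above.
# 730 hours/month is the averaging convention AWS pricing pages commonly use.
HOURS_PER_MONTH = 730

us_east_1 = {"c8ine.2xlarge": 0.6804, "trn1.2xlarge": 1.3438}

for name, hourly in us_east_1.items():
    print(f"{name}: ${hourly * HOURS_PER_MONTH:,.2f}/month")
# c8ine.2xlarge: $496.69/month
# trn1.2xlarge: $980.97/month
```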