BERT Large Uncased Whole Word Masking is Google's BERT Large encoder, pretrained on lowercased (uncased) English text with whole-word masking: when a word is selected for masking, all of its WordPiece subtokens are masked together rather than independently, which forces the model to predict complete words and yields richer token representations.
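As a concrete illustration, here is a minimal fill-mask sketch using the Hugging Face Transformers library; it assumes the model is published there under the checkpoint name `bert-large-uncased-whole-word-masking` (the platform this page describes may expose it differently).

```python
from transformers import pipeline

# Assumed Hugging Face checkpoint name for this model.
fill = pipeline("fill-mask", model="bert-large-uncased-whole-word-masking")

# BERT predicts the token hidden behind [MASK] from both directions of context.
for pred in fill("The capital of France is [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```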
Capabilities

Input: text (1 of 5 listed input types). Output: text (1 of 5 listed output types). No extended capabilities (0 of 13) are listed for this model.
Versions
| Version | Released | Context | Input / 1M tokens | Output / 1M tokens | Status |
|---|---|---|---|---|---|
| BERT Base Uncased | – | – | – | – | Available |
| BERT 2 EN Uncased L-10 H-256 A-4 | – | – | – | – | Available |
| BERT 2 EN Uncased L-10 H-512 A-8 | – | – | – | – | Available |
| BERT 2 EN Uncased L-10 H-768 A-12 | – | – | – | – | Available |
| BERT 2 EN Uncased L-12 H-128 A-2 | – | – | – | – | Available |
| BERT 2 EN Uncased L-12 H-512 A-8 | – | – | – | – | Available |
| BERT 2 EN Uncased L-2 H-128 A-2 | – | – | – | – | Available |
| BERT 2 EN Uncased L-2 H-512 A-8 | – | – | – | – | Available |
| BERT 2 EN Uncased L-2 H-768 A-12 | – | – | – | – | Available |
| BERT Base Cased | – | – | – | – | Available |
| BERT Large Uncased Whole Word Masking | – | – | – | – | Current |
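In the variant names, L is the number of transformer layers, H the hidden size, and A the number of attention heads. As a hedged sketch, assuming the smaller variants correspond to the checkpoints mirrored on Hugging Face under names like `google/bert_uncased_L-2_H-128_A-2` (this platform may expose them under different identifiers), the naming can be confirmed from the model config:

```python
from transformers import AutoConfig, AutoModel

# Assumed checkpoint name for the "BERT 2 EN Uncased L-2 H-128 A-2" row;
# adjust to however this platform actually exposes the variant.
name = "google/bert_uncased_L-2_H-128_A-2"

config = AutoConfig.from_pretrained(name)
print(config.num_hidden_layers)    # 2   -> L (layers)
print(config.hidden_size)          # 128 -> H (hidden size)
print(config.num_attention_heads)  # 2   -> A (attention heads)

# The checkpoint is a plain encoder with no task head; attach a
# task-specific head to fine-tune it for classification, QA, etc.
model = AutoModel.from_pretrained(name)
```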