
BERT Large Uncased Whole Word Masking


BERT Large Uncased Whole Word Masking is Google's language model: a BERT Large Uncased encoder pretrained with whole-word masking, which masks complete words rather than individual WordPiece tokens, producing richer token representations for English text.
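A minimal sketch of the whole-word-masking idea described above (not the actual pretraining code): when any WordPiece of a word is selected for masking, all of its pieces ("##" continuation tokens) are masked together. The tokens and masking probability here are illustrative assumptions.

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, rng=None):
    """Mask complete words: '##' continuation pieces are always masked
    together with the other pieces of the word they belong to."""
    rng = rng or random.Random(0)
    # Group token indices into words; a new word starts at any token
    # that is not a '##' continuation piece.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    out = list(tokens)
    for word in words:
        if rng.random() < mask_prob:
            for i in word:
                out[i] = "[MASK]"
    return out

# "philharmonic" tokenizes into two pieces; either both or neither are masked.
print(whole_word_mask(["the", "phil", "##harmonic", "plays"], mask_prob=0.5))
```

Under token-level masking, "phil" could be masked while "##harmonic" stays visible, making the prediction trivially easy; masking at the word level is what forces the richer representations mentioned above.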
Spec
Canonical ID: google-bert-en-wwm-uncased-l-24-h-1024-a-16
Type: Language
Status: Active
Creator: Google
Input Modalities: Text
Output Modalities: Text
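The canonical ID encodes the architecture: L-24 transformer layers, H-1024 hidden size, A-16 attention heads (the standard BERT Large shape). As a rough sanity check, a sketch of the approximate parameter count; the vocabulary size (30,522), maximum positions (512), and 4×H feed-forward size are assumptions taken from the public BERT release, not stated on this page.

```python
# Approximate parameter count for the L-24 / H-1024 / A-16 shape in the
# canonical ID. Vocab, positions, and FFN size are assumed from the
# public BERT release; the MLM pretraining head is excluded.
L, H = 24, 1024
VOCAB, POS, TYPES, FFN = 30_522, 512, 2, 4 * H

embeddings = (VOCAB + POS + TYPES) * H + 2 * H  # token/pos/type tables + LayerNorm
attention = 4 * (H * H + H)                     # Q, K, V, and output projections
ffn = (H * FFN + FFN) + (FFN * H + H)           # two dense layers with biases
layer = attention + ffn + 2 * 2 * H             # plus two LayerNorms per layer
pooler = H * H + H

total = embeddings + L * layer + pooler
print(f"{total:,}")  # roughly 335M parameters
```

This lines up with the commonly quoted ~340M figure for BERT Large once the pretraining heads are included.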

Capabilities

Input (1/5)
Text ✓
Image ·
Audio ·
Video ·
PDF ·
Output (1/5)
Text ✓
Image ·
Audio ·
Video ·
Embedding ·
Capabilities (0/13)
Reasoning ·
Adaptive Reasoning ·
Function Calling ·
Parallel Function Calling ·
Structured Outputs ·
Native JSON Schema ·
Web Search ·
URL Context ·
Computer Use ·
Code Execution ·
File Search ·
Prompt Caching ·
Assistant Prefill ·

Versions

| Version | Released | Context | Input / 1M | Output / 1M | Status |
| --- | --- | --- | --- | --- | --- |
| BERT Base Uncased | — | — | — | — | Available |
| BERT 2 EN Uncased L-10 H-256 A-4 | — | — | — | — | Available |
| BERT 2 EN Uncased L-10 H-512 A-8 | — | — | — | — | Available |
| BERT 2 EN Uncased L-10 H-768 A-12 | — | — | — | — | Available |
| BERT 2 EN Uncased L-12 H-128 A-2 | — | — | — | — | Available |
| BERT 2 EN Uncased L-12 H-512 A-8 | — | — | — | — | Available |
| BERT 2 EN Uncased L-2 H-128 A-2 | — | — | — | — | Available |
| BERT 2 EN Uncased L-2 H-512 A-8 | — | — | — | — | Available |
| BERT 2 EN Uncased L-2 H-768 A-12 | — | — | — | — | Available |
| BERT Base Cased | — | — | — | — | Available |
| BERT Large Uncased Whole Word Masking | — | — | — | — | Current |

Model IDs