L3 Lunaris 1.8B Turbo


L3 Lunaris 1.8B Turbo is a compact Llama 3-based generalist and roleplay model from Sao10K, part of the Lunaris series and optimized for fast inference. It offers an 8K context window and up to 8K output tokens, with pricing starting at $0.040 / 1M input tokens and $0.050 / 1M output tokens.
Spec
Canonical ID: sao10k-l3-lunaris-1-8b-turbo
Type: Language
Status: Active
Creator: Sao10K
Providers: DeepInfra
Context Window: 8K tokens
Max Output: 8K tokens
Input Modalities: Text
Output Modalities: Text
Parameters: 8B

Capabilities

Input (1/5): Text only (Image, Audio, Video, and PDF not supported)
Output (1/5): Text only (Image, Audio, Video, and Embedding not supported)
Capabilities (0/13): none supported (Reasoning, Adaptive Reasoning, Function Calling, Parallel Function Calling, Structured Outputs, Native JSON Schema, Web Search, URL Context, Computer Use, Code Execution, File Search, Prompt Caching, Assistant Prefill)

Pricing by Provider

Provider | Input $/1M | Output $/1M
DeepInfra (deepinfra/Sao10K/L3-8B-Lunaris-v1-Turbo) | $0.040 | $0.050
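Per-request cost at these rates is simple arithmetic: tokens × (price per 1M ÷ 1,000,000). A minimal sketch, using the DeepInfra rates from the table above (the function name is illustrative, not from the source):

```python
# Rates for L3 Lunaris 1.8B Turbo on DeepInfra, from the pricing table ($ / 1M tokens).
INPUT_PER_M = 0.040
OUTPUT_PER_M = 0.050

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-token rates."""
    return input_tokens * INPUT_PER_M / 1e6 + output_tokens * OUTPUT_PER_M / 1e6

# A maximal 8K-in / 8K-out request:
print(f"${estimate_cost(8_000, 8_000):.6f}")  # $0.000720
```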


Versions

Version | Released | Context | Input / 1M | Output / 1M | Status
Llama 3.2 11B | - | 128K | $0.160 | $0.160 | Available
Llama 3.2 11B Instruct | - | 128K | $0.350 | $0.350 | Deprecated
Llama 3.2 1B Instruct | - | 128K | $0.027 | $0.080 | Deprecated
Llama 3.2 3B Instruct | - | 131K | $0.015 | $0.020 | Deprecated
Llama 3.2 90B | - | 128K | $0.720 | $0.720 | Available
Llama 3.2 90B Instruct | - | 128K | $2.00 | $2.00 | Deprecated
Llama 3.2 1B | - | 131K | $0.100 | $0.100 | Available
Llama 3.2 3B | - | 131K | $0.040 | $0.080 | Available
Llama 3.1 405B Instruct | - | 131K | $0.120 | $0.300 | Deprecating
Llama 3.1 70B | - | 128K | $0.600 | $0.600 | Available
L3 Lunaris 1.8B Turbo | - | 8K | $0.040 | $0.050 | Current
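Because input and output rates differ, versions are easiest to compare on a blended $/1M price at an assumed traffic mix. A short sketch over a selection of rows from the table above (the 3:1 input:output ratio is an assumption, not from the source):

```python
# (input $/1M, output $/1M) for a few versions listed in the table above.
PRICES = {
    "Llama 3.2 11B": (0.160, 0.160),
    "Llama 3.2 3B Instruct": (0.015, 0.020),
    "Llama 3.2 1B Instruct": (0.027, 0.080),
    "L3 Lunaris 1.8B Turbo": (0.040, 0.050),
}

def blended(inp: float, out: float, input_share: float = 0.75) -> float:
    """Weighted $/1M tokens, assuming input_share of tokens are input."""
    return input_share * inp + (1 - input_share) * out

# Rank cheapest-first at a 3:1 input:output mix.
ranked = sorted(PRICES, key=lambda v: blended(*PRICES[v]))
for v in ranked:
    print(f"{v}: ${blended(*PRICES[v]):.5f} / 1M tokens")
```

Note that deprecated versions may cost less on paper but can disappear from providers; status matters as much as price.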

Model IDs