# Available Models
## Built-in Models
These models can be downloaded directly from inside the app. Each model has variants for different chips — see Getting Started for how to pick one.
| Model | Type | CPU/GPU | NPU | Clip Skip | Source |
|---|---|---|---|---|---|
| SDXL Base 1.0 | SDXL | ❌ | ✅ | - | HuggingFace |
| Illustrious v16 | SDXL | ❌ | ✅ | - | CivitAI |
| AnythingV5 | SD1.5 | ✅ | ✅ | 2 | CivitAI |
| ChilloutMix | SD1.5 | ✅ | ✅ | 1 | CivitAI |
| Absolute Reality | SD1.5 | ✅ | ✅ | 2 | CivitAI |
| QteaMix | SD1.5 | ✅ | ✅ | 2 | CivitAI |
| CuteYukiMix | SD1.5 | ✅ | ✅ | 2 | CivitAI |
> **TIP:** Refer to each model's original page for recommended prompts, samplers, and CFG values.
## Pre-converted Community Models
Community-maintained NPU model collections (for SD1.5 and SDXL):
The suffix on each file indicates the minimum chip tier required to run that model. A higher-tier chip can run any model at or below its tier; pick the highest suffix your device supports for the best performance.
SD1.5:
- `_min`: any Snapdragon NPU with Hexagon V68 or newer (non-flagship chips)
- `_8gen1`: Snapdragon 8 Gen 1 / 8+ Gen 1
- `_8gen2`: Snapdragon 8 Gen 2 or newer (Gen 2/3/4/5)
SDXL:
- `_8gen3`: Snapdragon 8 Gen 3 or newer (SDXL only supports 8 Gen 3/4/5, so this is the only SDXL suffix)
## Custom Models
You can also use your own checkpoints.
### CPU/GPU mode (in-app)
Import any SD1.5 checkpoint directly in the app — conversion runs on-device. LoRA weights can be merged at import time.
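Merging LoRA weights at import time folds the low-rank update into the base checkpoint, so inference pays no extra cost. A minimal NumPy sketch of the general idea (the shapes, the `alpha / rank` scaling convention, and the function name are generic LoRA conventions, not this app's actual import code — a real checkpoint contains many such weight pairs):

```python
import numpy as np

def merge_lora(base: np.ndarray, lora_a: np.ndarray, lora_b: np.ndarray,
               alpha: float, rank: int) -> np.ndarray:
    """Fold a low-rank LoRA update into a base weight matrix.

    base:   (out, in)   original checkpoint weight
    lora_a: (rank, in)  LoRA down-projection
    lora_b: (out, rank) LoRA up-projection
    """
    scale = alpha / rank  # common LoRA scaling convention
    return base + scale * (lora_b @ lora_a)

rng = np.random.default_rng(0)
w = rng.standard_normal((320, 320)).astype(np.float32)
a = rng.standard_normal((4, 320)).astype(np.float32)   # rank 4
b = rng.standard_normal((320, 4)).astype(np.float32)
merged = merge_lora(w, a, b, alpha=4.0, rank=4)
print(merged.shape)  # (320, 320) -- same shape as the base weight
```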
### NPU mode (host-side conversion)
NPU models must be converted on a Linux/WSL host before they can be loaded. See:
- SD1.5 Conversion Guide
- SDXL Conversion Guide (experimental)
## Technical Details
### NPU
- SDK: Qualcomm QNN SDK on the Hexagon NPU
- Quantization: W8A16 static quantization
- Resolution: SD1.5 fixed at 512×512 base; additional resolutions via patches. SDXL fixed at 1024×1024.
- Performance: Very fast inference (runs on dedicated Hexagon NPU hardware)
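With W8A16 static quantization, weight scales are fixed ahead of time (activation scales come from a calibration pass rather than at runtime). A toy sketch of the weight half — symmetric per-tensor int8 with a `max_abs / 127` scale, which is a generic scheme for illustration, not QNN's exact implementation:

```python
def quantize_w8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor int8 quantization: w_q = round(w / scale)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int8 codes."""
    return [x * scale for x in q]

q, s = quantize_w8([0.5, -1.27, 0.031, 1.0])
print(q)                  # [50, -127, 3, 100]
print(dequantize(q, s))   # close to the originals; small values lose precision
```

Note how `0.031` round-trips to `0.03`: quantization error concentrates in small-magnitude weights, which is why per-channel scales and calibration matter in real converters.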
### CPU/GPU
- Framework: MNN
- Quantization: W8 dynamic quantization
- Resolution: 128×128, 256×256, 384×384, 512×512
- Performance: Moderate speed, high compatibility
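W8 *dynamic* quantization differs from the NPU's static W8A16 scheme in that scales are computed on the fly from each live tensor's actual range, so no calibration pass is needed; that flexibility is part of why this path trades some speed for compatibility. A generic toy sketch (not MNN's internal code):

```python
def dynamic_quantize(activations: list[float]) -> tuple[list[int], float]:
    """Compute the int8 scale at runtime from the live tensor's range."""
    max_abs = max((abs(a) for a in activations), default=0.0)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Sketch only: a real kernel would also clamp and vectorize this.
    return [round(a / scale) for a in activations], scale

# Each input gets its own scale -- no calibration data required:
q1, s1 = dynamic_quantize([0.1, -0.2, 0.05])
q2, s2 = dynamic_quantize([10.0, -20.0, 5.0])
print(s1, s2)  # scales differ per input tensor
```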