
Available Models

Built-in Models

These models can be downloaded directly from inside the app. Each model has variants for different chips — see Getting Started for how to pick one.

Model              Type   Clip Skip  Source
SDXL Base 1.0      SDXL   -          HuggingFace
Illustrious v16    SDXL   -          CivitAI
AnythingV5         SD1.5  2          CivitAI
ChilloutMix        SD1.5  1          CivitAI
Absolute Reality   SD1.5  2          CivitAI
QteaMix            SD1.5  2          CivitAI
CuteYukiMix        SD1.5  2          CivitAI

TIP

Refer to each model's original page for recommended prompts, samplers, and CFG values.

Pre-converted Community Models

Community-maintained collections of pre-converted NPU models are available for both SD1.5 and SDXL.

The suffix on each file indicates the minimum chip tier required to run that model. A higher-tier chip can run any model at or below its tier; pick the highest suffix your device supports for the best performance.

SD1.5:

  • _min — any Snapdragon NPU with Hexagon V68 or newer (non-flagship chips)
  • _8gen1 — Snapdragon 8 Gen 1 / 8+ Gen 1
  • _8gen2 — Snapdragon 8 Gen 2 or newer (2/3/4/5)

SDXL:

  • _8gen3 — Snapdragon 8 Gen 3 or newer (SDXL only supports 8 Gen 3/4/5, so this is the only SDXL suffix)
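The selection rule above can be sketched in a few lines. This is an illustrative helper (the names and tier ordering are assumptions, not part of the app): a device can run any model whose suffix tier is at or below its own, and the highest runnable suffix gives the best performance.

```python
# Ordered SD1.5 suffix tiers, lowest to highest (from the list above).
SD15_TIERS = ["_min", "_8gen1", "_8gen2"]

def best_suffix(device_tier, available):
    """Return the highest-tier suffix the device can run, or None.

    device_tier: the highest suffix the device's chip supports.
    available:   suffixes offered for the model in question.
    """
    max_rank = SD15_TIERS.index(device_tier)
    runnable = [s for s in available if SD15_TIERS.index(s) <= max_rank]
    return max(runnable, key=SD15_TIERS.index, default=None)

# An 8 Gen 2 phone offered all three variants picks the top tier:
best_suffix("_8gen2", ["_min", "_8gen1", "_8gen2"])  # -> "_8gen2"
# An 8 Gen 1 phone offered only _min and _8gen2 builds falls back:
best_suffix("_8gen1", ["_min", "_8gen2"])            # -> "_min"
```

SDXL needs no such helper, since `_8gen3` is its only suffix.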

Custom Models

You can also use your own checkpoints.

CPU/GPU mode (in-app)

Import any SD1.5 checkpoint directly in the app — conversion runs on-device. LoRA weights can be merged at import time.

NPU mode (host-side conversion)

NPU models must be converted on a Linux/WSL host before they can be loaded. See:

Technical Details

NPU

  • SDK: Qualcomm QNN SDK on the Hexagon NPU
  • Quantization: W8A16 static quantization
  • Resolution: SD1.5 fixed at 512×512 base; additional resolutions via patches. SDXL fixed at 1024×1024.
  • Performance: Extremely fast inference
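To make "W8A16 static" concrete, here is a minimal sketch of the idea (illustrative only, not the QNN SDK API): both the 8-bit weight scale and the 16-bit activation scale are fixed ahead of time from calibration data, so no ranges need to be computed at inference time.

```python
def make_scale(calib_values, qmax):
    """Symmetric per-tensor scale, chosen offline from calibration data."""
    return max(abs(v) for v in calib_values) / qmax

def quantize(values, scale, qmax):
    """Map floats to integers in [-qmax, qmax] via q = round(v / scale)."""
    return [max(-qmax, min(qmax, round(v / scale))) for v in values]

weights = [0.5, -1.27, 0.03]
w_scale = make_scale(weights, 127)          # int8 weights      (W8)
a_scale = make_scale([-4.0, 4.0], 32767)    # int16 activations (A16)

q_w = quantize(weights, w_scale, 127)       # -> [50, -127, 3]
q_a = quantize([1.0, -2.5], a_scale, 32767)
```

Because every scale is baked in at conversion time, the NPU runs pure fixed-point arithmetic, which is where the speed comes from.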

CPU/GPU

  • Framework: MNN
  • Quantization: W8 dynamic quantization
  • Resolution: 128×128, 256×256, 384×384, 512×512
  • Performance: Moderate speed, high compatibility
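The contrast with the NPU path's static scheme can be sketched as follows (illustrative only, not MNN's API): in dynamic quantization the weights are still quantized to int8 offline, but activation scales are derived at runtime from each incoming tensor, trading some speed for robustness to input ranges never seen during calibration.

```python
def dynamic_quantize(tensor, qmax=127):
    """Compute the scale from the live tensor itself, then quantize it."""
    scale = max(abs(v) for v in tensor) / qmax
    return [round(v / scale) for v in tensor], scale

acts = [0.2, -0.635, 0.1]
q, s = dynamic_quantize(acts)   # scale comes from this tensor alone
# q -> [40, -127, 20]
```

Skipping the calibration step is also what lets the in-app converter handle arbitrary SD1.5 checkpoints on-device.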