open-design
Repository with 20 AI design skill templates for Claude Code and AI agents. Covers audio, documents, dashboards, web pages, and presentations. Each SKILL.md defines workflow, model routing, and dispatch commands.
This rank signal uses GitHub stars, measured star growth, and recent maintenance. It is not a safety score or install approval.
Worth reviewing before you install
Worth a closer look if the use case fits. It has adoption, measured growth, and recent maintenance. Install notes are available, but you should still inspect the source.
Teams wanting ready-made AI agent skill templates for design production work.
Inspect design-templates/audio-jingle/SKILL.md and the install command before adding it to a shared agent workflow. No actionable warning was returned for this snapshot.
Compare nearby skills in the AI Agent, Claude Code, Codex channel when the GitHub star count (38,675 at this snapshot), source freshness, or install notes are close. This one has a clearer install path, but a nearby skill may still fit your agent setup better.
How to install open-design
| **Imports** | Drop a [Claude Design][cd] export ZIP onto the welcome dialog — `POST /api/import/claude-design` parses it into a real project so your agent can keep editing where Anthropic left off |
| **Lifecycle** | One entry point: `pnpm tools-dev` (start / stop / run / status / logs / inspect / check) — boots daemon + web (+ desktop) under typed sidecar stamps |

2 · Skills are files, not plugins.

SKILL.md and source review
Primary path: design-templates/audio-jingle/SKILL.md
91/100 from GitHub star count, star growth rate, and recent update.
41.3/45 points. Star count is log-scaled so large repos lead without completely hiding newer projects.
30/35 points from 969 net stars over 7.1 observed day(s).
20/20 points. Most recent GitHub activity: 2026-05-13T06:35:23Z.
- install/adoption evidence is present
- README can support real page analysis
- Actual SKILL.md hard gate passed.
- GitHub stars hard gate passed (37,706 >= 500).
Source evidence preview
We show selected README/SKILL.md excerpts, not a full mirror of the repo. Use the focus cards for install notes, usage, and skill rules, then open GitHub before installing.
Command extracted from README.md.
| **Imports** | Drop a [Claude Design][cd] export ZIP onto the welcome dialog — `POST /api/import/claude-design` parses it into a real project so your...

Sections found: Workflow.
Quickstart
Download the desktop app (no build required)
The fastest way to try Open Design is the prebuilt desktop app — no Node, no pnpm, no clone:
- **open-design.ai** — official download page
- **GitHub releases**
Run with Docker
Run Open Design without installing Node.js or pnpm locally.
Requirements
- Docker Desktop
- Docker Compose v2
Verify Docker:
docker compose version

Start Open Design
git clone https://github.com/nexu-io/open-design.git
cd open-design/deploy
docker compose up -d

Open in your browser:
http://localhost:7456

Common Commands
Need the full source? Read full README on GitHub
nexu-io/open-design contains 20 observed SKILL.md files. Treat it as a repository-level skill/template collection, not as only the sampled primary skill.
Sample SKILL.md paths: design-templates/audio-jingle/SKILL.md, design-templates/blog-post/SKILL.md, design-templates/clinical-case-report/SKILL.md, design-templates/critique/SKILL.md, design-templates/dashboard/SKILL.md, design-templates/dating-web/SKILL.md, design-templates/dcf-valuation/SKILL.md, design-templates/digital-eguide/SKILL.md, design-templates/docs-page/SKILL.md, design-templates/email-marketing/SKILL.md, design-templates/eng-runbook/SKILL.md, design-templates/finance-report/SKILL.md, design-templates/flowai-live-dashboard-template/SKILL.md, design-templates/gamified-app/SKILL.md, design-templates/github-dashboard/SKILL.md, design-templates/guizang-ppt/SKILL.md, design-templates/hr-onboarding/SKILL.md, design-templates/html-ppt-course-module/SKILL.md, design-templates/html-ppt-dir-key-nav-minimal/SKILL.md, design-templates/html-ppt-graphify-dark-graph/SKILL.md
Fetched primarySkillPath is only one sample: design-templates/audio-jingle/SKILL.md.
Sample primarySkillPath excerpt: # Audio Jingle Skill
Three sub-modes. The active project's audioKind decides which one runs:
| audioKind | Models we route to | Plan focus |
|---|---|---|
music | Suno V5 (default), Udio, Lyria 2 | genre + tempo + instrumentation |
speech | MiniMax TTS (default), Fish, ElevenLabs V3 | script + voice + pacing |
sfx | ElevenLabs SFX (default), AudioCraft | texture + impact + duration |
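The routing table above can be sketched as a small default-model lookup. Only minimax-tts is named elsewhere in this skill; the other identifiers below are illustrative guesses, so check the daemon's accepted model values before relying on them:

```shell
#!/bin/sh
# Hypothetical default-model lookup keyed by audioKind. Identifiers
# other than minimax-tts are assumptions for illustration only.
default_model() {
  case "$1" in
    music)  echo "suno-v5" ;;
    speech) echo "minimax-tts" ;;
    sfx)    echo "elevenlabs-sfx" ;;
    *)      echo "unknown audioKind: $1" >&2; return 1 ;;
  esac
}

default_model music    # prints: suno-v5
```

The project metadata can still override the default via audioModel; this only covers the "(default)" column of the table.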
Resource map
audio-jingle/
├── SKILL.md
└── example.html

Workflow
Step 0 — Read the project metadata
Read audioKind, audioModel, audioDuration (seconds), and (for speech) voice. Branch by audioKind and use the values verbatim — no clarifying form unless something is marked (unknown — ask).
Important: voice is provider-specific. For minimax-tts, --voice must be a valid MiniMax voice_id (for example male-qn-qingse), not a natural-language description. If you only have a prose voice brief ("warm female narrator", "neutral Mandarin"), keep that in your plan but omit --voice so the daemon's default voice id applies, or ask the user to choose a specific id.
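That voice rule can be sketched as a tiny guard. The helper name and the heuristic are ours, not part of the skill: a MiniMax voice_id such as male-qn-qingse contains no spaces, while a prose brief does, so anything with whitespace drops the flag and falls back to the daemon default.

```shell
#!/bin/sh
# Sketch: emit --voice only for something shaped like a voice_id.
# Whitespace means a prose brief ("warm female narrator"), so the
# flag is omitted and the daemon's default voice id applies.
voice_args() {
  case "$1" in
    ""|*" "*) ;;                              # empty or prose: no flag
    *)        printf -- '--voice %s' "$1" ;;  # looks like an id
  esac
}

voice_args "male-qn-qingse"        # prints: --voice male-qn-qingse
voice_args "warm female narrator"  # prints nothing
```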
Step 1 — Plan
Music
- Genre + reference artists (1-2)
- Tempo (BPM) + key
- Instrumentation (3-5 instruments max)
- Vocals: yes / no / hummed / choir
- Mood arc (intro → chorus → outro)
Speech
- Script (final, not draft — TTS runs verbatim)
- Voice target + pacing (for MiniMax this means a real voice_id, not prose in `--voice`)
- Pronunciation hints for proper nouns / acronyms
SFX
- Texture (impact / whoosh / ambience / foley)
- Duration + envelope (sharp attack vs. gentle swell)
- Layering note (single hit vs. stacked)
State the plan in 2-3 sentences before dispatching.
Step 2 — Compose the prompt
Use the format the upstream model prefers. Bind audioDuration to the API parameter directly; never put "make it 30 seconds" in prose.
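A minimal sketch of that binding: the duration rides the CLI parameter, and the prompt text stays duration-free. Variable names mirror the metadata fields; the rest is illustrative.

```shell
#!/bin/sh
# audioDuration becomes a flag value; the prompt carries no
# "make it 30 seconds" prose.
audioDuration=30
prompt="Upbeat ukulele jingle, bright and bouncy"

args="--surface audio --duration ${audioDuration}"
echo "$args"   # prints: --surface audio --duration 30
```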
Step 3 — Dispatch via the media contract
Use the unified dispatcher — do not call provider APIs by hand:
"$OD_NODE_BIN" "$OD_BIN" media generate \
--project "$OD_PROJECT_ID" \
--surface audio \
--audio-kind "<music|speech|sfx>" \
--model "<audioModel from metadata>" \
--duration <audioDuration seconds> \
[--voice "<provider voice id (speech only)>"] \
--output "<short-slug>-<duration>s.mp3" \
--prompt "<assembled prompt from Step 2 — for speech, the literal script>"

The command prints one line of JSON: {"file": {"name": "...", ...}}. The bytes land in the project; the FileViewer renders the audio transport controls automatically.
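Since the dispatcher prints exactly one JSON line, the returned filename can be pulled out with a small sed filter. The sample JSON below is simulated to match the documented shape, not real dispatcher output:

```shell
#!/bin/sh
# Simulated dispatcher output, matching {"file": {"name": "...", ...}}.
out='{"file": {"name": "summer-promo-30s.mp3", "size": 480213}}'

# Extract file.name; a JSON-aware tool like jq would be sturdier if
# it is available in the agent environment.
name=$(printf '%s' "$out" | sed -n 's/.*"name": *"\([^"]*\)".*/\1/p')
echo "$name"   # prints: summer-promo-30s.mp3
```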
Step 4 — Hand off
Reply with: plan summary, the filename returned by the dispatcher, and one sentence on what to try if the user wants a variation (e.g. "swap tempo from 92 to 108 BPM" rather than "make it different").
Hard rules
- TTS runs your script literally. Proof it before dispatching — even one stray comma changes the cadence.
- MiniMax TTS rejects free-form voice prose in `--voice`. Use a real MiniMax voice_id (for example male-qn-qingse) or omit the flag and let the daemon's default voice apply.
- Music: under 30s = single section; 30–90s = intro + body; 90s+ = full arc. Don't try to fit a 3-act song into 15 seconds.
- SFX: prefer one well-described layer over a paragraph of "make it cool" — generators reward specific texture words.
- Save the file every turn. The audio viewer shows transport controls the moment the file lands.
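The music-length rule maps cleanly to a lookup. This is a sketch; the rule leaves the exact 90-second boundary ambiguous, and here it is treated as the top of the intro + body band:

```shell
#!/bin/sh
# Structural plan from audioDuration (seconds), per the music rule.
music_structure() {
  if   [ "$1" -lt 30 ]; then echo "single section"
  elif [ "$1" -le 90 ]; then echo "intro + body"
  else                       echo "full arc"
  fi
}

music_structure 15    # prints: single section
music_structure 60    # prints: intro + body
music_structure 120   # prints: full arc
```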
Need the full source? Read full SKILL.md on GitHub
