The Problem: Context Costs Tokens
Anyone integrating large language models like Claude or ChatGPT into data-driven workflows faces a fundamental challenge: the AI doesn't know your data structure. Every API call must include the complete schema – field names, types, nesting, formats, dependencies. For a template with 40–60 fields, that's easily 2,000–5,000 tokens just for context, before the actual question is even asked.
For a batch processing run of 500 products, that means 1–2.5 million tokens solely for repeated schema descriptions. That costs money, slows processing, and is error-prone – because who ensures the prompt exactly matches the current template?
The math is simple: sending the data model with every request means paying multiple times – in tokens, in latency, and in maintenance effort. Storing it once as a Skill means paying once.
The Solution: One Click, One Skill
publixx analyzes the loaded template completely – every element, every data binding, every visibility rule, every formatting rule – and generates a structured AI Skill in Claude-compatible format. The export contains not just the data schema, but the semantic context: which field controls which element, which fields trigger rules, what real example data looks like.
The result: Claude knows your data model before you ask the first question. The difference between "let me explain my data structure for 10 minutes" and "just start."
What the Skill Contains
The exported Skill is not a flat field list. It contains the semantic context of the template – the information a human would need to understand the data structure and work with it correctly:
| Component | Content | Value for the AI |
|---|---|---|
| schema.md | All fields with type, nesting, example values, and usage context | The AI knows that price is a number, image is a URL, and specs.power is a nested object with value and unit |
| template.md | All template elements with position, size, data binding, visibility rules, formatting rules | The AI knows which field appears in which element, which fields trigger rules, and how the layout is structured |
| examples.md | Real example datasets as JSON | The AI sees not just theory but actual data – and can derive patterns, formats, and conventions from it |
| SKILL.md | Overview, description, references to detail documents | Progressive Disclosure: Claude loads only the references relevant to the current question |
The critical element is the "Used in" column in the schema: for every field, the export documents whether it is bound to an element via data binding, whether it controls a visibility rule, whether it is referenced in a formatting rule, whether it serves as a PTL column (Smart Table), or whether it appears in a link binding or URL template. This information doesn't exist in any Excel file or PIM system – it only emerges through template analysis.
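To make this concrete, a schema.md entry for a few fields might look roughly like this (a hypothetical excerpt – the exact wording and layout of the generated file may differ):

```markdown
### price
- Type: number
- Example: 149.90
- Used in: data binding (element "Price box"), formatting rule "Strike-through when discount_price is set"

### specs.power
- Type: object {value: number, unit: string}
- Example: {"value": 1800, "unit": "W"}
- Used in: PTL column "Power" (Smart Table "Technical data")

### ce_mark
- Type: boolean
- Example: true
- Used in: visibility rule "Show CE section"
```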
Token Efficiency: The Numbers
The quantifiable impact of Skill Export shows most clearly in API-based usage. We calculated the token consumption for a typical product data sheet template with 40 fields, 3 Smart Tables, and visibility rules:
| Metric | Schema in Prompt | Schema as Skill | Difference |
|---|---|---|---|
| Tokens per request | ~3,500 (3,000 schema + 500 data) | ~500 (data only) | −86% |
| 500 requests (batch) | 1,750,000 tokens | 250,000 tokens | −1,500,000 |
| Cost (Claude Sonnet) | ~$5.25 input | ~$0.75 input | −85% |
| Schema consistency | Manually maintained, drift risk | Generated from template, always current | Deterministic |
For larger templates (60+ fields, nested objects, extensive PTL configurations), the savings grow even larger, because the schema description grows with every additional field, rule, and table configuration, while the payload per request stays roughly constant.
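The table's numbers can be reproduced with a few lines of arithmetic. This is a minimal sketch; the per-request token counts and the price of $3 per million input tokens are the assumptions behind the table, not measured values from any specific account:

```python
# Rough token and cost math behind the table above (illustrative assumptions).
SCHEMA_TOKENS = 3000      # schema description repeated in every prompt
PAYLOAD_TOKENS = 500      # product data actually needed per request
REQUESTS = 500            # batch size
PRICE_PER_MTOK = 3.00     # assumed input price in USD per million tokens

def batch_cost(tokens_per_request: int) -> tuple[int, float]:
    """Total tokens and input cost for the whole batch."""
    total = tokens_per_request * REQUESTS
    return total, total / 1_000_000 * PRICE_PER_MTOK

inline_total, inline_cost = batch_cost(SCHEMA_TOKENS + PAYLOAD_TOKENS)  # 1,750,000 tokens, ~$5.25
skill_total, skill_cost = batch_cost(PAYLOAD_TOKENS)                    # 250,000 tokens, ~$0.75

print(f"Schema in prompt: {inline_total:,} tokens, ${inline_cost:.2f}")
print(f"Schema as Skill:  {skill_total:,} tokens, ${skill_cost:.2f}")
print(f"Saved: {inline_total - skill_total:,} tokens ({1 - skill_total / inline_total:.0%})")
```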
Use Cases
Skill Export follows a consistent pattern: template exists in publixx → Skill is exported → Claude knows the data model → work begins immediately. The following scenarios show what this looks like in practice.
| Scenario | Without Skill | With Skill |
|---|---|---|
| Data querying – "Which records have an image but no price?" | Explain the schema first: which fields exist, their types, the nesting structure. With 60 fields, the explanation takes longer than the actual query. | Claude knows every field from schema.md and immediately generates a PQL query: SELECT * WHERE image EXISTS AND price NOT EXISTS |
| Data preparation – Excel → Publixx JSON for 200 products | Manual mapping, or hours spent explaining the target structure: nested objects (specs.power.value), arrays as [{label, value}]. | Upload the Excel file and say "Convert to our data sheet format" – Claude knows the target structure from the Skill and delivers valid JSON (see the sketch after this table). |
| Content generation – Marketing copy for 20 products | Claude writes generic text: too long, doesn't fit the layout, manual rework needed. | Claude knows the element dimensions from template.md: 200px width at 11pt ≈ 80–90 characters. The text fits on the first attempt. |
| Data validation – Check supplier data against the template | Nobody knows which fields the template actually uses, which control visibility rules, and which are optional. | Claude distinguishes: name is bound → critical. ce_mark controls a visibility rule → if empty, the section is hidden. optional_note is unreferenced → irrelevant. |
| Smart Tables – A new team member needs to deliver data | Someone must explain the PTL configuration: 12 columns, 3 group headers, pivot configuration, calculated fields. | The new team member asks Claude: "What data do I need for the table?" – Claude knows all PTL columns and generates example JSON. |
| Multilingual – Translate missing language variants | Without context, Claude also translates technical fields such as article numbers or units of measurement – producing wrong results. | Claude recognizes language fields (*_de, *_en, *_fr) from the schema and translates only text fields; technical fields remain unchanged. |
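To illustrate the data-preparation scenario: a single converted record might look roughly like this. This is a made-up example – field names such as article_number, name_en, and the specs and features structures follow the conventions mentioned in this article, not a fixed publixx schema:

```json
{
  "article_number": "PX-4711",
  "name_en": "Cordless Drill 18V",
  "price": 149.90,
  "image": "https://example.com/images/px-4711.jpg",
  "specs": {
    "power": { "value": 1800, "unit": "W" }
  },
  "features": [
    { "label": "Battery", "value": "18 V / 4.0 Ah" },
    { "label": "Weight", "value": "1.6 kg" }
  ]
}
```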
API Integration: Three Architecture Patterns
Skill Export achieves its full potential in automated pipelines where Claude is integrated via API. The following three patterns cover the most common enterprise scenarios:
Batch Data Preparation
ERP delivers raw data → Claude API converts to Publixx JSON → publixx renders documents. One API call per record. For 500 products, the Skill saves approximately 1.5 million tokens and ensures that Claude uses the exact same schema for every single call – no copy-paste drift across prompt versions.
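A minimal sketch of this pattern with the Anthropic Python SDK, assuming the exported Skill documents are inlined once as a cached system prompt rather than attached through another mechanism (model name, file paths, and prompt wording are placeholders):

```python
import json
from pathlib import Path
import anthropic

client = anthropic.Anthropic()

# Load the exported Skill documents once; with prompt caching they are not
# re-billed at the full input rate on every call.
skill_context = "\n\n".join(
    Path(p).read_text()
    for p in ["SKILL.md", "references/schema.md", "references/template.md"]
)

def to_publixx_json(raw_record: dict) -> dict:
    response = client.messages.create(
        model="claude-sonnet-4-5",            # placeholder model name
        max_tokens=2000,
        system=[{
            "type": "text",
            "text": skill_context,
            "cache_control": {"type": "ephemeral"},  # cache the schema context
        }],
        messages=[{
            "role": "user",
            "content": "Convert this ERP record to our data sheet JSON:\n"
                       + json.dumps(raw_record, ensure_ascii=False),
        }],
    )
    # Sketch assumes the model returns plain JSON without surrounding text.
    return json.loads(response.content[0].text)

# for record in erp_records:
#     payload = to_publixx_json(record)   # hand payload to publixx for rendering
```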
Content Pipeline
PIM system provides technical data → Claude API generates marketing copy matching the template layout → text is written back as a data field. The Skill provides layout context (text field width, font size) without the pipeline needing to include this in every call.
Validation Microservice
Incoming webhook receives supplier data → Claude API validates against schema → returns validation report. The Skill defines what "complete" means – not abstractly, but relative to the specific template: which fields are referenced via binding, which control visibility rules, which are irrelevant.
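The returned validation report could, for instance, mirror the critical / visibility / unreferenced distinction from the data validation use case above. This is a hypothetical format; publixx does not prescribe this structure:

```json
{
  "record": "PX-4711",
  "status": "incomplete",
  "issues": [
    { "field": "price", "used_in": "data binding", "severity": "critical", "problem": "missing" },
    { "field": "ce_mark", "used_in": "visibility rule", "severity": "warning", "problem": "empty – CE section will be hidden" },
    { "field": "optional_note", "used_in": "unreferenced", "severity": "info", "problem": "ignored by the template" }
  ]
}
```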
Why We're Early
Custom AI Skills are a new paradigm in human-AI interaction. Since their introduction by Anthropic in October 2025, Skills have primarily been created in three ways: written manually, converted from existing documentation, or built interactively in dialogue. All three require a human to curate the Skill content.
publixx takes a different approach: the template generates its own Skill. No human needs to describe the data structure, no prompt engineer needs to maintain a schema. The analysis is automatic, the output deterministic, the export reproducible. When the template changes, you export a new Skill – and the entire AI pipeline immediately works with the updated data model.
We recognized this opportunity, understood the mechanics, and shipped the solution. publixx is – to the best of our knowledge – the first template automation platform that generates Custom AI Skills directly from production templates. Not as a concept, but as a shipped feature.
Technical Details
| Aspect | Details |
|---|---|
| Export format | ZIP archive with SKILL.md + references/ (schema.md, template.md, examples.md) |
| Compatibility | Claude Projects, Claude API, Claude Code |
| Analyzed features | Data bindings, visibility rules, formatting rules, PTL configurations, link bindings, URL templates |
| Detected field types | string, number, boolean, date, url, email, array, object, array<object> |
| Progressive Disclosure | Claude loads only the reference documents relevant to the current question |
| Prerequisite | Loaded template with data in publixx |
| Usage | One click: Menu → Export → AI Skill Export |
Try Skill Export
Load any template with data into publixx and export the AI Skill. Upload the ZIP file as Project Knowledge into a Claude project – and ask Claude a question about your data. The difference is immediate.