Prompting with examples means giving an AI tool concrete references — screenshots, URLs, code snippets, data samples, or user journey descriptions — instead of abstract instructions. When you show the AI what “good” looks like, it produces output closer to what you actually need. When you only describe in general terms, the model guesses, and guesses compound.
This post covers how to prompt AI to build apps using the show-don’t-tell approach: why abstract prompts fail, which types of examples work, how to use them across tools like AI Studio, Cursor, Claude Code, and Lovable, and when showing examples backfires.
Why “build me a dashboard” fails without examples
A prompt like “build me a dashboard” gives the model a noun and nothing else. It will pick a layout, invent metrics, choose colors, and arrange navigation based on the most common patterns in its training data. The result is generic. It looks like a dashboard, but not your dashboard.
The problem is ambiguity. “Dashboard” could mean a Stripe-style analytics view, a Notion-style workspace, a CRM pipeline board, or a monitoring console. The model cannot distinguish between these without a reference point. It defaults to the average of everything it has seen.
This happens across all AI app building tools. Whether you prompt in Google AI Studio, Cursor, Claude Code, or Lovable, the model responds to specificity. Abstract input produces abstract output.
Types of examples that improve AI prompting
Not all examples serve the same purpose. Each type closes a different gap in the model’s understanding.
Screenshots. A screenshot or hand-drawn wireframe gives the model a visual target. “Build a settings page that looks like this” with an attached image is far more precise than “build a settings page with user preferences.” The model sees layout, spacing, component types, and hierarchy.
URL references. Describing an existing product narrows the design space. “The sidebar navigation should work like Linear’s — collapsible sections, keyboard shortcuts listed inline, active state highlighted” gives the model a concrete benchmark even without a screenshot.
Code snippets. Pasting a component, a data model, or an API response tells the model exactly what structure to match. “Here is my User model. Build a profile editing form that maps to these fields” removes all guesswork about data shape.
Data samples. Showing actual or realistic mock data clarifies field types, edge cases, and display formatting. “Here are three sample orders: [table]. Build an orders list that handles these statuses and formats currency this way” produces a list that fits your domain.
User journey descriptions. Walking through a flow step by step (“the user lands on the empty state, clicks Create, fills the form, submits, sees a confirmation, and returns to the list”) gives the model a sequence instead of a pile of features.
The strongest prompts combine two or three of these. A screenshot plus a data sample plus a journey description produces output that needs minor tweaks rather than a rewrite.
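To make the data-sample idea concrete, here is a minimal sketch of what you might paste into a prompt. Every name in it — the `Order` type, the status values, the currency formatting — is a hypothetical stand-in for your own domain, not a real API:

```typescript
// Hypothetical data shape and samples to paste into a prompt.
// Field names, statuses, and formatting rules are illustrative
// assumptions; substitute your own.

interface Order {
  id: string;
  customer: string;
  status: "pending" | "shipped" | "refunded";
  totalCents: number; // money stored as integer cents
}

const sampleOrders: Order[] = [
  { id: "ord_1", customer: "Ada", status: "pending", totalCents: 1999 },
  { id: "ord_2", customer: "Lin", status: "shipped", totalCents: 0 }, // edge case: free order
  { id: "ord_3", customer: "Sam", status: "refunded", totalCents: 124950 }, // edge case: large amount
];

// Showing the exact display format removes another guess.
function formatCurrency(cents: number): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "USD",
  }).format(cents / 100);
}

console.log(sampleOrders.map((o) => `${o.id}: ${formatCurrency(o.totalCents)}`));
```

Three rows like these tell the model the field names, the full set of statuses, two edge cases, and the currency format — information a paragraph of description rarely pins down.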
Show-don’t-tell prompting checklist
Use this before submitting a prompt that builds or modifies a screen:
- At least one concrete example included. A screenshot, a URL reference, a code snippet, a data sample, or a journey walkthrough.
- Example matches intent. The reference illustrates the outcome you want, not a loosely related app.
- Differences from the example stated. If your version should differ from the reference, say how. (“Like Stripe’s dashboard, but with only two metrics and no date range picker.”)
- Data shape provided. Field names, types, and sample values included so the model does not invent them.
- One screen or one flow per prompt. Scope stays small enough that the example clearly applies.
- Contradictions removed. No two examples in the same prompt that suggest conflicting layouts or behaviors.
- Negative examples noted. If a reference app has elements you do not want, say so. (“Use Linear’s sidebar style but skip the keyboard shortcut hints.”)
A prompt that passes this checklist will outperform a longer prompt that describes the same feature in abstract terms.
How to show examples across tools
The technique works everywhere, but the mechanics differ.
Google AI Studio. You can paste screenshots directly into the prompt. Combine a screenshot with a written description of what should differ. AI Studio generates React and Tailwind CSS, so a visual reference gives Gemini strong layout guidance.
Cursor. Reference existing files in your project. “Look at the ProfilePage component and build a SettingsPage that follows the same layout and form patterns” uses your own codebase as the example. You can also paste code blocks from other projects into the chat.
Claude Code. Paste code snippets, terminal output, or data samples directly. Claude Code excels at matching patterns from provided code. “Here is the API response for /orders. Build a table component that renders this data with sortable columns” works well because the model receives the exact data structure.
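As a sketch of what “matching the provided structure” buys you, assume the pasted `/orders` response below (the endpoint and field names are hypothetical). The sorting logic the model derives from it might look something like this:

```typescript
// Hypothetical /orders response pasted into the prompt; your real
// endpoint and fields will differ.
type OrderRow = { id: string; customer: string; total: number; placedAt: string };

const response: OrderRow[] = [
  { id: "ord_2", customer: "Lin", total: 45.0, placedAt: "2024-03-02" },
  { id: "ord_1", customer: "Ada", total: 19.99, placedAt: "2024-03-01" },
  { id: "ord_3", customer: "Sam", total: 7.5, placedAt: "2024-03-03" },
];

// Generic ascending sort for the table's sortable columns: handles
// string and number fields, and does not mutate the input rows.
function sortBy<T extends Record<K, string | number>, K extends string>(
  rows: T[],
  key: K
): T[] {
  return [...rows].sort((a, b) => {
    const av = a[key];
    const bv = b[key];
    if (typeof av === "number" && typeof bv === "number") return av - bv;
    return String(av).localeCompare(String(bv));
  });
}

console.log(sortBy(response, "total").map((r) => r.id)); // cheapest order first
```

Because the model saw the exact rows, it knows `total` is a number and `placedAt` is a string, so it never has to guess which comparison to use.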
Lovable. Describe reference apps by name and specifics. “Build a project board like Trello’s, with three columns: To Do, In Progress, Done. Cards show title, assignee avatar, and a due date badge.” Lovable uses the description to scaffold the initial layout.
Across all tools, the principle is the same: reduce the model’s decision space by showing what you want instead of hoping it infers correctly.
When showing examples backfires
Examples improve output, but three patterns cause problems.
Copying too closely. If you provide a competitor’s screenshot and ask the model to replicate it exactly, you get a pixel-level copy that carries the same interaction patterns, branding cues, and structural assumptions. This creates legal risk and a derivative product. Show the reference, then state what must differ.
Contradictory examples. Providing two screenshots that suggest different layouts for the same screen confuses the model. It will attempt to merge both, producing something that matches neither. Pick one primary reference per screen. Use a second example only to illustrate a specific component, not an entire layout.
Outdated or irrelevant references. An example from a desktop-only SaaS product will mislead the model if your app targets mobile. A data sample from a different domain will introduce field names and structures that do not apply. Match the example to your context.
Symptoms your prompts need more examples
These signs indicate abstract prompting is producing generic or broken output:
- The generated layout looks like a template, not your product. The model had no reference point and fell back on defaults.
- Fields have placeholder names like “Item 1” or “Description goes here.” No data sample was provided.
- Navigation structure changes between iterations. The model reinvents hierarchy because no reference anchored it.
- The app works for the happy path but collapses on empty states, errors, or edge cases. The prompt described what should exist but never showed what absence looks like.
- You re-prompt three or more times to get close to what you imagined. Each round adds words but not clarity. A single example would have closed the gap faster.
- Components look inconsistent across screens. No existing component or screenshot established a visual standard.
- The model asks clarifying questions (in tools that support it) about layout, spacing, or data types. It is signaling that it needs an example, not more adjectives.
If three or more of these describe your experience, the fix is not a longer prompt. The fix is a better example.
The show-don’t-tell technique: outcome plus example
The most reliable pattern for show-don’t-tell prompting follows two steps:
Describe the outcome. State what the user should see and be able to do. Keep this short and behavioral. “After logging in, the user sees a dashboard with their active projects, each showing title, status, and last-updated date.”
Provide one or two examples of what good looks like. Attach a screenshot, paste a code snippet, or name a reference app with specifics. “The layout should follow the pattern in this screenshot. The project cards should use the same component style as the ProfileCard in our codebase.”
This pattern works because it separates intent from reference. The outcome tells the model what to build. The example tells it how “good” looks in your context. Together they close the gap between your vision and the model’s default assumptions.
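If you write these prompts often, the two-step structure can even be templated. A minimal sketch — the wording and helper name are purely illustrative:

```typescript
// Illustrative helper composing the two-part show-don't-tell prompt:
// a behavioral outcome followed by concrete references.
function buildPrompt(outcome: string, examples: string[]): string {
  const refs = examples.map((e, i) => `Example ${i + 1}: ${e}`).join("\n");
  return `${outcome}\n\nWhat good looks like:\n${refs}`;
}

const prompt = buildPrompt(
  "After logging in, the user sees a dashboard with their active projects, each showing title, status, and last-updated date.",
  [
    "Layout follows the attached screenshot.",
    "Project cards reuse the ProfileCard component style from our codebase.",
  ]
);

console.log(prompt);
```

The template enforces the separation: the first argument is always intent, the second is always reference, so neither gets dropped when you are prompting in a hurry.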
When examples stop being enough
Show-don’t-tell prompting dramatically improves what AI tools produce. But the technique has a ceiling. Once your app involves conditional workflows, role-based access, real data persistence, and edge cases that compound across screens, no set of examples replaces engineering judgment.
The generated code is a starting point. A good one, if your examples were good. But still a starting point.
At Spin by Fryga, we pick up where prompting leaves off. We audit AI-generated code, stabilize the flows your users depend on, and turn prototypes into products that hold up under real traffic. If your app outgrew what examples can express, that is the work we do.