When I want to quickly validate whether people can find key features in a product, I reach for a rapid tree‑test built on low‑fidelity prototypes. It’s fast, cheap, and—done well—gives clear signals about product discoverability long before engineering has sunk time into visual design or interactions. In this post I’ll walk you through how I run a focused tree‑test that answers the question: “Can users find X within seven clicks or fewer?” You’ll get a repeatable process, concrete task examples, recommended tools, and simple success criteria to help you iterate with confidence.

Why a tree‑test, and why low‑fidelity?

Tree‑testing isolates the information architecture (IA) and label clarity of your product by removing UI distractions like visuals, layout, and microinteractions. That makes it ideal early in a discovery or redesign phase. Low‑fidelity prototypes—think barebones navigation lists or simplified sitemap screens—are quick to produce in Figma, Miro, or even Google Slides. They’re fast to change and keep participants focused on the menu labels and structure rather than the visual design.

My goal with a rapid tree‑test is not to be exhaustive. I want a quick, directional answer to whether key paths are discoverable—and if not, where they break. Setting a pragmatic constraint like “seven clicks or fewer” helps keep tasks realistic and measurable.

Define your scope and pick your test tasks

Start by listing the critical tasks or destinations you want people to find. These should be things that directly impact your product goals—onboarding settings, a pricing page, a core feature like “create project,” or a support article. Aim for 6–10 tasks; fewer if you’re truly in a rapid cycle.

Good task wording matters. I write them as realistic goals, not as navigational prompts. Examples I use:

  • “You want to share a project with a teammate. Where would you go to do that?”
  • “You’re trying to upgrade from the free plan to Pro—where would you start?”
  • “You need to change your notification preferences to stop email summaries.”

Avoid language that gives away where the item lives (don’t say “Go to Settings to…”). The point is to observe the participant’s mental model, not to test their reading comprehension.
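
Before building anything, I also write the tasks and their intended destinations down as data, so “success” is unambiguous when I score results later. A minimal sketch in Python; the target paths here are hypothetical labels, not a recommended IA:

    # Each task pairs the wording participants see with the path(s) counted
    # as a correct destination. Paths are tuples of nav labels, top to bottom.
    TASKS = [
        {
            "id": "share-project",
            "prompt": "You want to share a project with a teammate. "
                      "Where would you go to do that?",
            "targets": [("Projects", "Members", "Invite teammate")],
        },
        {
            "id": "upgrade-pro",
            "prompt": "You're trying to upgrade from the free plan to Pro. "
                      "Where would you start?",
            "targets": [("Account", "Plans & billing", "Upgrade")],
        },
        {
            "id": "stop-email-summaries",
            "prompt": "You need to change your notification preferences "
                      "to stop email summaries.",
            "targets": [("Account", "Notifications")],
        },
    ]

Keeping the tasks in one place like this also makes it easy to paste the same wording into Maze or Treejack without drift between iterations.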

Design the low‑fidelity prototype

I typically build a simple sitemap view: a left column with primary nav labels and expandable secondary items. Each label is clickable and reveals the next level. The interface is intentionally minimal—no headers, no hero images, no badges. This clarifies whether labels and hierarchy communicate the right affordances.
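
For concreteness, here’s the shape I mean, sketched as a nested dict and rendered as text-only rows; again, the labels are placeholders rather than a recommended structure:

    # A two-level sitemap: primary nav labels map to their secondary items.
    # In the prototype each primary label is a clickable row that expands
    # to reveal its children; nothing else is on screen.
    SITEMAP = {
        "Home": [],
        "Projects": ["Create project", "Templates", "Archive"],
        "Account": ["Profile", "Notifications", "Plans & billing"],
        "Share & export": ["Share link", "Export comments", "Download"],
        "Help": ["Support articles", "Contact us"],
    }

    # Text-only rows are all the "design" the test needs.
    for section, children in SITEMAP.items():
        print(section)
        for child in children:
            print("    " + child)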

Tools I like for quick builds:

  • Figma: Create clickable frames and simple text‑based menus; share with a public link.
  • Miro: Great for whiteboardy sitemaps and quick branching flows.
  • Maze or Optimal Workshop: Both offer built-in tree‑testing features so you can skip manual wiring.

If you build the prototype in Figma, you can export the structure into Maze or use Figma’s prototyping links for moderated sessions. If you go straight to Optimal Workshop (Treejack), you’ll get classic tree‑testing analytics out of the box.

Recruit participants and set expectations

For a rapid test, 20–30 participants give a reasonable balance between speed and signal. If you have access to your target user base, prioritize them. Otherwise use general audiences but screen for relevant behaviors (e.g., “uses project management tools weekly”).

Recruitment sources I use:

  • Product mailing list or in‑app recruitment (best quality).
  • User testing marketplaces like UserTesting, Respondent, or User Interviews.
  • Social channels or a small panel of colleagues for an internal sanity check (low quality, but fast).

Give participants a 10–15 minute estimate up front, and be explicit that they’ll be asked to find things in a navigation structure. For unmoderated tests, provide clear instructions and one task at a time.

Run moderated or unmoderated sessions

Decide whether to moderate. Moderated sessions (via Zoom, Lookback, or Hotjar’s session tools) let you probe participants’ thinking when they get stuck. That’s invaluable the first time you test a new IA. Unmoderated tests (via Maze or Optimal Workshop) scale faster and are fine when you already know the general pain points.

My rapid approach mixes both: I run 5–8 moderated sessions to uncover qualitative issues, then push a cleaned unmoderated tree to 20–25 more participants to capture quantitative patterns.

Measure discoverability in seven clicks or fewer

Here’s the practical rule I use: a task is “discoverable” if at least 70% of participants reach the intended destination within seven clicks. Why seven? It’s a balance—longer paths frustrate users and reduce conversion potential; shorter paths align with common usability heuristics.

Key metrics to capture:

  • Success rate: Percentage of participants who reached the correct item.
  • Clicks to find: Average number of clicks until success.
  • Directness: Percentage who go directly to the target path vs. recover from dead‑ends.
  • Common wrong paths: Which labels or branches attracted users who missed the target?

Metric               Acceptable threshold   Why it matters
Success rate         >= 70%                 Indicates general discoverability
Avg clicks to find   <= 4–5                 Shorter paths correlate with better conversion
Directness           >= 60%                 Shows whether labels map to user expectations
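
Most tools will export per-participant paths (Treejack and Maze both offer downloads, though the exact columns vary), and the headline numbers take only a few lines to compute. Here’s a sketch, assuming each record is one participant’s attempt at one task; field names and data are illustrative:

    from statistics import mean

    # One record per participant per task. "path" is the sequence of labels
    # clicked; "success" means they ended on an intended destination.
    results = [
        {"task": "upgrade-pro", "path": ["Account", "Plans & billing", "Upgrade"], "success": True},
        {"task": "upgrade-pro", "path": ["Help", "Account", "Plans & billing", "Upgrade"], "success": True},
        {"task": "upgrade-pro", "path": ["Help", "Support articles"], "success": False},
    ]

    MAX_CLICKS = 7                      # the "seven clicks or fewer" budget
    IDEAL_LENGTH = {"upgrade-pro": 3}   # shortest correct path per task

    def task_metrics(task_id):
        attempts = [r for r in results if r["task"] == task_id]
        wins = [r for r in attempts if r["success"] and len(r["path"]) <= MAX_CLICKS]
        # Approximate "directness" as taking the shortest correct path,
        # i.e. no detours or backtracking along the way.
        direct = [r for r in wins if len(r["path"]) == IDEAL_LENGTH[task_id]]
        return {
            "success_rate": round(len(wins) / len(attempts), 2),
            "avg_clicks": round(mean(len(r["path"]) for r in wins), 2) if wins else None,
            "directness": round(len(direct) / len(attempts), 2),
        }

    print(task_metrics("upgrade-pro"))
    # {'success_rate': 0.67, 'avg_clicks': 3.5, 'directness': 0.33}

Looping the same function over all tasks gives you a per-task scorecard to hold against the thresholds above.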

Analyze results and identify fixes

When I review results, I look for patterns, not anomalies. A few misclicks are fine. But if multiple people get stuck in the same branch or choose the same wrong label, that’s a design smell.
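
A quick way to surface those shared wrong turns is to tally the first label each failed participant clicked; when one branch dominates, that’s where the mental-model mismatch lives. A sketch that reuses the results records from the previous snippet:

    from collections import Counter

    def first_wrong_turns(task_id):
        # Count where unsuccessful participants went first. A label that
        # attracts most of the failures usually marks a mislabeled or
        # misplaced branch.
        misses = [r for r in results if r["task"] == task_id and not r["success"]]
        return Counter(r["path"][0] for r in misses).most_common()

    print(first_wrong_turns("upgrade-pro"))
    # [('Help', 1)]: every failure on this task started in Help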

Common fixes I iterate on:

  • Rename labels: Swap jargon for plain language. “Workspace settings” might become “Account & settings” if users naturally think “Account.”
  • Reorder items: Move high‑priority actions closer to the top of the nav.
  • Change hierarchy: Flatten or nest differently—sometimes collapsing an obscure item under a more obvious parent improves discoverability.
  • Add affordances: Consider adding a prominent CTA or spotlight for critical tasks that shouldn’t be buried (e.g., “Create project”).

If renaming or reordering doesn’t work, create alternative labels and run A/B tree‑tests in the next iteration.
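
When I do run that A/B, I want a guard against declaring victory on noise, so I compare success counts between label variants with a quick significance check. A sketch using SciPy’s Fisher’s exact test; the counts are invented for illustration:

    from scipy.stats import fisher_exact

    # Successes and failures per label variant, e.g. "Export" vs "Share & export".
    variant_a = [13, 12]   # [found it, didn't] under the old label
    variant_b = [20, 5]    # [found it, didn't] under the new label

    _, p_value = fisher_exact([variant_a, variant_b])
    print(f"p = {p_value:.3f}")

At rapid-test sample sizes you’ll rarely clear textbook significance, so I treat this as a sanity check against overreacting to small swings rather than a hard gate.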

Examples from my work

On a recent redesign of a creative collaboration product, users consistently failed to find “export comments.” The tree‑test showed most people looked under “Project” instead of “Export,” so we tried two things: rename “Export” to “Share & export,” and surface “Export comments” as an action under the project toolbar. Success rate jumped from 42% to 78% within two iterations—well within our seven‑click target.

In another case, a SaaS onboarding flow hid pricing tiers under “Plans & billing.” Users expected “Pricing” at the top level. A quick rename and move pushed the find rate from 55% to 83% and increased upgrade clicks in the prototype funnel.

Practical tips to save time

  • Keep prototypes text‑only. If it looks like a polished UI, participants get distracted by design details.
  • Run a 3‑participant pilot to validate task wording and prototype flow before full recruitment.
  • Use heatmaps from session replays to spot where participants hesitate even if they ultimately succeed.
  • Record qualitative comments during moderated sessions—those quotes are gold when you need to justify IA decisions to stakeholders.

What to do next

After you implement changes, rerun the tree‑test until the success threshold is met. Then bridge the IA into a higher‑fidelity prototype and run a click‑through usability test that includes real UI elements and interactions. The two‑phase approach—IA validation first, visual/interaction validation later—keeps expensive iterations where they belong: after you’ve proven the structure.

If you want, I can share a template Figma file and a lab script I use for the moderated sessions. Tell me which tool you prefer (Maze, Optimal Workshop, Figma) and I’ll tailor the assets so you can run your first rapid tree‑test today.