Low-confidence analysis
The report is based on partial data: Levri reduced confidence rather than fabricating scores from a page it didn't fully see.
In a hurry? Try this first.
Retry the analysis, or analyse a simpler, more static page.
Why this happens
The page didn't fully load, so Levri reduced confidence instead of guessing. Rather than fabricate scores from an incomplete signal, it delivers the report with a flag telling you to read the numbers as directional, not authoritative.
A low-confidence report is still useful: copy weaknesses, broken hierarchy, and missing trust elements all still surface. Expect the lift estimates and pillar scores to shift if you re-run on a cleaner version of the same page.
Common reasons
- The page didn't fully render before timeout
- Critical content is revealed only after a user interaction (tabs, accordions, scroll)
- An A/B test variant was served and Levri saw only a partial fold
- Major sections couldn't be reliably identified
Improve it
- Retry. Ensure a full load: a second fetch often hits a warmer cache and renders the full page.
- Avoid dynamic pages. Use static content where possible: marketing and product pages render more reliably than dashboards or account pages.
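Levri doesn't expose its retry logic, but the "Retry" tip above can be sketched as a pre-check you run yourself before re-submitting: fetch the page and only re-run the analysis once the body looks fully rendered. The `fetch_with_retry` helper, the `min_bytes` threshold, and the linear backoff below are all illustrative assumptions, not part of Levri's API.

```python
import time
from typing import Callable, Optional

def fetch_with_retry(fetch: Callable[[], str], attempts: int = 3,
                     min_bytes: int = 2048, delay: float = 2.0) -> Optional[str]:
    """Retry a page fetch until the body looks fully rendered.

    `min_bytes` is a rough heuristic (an assumption, not a Levri value):
    a truncated or partially rendered page usually comes back much
    smaller than the complete one.
    """
    for attempt in range(attempts):
        try:
            body = fetch()
        except OSError:
            body = ""  # network hiccup: treat as an empty, partial response
        if len(body.encode()) >= min_bytes:
            return body  # looks complete; safe to re-run the analysis
        if attempt < attempts - 1:
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    return None  # still partial; analyse a simpler, static page instead
```

Passing the fetcher in as a callable keeps the sketch testable and lets you swap in whatever HTTP client (or headless browser) you already use.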
When to ignore the flag
If the experiments listed in your report match what you already suspected — weak hero, hidden CTA, missing trust signals — the low-confidence flag doesn't change what to ship first. It matters most when you're comparing two pages or baselining a score over time, where you want full-confidence runs on both sides for the comparison to be fair.