Chapter 12: Troubleshooting and Support
Good troubleshooting in 7DEA starts with interpretation, not with random experimentation. Because the product is explicit about route purpose, plan state, and deliverable state, the shortest path to a solution is usually to identify which layer you are actually looking at. Is the issue about account state, billing state, deliverable state, security confirmation, or system timing? Once you answer that, most problems become much easier to reason about.
Troubleshooting Workflow
- Identify the route you are on and read the page heading and summary copy.
- Capture the exact state phrase, gate text, or path label that seems surprising.
- Decide which category owns the truth: billing, deliverables, account, Aegis, or system state.
- Open the matching manual section and compare the expected behavior to the live behavior.
- Only after that, escalate with route, wording, timestamp, and reproduction steps if the behavior still looks inconsistent.
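The triage order above can be sketched as a small lookup. This is a minimal illustration, not product code: the signal strings and category names are hypothetical stand-ins for whatever labels the live route actually shows.

```python
# Hypothetical mapping from a visible signal to the category that
# "owns the truth" for it, per the workflow above.
CATEGORY_BY_SIGNAL = {
    "trial countdown": "billing",
    "Draft": "deliverables",
    "Final": "deliverables",
    "Session required": "account",
    "PATH: llm_fallback": "aegis",
    "PATH: llm_live": "aegis",
}

def triage(route: str, signal: str) -> dict:
    """Start a support note from what is actually visible on the page."""
    return {
        "route": route,
        "signal": signal,
        "owner": CATEGORY_BY_SIGNAL.get(signal, "system"),
    }

print(triage("/billing", "trial countdown"))
# → {'route': '/billing', 'signal': 'trial countdown', 'owner': 'billing'}
```

Anything not in the table falls through to "system", which mirrors the last category in the workflow: timing and system state are the residual explanation, not the first guess.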
This workflow works because 7DEA is not trying to hide its logic from the user. A locked button, a trial countdown, a Session required state, or a PATH: llm_fallback label all mean something specific. If you read those signals in the right order, troubleshooting becomes much less emotional and much more efficient.
Diagnosing Sign-In and Trial Activation Questions
Questions that feel like product failures often begin with sign-in or trial state. If a user expected live guidance but remains in invitation mode, confirm the signed-in account first, then confirm whether the billing page actually shows an active trial or paid state. If a trial was just activated, read the visible billing language before concluding that nothing changed. The billing page should reflect the new commercial state first, and the rest of the experience should then follow that truth.
This is one reason support packets should never skip account identity. A user can describe the right behavior on the wrong account and accidentally create a false bug report. Calm confirmation of the signed-in identity and current commercial posture prevents that mistake.
Diagnosing Aegis Questions
Start with the path label if one is visible. llm_live means the account is reaching live guidance. llm_fallback means the product is intentionally serving the controlled invitation path instead. If you expected live guidance but see fallback, the next place to check is billing state. If the account is explorer or otherwise not entitled, the fallback is normal. If the account is active trial or active paid and you still see fallback, then you have a real inconsistency worth investigating.
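The fallback reasoning above reduces to a short decision function. This is a sketch under the chapter's assumptions only: entitled states are taken to be active trial and active paid, and the function name is illustrative.

```python
# States assumed entitled to live guidance, per the chapter.
ENTITLED_STATES = {"trial", "paid"}

def diagnose_path(path_label: str, billing_state: str) -> str:
    """Classify an Aegis path label against the visible billing state."""
    if path_label == "llm_live":
        return "normal: account is reaching live guidance"
    if path_label == "llm_fallback":
        if billing_state not in ENTITLED_STATES:
            return "normal: fallback is expected for non-entitled accounts"
        return "inconsistency: entitled account seeing fallback, worth escalating"
    return "unknown path label: capture the exact wording before escalating"
```

Note that fallback on an explorer account produces a "normal" verdict: the same label means different things depending on entitlement, which is why billing state is checked before Aegis is blamed.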
Also read the route before you blame Aegis. A very generic prompt on a low-context public page will always feel less helpful than a route-specific prompt on the dashboard or billing page. Often the fix is to keep the same question but move to the route that best reflects the decision you are actually making.
Diagnosing Deliverable Questions
The first deliverable troubleshooting question is almost always: what state is the deliverable in? If the answer is Draft, then PDF and share behaviors may legitimately be unavailable even though creation succeeded. If the answer is Final and export still looks locked, check entitlement. If no deliverable exists at all, go back to creation readiness rather than debugging export paths.
The second question is about the scope statement. If the output feels too generic, confirm whether the input was too generic. Users sometimes skip the possibility that the system did exactly what the scope allowed it to do. A better scope is often the fastest fix.
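The first deliverable question can be written as a single state check. The Draft and Final names come from the chapter; the function name and parameters are hypothetical.

```python
from typing import Optional

def next_check(state: Optional[str], export_locked: bool, entitled: bool) -> str:
    """Decide what to inspect next, given the deliverable's visible state."""
    if state is None:
        return "no deliverable exists: revisit creation readiness, not export"
    if state == "Draft":
        return "Draft: PDF and share may be legitimately unavailable"
    if state == "Final" and export_locked:
        if not entitled:
            return "Final but locked: check entitlement first"
        return "Final, entitled, still locked: real inconsistency, escalate"
    return "Final and unlocked: export should work as documented"
```

The ordering matters: existence before state, state before entitlement, entitlement before escalation. Skipping a step is how export paths get debugged for deliverables that were never created.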
Diagnosing Billing Questions
Billing questions should begin with the billing page itself. Read the visible plan state, trial wording, trial day count if present, and any upgrade or cancellation language. Then compare that to the behavior you are seeing elsewhere. Many product surprises become much less surprising once you realize the billing page is describing a state the user did not realize was active.
If the billing page and the rest of the product disagree, that is when you likely have a real issue. Support will still want the exact wording and route sequence because state mismatches are only diagnosable when the visible evidence is captured carefully.
Diagnosing Account and Security Questions
Account questions are usually about one of three things: whether the user is really signed in, whether a sensitive action requires re-auth, or whether the session list reflects reality. Each of those has a dedicated visual clue on the Account page. Read the page for token or session language before trying to guess from behavior alone.
Diagnosing Docs and Context-Help Questions
If the manual and the live route appear to diverge, start by checking whether the difference is conceptual or cosmetic. A changed heading or slightly revised marketing line is not the same as a changed workflow rule. The manual is designed to explain workflow truth, not to preserve every pixel or phrase forever. If the route meaning is still the same, the issue may be editorial rather than functional.
If the divergence affects what a user should do next, capture both the route and the manual chapter or anchor that appears inconsistent. That makes the issue actionable. Vague claims such as “the docs are outdated” are much harder to resolve than specific comparisons between a visible page state and a named chapter section.
Support Reporting Template
A good support packet is small, exact, and chronological.
- Route you were on.
- Exact message, label, or state text you saw.
- Your commercial state as best you understood it: explorer, trial, or paid.
- Whether the deliverable was absent, Draft, or Final.
- UTC or local timestamp.
- The shortest sequence of steps, usually three to five, that reproduces the issue.
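The template fields map naturally onto a small record type. This is an illustrative sketch, not a product API; the class and field names are invented here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SupportPacket:
    """One reportable issue, captured in the template's order."""
    route: str
    exact_text: str          # message, label, or state wording, verbatim
    commercial_state: str    # "explorer", "trial", or "paid"
    deliverable_state: str   # "absent", "Draft", or "Final"
    timestamp: str           # UTC or local, stated explicitly
    repro_steps: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """True only when every field has been filled in."""
        return all([self.route, self.exact_text, self.commercial_state,
                    self.deliverable_state, self.timestamp, self.repro_steps])
```

A packet that fails `is_complete` is exactly the kind of report that forces support to infer the missing details, which the next section warns against.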
What not to do while troubleshooting
- Do not bounce randomly across routes hoping one will reveal the answer.
- Do not assume a paid plan eliminates every other state boundary.
- Do not treat a locked control as meaningless decoration.
- Do not describe the issue only in emotional terms when the product is already giving you exact wording.
- Do not ask support to infer which account or artifact you mean when you can provide that information directly.
These anti-patterns matter because they are the main reason otherwise straightforward issues become difficult to reproduce. A small amount of discipline at the user side creates a huge amount of clarity at the support side.
When to escalate
Escalate when the visible story is internally inconsistent:
- The billing page says one thing while the rest of the app behaves as though a different state is active.
- A paid or trial account receives non-entitled Aegis behavior without explanation.
- A Final artifact remains blocked from actions the current plan should allow.
- A security flow requests confirmation that cannot be completed even though the route says it should.
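The escalation conditions above are a simple disjunction: any one of them is enough. The flag names below are illustrative, one per listed inconsistency.

```python
def should_escalate(billing_matches_app: bool,
                    entitled_sees_fallback: bool,
                    final_blocked_on_allowed_action: bool,
                    security_confirmation_stuck: bool) -> bool:
    """Escalate if any visible inconsistency from the list is present."""
    return (not billing_matches_app
            or entitled_sees_fallback
            or final_blocked_on_allowed_action
            or security_confirmation_stuck)
```

When every flag is in its healthy position, the answer is no: a surprising but internally consistent state is a reading problem, not an escalation.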
What good evidence looks like
Good evidence is boring in the best way. It names the route, the wording, the sequence, and the time. It does not rely on emotional summary alone. That discipline helps both users and operators, because it keeps troubleshooting anchored to the live product rather than to memory or assumption.
When a problem is real but not urgent
Not every inconsistency is a production emergency. Some issues are important precisely because they affect user confidence, comprehension, or commercial clarity rather than immediate system safety. Documentation drift, ambiguous route wording, or a help panel that points to a weaker explanation may all deserve correction even if the core workflow still functions. Users should feel comfortable reporting these issues without needing to frame them as disasters.
That perspective is useful because it keeps support culture healthy. The goal is not to dramatize every defect. The goal is to classify it accurately, preserve the relevant evidence, and route it to the people best positioned to improve the user experience.