AI Governance Dashboard

AI Dashboard

Click the link above to open the dashboard.

How to add information to the dashboard:

1. Start on the Start tab, then click the section you want to update (Portfolio, Performance, Fairness, Drift, Workflow, Incidents, Change Control, or Audit).
2. If you already have a spreadsheet, choose Import CSV at the top-right. Your CSV's first row must use the exact header names for that section (for example, Performance needs columns such as date, site, model_label, adjudicated_label, age_band, sex, and time_to_action_s). The importer recognizes the headers and automatically places the rows in the right panel; a quick header check is sketched below this list.
3. Prefer to type directly? Scroll to the Manual Input / Add Row area in any section, fill in the fields, and click Add Row.
4. Nothing is pre-loaded by default, so every number you see comes from the rows you add. For a demo, you can enable the optional Load Sample Data button.
5. Finally, open Benchmarks to set your targets (e.g., PPV and Sensitivity thresholds, subgroup gap limits, PSI drift limits, incident closure goals); these drive the "Met / Not Met" summaries you'll see on each tab.
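If you want to sanity-check a spreadsheet before importing it, a short script along these lines can confirm the header row. This is a minimal sketch, assuming a plain CSV file; the check_performance_csv function and the performance.csv file name are illustrative, while the required header names are the ones listed for the Performance section above.

    import csv

    # Header names the Performance section expects, as listed in the steps above.
    PERFORMANCE_HEADERS = [
        "date", "site", "model_label", "adjudicated_label",
        "age_band", "sex", "time_to_action_s",
    ]

    def check_performance_csv(path: str) -> list[str]:
        """Return any required headers missing from the CSV's first row."""
        with open(path, newline="") as f:
            first_row = next(csv.reader(f), [])
        # Compare case-insensitively and ignore stray whitespace in the file.
        found = {h.strip().lower() for h in first_row}
        return [h for h in PERFORMANCE_HEADERS if h not in found]

    missing = check_performance_csv("performance.csv")
    if missing:
        print("CSV will not import cleanly; missing headers:", missing)
    else:
        print("Header row matches the Performance section.")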
What the dashboard tells you once data is in: Each tab translates raw rows into clear, decision-ready indicators. You'll see KPI tiles (counts, percentages, medians) and a concise Conditions box that states whether your policy targets are Met or Not Met (for example, "PPV ≥ 0.85 — Met," "Input drift PSI < 0.20 — Not Met").

- Portfolio shows which models are live, where, and at which versions.
- Performance computes core accuracy measures (TP, FP, FN, PPV, Sensitivity) and time-to-action; the formulas are sketched after this list.
- Fairness highlights subgroup gaps (by age and sex) and flags any gap beyond your limit.
- Drift surfaces distribution shifts (e.g., PSI) and changes in alert rate.
- Workflow summarizes frontline burden (alerts per reader and median acknowledgment time).
- Incidents tracks near-misses and closure rates.
- Change Control displays release notes and governance actions.
- Audit reports logging coverage and missing events.

Use the filters (date, site, model, version, mode) to narrow the view, and click through to see the underlying records. A small Data badge at the top of each section shows where the information came from (manual entry, CSV file name, or sample data), so the team can trust what they're seeing and act with confidence.
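To make the Met / Not Met logic concrete, here is a small worked sketch of the Performance and Drift conditions. The PPV and Sensitivity formulas follow their standard definitions, PSI is the usual binned stability index, and the 0.85 PPV and 0.20 PSI thresholds are the examples quoted above; the 0.90 Sensitivity target, the counts, and the bin shares are made-up illustrations, not dashboard internals.

    import math

    def ppv(tp: int, fp: int) -> float:
        """Positive predictive value: TP / (TP + FP)."""
        return tp / (tp + fp)

    def sensitivity(tp: int, fn: int) -> float:
        """Sensitivity (recall): TP / (TP + FN)."""
        return tp / (tp + fn)

    def psi(expected: list[float], actual: list[float]) -> float:
        """Population stability index over matched bins:
        sum of (a - e) * ln(a / e) for each bin's baseline share e
        and current share a."""
        return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

    # Hypothetical adjudicated counts and binned alert-score distributions.
    tp, fp, fn = 172, 21, 14
    baseline_bins = [0.25, 0.35, 0.25, 0.15]
    current_bins = [0.18, 0.30, 0.28, 0.24]

    checks = {
        "PPV >= 0.85": ppv(tp, fp) >= 0.85,
        "Sensitivity >= 0.90": sensitivity(tp, fn) >= 0.90,  # assumed target
        "Input drift PSI < 0.20": psi(baseline_bins, current_bins) < 0.20,
    }
    for condition, ok in checks.items():
        print(f"{condition}: {'Met' if ok else 'Not Met'}")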
