Lighthouse CI Baseline Configuration
Establishing a reproducible accessibility baseline requires deterministic execution within your deployment pipeline. A structured configuration automates regression detection and enforces strict compliance gates before code reaches production.
This approach aligns directly with established Web Accessibility Testing Fundamentals & Tool Selection methodologies. Engineering teams can translate automated scoring into actionable workflows.
Key implementation steps include:
- Initializing `lhci autorun` and `lhci wizard` for project scaffolding
- Defining static vs. dynamic assertion thresholds in `lighthouserc.json`
- Configuring CI runners for deterministic headless Chrome execution
- Integrating PR comment bots for automated developer feedback
Project Initialization & Environment Setup
Scaffolding begins with deterministic package installation. Execute these commands in your project root:
```shell
npm install -D @lhci/cli
npx lhci wizard
```
The wizard generates a foundational `lighthouserc.json` file. Set `ci.collect.numberOfRuns` to 3 immediately: network variance causes significant score volatility, and running multiple iterations lets the median calculation filter out transient spikes.
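The effect of median aggregation can be sketched with hypothetical scores (Lighthouse CI actually selects a representative median run rather than averaging, but the outlier-filtering behavior is the same):

```python
from statistics import median

# Hypothetical accessibility scores for the same commit across 3 runs;
# the second run was degraded by a transient network spike.
runs = [0.91, 0.62, 0.92]

# The median keeps a representative value and discards the outlier,
# so the 0.62 spike never reaches the assertion gate.
print(median(runs))  # 0.91
```

With a single run, that 0.62 would have failed a 0.90 gate outright.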
Cross-reference rule coverage with axe-core Configuration & Setup. This ensures your baseline captures both automated DOM violations and performance-adjacent barriers.
Baseline Configuration & Assertion Mapping
The `assertions` object dictates your CI gating logic. Lighthouse CI evaluates each audit against three states: `error` blocks the pipeline, `warn` logs without failing, and `off` disables the audit.
Target a `categories:accessibility` score of 0.90. This threshold accommodates minor variation while preventing regressions. Disable resource-heavy audits using `skipAudits` to reduce runner time.
```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3,
      "url": ["https://staging.app.internal"],
      "settings": {
        "preset": "desktop"
      }
    },
    "assert": {
      "preset": "lighthouse:recommended",
      "assertions": {
        "categories:accessibility": ["error", {"minScore": 0.90}],
        "color-contrast": "error",
        "aria-allowed-role": "warn"
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```
This configuration maps the accessibility category to a hard gate. It flags critical contrast failures as blocking. Align your specific metric targets with Setting Up Lighthouse CI Thresholds for WCAG 2.2 AA for regulatory compliance.
When thresholds fail, the CI log lists the exact audit IDs and delta scores. Review the `assertionResults` array in the runner output to pinpoint which audits failed; within each Lighthouse report, the `details.items` array provides the specific element selectors and violation context required for rapid remediation.
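As a sketch of that triage step, the snippet below filters a sample payload shaped like LHCI's `.lighthouseci/assertion-results.json` output (the field names follow that convention but should be treated as assumptions, and the sample data is fabricated for illustration):

```python
import json

# Fabricated sample mirroring the shape of .lighthouseci/assertion-results.json
sample = """[
  {"auditId": "color-contrast", "level": "error", "passed": false,
   "expected": 0, "actual": 3, "url": "https://staging.app.internal/"},
  {"auditId": "aria-allowed-role", "level": "warn", "passed": false,
   "expected": 0, "actual": 1, "url": "https://staging.app.internal/"}
]"""

results = json.loads(sample)

# Only error-level failures block the pipeline; surface those first.
blocking = [r for r in results if not r["passed"] and r["level"] == "error"]
for r in blocking:
    print(f"{r['auditId']}: expected {r['expected']}, got {r['actual']} ({r['url']})")
```

The `warn`-level failure is logged but excluded from the blocking set, mirroring how the assertion levels gate the pipeline.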
CI/CD Pipeline Gating & Workflow Integration
Integrating Lighthouse CI requires a dedicated workflow step. Use `ci.upload.serverBaseUrl` to persist historical score tracking, enabling trend analysis across deployments.
For SPAs, pass multiple endpoints via repeated `--collect.url` flags. Lighthouse CI iterates through each route and applies the same assertion matrix consistently.
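The same routes can also live in the config file; a minimal sketch with hypothetical staging routes:

```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3,
      "url": [
        "https://staging.app.internal/",
        "https://staging.app.internal/checkout",
        "https://staging.app.internal/account"
      ]
    }
  }
}
```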
```yaml
jobs:
  lhci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx lhci autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
          LHCI_SERVER_BASE_URL: ${{ secrets.LHCI_SERVER_URL }}
```
This workflow executes `lhci autorun` and attaches assertion results directly to the pull request. For authenticated routes, orchestrate execution alongside Playwright Accessibility Plugin Integration to capture protected DOM snapshots before handing off to Lighthouse.
Troubleshooting Flaky Scores & Cache Management
CI environments introduce execution artifacts that skew baselines. For pages with heavy hydration, raise `ci.collect.maxWaitForLoad` above its 45000 ms default.
Docker runners require explicit Chrome sandbox overrides:
```shell
lhci collect --settings.chromeFlags="--no-sandbox --disable-dev-shm-usage"
```
Validate environment compatibility using `lhci healthcheck`, which verifies Node.js and Chrome binary alignment. Clear `~/.config/lighthouse` and `node_modules/.cache` between runs; stale snapshots produce false-positive violations.
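As a sketch, both safeguards could be expressed as workflow steps preceding `lhci autorun` (adapt the cache paths to your runner image):

```yaml
      - run: rm -rf ~/.config/lighthouse node_modules/.cache
      - run: npx lhci healthcheck --fatal   # non-zero exit if Node/Chrome checks fail
```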
Override static configuration for branch-specific testing:
```shell
lhci assert --preset=lighthouse:recommended --assertions.categories:accessibility=error --assertions.categories:accessibility.minScore=0.95
```
This CLI override enforces stricter thresholds on feature branches while maintaining the standard gates on main.
Common Implementation Pitfalls
- Single-Run Volatility: Setting `numberOfRuns: 1` invites frequent false-positive CI failures due to network jitter and shared-runner CPU contention.
- Form Factor Mismatch: Running mobile audits in a desktop CI runner skews viewport-dependent accessibility audits like tap target sizing.
- Over-Constrained Gates: Marking every audit as `error` blocks deployments for minor, non-blocking violations. Reserve `error` for critical WCAG failures.
- Stale Cache Accumulation: Failing to purge the Lighthouse cache in ephemeral CI containers leads to outdated DOM snapshots and inaccurate baseline comparisons.
Frequently Asked Questions
How do I handle dynamic content that Lighthouse CI misses during collection?
Use `ci.collect.startServerCommand` to spin up a local dev server. Combine it with Playwright to trigger JavaScript hydration before Lighthouse captures the DOM snapshot.
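A minimal sketch, assuming a typical npm dev server (the command, ready pattern, and port are placeholders for your project):

```json
{
  "ci": {
    "collect": {
      "startServerCommand": "npm run serve",
      "startServerReadyPattern": "Listening on",
      "url": ["http://localhost:8080/"]
    }
  }
}
```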
What is the recommended numberOfRuns for stable accessibility baselines?
Set `numberOfRuns` to 3 or 5. Lighthouse CI selects the median run across iterations, which effectively filters out the transient network and CPU spikes that cause flaky thresholds.
Can I bypass CI gates for minor accessibility regressions?
Yes. Configure specific audits as `warn` instead of `error` in `lighthouserc.json`. Use branch-specific overrides with `lhci assert --assertions.<audit>=warn` to allow non-blocking merges while still tracking regressions.