ax-audit

Lighthouse for AI Agents


Audit any website's AI Agent Experience (AX) readiness in seconds. ax-audit runs 9 weighted checks — from llms.txt compliance to structured data, AI crawler configuration, MCP server config, and Agent Card validation — and produces a single 0–100 score with actionable findings. Supports batch audits and HTML reports.


What it checks

LLMs.txt

15%

/llms.txt presence and llmstxt.org spec compliance

  • /llms.txt file exists and returns 200
  • H1 heading present per llmstxt.org spec
  • Description and URL sections defined
  • /llms-full.txt extended version (bonus)
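A passing /llms.txt might look like the following (contents illustrative): an H1, a blockquote description, and link sections per the llmstxt.org layout.

```markdown
# Example Site

> A short description of what this site is about and who it serves.

## Docs

- [Getting Started](https://example.com/docs/start): Setup guide
- [API Reference](https://example.com/docs/api): Endpoint documentation
```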

Robots.txt

15%

AI crawler configuration (GPTBot, ClaudeBot, Google-Extended, etc.)

  • 6 core AI crawlers explicitly configured
  • Additional AI crawlers (Bytespider, CCBot, etc.)
  • Sitemap directive present
  • No blanket Disallow: / for AI bots
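A robots.txt that satisfies these checks could explicitly allow AI crawlers and declare a sitemap. A minimal sketch (rules and crawler selection illustrative):

```txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://example.com/sitemap.xml
```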

Structured Data

13%

JSON-LD on homepage (schema.org, @graph, entity types)

  • JSON-LD script tag present in <head>
  • @context set to schema.org
  • @graph array with multiple entities
  • Entity types: Person/Organization, WebSite, WebPage
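A minimal JSON-LD block in <head> that would satisfy this check might look like this (entities and names illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    { "@type": "Organization", "name": "Example Co", "url": "https://example.com" },
    { "@type": "WebSite", "name": "Example", "url": "https://example.com" },
    { "@type": "WebPage", "name": "Home", "url": "https://example.com/" }
  ]
}
</script>
```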

HTTP Headers

13%

Security headers + AI discovery Link headers + CORS

  • Strict-Transport-Security (HSTS)
  • X-Content-Type-Options: nosniff
  • Link: </llms.txt>; rel="ai-agent"
  • CORS Access-Control-Allow-Origin
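Responses that pass this check might carry headers like the following (values illustrative):

```txt
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
Link: </llms.txt>; rel="ai-agent"
Access-Control-Allow-Origin: *
```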

Agent Card

10%

/.well-known/agent.json A2A protocol compliance

  • /.well-known/agent.json exists and returns 200
  • name and description fields present
  • capabilities array defined
  • url and provider fields
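An agent.json covering the fields listed above might look like this sketch (field values illustrative; consult the A2A protocol spec for the authoritative schema):

```json
{
  "name": "Example Agent",
  "description": "What this agent does and for whom.",
  "url": "https://example.com",
  "provider": { "organization": "Example Co" },
  "capabilities": ["search", "summarize"]
}
```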

MCP

10%

/.well-known/mcp.json Model Context Protocol server config

  • /.well-known/mcp.json exists and returns 200
  • Valid JSON with server metadata
  • Capabilities and tools defined
  • Endpoint URL specified
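A shape that would satisfy these findings might look like the following (structure illustrative, not an authoritative MCP schema):

```json
{
  "name": "example-mcp-server",
  "version": "1.0.0",
  "endpoint": "https://example.com/mcp",
  "capabilities": { "tools": {} },
  "tools": [
    { "name": "search", "description": "Search the site" }
  ]
}
```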

Security.txt

8%

/.well-known/security.txt RFC 9116 compliance

  • /.well-known/security.txt exists
  • Contact field (required by RFC 9116)
  • Expires field with valid future date
  • Preferred-Languages field
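A security.txt covering the checked fields could be as short as this (contact address and dates illustrative):

```txt
Contact: mailto:security@example.com
Expires: 2027-01-01T00:00:00.000Z
Preferred-Languages: en, es
```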

Meta Tags

8%

AI meta tags (ai:*), rel="alternate" to llms.txt, identity links

  • <meta name="ai:site"> tag present
  • <link rel="alternate" href="/llms.txt">
  • <link rel="me"> identity links
  • <meta name="ai:purpose"> tag
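The tags this check looks for might appear in <head> like so (content values illustrative):

```html
<meta name="ai:site" content="Example Site">
<meta name="ai:purpose" content="Documentation for the Example API">
<link rel="alternate" type="text/plain" href="/llms.txt">
<link rel="me" href="https://github.com/example">
```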

OpenAPI

8%

/.well-known/openapi.json presence and schema validity

  • /.well-known/openapi.json exists
  • Valid OpenAPI 3.x schema structure
  • Info object with title and version
  • At least one path defined
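A minimal openapi.json satisfying these findings might look like this (API surface illustrative):

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Example API", "version": "1.0.0" },
  "paths": {
    "/search": {
      "get": {
        "summary": "Search the site",
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```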

Quick Start

$ npx ax-audit https://your-site.com


JSON Output

Use the --json flag to get machine-readable output for programmatic integration.

$ npx ax-audit https://lucioduran.com --json
{
  "url": "https://lucioduran.com",
  "overallScore": 98,
  "grade": { "label": "Excellent", "color": "green", "min": 90 },
  "timestamp": "2026-02-27T10:30:00.000Z",
  "duration": 1842,
  "results": [
    {
      "id": "llms-txt",
      "name": "LLMs.txt",
      "score": 100,
      "findings": [
        { "status": "pass", "message": "/llms.txt exists" },
        { "status": "pass", "message": "H1 heading found" },
        { "status": "pass", "message": "/llms-full.txt available" }
      ]
    },
    {
      "id": "robots-txt",
      "name": "Robots.txt",
      "score": 100,
      "findings": [
        { "status": "pass", "message": "6 core AI crawlers configured" },
        { "status": "pass", "message": "31 known crawlers have rules" }
      ]
    }
  ]
}

CLI Reference

All available command-line flags and options.

--json              Output results as JSON to stdout
--output <format>   Output format: terminal, json, html
--checks <ids>      Run only the specified checks (comma-separated IDs)
--timeout <ms>      HTTP request timeout in milliseconds (default: 10000)
--verbose           Show detailed request and check execution logs
--only-failures     Show only checks/findings with failures or warnings
--help, -h          Show help information
--version, -v       Print version number and exit

Programmatic Usage

Import ax-audit as a library for custom integrations and automation workflows.

TypeScript
import { audit, batchAudit } from 'ax-audit';

// Single URL
const result = await audit({
  url: 'https://your-site.com',
  checks: ['llms-txt', 'robots-txt', 'structured-data'],
  timeout: 15000,
});

console.log(result.overallScore);  // 98
console.log(result.grade.label);   // "Excellent"

for (const check of result.results) {
  console.log(`${check.name}: ${check.score}/100`);
}

// Batch audit
const batch = await batchAudit(
  ['https://site-a.com', 'https://site-b.com'],
  { timeout: 10000 }
);
console.log(batch.summary.averageScore);

Scoring

Excellent   90 – 100
Good        70 – 89
Fair        50 – 69
Poor         0 – 49
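The grade bands above map a 0–100 score to a label by lower bound, as in this sketch (the `gradeFor` helper is illustrative, not ax-audit's exported API):

```typescript
// Grade bands mirroring the scoring table: a score falls into the
// first band whose lower bound it meets.
type Grade = { label: string; min: number };

const GRADES: Grade[] = [
  { label: 'Excellent', min: 90 },
  { label: 'Good', min: 70 },
  { label: 'Fair', min: 50 },
  { label: 'Poor', min: 0 },
];

function gradeFor(score: number): Grade {
  // GRADES is ordered by descending min, so the first match wins.
  return GRADES.find((g) => score >= g.min) ?? GRADES[GRADES.length - 1];
}
```

This matches the JSON output shown earlier, where a score of 98 reports `{ "label": "Excellent", ... "min": 90 }`.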

CI/CD Integration

Add ax-audit to your pipeline to enforce AI readiness standards on every deploy.

name: AX Audit
on: [push]

jobs:
  ax-audit:
    runs-on: ubuntu-latest
    steps:
      - name: Run AX Audit
        run: npx ax-audit ${{ secrets.SITE_URL }} --json > report.json

      - name: Check score threshold
        run: |
          SCORE=$(jq .overallScore report.json)
          echo "AX Score: $SCORE/100"
          [ "$SCORE" -ge 70 ] || exit 1

Generate Missing Files

ax-init is the companion CLI to ax-audit. It generates the AX files your site is missing — llms.txt, robots.txt, agent.json, security.txt, structured data, and AI meta tags — through an interactive prompt.

$ npx ax-init
  • llms.txt
  • robots.txt
  • .well-known/agent.json
  • .well-known/security.txt
  • JSON-LD
  • AI Meta Tags

Tech Stack

TypeScript · 2 runtime deps · Node.js 18+ · 97 tests · ESLint + Prettier · Apache 2.0