Agent Experience (AX) as First-Class Concern
Context
The portfolio exists in an era where content is consumed not only by humans via browsers but increasingly by AI agents, LLMs, and automated systems that parse web content to answer queries about professionals. LinkedIn profiles, GitHub READMEs, and personal sites are being ingested by AI models to generate summaries, evaluate candidates, and power conversational interfaces. A portfolio optimized exclusively for human consumption misses this growing channel entirely. The question is not whether AI agents will read the portfolio — they already do — but whether the portfolio is structured to give them high-fidelity, unambiguous data rather than forcing them to infer from loosely structured HTML.
Decision
Implement a comprehensive Agent Experience (AX) layer alongside the human-facing UI. It comprises:
- llms.txt and llms-full.txt at the site root for LLM discovery, following the emerging llms.txt standard
- an ai-plugin.json manifest conforming to the OpenAI plugin specification
- a full OpenAPI spec (openapi.json) documenting every API endpoint with schemas, parameters, and example responses
- an /api/ai-profile endpoint returning a structured JSON payload optimized for LLM consumption: all career data, skills, and metadata in a single request (a sketch follows this list)
- enriched JSON-LD Schema.org markup on every page (Person, Organization, EducationalOccupationalCredential, CollectionPage, ItemList, BreadcrumbList, TechArticle)
- AI-specific meta tags in the document head
- robots.txt rules that explicitly allow 12 known AI crawlers (GPTBot, Claude-Web, PerplexityBot, etc.)
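A minimal sketch of what the ai-profile route can look like, assuming a Next.js App Router codebase; the file path, field names, and all placeholder values are illustrative, not the actual implementation:

```typescript
// app/api/ai-profile/route.ts — hypothetical shape of the route handler.
import { NextResponse } from "next/server";

// Illustrative payload; the real schema carries the full career dataset.
interface AiProfile {
  name: string;
  headline: string;
  skills: string[];
  experience: Array<{ company: string; role: string; start: string; end?: string }>;
  generatedAt: string; // ISO timestamp so agents can judge freshness
}

export async function GET() {
  const profile: AiProfile = {
    name: "Lucio Duran",
    headline: "Software Engineer", // placeholder
    skills: ["TypeScript", "Next.js"], // placeholders
    experience: [],
    generatedAt: new Date().toISOString(),
  };
  // One request returns everything an agent needs; caching keeps crawler traffic cheap.
  return NextResponse.json(profile, {
    headers: { "Cache-Control": "public, max-age=3600" },
  });
}
```

The design point is the single round trip: an agent gets the complete profile without crawling and reassembling multiple HTML pages.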
Consequences
Positive: The portfolio is machine-readable at multiple levels of granularity, from the full structured data dump (the ai-profile endpoint) to page-level semantic markup (JSON-LD) to discovery files (llms.txt). AI agents querying about Lucio Duran receive precise, structured data rather than scraped HTML fragments. The OpenAPI spec enables potential integration with AI assistants that can query the API directly. Search engines that consume JSON-LD gain richer knowledge graph entries; a minimal example of the Person markup is sketched below.

Negative: Maintaining AX artifacts in parallel with the UI adds a documentation burden. Every schema change in the API requires updating openapi.json, ai-plugin.json, and the ai-profile endpoint. The llms.txt standard is not yet formalized, so the format may require future revisions. The additional files add roughly 15 KB of static assets. On balance this is a forward-looking investment: the cost of maintaining these artifacts is low compared to the visibility gain in an AI-mediated information landscape.
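For concreteness, the Person markup mentioned above can be as small as the following sketch. It assumes the schema-dts typings (plain object literals work just as well), and every value here is a placeholder rather than the portfolio's actual data:

```typescript
// Hypothetical builder for the schema.org Person node embedded on each page.
import type { Person, WithContext } from "schema-dts";

export function personJsonLd(): WithContext<Person> {
  return {
    "@context": "https://schema.org",
    "@type": "Person",
    name: "Lucio Duran",
    jobTitle: "Software Engineer", // placeholder
    url: "https://example.com", // placeholder
    sameAs: [
      "https://github.com/example", // placeholder
      "https://www.linkedin.com/in/example", // placeholder
    ],
    knowsAbout: ["TypeScript", "Next.js"], // placeholders
  };
}

// Rendered into the document head as:
// <script type="application/ld+json">{JSON.stringify(personJsonLd())}</script>
```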
Predictions at Decision Time
Predicted that AI agent consumption of portfolio content would increase significantly within 12 months, making the AX investment valuable. Expected the llms.txt standard to gain broader adoption. Assumed the OpenAI plugin specification would remain relevant as a discovery mechanism. Predicted the maintenance burden of 4 AX artifacts would be manageable for a single developer. Assumed AI agents would prefer structured data over scraped HTML when both are available.
Measured Outcomes
Too early for definitive measurement — the AX layer has been live for less than two weeks. Early signals: the /api/ai-profile endpoint is receiving requests from non-browser user agents, confirming that AI systems are discovering and consuming the structured data. The llms.txt standard has gained traction in the developer community (multiple high-profile sites have adopted it). The OpenAI plugin specification has not gained the universal adoption initially anticipated — most LLM providers have developed their own discovery mechanisms. The JSON-LD markup has improved Google search result presentation with richer snippets.
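The non-browser traffic mentioned above is straightforward to attribute with a user-agent check. A hypothetical classifier, reusing a few of the crawler tokens from the robots.txt allowlist (the token list here is abbreviated):

```typescript
// Hypothetical request classifier: tags hits from known AI crawlers so
// traffic to /api/ai-profile can be attributed to agents vs. browsers.
const AI_CRAWLER_TOKENS = ["GPTBot", "Claude-Web", "PerplexityBot"];

export function classifyAgent(userAgent: string | null): "ai-crawler" | "other" {
  if (!userAgent) return "other";
  return AI_CRAWLER_TOKENS.some((token) => userAgent.includes(token))
    ? "ai-crawler"
    : "other";
}

// Usage in a route handler:
//   const kind = classifyAgent(request.headers.get("user-agent"));
//   if (kind === "ai-crawler") console.log("AX hit:", request.url);
```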
Unknowns at Decision Time
The fundamental unknown: which AI consumption channel will dominate. At decision time, it was unclear whether AI agents would prefer llms.txt (text-based, simple), OpenAPI (structured, queryable), JSON-LD (embedded, passive), or direct API endpoints (programmatic, complete). The multi-layer strategy was a hedge against this uncertainty. Also unknown: whether personal portfolios would become a meaningful AI consumption category or remain a niche use case. The broader unknown: how quickly AI systems will shift from web scraping to structured API consumption as their primary data acquisition method.
Reversibility Classification
Each AX artifact is independently removable. Deleting llms.txt, ai-plugin.json, or openapi.json has zero impact on the human-facing site. The /api/ai-profile endpoint can be deprecated by removing a single API route. JSON-LD removal requires editing page components but doesn't affect functionality. The AX layer is purely additive — it extends the site's reach without creating dependencies. Estimated removal effort: 2-3 hours for all artifacts.
Strongest Counter-Argument
The strongest counter-argument: premature optimization for an unproven channel. The time spent implementing 4 AX layers, 12 JSON-LD schemas, and 12 AI crawler policies could have been spent on content quality, which is the actual signal AI agents extract. A simpler approach, well-structured HTML with semantic headings and clean markup, provides 80% of the AI-readability benefit at 10% of the implementation cost. Many successful developers have excellent AI discoverability with zero AX-specific infrastructure. The counter-counter: the portfolio is explicitly positioned as a demonstration of technical capabilities, and the AX layer itself is a portfolio piece that demonstrates forward-thinking architecture.
Technical Context
- Must stay in sync with API schema (a single-schema mitigation is sketched after this list)
- llms.txt standard still evolving
- No authentication for AI endpoints
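One way to reduce the sync burden named in the first bullet: derive the endpoint's response validation and the openapi.json schema from a single definition. A sketch assuming Zod and the zod-to-json-schema package; the schema is abbreviated and the names are illustrative:

```typescript
// Hypothetical single source of truth for the ai-profile payload. The same
// Zod schema validates the endpoint's response and generates the JSON Schema
// embedded in openapi.json, so the two artifacts cannot silently drift.
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

export const AiProfileSchema = z.object({
  name: z.string(),
  headline: z.string(),
  skills: z.array(z.string()),
  generatedAt: z.string().datetime(),
});

export type AiProfile = z.infer<typeof AiProfileSchema>;

// Written into openapi.json's components.schemas at build time:
export const aiProfileJsonSchema = zodToJsonSchema(AiProfileSchema, "AiProfile");
```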