The Lexington Times

Free, AI-powered local news for Lexington, Kentucky

How we make these articles

Last updated: 2026-05-01

feeds.lexingtonky.news is the machine-readable surface of The Lexington Times. It exists so researchers, AI agents, and downstream republishers can pick up Lexington civic news in structured form (HTML, JSON, RSS, Markdown, llms.txt). The human-edited edition lives at lexingtonky.news — that is where most original reporting is filed and edited. This page explains what is different on this surface and what is not.

What this site does

Every article on this surface starts as a public-record document or a public news source. We poll a small fixed set of feeds — Lexington-Fayette Urban County Government meeting agendas and minutes, Kentucky State Police press releases, Kentucky Office of the Attorney General notices, KYTC traffic alerts, and a handful of selected local news RSS feeds — and re-summarise each item into a short AP-style brief. The full source list and live poll status are visible at /sources/status.

The rewrite pipeline is roughly: poll → de-duplicate against existing slugs → fetch full text or PDF → ask Claude to produce a title + ≤200-character summary + paraphrased body + citation list → write the result to disk as JSON → render the HTML, JSON, RSS, and Markdown variants from that JSON. Articles are stored as plain files on disk; there is no database.

Which model and why

Most short briefs are rewritten by Claude Haiku (Anthropic) because Haiku is fast, cheap, and accurate enough for short-form summarisation of source material we have already fetched and trust. Longer pieces — meeting recaps and Civic Memory ("Ask Lex") answers — are produced by Claude Sonnet. Audio segments aired on the LexBot livestream are transcribed via Whisper and lightly cleaned by Haiku before being indexed here.

We chose AI summarisation deliberately: Lexington has more public-meeting and public-records output than any volunteer-run civic site can hand-edit. The alternative is the status quo — most of these documents go uncovered. AI-assisted summarisation closes that gap while preserving the chain back to the original source.

What is and isn't fact-checked

Every article carries a visible source link to the document it was summarised from. The intent is that any claim in the article can be verified against that source in seconds.

We do not, on this surface, run a separate human fact-check pass on every brief before it is published — the volume makes that infeasible. We do run automated dedupe, source-validation, and "did the rewrite invent something not in the source" checks during ingest, and the more substantive analysis pieces are hand-edited on lexingtonky.news before being syndicated here. If you find a brief that misrepresents its source, please flag it — see corrections.
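One way to picture the "did the rewrite invent something not in the source" check is a comparison of tokens that should never appear from nowhere — numbers being the clearest case. The sketch below is an assumption about how such a check could work, not a description of the site's actual validator:

```python
import re


def invented_numbers(source: str, rewrite: str) -> set[str]:
    """Return numeric tokens that appear in the rewrite but never in the source.

    A crude stand-in for an ingest-time hallucination check: a brief that cites
    a figure its source document does not contain gets flagged for review.
    """
    nums = lambda text: set(re.findall(r"\d[\d,.]*\d|\d", text))
    return nums(rewrite) - nums(source)
```

A real check would also cover names, dates, and quoted phrases, but numbers alone catch a surprising share of summarisation drift.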

Briefs that are produced from a recurring scheduled livestream segment (trivia, restaurant-of-the-day, daily briefings, listener shoutouts) are tagged with noindex so they do not appear in general web search results. They remain reachable via direct URL and via /llms.txt for AI agents that consume them as transcript data.
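The tagging rule above amounts to a single branch at render time. A minimal sketch, assuming a hypothetical tag list and a per-article `tags` field (both names are illustrative):

```python
# Hypothetical list of recurring-segment tags; the real list would live in site config.
NOINDEX_SEGMENTS = {"trivia", "restaurant-of-the-day", "daily-briefing", "listener-shoutout"}


def robots_meta(article: dict) -> str:
    """Emit a robots meta tag: noindex for recurring livestream segments,
    so they stay out of general web search but remain reachable by direct URL
    and via /llms.txt."""
    if NOINDEX_SEGMENTS & set(article.get("tags", [])):
        return '<meta name="robots" content="noindex">'
    return '<meta name="robots" content="index,follow">'
```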

What is not generated by AI

Disclosure on every article

Every article rendered here carries a per-article disclosure block at the bottom of the body, naming the model that produced it and pointing back to the upstream source. The article JSON also exposes aiGenerated, aiModel, and aiDisclosure fields so downstream consumers can surface the provenance in their own UI.
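In the article JSON, those provenance fields might look like this (all values here are illustrative, not a real article):

```json
{
  "slug": "council-approves-budget",
  "aiGenerated": true,
  "aiModel": "claude-haiku",
  "aiDisclosure": "This brief was generated by Claude Haiku from the linked source document.",
  "source": {
    "url": "https://example.org/agenda.pdf",
    "scrapedAt": "2026-04-30T14:05:00Z"
  }
}
```

A downstream consumer can render `aiDisclosure` verbatim, or build its own provenance UI from the boolean and model fields.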

Hand-edits

Article files on disk are mutable. When an article is edited by a human after publication — typically to fix a factual error, clarify ambiguous phrasing, or remove a paraphrase that drifted too close to source wording — the article gets an edited: true badge in the listing and a dateModified timestamp in its structured-data graph. Original publication dates are not changed.


How publishedAt is set

For each scraped item, publishedAt is the upstream source's published timestamp when the source provides one (RSS pubDate, agenda meeting time, press-release date, etc.). When the source does not expose a publication time at all, we fall back to scrape time — which the article also records separately as source.scrapedAt, so downstream consumers can detect when the two are equal and discount apparent freshness accordingly. We do not bump publishedAt to make stale items look fresh.

Corrections

If you spot an error, email editor@lexingtonky.news with the article URL and the issue. Corrections, retractions, and substantive edits are logged at /corrections.

Contact