I have been working on something with a similar goal: https://github.com/livetemplate/tinkerdown
Always Show, then Ask.
Add a video or a live demo; there's still too much friction in this README.
FabianCarbonara 4 hours ago [-]
and I meant to say: tinkerdown looks pretty cool!
FabianCarbonara 6 hours ago [-]
[dead]
pbkhrv 7 hours ago [-]
Very cool. I'm imagining using this with Claude Code, letting it wire this up to MCP or to CLI commands somehow, and using that whole system as an interactive dashboard for administering a Kubernetes cluster or something like that. The hypothetical first feature request would be the ability to "freeze" one of these UI snippets and save it as some sort of "view" that I can access later. Use case: the agent happens to build a particularly convenient way to make a bunch of kubectl calls, parse the results, and present them in some interactive way, and I'd like to reuse that same widget later without explaining/iterating on it again.
FabianCarbonara 6 hours ago [-]
Exactly this!
Right now this uses React for the web, but I could also see it in the terminal via Ink.
And I love the "freeze" idea — maybe then you could even share the mini app.
mncharity 2 hours ago [-]
Brainstorming, perhaps `<<named-block-code-transclusion>>`? It goes against the grain of "eval() line-by-line", even if it's handled ASAP. But it might relax the order constraint on codegen. Especially if the UI gets complex, or rendered on a "pane off to the side".
joelres 7 hours ago [-]
I quite like this! I've been incrementally building similar tooling for a project I've been working on, and I really appreciate the ideas here.
I think the key decision for someone implementing a flexible UI system like this is the required level of expressiveness. To me, the chief problem with having agents build custom HTML pages (as another comment suggested) is that it's far too unconstrained. I've been working with a system of pre-registered blocks and callbacks that are very constrained. I quite like this as a middle ground, though it may still be too dynamic for my use case. Will explore a bit more!
FabianCarbonara 7 hours ago [-]
Thanks! Really interesting to hear you're working on something similar.
You're right that the level of expressiveness is the key design decision. There's a real spectrum:
- pre-registered blocks (safe, predictable)
- code execution with a component library (middle ground)
- full arbitrary code (maximum flexibility).
My approach can slide along that spectrum: you could constrain the agent to only use a specific set of pre-imported components rather than writing arbitrary JSX. The mount() primitive and data flow patterns still work the same way; you just limit what the LLM is allowed to render.
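As a rough sketch of what that constraint could look like (hypothetical: the catalog module, spec shape, and renderSpec helper here are made up for illustration, not the actual API; only mount() is from the article):

```tsx
// The agent emits a spec naming a catalog component instead of raw JSX;
// anything outside the vetted catalog is rejected before rendering.
import { Table, Chart, Button } from "./catalog"; // pre-imported, audited components (hypothetical module)

const allowed = { Table, Chart, Button } as const;
type AllowedName = keyof typeof allowed;

function renderSpec(spec: { component: AllowedName; props: Record<string, unknown> }) {
  const Component = allowed[spec.component];
  if (!Component) throw new Error(`Component not in catalog: ${String(spec.component)}`);
  return mount(<Component {...(spec.props as any)} />); // mount() as described in the article
}
```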
Would love to hear what you learn if you explore it!
joelres 6 hours ago [-]
Will do! I'm currently using a JSON DSL, and I wonder whether there's a best-choice format that's both at the right level of expressiveness and easy enough for the LLM to generate validly. I do think markdown has the advantage of being trivial for LLMs, but my current JSON-blocks strategy might be better for more complex data... will play around.
mncharity 2 hours ago [-]
Here's[1] the rest of the prompt which begins the video.
[1] https://github.com/FabianKuebler/fenced/blob/main/packages/l...
I will say I came upon this same design pattern for turning all my chats into semantic Markdown that stays backward compatible with plain Markdown. I did:
````assistant
<Short Summary title>
gemini/3.1-pro - 20260319T050611Z
Response from the assistant
````
with a similar block for tool calling. This can be parsed semantically as part of the conversation, but it also renders as a regular Markdown code block when needed.
It helps me keep AI chats on the filesystem as valid documents, while adding some semantic meaning on top of Markdown.
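For illustration, a parser for this layout could be quite small (a sketch only; the field order of title line, then model and timestamp, then body is assumed from the example above):

```ts
interface ChatTurn {
  role: string;      // fence info string, e.g. "assistant"
  title: string;     // short summary title
  model: string;     // e.g. "gemini/3.1-pro"
  timestamp: string; // e.g. "20260319T050611Z"
  body: string;      // the response text
}

// Pull out every four-backtick block whose info string is the role.
function parseTurns(markdown: string): ChatTurn[] {
  const fence = /^````(\w+)\n([\s\S]*?)\n````$/gm;
  return [...markdown.matchAll(fence)].map(([, role, content]) => {
    const [title, meta, ...body] = content.split("\n");
    const [model, timestamp] = meta.split(" - ");
    return { role, title, model, timestamp, body: body.join("\n") };
  });
}
```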
Markdown UI (https://markdown-ui.com/) and my approach share the "markdown as the medium" insight, but they're fundamentally different bets:
Markdown UI is declarative — you embed predefined widget types in markdown. The LLM picks from a catalog. It's clean and safe, but limited to what the catalog supports.
My approach is code-based — the LLM writes executable TypeScript in markdown code fences, which runs on the server and can render any React UI. It also has server-side state, so the UI can do forms, callbacks, and streaming data — not just display widgets.
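To make the difference concrete, a generated document in the code-based style might look something like this (illustrative only; getDiskUsage and BarChart are made-up names, not the actual API):

````markdown
Here's current disk usage across the cluster:

```tsx
const usage = await getDiskUsage();   // hypothetical server-side helper
mount(<BarChart data={usage} />);     // mounts a live React UI inline
```
````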
threatofrain 8 hours ago [-]
I'd much prefer MDX.
theturtletalks 8 hours ago [-]
OpenUI and JSON-render are some other players in this space.
I'm building an agentic commerce chat that uses MCP-UI and want to start using these new implementations instead of MCP-UI, but I can't wrap my head around how button onClick events and actions work. MCP-UI allows onClick events to work since you're "hard-coding" the UI from the get-go, versus relying on AI to generate nondeterministic JSON and turning that into a UI that might be different on every use.
FabianCarbonara 8 hours ago [-]
In my approach, callbacks are first-class. The agent defines server-side functions and passes them to the UI:
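Roughly this shape (a sketch, not the exact code from the article; mount() and reactive state are as described in the thread, while reactive(), exec(), and the kubectl example are assumptions for illustration):

```tsx
// Reactive server-side state: mutations are pushed to the mounted UI.
const state = reactive({ pods: [] as string[] });

// A server-side callback: runs on the server when the button is clicked,
// with no new LLM turn involved.
async function refreshPods() {
  const out = await exec("kubectl get pods -o json"); // exec() is a hypothetical helper
  state.pods = JSON.parse(out).items.map((p: any) => p.metadata.name);
}

mount(
  <div>
    <button onClick={refreshPods}>Refresh pods</button>
    <ul>{state.pods.map((name) => <li key={name}>{name}</li>)}</ul>
  </div>
);
```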
When the user clicks the button, it invokes the server-side function. The callback fetches fresh data, updates state via reactive proxies, and the UI reflects it — all without triggering a new LLM turn.
So the UI is generated dynamically by the LLM, but the interactions are real server-side code, not just display. Forms work the same way — "await form.result" pauses execution until the user submits.
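For example, under the same assumptions (the Form component and field shape are made up; the await-until-submit behavior is as described):

```tsx
const form = mount(<Form fields={[{ name: "namespace", label: "Namespace" }]} />);
const { namespace } = await form.result; // pauses here until the user submits
// ...execution resumes with the submitted value
```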
The article has a full walkthrough of the four data flow patterns (forms, live updates, streaming data, callbacks) with demos.
smahs 7 hours ago [-]
In an agentic loop, the model can keep calling multiple tools for each specialized artifact (like how the Claude web app renders HTML/SVG artifacts within a single turn). Models are already trained for this (I tested this approach with Qwen 3.5 27B and it was able to follow Claude's lead from the previous turns).
Lws803 6 hours ago [-]
I see potential to take over Notion's / Obsidian's business here. Imagine highly customizable notebooks that people can generate on the fly, with exactly the kind of UI they need, rather than Notion's fixed blocks.
rthrfrd 5 hours ago [-]
That's what I'm building, along with the invisible unified data model underneath that's needed to tie everything together. Always glad for feedback; reach out via my profile if it sounds interesting!
iusethemouse 8 hours ago [-]
There's definitely a lot of merit to this idea, and the GIFs in the article look impressive. My strong opinion is that there's a lot more to (good) UIs than what an LLM will ever be able to bring (happy to be proven wrong in a few years...), but for utilitarian and on-the-fly UIs there's definitely a lot of promise.
4ndrewl 7 hours ago [-]
The bots that read the instruction and yet add the emoji to the _beginning_ of the PR title though. Even bigger red flag I guess?
nthypes 5 hours ago [-]
Why not MDX?
FabianCarbonara 4 hours ago [-]
The goal isn't really a better markdown format — it's bringing code execution and generative UI together. The code fences run on the server: calling APIs, processing data, doing agentic work. And they can also mount reactive UIs with full data flow between client, server, and LLM.
MDX is a compile-time format for static content. This is a runtime protocol where the LLM writes code that executes as it streams, and the UIs it creates stay connected to the server.
dominotw 6 hours ago [-]
Would be nice if it wasn't just UI but other forms too, like voice narration, sounds, etc.
wangmander 7 hours ago [-]
[flagged]
Retr0id 7 hours ago [-]
What's the going rate these days for decade-old HN accounts to repurpose as AI spambots?
It embodies the whole idea of having data, code, and presentation in the same place.
If you're open to contributions, I already have an idea for a cascading-styles system in mind.
Maybe one day someone will invent a rounder wheel.
The wheel is what I would call passé.
Soon we'll be optimizing for minimizing the sides of a wheel (triangles are not the final form here...) /s