Turning LlamaBot into a full-blown MCP client
Why do I want LlamaBot to become an MCP client, similar to Claude and ChatGPT?
I don’t want to rely on Claude and ChatGPT to be my only access to the MCP server world. (I want control over the full stack).
I want access to the client-side agentic workflows (ReAct, CodeAct, etc.).
I want other people to have access to these client-side agentic workflows, so that Claude & OpenAI aren’t black-box magic. (I assume they’re mostly implementing similar ReAct/CodeAct-style agent workflows out of the box.)
Creating TypeScript/React front-end for LlamaBot
I decided I want conversation threads for LlamaBot, so we can have multiple, unique persistent conversations with our agent (similar to other Chat agents like ChatGPT and Claude Sonnet).
While a basic HTML front-end made sense for our initial project, turning this into a full-blown MCP client (multiple conversation threads, previewing LlamaBot projects, etc.) calls for something that puts our project on stable ground. Hence, React with TypeScript.
I actually haven’t personally used TypeScript before, but I’m a big fan of React (my first startup’s front-end relied heavily on React and React Native).
React is amazing because of its hooks and state-update propagation; once you understand it, it makes for 10X cleaner, more reusable front-end code.
Typed languages in general are nice because they provide compile-time checking that catches bugs before runtime. Although I haven’t used TypeScript extensively, I’m excited to bring it into LlamaBot, because it will lead to a more stable user experience and let us build some amazing functionality into the front-end.
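As a tiny illustration of what that compile-time checking buys us (the message and thread shapes here are just my sketch, not LlamaBot’s actual schema):

```ts
// Hypothetical shapes for LlamaBot's chat front-end -- illustrative only,
// not the real LlamaBot schema.
interface ChatMessage {
  id: string;
  role: "user" | "assistant" | "tool";
  content: string;
}

interface ConversationThread {
  id: string;
  title: string;
  messages: ChatMessage[];
}

const thread: ConversationThread = {
  id: "thread-1",
  title: "First chat",
  messages: [{ id: "m1", role: "user", content: "Build me a landing page" }],
};

// A typo like `thread.mesages` or `role: "robot"` is a compile-time error,
// instead of an `undefined` quietly showing up in the UI at runtime.
console.log(thread.messages.length);
```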
Being a lazy vibe-coder, I decided to let Cursor Agent take the first stab at the entire setup of creating our front-end with TypeScript and React. Let’s see how it does!
^ Cursor banging out a TypeScript/React front-end effortlessly.

Cursor and Claude Sonnet 4 coming up with a banger design for the interface.
It wouldn’t be LlamaBot without our beloved mascot staring down the user with his piercing gaze.

Let’s display our list of Agents that the user can select & run (pulled from langgraph.json).
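Roughly what that looks like on the React side. The /api/agents endpoint and the AgentInfo shape below are my shorthand for “the backend exposes whatever it parsed out of langgraph.json”; the real route and fields may differ:

```tsx
import { useEffect, useState } from "react";

// Assumed shape: each graph registered in langgraph.json becomes one agent entry.
interface AgentInfo {
  name: string; // e.g. the graph key from langgraph.json
  description?: string;
}

// Hypothetical endpoint on the FastAPI backend that reads langgraph.json
// and returns the registered graphs.
async function fetchAgents(): Promise<AgentInfo[]> {
  const res = await fetch("/api/agents");
  if (!res.ok) throw new Error(`Failed to load agents: ${res.status}`);
  return res.json();
}

export function AgentPicker({ onSelect }: { onSelect: (name: string) => void }) {
  const [agents, setAgents] = useState<AgentInfo[]>([]);

  useEffect(() => {
    fetchAgents().then(setAgents).catch(console.error);
  }, []);

  return (
    <select onChange={(e) => onSelect(e.target.value)} defaultValue="">
      <option value="" disabled>
        Select an agent
      </option>
      {agents.map((a) => (
        <option key={a.name} value={a.name}>
          {a.name}
        </option>
      ))}
    </select>
  );
}
```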

Adding front-end logic to detect tool calls and format them as messages:
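In sketch form: look at each message coming back from the agent and, when it carries tool calls, render it as a “the agent is doing X” card instead of a normal chat bubble. The field names below are assumptions about how the LangGraph messages get serialized over the wire:

```tsx
// Rough sketch of tool-call rendering. The exact wire format depends on how
// the backend serializes LangGraph messages; these fields are assumptions.
interface ToolCall {
  name: string; // e.g. "write_artifact"
  args: Record<string, unknown>;
}

interface AgentMessage {
  role: "user" | "assistant" | "tool";
  content: string;
  tool_calls?: ToolCall[];
}

function isToolCallMessage(msg: AgentMessage): boolean {
  return !!msg.tool_calls && msg.tool_calls.length > 0;
}

export function MessageBubble({ msg }: { msg: AgentMessage }) {
  if (isToolCallMessage(msg)) {
    // Render each tool call as a compact, code-styled line.
    return (
      <div className="tool-call">
        {msg.tool_calls!.map((tc, i) => (
          <code key={i}>
            {tc.name}({JSON.stringify(tc.args)})
          </code>
        ))}
      </div>
    );
  }
  // Plain assistant/user text falls back to a normal chat bubble.
  return <div className={`bubble ${msg.role}`}>{msg.content}</div>;
}
```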

Next step: getting LlamaBot to write these out as “artifacts”, similar to how Claude creates artifacts.

I created a new folder structure “artifacts” that can house individual projects.
From here, we can equip our agent to write directly to artifacts/<artifact_id>/page.html, artifacts/<artifact_id>/assets/script.js, and artifacts/<artifact_id>/assets/styles.css.
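Once those files exist, the front-end can preview an artifact by pointing an iframe at its page.html. This assumes we mount the artifacts/ directory as static files in the FastAPI app, which is an assumption on my part rather than something that’s wired up yet:

```tsx
// Rough sketch of an artifact preview pane. Assumes the backend serves the
// artifacts/ directory statically at /artifacts/ -- that mounting is assumed,
// not confirmed.
export function ArtifactPreview({ artifactId }: { artifactId: string }) {
  const src = `/artifacts/${artifactId}/page.html`;
  return (
    <iframe
      title={`artifact-${artifactId}`}
      src={src}
      sandbox="allow-scripts" // keep the generated JS isolated from the main app
      style={{ width: "100%", height: "100%", border: "none" }}
    />
  );
}
```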
We could also add a model.py file and a controller.py file, giving each artifact backend functionality for our front-end to interact with (maybe even the ability to trigger additional agent flows and display their output!).
One example of this would be a storybook-generator side project I’ve worked on previously, which generates “chapters” of text, then an associated audio recording of each chapter, and pictures to go along with it.
It was a very fun project that lived in its own FastAPI application and used LangGraph, but once we have artifacts properly working, we could have LlamaBot recreate it as an artifact!
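If and when an artifact does get its own controller.py endpoints, the front-end hook-up could be as simple as a fetch that kicks off the flow and displays whatever comes back. The /api/artifacts/&lt;id&gt;/run route here is made up purely for illustration:

```ts
// Hypothetical: trigger an artifact's backend flow and return its output.
// The route and response shape are assumptions for illustration only.
interface RunResult {
  output: string;
}

export async function runArtifactFlow(
  artifactId: string,
  input: Record<string, unknown>
): Promise<RunResult> {
  const res = await fetch(`/api/artifacts/${artifactId}/run`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  if (!res.ok) throw new Error(`Run failed: ${res.status}`);
  return res.json();
}

// Example usage:
// runArtifactFlow("storybook", { prompt: "a llama learns to code" })
//   .then((r) => console.log(r.output));
```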