Monday, 08/18/2025.
I’ve been steadily working towards this idea for the last 12 months. Back when I first started I knew this was technically feasible, even inevitable, but there were so many unknowns that I hadn’t worked out.
The first version was letting LLMs generate HTML/CSS/JavaScript code, in a fully formatted HTML document, like so:
<!DOCTYPE html> <html> ... (llm generated content) </html>
We save this LLM-generated code into a database table named “Pages”, then dump that raw HTML content into the response when a user requests the page, like so:
class PagesController < ApplicationController
  # GET /pages/1 or /pages/1.json
  def show
    @page = Page.find(params[:id])
    content = @page.render_content
    render inline: content.html_safe, layout: 'page'
  end
end
This was the easiest way to release a product without needing to worry about partitioning file systems for each user, etc. A friend and I ultimately wrapped this all into a Ruby on Rails app, launched it under the name “LlamaPress”, and marketed it as a webpage builder for people who want to build simple websites and webpages.
Using OpenAI’s free-credit program (we earn credits by opting into sharing our prompts with them), we offered this as a freemium service, and we’ve had over 8,000 user registrations and over 40,000 pages created.
We even got featured in Business Insider, because it turns out scammers had used LlamaPress to clone a victim’s online store, downloaded the HTML out of our platform, and then used it to phish them. Yikes. More about that later.
While having an LLM write HTML code directly into a database and rendering it in real time is a great start, it still fell short of the original dream of letting LLMs write and execute Rails scaffolding commands, and build Rails apps.
Now, I’m ready to make the leap.
Here’s how it’s going to work, starting with a screenshot of what I’m currently running locally.

This is a FastAPI app running on localhost:8000, with a chat interface on the left, and an iFrame loading the front-page of the Rails app, LlamaPress, that’s running on localhost:3000.
The chat interface on the left is similar to ChatGPT or Lovable, where you can chat with an LLM that’s tied into the filesystem of the Rails app.
There’s a websocket connection from this webpage to FastAPI that sends messages to LangGraph. Within LangGraph, there’s a ReAct agent, with tool calls to read & write files from the file-system, run Rails terminal commands (like scaffolding, rake tasks, console, etc), commit code to the git history, etc.
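To make that concrete, here’s a minimal sketch of the shape of those websocket messages, based on the handler log output that shows up later in this post. The `ChatRequest` dataclass and `parse_chat_message` names are my illustration, not the actual FastAPI handler:

```python
import json
from dataclasses import dataclass

@dataclass
class ChatRequest:
    """One chat message from the browser, as seen by the FastAPI handler."""
    message: str
    thread_id: str
    agent_name: str

def parse_chat_message(raw: str) -> ChatRequest:
    # Field names match the web_socket_handler log output shown later in this post.
    data = json.loads(raw)
    return ChatRequest(
        message=data["message"],
        thread_id=data["thread_id"],
        agent_name=data.get("agent_name", "llamabot"),
    )
```

The thread_id doubles as the LangGraph checkpoint key, which is how a conversation can pick up where it left off.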
This all runs from a single docker-compose.yml file with four containers:
- A Ruby on Rails container that runs on localhost:3000.
- A FastAPI/LangGraph container that runs on localhost:8000.
- A Postgres container that acts as the database for Rails and stores LangGraph agent checkpoints.
- A Redis container for ActionCable websocket communication from the browser to Rails.
I don’t have a final version to show yet, but I will soon.
This is an ongoing effort, and I’ll continue to make updates on this post as I make progress.
Kody
Tue, 08/19/2025.
I have a breakpoint triggering inside the FastAPI application, and we can view the file contents inside the Ruby on Rails app docker container.
We do this with a shared, mounted volume between the Ruby on Rails docker container and the FastAPI docker container.
llamapress:
  # image: kody06/llamapress:0.1.16
  build:
    context: LlamaPress
    dockerfile: Dockerfile.dev
  stdin_open: true  # enables byebug/irb input
  tty: true         # attaches terminal output
  env_file:
    - .env
  command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails db:prepare && bundle exec rails s -b '0.0.0.0'"
  platform: linux/amd64  # tells Compose to run this container as amd64 under QEMU
  volumes:
    - ./app:/rails/app
    - ./config/routes.rb:/rails/config/routes.rb  # mount just one file
    - ./db:/rails/db                              # mount entire db folder
  ports:
    - "3000:3000"
  networks:
    - llama-network
  environment:
    - RAILS_ENV=development
    - DATABASE_URL=postgres://user:password@db:5432/llamapress
    - BUNDLE_PATH=/usr/local/bundle
    - REDIS_URL=redis://redis:6379/1
    - BOOTSNAP_CACHE_DIR=/rails/tmp/cache/bootsnap
  depends_on:
    - db
    - redis

llamabot:
  # image: kody06/llamabot:0.1.16
  build:
    context: LlamaBot
    dockerfile: Dockerfile
  volumes:
    - ./app:/app/app/rails/app
    - ./config/routes.rb:/app/app/rails/config/routes.rb  # mount just one file
    - ./db:/app/app/rails/db                              # mount entire db folder
    - ./LlamaBot/app:/app/app
  env_file:
    - .env
And here’s a picture where we can see the beginnings of how our LangGraph agent could view the contents of the Rails app.

Here’s a breakpoint where we can view and inspect all of the internals of the Rails app, from the breakpoint() call:

INFO:app.websocket.web_socket_handler:Received message: {'message': 'test', 'thread_id': '2025-08-19_09-57-36', 'agent_name': 'llamabot'}
INFO:app.websocket.web_socket_handler:Waiting for message from LlamaPress
INFO:app.websocket.request_handler:Created state with keys: ['messages']
> /app/app/agents/llamabot/nodes.py(42)read_rails_file()
-> print(os.listdir(APP_DIR / "rails"))
(Pdb)
(Pdb) n
['app', 'config', 'db']
> /app/app/agents/llamabot/nodes.py(43)read_rails_file()
-> # ['app', 'config', 'db'] <- these are the folders we're able to see in the container.
(Pdb) os.listdir(APP_DIR / "rails" / "app")
['.DS_Store', 'mailers', 'strategies', 'models', 'instruments', 'llama_bot', 'javascript', 'jobs', 'controllers', 'views', 'assets', 'helpers', 'services', 'channels']
(Pdb) os.listdir(APP_DIR / "rails" / "app" / "controllers")
['llama_bot_controller.rb', 'application_controller.rb', 'organizations_controller.rb', 'pages_controller.rb', 'admin', 'users_controller.rb', 'users', 'sites_controller.rb', 'api', 'page_histories_controller.rb', 'submissions_controller.rb', 'concerns']
(Pdb) os.listdir(APP_DIR / "rails" / "app" / "models")
['site.rb', 'page_history.rb', 'page.rb', 'organization.rb', 'submission.rb', 'checkpoint.rb', 'application_record.rb', 'concerns', 'user.rb']
(Pdb) os.listdir(APP_DIR / "rails" / "app" / "views")
['organizations', 'devise', '.DS_Store', 'admin', 'shared', 'sites', 'layouts', 'chat_conversations', 'users', 'pwa', 'submissions', 'page_histories', 'llama_bot', 'pages']
(Pdb)
This is interesting because it means we can take these files, and feed them into the LLM if we want to, and it would be able to understand the Rails application.
We could also let the LLM overwrite these files directly, with new code. And the user would be able to see the changes in real time through the iFrame preview.
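As a hedged sketch of what “feed them into the LLM” could look like, here’s one way to walk the mounted folders and build a compact file listing for the prompt. This is my assumption about the approach, not the actual LlamaBot implementation; note it skips dotfiles like the .DS_Store entries visible in the pdb output above:

```python
import os
from pathlib import Path

def build_rails_context(root: Path, max_files: int = 50) -> str:
    """Walk the mounted Rails folders and build a compact file listing
    that can be dropped into an LLM prompt. A sketch, not the real code."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip hidden directories and files (e.g. .git, .DS_Store)
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in sorted(filenames):
            if name.startswith("."):
                continue
            rel = Path(dirpath, name).relative_to(root)
            lines.append(str(rel))
            if len(lines) >= max_files:
                return "\n".join(lines + ["... (truncated)"])
    return "\n".join(lines)
```

Capping the listing matters because a full Rails app tree can easily blow past the context budget we want to spend on file paths.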
Let’s see if we can get the LLM to update the current home-page title from “Build a website in seconds!” to something more based like: “Hello from Leonardo da Llama”.
Let’s change the system prompt to tell the LLM that it can view Rails files, and give it a tool call to read them.
Old System Prompt:
You are Leonardo, a Llama that reads and writes HTML/CSS/JavaScript code.
Your task is to help the user to modify and create webpages, using HTML/CSS/JavaScript.
All outputted code is saved into a single HTML file, including the CSS and JavaScript code.
Here is the current page that the user is viewing: <HTML_PAGE>{content}</HTML_PAGE>
New System Prompt:
You are Leonardo, a Llama that can read and write changes to a Ruby on Rails application.
Your task is to help the user with their Ruby on Rails application, by answering questions, making modifications, etc.
You can list the contents of the Rails directory to explore the app.
And here are the tool calls we’ll pass the agent, using LangGraph’s tool decorator.
@tool
def read_rails_file(filepath: str) -> str:
    """Read the contents of a Rails file."""
    # Construct the full path
    full_path = APP_DIR / "rails" / filepath

    # Check if the file exists
    if not full_path.exists():
        return f"File not found: {filepath}"

    # Read the file contents
    try:
        contents = full_path.read_text()
        return contents
    except Exception as e:
        return f"Error reading file: {e}"

@tool
def list_directory_contents(directory: str = "") -> str:
    """
    List the contents of a directory.
    If directory is empty, lists the rails root directory.
    """
    # Build the path - if directory is empty, just use the rails root
    dir_path = APP_DIR / "rails" / directory if directory else APP_DIR / "rails"
    if not dir_path.exists():
        return f"Directory not found: {dir_path}"
    return str(os.listdir(dir_path))  # stringify so the tool returns str, as annotated

# Global tools list
tools = [list_directory_contents, read_rails_file]

# System message
sys_msg = SystemMessage(content="""You are Leonardo,
a Llama that can read and write changes to a Ruby on Rails application.
Your task is to help the user with their Ruby on Rails application,
by answering questions, making modifications, etc.
You can list the contents of the Rails directory to explore the app.
""")
Will it work? Let’s test it.



Ok, great. The agent is reading directory contents and file contents properly. Next step, let’s see if we can get the LLM to overwrite a file.
I’ve added an additional tool method for the LLM to use.

We’ll help the LLM by giving it the path to the file in question, which is app/views/devise/registrations/new.html.erb.
This is acting as the home page for unregistered users, and says: “Build a website in seconds!”

Here’s the message I’ll send Leonardo:
Can you please view the contents of the file app/views/devise/registrations/new.html.erb, and tell me what it says?
Leonardo responds with:
🔨 Using tool: read_rails_file


The app/views/devise/registrations/new.html.erb file provides a modern, multi-step user experience for new user registration and sign-in, with a focus on quickly building a website using an AI prompt. Here’s what it contains:
1. Landing/Prompt Section:
- Displays the LlamaPress logo.
- Headline: “Build a website in seconds!”
- Users are prompted to enter their website idea in a textarea, with a typing effect suggesting example prompts (e.g., “Build me a landing page for my plumbing business…”).
- Button: “Generate your website” (triggers the signup form).
2. Signup Section (hidden by default, revealed after clicking the button):
- Form fields for email, password, confirmation, and a hidden prompt (the idea entered above).
- Automatically fills a hidden “organization name” based on the user’s email.
- Error and validation messages are clearly styled.
- Button: “Create my website”.
- Option to switch to the sign-in form for returning users.
3. Signin Section (hidden by default, revealed via links):
- Standard Devise login form with email and password.
- Displays validation/flash messages.
- Option to switch to sign-up.
4. JavaScript Enhancements:
- Placeholder typing animation for example prompts in the textarea.
- Transition logic to slide between prompt, signup, and sign-in sections, carrying the user’s prompt through the process.
- Organization name is dynamically set when a user enters their email.
5. Styling:
- Modern, clean interface with Tailwind CSS and FontAwesome styling.
- Responsive, mobile-friendly layout.
Summary: This page is designed to quickly capture a website idea (prompt), then guide the user through signup or login while making the process engaging and clear. It keeps the user’s input all the way to account creation, enabling a fast start for generating their website. Let me know if you want to see the actual rendered HTML, a certain section, or need the content in a different format!
I then said:
Can you please change the h1 from “Build a website in seconds!” to say: “Hello world from Leonardo?”
He responded with:

The heading (h1) in your registrations new user page has been changed from “Build a website in seconds!” to: Hello world from Leonardo? You’ll now see this new heading when users visit the signup page! If you’d like any further customizations, just let me know.
And here’s the change Leonardo made:

In order to view the changes, I had to bring down the Rails docker container and then bring it up again. Ideally, the iFrame could refresh automatically instead of needing to run:
docker compose down -v llamapress
docker compose up -d llamapress
I’m sure there’s a way to hot-reload the files, or to give Leonardo a tool call to bring the docker container down and recreate it.
But most importantly, we’ve verified that because the Rails files live in a mounted volume on my local computer, the code changes Leonardo wrote in the Rails directory persist.
The next steps to me seem clear:
- Figure out how to get the Rails changes working immediately, by either restarting the Rails server inside the docker container, or enabling some sort of hot-reload by putting Rails in dev mode.
- Improve user experience while the user is waiting for the LLM response, when the LLM is calling tools, and while the LLM is streaming.
- Figure out how to commit file changes in case a mistake is made, and we need to rollback commits.
For the first, we could start with a tool call that lets the agent restart the Rails server from inside the container, or even restart the entire docker container. Either way, we’d want a good user experience communicating what’s happening.
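A minimal sketch of that restart tool, assuming the docker CLI is reachable from wherever the tool runs and that the service name matches the compose file above (`run_command` and `restart_rails` are my names, not existing LlamaBot code):

```python
import subprocess

def run_command(args: list, timeout: int = 120) -> str:
    """Run a command and return combined stdout/stderr as text, so the
    agent sees failures as readable output instead of an exception."""
    try:
        result = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
        return result.stdout + result.stderr
    except Exception as e:
        return f"Error running command: {e}"

def restart_rails() -> str:
    # Assumption: 'llamapress' is the service name from docker-compose.yml.
    return run_command(["docker", "compose", "restart", "llamapress"])
```

The same `run_command` helper could back the other terminal tools mentioned earlier (scaffolding, rake tasks, etc.), since they all reduce to “run this, show the agent what happened.”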
For the second issue, we’ve already solved similar problems in LlamaPress: client-side JavaScript listens for LangGraph base_message tool calls, switches on the tool name and input arguments, and formats HTML elements for whichever tool call was made. I’ve been meaning to consolidate all of this into a JavaScript library that can be used across projects, across LlamaPress, across LlamaBot, etc., because I keep re-writing the front-end code needed to format agent tool-call output, and LLM output in general.
The third one seems easy enough, we can add another git command tool for Leonardo.
We’d need to figure out how to do this from within the Python/FastAPI docker container that shares the mounted volume with the Rails folder. Is it as simple as just running git commit -m "message"?
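Almost, with two wrinkles. First, we need `git add -A` before the commit, and git needs an author identity configured. Second, the compose file above only mounts ./app, ./config/routes.rb, and ./db into the FastAPI container, so the repo’s .git directory isn’t actually visible there; we’d need to mount the whole repo (or run git on the host side). With that caveat, a hedged sketch of the tool (`git_commit_all` is my name for it):

```python
import subprocess

def git_commit_all(repo_dir: str, message: str) -> str:
    """Stage and commit everything in repo_dir. The -c flags supply an
    author identity so no global git config is needed in the container."""
    steps = [
        ["git", "-C", repo_dir, "add", "-A"],
        ["git", "-C", repo_dir,
         "-c", "user.name=Leonardo", "-c", "user.email=leonardo@example.com",
         "commit", "-m", message],
    ]
    for args in steps:
        result = subprocess.run(args, capture_output=True, text=True)
        if result.returncode != 0:
            return f"git failed: {result.stderr.strip() or result.stdout.strip()}"
    return f"Committed: {message}"
```

Rolling back then becomes another small tool around `git revert` or `git checkout`, driven the same way.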