Leonardo is our coding agent that builds Ruby on Rails applications, and it ships with a built-in AI agent orchestration runtime written in Python on LangGraph.
So this is, in part, a guide to deploying Ruby on Rails applications, and in part a guide to deploying LangGraph agents.
To do this effectively, we use Docker containers.
Docker is an amazing tool I knew almost nothing about 3 months ago, and yet by using ChatGPT I've learned just how powerful it is.
In general, the LlamaPress tech stack (including LlamaBot, our LangGraph agent runtime, and Leonardo, our actual coding agent) lets us build powerful AI applications with rich agent experiences inside the application itself.
We get the benefit of these three powerful open source frameworks:
- Ruby on Rails for its powerful full-stack scaffolding features & rapid web application development cycles.
- LangGraph for its powerful agent orchestration capabilities.
- Docker for its architecture-agnostic dev setup + project deployments. Perfect for going from a working prototype running on localhost to production (a rough sketch of how these pieces fit together in Docker Compose follows this list).
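To make that concrete, here's a minimal, purely hypothetical sketch of how a stack like this can be wired together with Docker Compose. The service names, image names, and ports below are illustrative assumptions, not the actual LlamaPress configuration; it's written as a heredoc in the same style the deploy script uses later.
cat > docker-compose.sketch.yml <<'EOF'
services:
  rails:            # the Rails application (Leonardo / LlamaPress)
    image: example/llamapress-rails       # illustrative image name
    ports:
      - "3000:3000"
  langgraph:        # the LangGraph agent runtime (LlamaBot, Python)
    image: example/llamabot-langgraph     # illustrative image name
    ports:
      - "8000:8000"
  caddy:            # reverse proxy terminating HTTPS in front of both services
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
EOF
The point is simply that each piece (Rails app, agent runtime, proxy) runs in its own container, which is what makes the jump from localhost to production so painless.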
To deploy a Leonardo Application to production from localhost, I recommend taking the following approach:
Initial Installation of LlamaBot & Leonardo
aws configure # set up aws cli on your machine
git clone https://github.com/KodyKendall/LlamaBot
cd LlamaBot
bash bin/deploy_llamabot_on_aws.sh
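If you'd like to confirm the AWS CLI is configured correctly before the script starts creating resources, a quick sanity check (not part of the script) is:
# Should print your account ID and ARN if credentials are valid
aws sts get-caller-identity
# Lightsail-specific check: lists the regions your credentials can see
aws lightsail get-regions --query 'regions[].name' --output text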
This bin/deploy_llamabot_on_aws.sh script does the following:
1. Collects important information for setting up your AWS Lightsail Instance.
read -p "Name of instance: " INSTANCE
read -p "Path to identity file: (defaults to ~/.ssh/LightsailDefaultKey-us-east-2.pem)" IDENTITY_FILE
export INSTANCE
export REGION=us-east-2
export AZ=${REGION}a
export BLUEPRINT=ubuntu_24_04
export BUNDLE=small_2_0
export IDENTITY_FILE=${IDENTITY_FILE:-~/.ssh/LightsailDefaultKey-us-east-2.pem}
Type your instance name (no spaces allowed). In my case, I'm naming it: "HistoryEducation"
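For example, an answered prompt session might look like this (pressing Enter at the second prompt accepts the default key path):
Name of instance: HistoryEducation
Path to identity file: (defaults to ~/.ssh/LightsailDefaultKey-us-east-2.pem)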
2. Launches an AWS Lightsail instance ($12/mo.)
aws lightsail create-instances \
  --instance-names "$INSTANCE" \
  --availability-zone "$AZ" \
  --blueprint-id "$BLUEPRINT" \
  --bundle-id "$BUNDLE" \
  --region "$REGION"
IPADDRESS=$(aws lightsail get-instance \
  --instance-name "$INSTANCE" \
  --region "$REGION" \
  --query 'instance.publicIpAddress' \
  --output text)
echo $IPADDRESS
cat >> ~/.ssh/config <<EOF
Host $INSTANCE
  HostName $IPADDRESS
  User ubuntu
  IdentityFile $IDENTITY_FILE
  IdentitiesOnly yes
EOF
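If you want to confirm the resulting SSH config entry works once the script has finished, a quick manual check (not part of the script) is below. The instance can take a minute or two to finish booting, so don't worry if the first attempt times out:
# Accept the host key on first connect, then you should see "SSH OK"
ssh -o ConnectTimeout=10 "$INSTANCE" 'echo SSH OK'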
3. Sets up DNS records through Route 53.
export DOMAIN=llamapress.ai.
export ZONE_ID=$(aws route53 list-hosted-zones-by-name \
--dns-name "$DOMAIN" --query 'HostedZones[0].Id' --output text | sed 's|/hostedzone/||')
echo $ZONE_ID
TARGET_FQDN=$INSTANCE.llamapress.ai.
RAILS_TARGET_FQDN=rails-$TARGET_FQDN
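With the example instance name from earlier, those two variables expand like this (the rails- prefixed record presumably fronts the Rails app, while the bare one fronts the agent):
echo "$TARGET_FQDN"        # HistoryEducation.llamapress.ai.
echo "$RAILS_TARGET_FQDN"  # rails-HistoryEducation.llamapress.ai.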
cat > new-a-record.json <<EOF
{
  "Comment": "Add A records for $TARGET_FQDN for LlamaBot Agent Deploy",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "${TARGET_FQDN}",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "${IPADDRESS}" }
        ]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "${RAILS_TARGET_FQDN}",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "${IPADDRESS}" }
        ]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets \
--hosted-zone-id "$ZONE_ID" \
--change-batch file://new-a-record.json
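The record change usually lands within a minute or so; a quick way to confirm both names resolve (again, not part of the script) is:
# Both should print the instance's public IP once the change has propagated
dig +short "$TARGET_FQDN"
dig +short "$RAILS_TARGET_FQDN"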
4. Opens up Port 443 for HTTPS access
echo "Instance created! Now, waiting to open port 443..."
sleep 20
# Open port 443:
aws lightsail open-instance-public-ports \
--instance-name "$INSTANCE" \
--port-info fromPort=443,toPort=443,protocol=TCP \
--region "$REGION"
#Check port is open on instance
aws lightsail get-instance-port-states \
--instance-name "$INSTANCE" \
--region "$REGION" \
--query 'portStates[?fromPort==`443`]'
5. Allows you to SSH into the instance directly, so you can install LlamaBot on your production Ubuntu server
echo "Instance is ready to be used! type command ssh $INSTANCE to connect to it, then paste the following command to install the agent: "
echo "curl -fsSL "https://raw.githubusercontent.com/KodyKendall/LlamaBot/refs/heads/main/bin/install_llamabot_prod.sh" -o install_llamabot_prod.sh && bash install_llamabot_prod.sh"
ssh $INSTANCE
After you've done this, you should be able to SSH into the server.
Step 6. SSH into your LlamaBot & Leonardo instance, and run the install script. Once connected, you'll land at a prompt like this:
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-26-14-43:~$
Paste in the following command:
curl -fsSL "https://raw.githubusercontent.com/KodyKendall/LlamaBot/refs/heads/main/bin/install_llamabot_prod.sh" -o install_llamabot_prod.sh && bash install_llamabot_prod.sh
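The command above already downloads the script to a file before executing it, so if you'd like to read it first, you can split it into two steps:
curl -fsSL "https://raw.githubusercontent.com/KodyKendall/LlamaBot/refs/heads/main/bin/install_llamabot_prod.sh" -o install_llamabot_prod.sh
less install_llamabot_prod.sh   # review what it's going to do
bash install_llamabot_prod.sh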
You’ll see this:
[ASCII-art "LlamaBot" banner and llama]
L L A M A B O T   I N S T A L L E R
LangGraph + Rails + Docker + Caddy
LlamaBot (LangGraph) • LlamaPress (Rails)
v0.2.6
→ Kickstarting setup... (press Ctrl+C to abort)
🦙🤖 Paste your OpenAI API Key:
Paste in your OpenAI API Key, and hit enter.
You’ll see a request for putting in your hosted domain.
🦙🤖 Paste your OpenAI API Key: sk-proj-*******
🌐 Enter your hosted domain (e.g., example.com):
The domain must follow this format: <INSTANCE>.<AWS_ROUTE_53_DOMAIN>
Make the first part of the domain match EXACTLY what you entered as the INSTANCE name in the deploy script; in my case that's HistoryEducation.
Then, for the second part of the domain, use the domain name that's configured in AWS Route 53; in my case that's llamapress.ai.
So, this means I’m pasting in this:
HistoryEducation.llamapress.ai
Which ends up looking like this:
🌐 Enter your hosted domain (e.g., example.com): HistoryEducation.llamapress.ai
Now, the following things will get installed on your Ubuntu 24.04 server automatically:
- Docker
- Caddy
- GitHub CLI
- LlamaBot & Leonardo
This should take approximately 5 minutes or less.
You should see this if it succeeded.
🎉 Leonardo is deployed!
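If you want to double-check the pieces from the list above, a few quick commands over SSH will confirm them (these aren't required, just a sanity check):
docker --version   # Docker is installed
gh --version       # GitHub CLI is installed
docker ps          # the containers for the stack should be up and running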
Now, you should be able to navigate to your URL. In this case, it should be:
https://HistoryEducation.llamapress.ai

Sign into Leonardo
The default username is: kody
The default password is: kody

You should now be able to see the Leonardo interface.

Get your Leonardo application code onto the instance by authenticating with GitHub and adding a remote origin.
Back in your SSH terminal:
gh auth login
> Github.com
> HTTPS
> Y
> Login with a web browser
! First copy your one-time code: C881-1E51
Press Enter to open github.com in your browser...
Copy the code.
Go to https://github.com/login/device
Paste the code.

Continue -> Authorize.
You may need to install the GitHub mobile app and give it an access code.
cd llamapress
git init
git remote add origin <your leonardo app url>
In my case it's:
git remote add origin https://github.com/History-Education-Foundation/StoryBook
Then, git fetch, checkout main, and run docker compose up -d:
git fetch
git checkout main
docker compose up -d
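To confirm everything came up cleanly after docker compose up -d, these standard Compose and git commands are handy (not required, just a quick check):
docker compose ps        # all services should show as running/healthy
docker compose logs -f   # tail the logs; Ctrl+C to stop following
git remote -v            # confirms origin points at your app's repo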
If you get messages like:
? Volume "llamapress_rails_storage" exists but doesn't match configuration in compose file. Recreate (data will be lost)? Yes
Then always select “yes”.
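Since that prompt warns that data will be lost: if you already have anything in that volume you'd rather keep, one common way to snapshot it first is to tar it out through a throwaway container. This is just a hedged sketch, not part of the official flow; the volume name is taken from the prompt above:
# Copy the contents of the llamapress_rails_storage volume into a tarball in the current directory
docker run --rm \
  -v llamapress_rails_storage:/data \
  -v "$PWD":/backup \
  alpine tar czf /backup/rails_storage_backup.tgz -C /data .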