Speedrun Merrymake

This guide is intended to get your code running in Merrymake as fast as possible. So let's jump straight into it.

Install the CLI

First we need to install the Command-Line Interface (CLI) with the command:

$ npm install --global @merrymake/cli

Quickstart a project

We can now use the CLI to automatically set up a user account with a service ready to deploy. We only need to specify our preferred programming language, then watch the magic:

$ mm quickstart
> [p] python
  [#] c#
  [g] go
  [r] rust
  [j] java
  [t] typescript
  [¤] javascript

Take note of the key it gives us, as it will be important later. Then simply run the command it prints to navigate to the new service:

$ cd [org]/services/Merrymake

Deploy the service

We can deploy the service with a Git push or with the command:

$ mm deploy

Trigger the service

In order to trigger a service we need a key; fortunately, quickstart has already set one up, so we can simply use the following command and specify the text payload world:

$ mm post hello

We have now learned how to deploy and execute code with Merrymake. This is sufficient to start playing around. The rest of the tutorial covers advanced topics such as system maintenance, setting up proper security, and integrations.

Video: Full-stack webapp

If you're in a hurry, this video tutorial covers all the basics of how to build a real full-stack webapp in Merrymake, though it is not as detailed as the remainder of the tutorial.

Get Properly Introduced

While quickstart is great for playing around, it does not support production-level software. To demonstrate how to work with Merrymake professionally, we are going to build an uptime detector. Before we get started on that, we need to familiarize ourselves with the CLI itself.

CLI Excellence

We at Merrymake have spent a lot of effort making the CLI as easy to use as possible; therefore it differs from typical CLIs in a few important areas:

Fewer clicks

We work to keep down the number of clicks needed to complete a task. Therefore, when only a single option is available, the CLI does not present it as a choice but simply auto-selects it. This means that commands offering 'edit or add' go straight to 'add' the first time, since there is nothing to edit yet.

Context sensitivity

The Merrymake CLI looks at which folder we are in and presents only the options relevant to that context. For example, we cannot create an organization inside another organization. If you are missing an option, verify that you are in the correct folder for that action.

Preselecting options

Every option in the CLI has a unique word, highlighted in yellow. This name can be used as a command-line argument to preselect that option, skipping the choice. In the earlier example, mm quickstart presented a choice of languages; if we call it again, we can skip this choice by adding the language as a command-line argument:

$ mm quickstart typescript

For text prompts we can preselect the default option by using an underscore (_) as the command-line argument. The CLI also prints the full command needed to get directly to the current state.

Shorthand preselection

Most options in the CLI also have a character in brackets; this character can be used as a shorthand when preselecting the option, by prefixing it with a dash (-). Using a shorthand looks like this:

$ mm quickstart -t

It is possible to chain shorthands, combining -b -t into -bt. Note that not all options have a shorthand, such as the quickstart command itself.

Help for text input

When you are asked to type in text, you can press Escape to display a help text. For example, when we hit Escape while setting up a cron job:

$ mm cron new event event
Cron expression (optional): |
Eg. every 5 minutes is '*/5 * * * *'


The final CLI feature we want to mention is the dryrun mode. This mode lets us navigate through the CLI without making changes to our project, which is very helpful when building commands to use as part of some automation, or if we're just curious about what lies behind an option. ;-)

$ mm dryrun org

Register a device

To work with Merrymake we need to register the device and tie it to a user account. The CLI sends all commands securely via SSH, using an SSH key, which works like a fingerprint for your computer. quickstart automatically set up an SSH key for you and configured it for Merrymake; however, it did not tie an email to the account.

Accounts with no email attached get deleted automatically after a while, without notice -- since we have no email to notify. We can prevent this by adding an email to our account; then we'll be notified if our account is in danger of being archived. Adding an email also lets us register multiple SSH keys (i.e. multiple devices) to the same account.

$ mm register merrymake
If you're a security samurai like cofounder Nicolaj and would like to use a password-protected SSH key, you need to unlock it before you can use it with the CLI:
$ eval `ssh-agent`
$ ssh-add [file]

Create an organization

After registering our device, it is time to start building our uptime detector. The first step is to create an organization for it. An organization consists of service groups, loosely corresponding to teams, with repos (aka services) inside them. When we set up a new organization, we also have to name the first service group and repo.

$ mm org [name] services alpha basic
Cloning [name]...
Creating service group...
Creating service...
Fetching template...
Use 'cd [name]/services/alpha' to go to the new service.
Then use 'mm deploy' to deploy it.

In addition to service groups, each organization also has one central message queue, called the Rapids, and one event-catalogue, which we cover in depth in Configure the api endpoint.

For now, let's follow the instructions from the CLI and go to the new service:

$ cd [org]/services/alpha

Deploy with Git

In the speedrun, we saw that mm deploy can deploy a service to the platform. Behind the scenes, deployment happens through Git. Thus, we can work with services exclusively through Git if we prefer. Using Git directly is recommended when multiple developers collaborate on a service, because it lets us control the commit messages. To deploy a service with Git, we simply commit our changes as normal and then push the main branch. Since we selected a template, we are ready to push:

$ git push origin main
[some git stuff]
remote: 88.     .88                                                88
remote: 888.   .888                                                88
remote: 88Y8. .8P88                                                88
remote: 88 Y8o8P 88  .88.  88.d8 88.d8 Yb     dP 8888bd88b   .88.8 88  .8P .88.
remote: 88  Y8P  88 d"  "b 88"   88"    Yb   dP  88 '88 '8b d"  "8 88 .8P d"  "b
remote: 88   "   88 888888 88    88      Yb dP   88  88  88 8    8 88d8P  888888
remote: 88       88 Y.     88    88       Y8P    88  88  88 Y.  .8 88" 8b Y.
remote: 88       88  "88P  88    88       dP     88  88  88  "88"8 88  "8b "88P
remote:                                  dP
remote: Cloning repository...
remote: Detecting project type...
remote: Scheduling typescript build...
remote: [Build output]
remote: Build completed...
remote: Registering service...
remote: Deploying service...
remote: Queueing smoke test...
remote: Service 'Merrymake' will be released if/when the smoke test succeeds.

From this output we can see that Merrymake builds, packages, and deploys our service automatically. This is pretty normal for a continuous deployment pipeline. What is unique is the 'smoke test'. In Merrymake, before a service is allowed to handle traffic, it is started once to ensure there are no critical errors in the configuration. We'll return to the smoke test later.

To check whether our smoke test went through, we can inspect the Rapids, which displays all events that have gone through our system:

$ mm queue
      Id     │ River        │ Event        │ Status  │ Queue time
> [_] 192554 │         init │              │ success │ 23/11/2023, 14.44.23

Smoke tests are unique in that they have an empty Event. As we can see, the smoke test succeeded, so the service is now live. If we want more details about an event, such as its console output, we can drill down into it:

$ mm queue 192554 init
  messageId: '192554c1-ff81-435a-8bc0-35eb522b26d0',
  startedOn: '2023-11-23T13:44:23.577Z',
  finishedOn: '2023-11-23T13:44:23.636Z',
  result: 'success'

This becomes more useful once our services print something. Let's proceed so we can trigger it!

Create an api-key

Services are triggered by events coming indirectly from the Rapids. The Rapids -- and thus the organization -- is isolated from the outside by a virtual wall. To allow events from the outside, we need an api-key, or key for short. In Merrymake, all keys are temporary, to prevent old keys from accidentally becoming security vulnerabilities. When creating a key, we can specify a human-readable description, which is highly encouraged, as it helps distinguish keys from each other. Let's create a new key for admin use lasting 14 days:

$ mm key new admin "14 days"

New keys are created as universal keys, meaning that, like services, they can post any event to the Rapids. Later we'll look at how to limit this by allowlisting events for api-keys.

Trigger a service with code

In the speedrun we triggered the code with the post command, but in our uptime detector we would like a little more convenience. Let's create an admin portal where we can easily send events to our Rapids. In the root folder of our organization we create a new service group dev-utils with a repo admin:

$ mm group dev-utils admin empty

In our new repo we create a file called admin.html, and populate it with:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Admin portal</title>
    <style>
      #wrapper {
        display: flex;
      }
      pre {
        border: 1px solid lightgray;
        width: 50%;
      }
    </style>
  </head>
  <body>
    <input type="text" id="event-type" />
    <button id="button">Post to Rapids</button>
    <div id="wrapper">
      <pre id="response"></pre>
    </div>
    <script>
      let respElem = document.getElementById("response");
      let textElem = document.getElementById("event-type");
      let button = document.getElementById("button");
      function loadDoc(url, method, body) {
        const xhttp = new XMLHttpRequest();
        xhttp.onload = function () {
          if (this.readyState === 4) {
            respElem.innerText = this.responseText;
          }
        };
        xhttp.open(method, url, true);
        xhttp.send(body);
        respElem.innerText = "Waiting...";
      }
      button.addEventListener("click", (e) => {
        // [rapids-url] is your organization's public Rapids endpoint, and
        // [key] is the api-key from the previous section.
        loadDoc("[rapids-url]/[key]/" + textElem.value, "POST", "");
      });
    </script>
  </body>
</html>

Remember to replace [key] with your api-key from the previous section.

Try it out by opening the file in a browser and posting a hello event. The output should say:

Hello, {}!

It looks a bit odd because the service we deployed expects a payload, which our admin portal doesn't support yet. The important thing is that we have verified that we can deploy and trigger services; now it is time to start building the actual uptime detector.

Build an Uptime-Detector

Now that we understand the fundamentals of Merrymake, we can start adding functionality for the uptime-detector.

From hooks to functions

First let's add code to check whether an http endpoint is available. We replace the content in the file app.ts with a new function that uses the axios library to make an http GET-request like this:

import {
  merrymakeService,
  type PayloadBufferPromise,
  type Envelope,
} from "@merrymake/service";
import axios from "axios";

async function minuteFunction(pbp: PayloadBufferPromise, e: Envelope) {
  // Throws (and thus crashes the service) if the endpoint is down.
  await axios.get("https://www.google.com");
}

merrymakeService({
  minuteAction: minuteFunction,
});

Axios requests throw an exception if the endpoint gives an error or times out. This in turn crashes the service. That is fine: since everything in Merrymake is serverless without cold starts, a crash doesn't affect other requests or services.

Note the line minuteAction: minuteFunction. This line specifies that the minuteAction should execute the minuteFunction.

Remember to install the axios library:

$ npm install axios

Before we can deploy our code we need to specify which event this modified service should trigger on. This is specified in the merrymake.json file. In this case, we want our function to run when a minute event enters our system. For the moment, we are not going to worry about what goes before the slash (main/) in hooks. We only care about specifying that a minute event should activate the minuteAction:

{
  "hooks": {
    "main/minute": "minuteAction"
  }
}

The left-hand side of the colon ("main/minute") is called a hook for an event (minute), the right-hand side (minuteAction) is an action. merrymake.json connects hooks to actions, then app.ts connects actions to functions. One repo can have as many hooks, actions, and functions as we'd like.
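As a conceptual sketch (not the Merrymake runtime itself), this two-level mapping can be pictured as two dictionaries that an incoming event walks through; the names below are illustrative:

```typescript
// Illustrative sketch of the dispatch chain: merrymake.json maps hooks
// ("river/event") to actions, and app.ts maps actions to functions.
const hooks: Record<string, string> = { "main/minute": "minuteAction" };
const actions: Record<string, () => string> = {
  minuteAction: () => "checked uptime",
};

function dispatch(river: string, event: string): string | undefined {
  const action = hooks[`${river}/${event}`];
  return action !== undefined ? actions[action]() : undefined;
}
```

A minute event on main thus reaches its function through two lookups, while an event with no hook is simply ignored.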

In practice, we use the same name for the action and the function (handleMinute), making the code even cleaner:

// app.ts
async function handleMinute(pbp: PayloadBufferPromise, e: Envelope) {
  // ...
}

// merrymake.json
{
  "hooks": {
    "main/minute": "handleMinute"
  }
}

"Queued job"

We can now deploy our service and then trigger it with our admin portal. We should first see Waiting... and then, after 5 seconds:

Queued job.

We can validate that the service was deployed by checking the queue:

$ mm queue
      Id     │ River        │ Event        │ Status  │ Queue time
> [_] abdf2f │         main │ minute       │ success │ 23/11/2023, 15.46.15
  [_] 17546f │         init │              │ success │ 23/11/2023, 15.44.23

Here we see not only that the service was deployed correctly, but also that we did trigger it, and it ran successfully. Why then did it just say Queued job.? And why did it take 5 seconds?

Both questions are answered by the fact that our handleMinute function does not send anything back to the origin. The template we started with made a call to replyToOrigin. In Merrymake, an event can trigger other events; the complete series of events stemming from one event is called a trace, and the first event in a trace is called the origin. No matter how deep we are in the trace, replyToOrigin sends data back to the originator. By default, the originator waits for 5 seconds (which can be configured, see Configure the api endpoint), and if nothing has been sent, it writes Queued job. -- even if the event has been processed to completion much faster.
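Conceptually, the origin endpoint behaves like a race between the first replyToOrigin in the trace and the timeout; here is a minimal sketch of that behavior (our own illustration, not Merrymake's actual implementation):

```typescript
// Sketch: race the first reply against the waitFor timeout; if nothing
// arrives in time, fall back to the "Queued job." message.
async function respondToOrigin(
  reply: Promise<string>,
  waitForMs: number
): Promise<string> {
  const timeout = new Promise<string>((resolve) =>
    setTimeout(() => resolve("Queued job."), waitForMs)
  );
  return Promise.race([reply, timeout]);
}

// A trace that replies quickly returns its payload; one that never calls
// replyToOrigin falls back to "Queued job." once the timeout fires.
const fastReply = new Promise<string>((resolve) =>
  setTimeout(() => resolve("Hello, world!"), 10)
);
const noReply = new Promise<string>(() => {});
```

This is why our queue can show success while the caller only ever sees Queued job.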

We can eliminate the 5-second delay by either calling replyToOrigin or reducing the timeout. In this case, the minute events will be triggered automatically, so nothing will be waiting for a reply.

Use environment variables

Currently, we have hardcoded the url we are calling into app.ts. This is rather inflexible, because it requires a deploy every time we wish to change it. Instead, we can use environment variables, which in Merrymake can be changed much faster, without a redeploy. And they are really easy to use.

First, we need to modify the code to use the environment instead of a constant string. Since this change is non-trivial, we decide to use a simple type of feature toggling to make the switch safely. We use another environment variable to specify whether to run the new code or the old.

async function handleMinute(pbp: PayloadBufferPromise, e: Envelope) {
  if (process.env["FT_URL_IN_ENV"] === "live") {
    // New code
  } else {
    // Original code
  }
}

Notice that if the feature-toggle variable (FT_URL_IN_ENV) is unset or set to a wrong value, the code defaults to the old behavior. We use "live" instead of the string "true" to avoid confusion with the boolean true (without quotes).

We can now deploy the code, trigger it, and verify with mm queue that it was successful; nothing changed. We can now try to toggle over to the new code; since we have not yet set the URL environment variable, we expect the code to fail. Let's add a new envvar with the key FT_URL_IN_ENV and the value live, accessible in both and public (we return to the last two options later):

$ mm envvar new FT_URL_IN_ENV live both public

Without deploying, we can trigger our service again and check mm queue. This time our request failed, signaling that our feature toggle worked, and that we should probably switch it back. This situation is why fast environment variables matter: they allow us to quickly roll back faulty code if we use feature toggles:

$ mm envvar FT_URL_IN_ENV _

Being back to the working code, we can now fix the issue without any pressure, and then switch the toggle on again:

$ mm envvar new URL https://www.google.com both public
$ mm envvar new FT_URL_IN_ENV live both public

Trigger the service to verify that it works, and then remove the old code and the toggle-if, and deploy the code.

Environment variable scope

A common danger of using environment variables for feature toggles is accidentally reusing an existing toggle, or multiple teams accidentally using the same toggle. In Merrymake, this risk is reduced by the fact that environment variables are scoped to the current service group. In fact, this property defines what a service group is: services that share environment variables. Thus, if our teams correspond to our service groups we prevent the cross-team danger entirely.
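The scoping rule can be sketched as a two-level lookup, where each service group has its own namespace (the function names here are ours, purely for illustration):

```typescript
// Sketch: environment variables keyed per service group, so the same toggle
// name in two groups can never collide.
const groupEnv = new Map<string, Map<string, string>>();

function setEnvvar(group: string, key: string, value: string): void {
  if (!groupEnv.has(group)) groupEnv.set(group, new Map());
  groupEnv.get(group)!.set(key, value);
}

function getEnvvar(group: string, key: string): string | undefined {
  return groupEnv.get(group)?.get(key);
}
```

Two teams setting FT_URL_IN_ENV in, say, services and dev-utils thus get fully independent values.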

We still have to manually remove old environment variables once we are finished with them.

$ mm envvar FT_URL_IN_ENV _

Use cron for recurring tasks

Currently, we have to manually trigger our service when we want it to run, but really, we want it to run automatically. For this purpose, we can use a scheduled event, also known as a cron job. A cron job simply posts an event to the Rapids based on a schedule expression. Cron expressions are quite esoteric; luckily, the internet has a wealth of builders to help construct them, such as Cron Expression Generator.
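To demystify the notation a little, here is a toy expansion of the minute field of a standard five-field cron expression; real cron parsers also handle ranges like 10-20, and the function name is our own:

```typescript
// Toy sketch: expand the minute field of a cron expression into the concrete
// minutes it matches. Handles "*", "*/n" steps, and "a,b" lists only.
function expandMinuteField(field: string): number[] {
  const all = Array.from({ length: 60 }, (_, i) => i);
  if (field === "*") return all;
  const step = field.match(/^\*\/(\d+)$/); // e.g. "*/5" = every 5 minutes
  if (step) return all.filter((m) => m % Number(step[1]) === 0);
  return field.split(",").map(Number); // e.g. "0,30" = on the hour and half hour
}
```

For instance, expandMinuteField("*/5") yields 0, 5, 10, ..., 55, matching the 'every 5 minutes' example from the CLI help text earlier.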

Cron jobs are built into Merrymake, so it's easy to make our service run every minute, by posting a minute event with the schedule expression 0 * * ? * *.

$ mm cron minute

The name is used if we want to update the expression later, but otherwise has no effect, so the default value is fine.

After waiting a couple of minutes, mm queue shows executions on the exact minute, so our service now runs automatically.

Get live debugging info

One of the really cool features of the Merrymake platform is that it natively supports streaming events to clients. With this feature we can easily build chat, video, or music streaming applications, or implement "new posts available" in social media platforms. In this tutorial, we use streaming to send live debugging info to our admin panel.

Channels: Join and broadcast

In Merrymake, services listen to events on Rivers. Similarly, with Merrymake streaming, clients listen to events in channels. Other services can then broadcast events to channels, which are streamed out directly to all clients in the channel. For security reasons, clients cannot join channels directly. Instead, backend services can put the originator into channels at any point during a trace.
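As a mental model (the names below are ours, not the Merrymake API), a channel is just a set of connected clients that broadcasts fan out to:

```typescript
// Conceptual sketch of channels: joining adds a client to a channel's set,
// and broadcasting delivers the event to every client currently in it.
type StreamClient = (event: string, data: string) => void;
const channels = new Map<string, Set<StreamClient>>();

function join(channel: string, client: StreamClient): void {
  if (!channels.has(channel)) channels.set(channel, new Set());
  channels.get(channel)!.add(client);
}

function broadcast(channel: string, event: string, data: string): void {
  for (const client of channels.get(channel) ?? new Set<StreamClient>()) {
    client(event, data);
  }
}
```

Broadcasting to a channel nobody has joined simply delivers to no one, which is why joining must happen before the interesting events start flowing.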

So, to expose live debugging info we hook up a new event live-debugging to join the channel debugging.

// app.ts
async function handleLiveDebugging(pbp: PayloadBufferPromise, e: Envelope) {
  // Put the originator into the debugging channel (the exact name of the
  // join call follows the @merrymake/service template).
  joinChannel("debugging");
}

// merrymake.json
{
  "hooks": {
    "main/minute": "handleMinute",
    "main/live-debugging": "handleLiveDebugging"
  }
}

We can now modify our handleMinute-function to broadcast a called-event with a simple message I was called, to the debugging-channel every time it gets called.

async function handleMinute(pbp: PayloadBufferPromise, e: Envelope) {
  broadcastToChannel("debugging", "called", "I was called");
  // ...the uptime check from earlier...
}

At this point we deploy the service.

Client-side: EventSource

We cannot simply post the live-debugging event like we have with other events, because the client also has to be set up for streaming. Luckily, this is very simple. We add to our admin panel, right before the </div>, another output area, and a few lines of code to handle called-events.

<pre id="debugging"></pre>
<script>
  let debugElem = document.getElementById("debugging");
  // [rapids-url] is the public Rapids endpoint; [key] is your api-key.
  let source = new EventSource("[rapids-url]/[key]/live-debugging");
  source.addEventListener("called", (event) => {
    debugElem.innerText += event.data;
  });
</script>

Again, remember to insert your own key. Notice that the client does not know which, or even how many, channels it is listening to. All of that is handled on the backend; thus we cannot use channel information in the front-end.

If we try to run this code, we'll see that it doesn't quite work yet. We're still missing the final step: configuring the api endpoint as a streaming endpoint instead of the default reply endpoint.

Configure the api endpoint

In the root of our organization folder, next to our service groups is a special folder called event-catalogue. This is where we can configure how the HTTP endpoint responds, when we post events to the Rapids from the outside. We can change the timeout duration, or remove it and make the endpoint streaming. The event-catalogue can also be used for event schemas, which we'll return to later.

Changing an event timeout

In our present system, we know that the minute endpoint should respond quickly -- certainly faster than the default 5 second timeout. Thus it makes sense to decrease it, to 1 second (1000 milliseconds):

// event-catalogue/api.json
{
  "minute": { "waitFor": 1000 }
}

Making an event streaming

We also have our unfinished live-debugging event, which we have to tag as streaming, so we can use join in the backend and EventSource in the front-end:

// event-catalogue/api.json
{
  "minute": { "waitFor": 1000 },
  "live-debugging": { "streaming": true }
}

Now we should also be able to test our improved admin panel, with events streaming in from the backend every minute.

Add a Database

Some apps consist of pure computation, such as integrating two systems. However, most have a data storage aspect as well. In our uptime-detector we would like to store the uptime status in a database, so we can present it to users. You can use any Postgres database to follow along with this section; we use an ElephantSQL free-tier database.

Secret connection string

The first step is to add the connection string as an environment variable. We already used environment variables earlier, but a connection string is different, since it includes sensitive information. Therefore, we recommend making such environment variables secret; this way our services can use them, but no one can easily read them.

$ mm envvar new DB "[connection string]" both secret

Smoke test the connection

Now that our service can access the database connection string, it's time to create some tables. As usual, we like to manage as much as possible as code. So let's set up a new service responsible for the database schema.

$ cd ..
$ mm repo db basic typescript

First, let's install pg, the TypeScript package for using Postgres:

$ cd db
$ npm install pg

Now, let's make sure our new db service can connect to the database. We are going to add this check to the smoke test during deployment, since we don't want the service deployed if it cannot connect. The smoke test is also called init, since it runs exactly once, on deployment, and cannot be triggered later. Thus, we don't need any hooks in this service:

// db/index.ts
import { merrymakeService } from "@merrymake/service";
import { Client } from "pg";

merrymakeService({}, async () => {
  let client = new Client({ connectionString: process.env.DB });
  try {
    await client.connect();
    console.log("Connected to the database");
  } finally {
    await client.end();
  }
});

// db/merrymake.json
{
  "hooks": {}
}

After deploying the service and verifying with mm queue that a connection was established, we can start creating tables.

Make services idempotent

We want a table for logging the latest call time, response time, and status code. We could use something like this:

    console.log("Connected to the database");
    await client.query(
      `CREATE TABLE UptimeCheck (
        callTime TIMESTAMP,
        responseTime NUMERIC,
        statusCode NUMERIC
      )`
    );

However, this has a few shortcomings. First of all, this code fails if we run it after the table exists, so to redeploy the service in the future we would have to keep removing the previous creation code. Really, we want code that we can rerun again and again without causing issues, and without any effect on subsequent runs. This property is called idempotence. The easiest way to make the present code idempotent is to add IF NOT EXISTS:

    console.log("Connected to the database");
    await client.query(
      `CREATE TABLE IF NOT EXISTS UptimeCheck (
        callTime TIMESTAMP,
        responseTime NUMERIC,
        statusCode NUMERIC
      )`
    );

This is already a major improvement. However, even though this code no longer impedes future deployments, it is inflexible. The fields are static; we can never again use this code for anything other than an audit trail, or for remaking the database from scratch, which is luckily a very rare occurrence. Thus, we want this code to be both idempotent and flexible whenever possible. Here is our favorite way to write the code from above:

    broadcastToChannel("debugging", "called", "Ensuring database schema");
    await client.query(`CREATE TABLE IF NOT EXISTS UptimeCheck ()`);
    await client.query(
      `ALTER TABLE UptimeCheck
        ADD COLUMN IF NOT EXISTS callTime TIMESTAMP,
        ADD COLUMN IF NOT EXISTS responseTime NUMERIC,
        ADD COLUMN IF NOT EXISTS statusCode NUMERIC`
    );

Notice that we use the same debug broadcasting from earlier, so our admin panel displays when the database is updated. It is a common practice to broadcast deployments to a debugging channel from the smoke test function, because it gives a great overview of when something is deployed.

The first time this code runs, it has the same effect as the code from the beginning of this section, and, as discussed, subsequent runs have no effect -- unless we add new columns, in which case rerunning the code will introduce them, taking our database schema to the desired state. Let's deploy it and start logging some data.

Other Interesting Bits

Disable an api-key

To disable a key we set its duration to 0 seconds:

$ mm key [key] _ '0s'

This command immediately prevents anyone from using the key; however, the key still appears in the list of keys for a while, in case we need to quickly reactivate it.