
Q2 2023: Enhancing Deployment, Hosting, and Collaboration on Trigger.dev

Eric Allam

Six weeks ago we launched Trigger.dev on Hacker News and enjoyed a very positive (well, not all positive!) response, shooting up to #1 and becoming the most upvoted "Show HN" by a Y Combinator company since Supabase. Since then we've been hard at work making improvements and trying to add integrations as fast as possible.

During those six weeks, though, we've also been listening to feedback from the community. And it's pretty clear from those conversations that we are too hard to deploy, too hard to host, and too hard to contribute to.

So, we've decided to focus the next three months on fixing these issues by:

  1. Making it easier to deploy by supporting serverless execution
  2. Making it easier to host by simplifying and unifying around PostgreSQL
  3. Making it easier to contribute to by moving integration logic to the clients

You can view all the changes in more detail in our public GitHub Project. These changes will be taking place in the "Q2 2023" Epic.

Making it easier to deploy

One of the first questions we're asked when telling people about Trigger.dev is whether they can use it on Vercel, AWS Lambda, or Cloudflare Workers. So far, the answer has unfortunately been no: because we require a long-running server, customers trying to add event-driven async background tasks to their serverless production environments have hit a brick wall. We're updating Trigger.dev to be serverless-first, moving Trigger execution over HTTP. We'll also include a built-in way to tunnel your localhost to the internet so everything still works seamlessly in development, without having to pay for something like ngrok ($20 a seat).

Our initial target platform will be Next.js/Vercel, which will be as simple as this:

// pages/api/jobs.ts
import { Job, TriggerClient, customEvent } from "@trigger.dev/sdk";
import { createHandler } from "@trigger.dev/nextjs";

const client = new TriggerClient({
  apiKey: process.env.TRIGGER_API_KEY,
});

new Job({
  id: "new-user-created",
  name: "New User Created",
  version: "1.0.1",
  trigger: customEvent({ name: "user.created" }),
  run: async (event, ctx) => {
    // Perform your tasks here
  },
}).registerWith(client);

const handler = createHandler(client, { path: "/api/jobs" });

export default handler;
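Conceptually, a custom-event trigger is just a mapping from event names to run functions. The toy dispatcher below is purely illustrative (it is not the real SDK, and all names in it are made up), but it makes the registration-and-dispatch flow concrete:

```typescript
// Toy model of custom-event dispatch: not the real SDK, just the idea.
type TriggerEvent = { name: string; payload: unknown };
type RunFn = (event: TriggerEvent) => void;

const jobs = new Map<string, RunFn>();

// Registering a job associates its trigger's event name with its run function.
function registerJob(eventName: string, run: RunFn) {
  jobs.set(eventName, run);
}

// Sending an event executes the job whose trigger matches the name.
function sendEvent(name: string, payload: unknown): boolean {
  const run = jobs.get(name);
  if (!run) return false;
  run({ name, payload });
  return true;
}

const welcomed: string[] = [];
registerJob("user.created", (event) => {
  welcomed.push((event.payload as { email: string }).email);
});

sendEvent("user.created", { email: "jane@example.com" });
```

In the serverless version, the platform plays the role of `sendEvent`, delivering matching events to your `/api/jobs` endpoint over HTTP.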

Making it easier to host

Currently we rely heavily on Apache Pulsar for scheduling, delays, and low-level messaging to coordinate running triggers. While it's allowed us to move quickly to build out the Trigger.dev beta, it's proved to be resource intensive to run and complicates our self-hosting story.

In Q2, we'll be replacing Apache Pulsar and moving all our messaging, scheduling, and durability into PostgreSQL (see this Queues in PostgreSQL talk to see what makes this possible).
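The core trick from that talk is PostgreSQL's `SELECT ... FOR UPDATE SKIP LOCKED`, which lets many workers pull from one jobs table without blocking each other. Here's a toy in-memory model of that claim-the-oldest-unlocked-row semantics (the real thing is a SQL query against a jobs table; this sketch only illustrates the behavior):

```typescript
// Toy model of the FOR UPDATE SKIP LOCKED dequeue pattern: each worker
// claims the oldest job not already locked by another worker.
type QueuedJob = { id: number; payload: string; lockedBy: string | null };

const table: QueuedJob[] = [
  { id: 1, payload: "send-email", lockedBy: null },
  { id: 2, payload: "resize-image", lockedBy: null },
];

// Roughly equivalent SQL:
//   SELECT * FROM jobs WHERE locked_by IS NULL
//   ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED;
function claimNext(worker: string): QueuedJob | undefined {
  const job = table.find((j) => j.lockedBy === null);
  if (job) job.lockedBy = worker; // the row lock, in miniature
  return job;
}

const a = claimNext("worker-a"); // claims job 1
const b = claimNext("worker-b"); // skips worker-a's lock, claims job 2
```

Because the lock is taken and the row read in one step, two workers can never claim the same job, which is exactly the coordination guarantee we currently lean on Pulsar for.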

We're also going to remove the wss service, since we won't be using WebSockets for the serverless version (we may eventually add WebSocket support back in, but we'd probably connect directly to the app server, which is powered by Remix).

Another service we'll be removing is the integrations service, which currently handles requests and webhooks. Where are requests and webhooks going? That leads us neatly to...

Making it easier to contribute to

We're moving all requests, webhooks, and anything else related to integrations out of the platform and into the clients. Previously, to ship or update an integration we had to add code in two different places: the client (e.g. @trigger.dev/slack) and the platform. This is because the clients were "thin" and only contained information about the inputs and outputs of a request, along with some metadata. The client would send the input and metadata to the platform, and the platform would be responsible for making the correct API calls, along with retrying and authenticating. A simplified diagram of posting a message to a Slack channel:

[Diagram: posting a message to a Slack channel (beta architecture)]

In Q2, we're moving all integration-specific code into the clients and exposing a general-purpose "Task API" that clients can use to provide their executions with idempotency, resumability, and retries. Posting a message to a Slack channel would then look like this:

[Diagram: posting a message to a Slack channel (Q2 architecture)]
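The exact shape of that Task API hasn't been shared yet, but the idempotency-and-resumability idea can be sketched as a keyed store of completed task outputs: on re-execution, tasks that already ran are skipped and their stored results replayed. All names below are hypothetical, not the real API:

```typescript
// Illustrative sketch of idempotent, resumable task execution. Completed
// task results are stored by key, so re-running an execution replays them
// instead of repeating side effects. Names are hypothetical.
const completed = new Map<string, unknown>();
let slackCalls = 0;

function runTask<T>(key: string, task: () => T): T {
  if (completed.has(key)) return completed.get(key) as T; // resume: replay stored result
  const result = task();
  completed.set(key, result);
  return result;
}

function execution() {
  return runTask("slack.postMessage:welcome", () => {
    slackCalls += 1; // stands in for the actual Slack API call
    return { ts: "1234.5678" };
  });
}

// Running the execution twice (e.g. after a platform retry) performs
// the Slack call only once; the second run replays the stored result.
const first = execution();
const second = execution();
```

In the real system the platform would hold the completed-task store, so an execution can crash mid-way, be re-run from the top, and fast-forward through tasks that already finished.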

We're also doing the same thing for webhooks. Currently, integration-specific webhooks work similarly to requests: both the client and the platform need to know about the webhook and how to register it. A simplified diagram for triggering on a GitHub repo issue event:

[Diagram: triggering on a GitHub repo issue event (beta architecture)]

In Q2, we're again moving all integration-specific webhook code into the clients, exposing a general-purpose "Webhook API" that clients can use to generate unique webhook URLs, which they can then register with a 3rd-party service, e.g. GitHub:

[Diagram: triggering on a GitHub repo issue event (Q2 architecture)]
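In that model, the platform's only webhook responsibility is minting unique ingest URLs; the client-side integration owns the 3rd-party registration call. A minimal sketch (the endpoint path and registration payload shape here are assumptions, though the `events`/`config` body matches GitHub's repository-webhook API):

```typescript
import { randomUUID } from "node:crypto";

// Sketch: the platform mints a unique ingest URL per webhook...
// (the /api/v1/webhooks path is illustrative, not a documented endpoint)
function createWebhookUrl(baseUrl: string): string {
  return `${baseUrl}/api/v1/webhooks/${randomUUID()}`;
}

// ...and the client-side integration registers that URL with the 3rd
// party, e.g. via GitHub's "create a repository webhook" endpoint.
function githubRegistrationBody(url: string) {
  return {
    config: { url, content_type: "json" },
    events: ["issues"],
  };
}

const url = createWebhookUrl("https://api.trigger.dev");
const body = githubRegistrationBody(url);
```

Because the platform only needs to route payloads arriving at the unique URL back to the right Job, it never has to know anything GitHub-specific.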

Webhooks will continue to support nicely typed webhook payloads just like they do now; it'll just be easier for integration developers to expose those types (as it all becomes client-side):

[Screenshot: typed webhook event payloads]

Because everything will be driven by APIs, there will be nothing particularly special about the official @trigger.dev integration packages, and anyone can easily create their own integration packages and distribute them. There'd also be nothing stopping a contributor from creating an SDK and integrations for a whole other language.

Also: New Vocabulary

We're also taking this opportunity to clean up some of the names for things in Trigger.dev to try and clear up confusion. This is currently a WIP, but we think it's already significantly better:

  • Workflows become Jobs
  • Jobs have a Trigger
  • Workflow Runs become Executions
  • Workflow Steps become Tasks
  • Events stay Events

So this code from our Hello World example:

import { Trigger, customEvent } from "@trigger.dev/sdk";

new Trigger({
  // Give your Trigger a stable ID
  id: "hello-world",
  name: "Template: Hello World",
  // Trigger on the custom event named "your.event", see https://docs.trigger.dev/triggers/custom-events
  on: customEvent({
    name: "your.event",
  }),
  // The run function gets called once per "your.event" event
  async run(event, ctx) {
    await ctx.logger.info("Hello world from inside trigger.dev 10");
  },
});

Becomes:

import { Job, customEvent } from "@trigger.dev/sdk";

new Job({
  // Give your Job a stable ID
  id: "hello-world",
  name: "Template: Hello World",
  // Trigger on the custom event named "your.event", see https://docs.trigger.dev/triggers/custom-events
  trigger: customEvent({
    name: "your.event",
  }),
  // The run function gets called once per "your.event" event
  run: async (event, ctx) => {
    await ctx.logger.info("Hello world from inside trigger.dev 10");
  },
});

Towards a better Trigger.dev

As we dive into Q2 2023, we're stoked to tackle these improvements and make Trigger.dev a more developer-friendly platform. By streamlining deployment, hosting, and contributions, we're not just making our lives easier, but also empowering the community to build even cooler stuff. So, let's roll up our sleeves, embrace these changes, and see what awesome things we can create together on Trigger.dev!

FAQs

As we come across potential questions regarding this new direction, we'll add them below.

Will I be forced to upgrade?

Yes, unfortunately. Because of the size of the change, we've decided this will be a breaking change, and old clients will stop working with the cloud trigger.dev when it's released. We'll release updates to existing clients that add deprecation warnings, and we'll have clear instructions on how to upgrade.

How reliable are requests?

Currently, because requests are performed on the platform side rather than the client side, each request is guaranteed to be attempted exactly once (with additional attempts only when a retry is explicitly requested).

Once requests (now called tasks) move fully over to the clients, that guarantee becomes at least once, since there is always a chance the client performs the request but then fails to report back to the platform to mark the task as complete. We'll be providing tools to help clients prevent duplicate tasks, such as idempotency keys. There's also an idea for preventing duplicate tasks on the client side through database integrations, but that's still being explored.
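That failure mode, and the idempotency-key mitigation, can be seen in a small simulation: the first attempt performs the request but fails to report completion, so the platform retries, and the key keeps the underlying side effect from running twice (all names here are illustrative):

```typescript
// Simulation of at-least-once delivery: attempt 1 performs the request
// but crashes before reporting completion, so the platform retries.
// An idempotency key keeps the underlying API call to one execution.
const performedKeys = new Set<string>();
let apiCalls = 0;
let attempts = 0;

function performTask(idempotencyKey: string): void {
  attempts += 1;
  if (!performedKeys.has(idempotencyKey)) {
    performedKeys.add(idempotencyKey);
    apiCalls += 1; // the actual 3rd-party request
  }
  // Attempt 1: crash before telling the platform we finished.
  if (attempts === 1) throw new Error("lost connection to platform");
}

try {
  performTask("send-invoice:order-42");
} catch {
  // The platform never heard back, so it schedules a retry.
}
performTask("send-invoice:order-42"); // the retry succeeds
```

After both attempts, the task has been *attempted* twice but the 3rd-party request has run exactly once, which is the behavior idempotency keys are meant to give you under at-least-once delivery.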