When I die, my tombstone will display a pithy message:
And it will be an accurate representation of my life as a knowledge worker.
So why wouldn’t I at least try to make these Zoom calls awesome?
Introducing AI Agents
Agents are all the rage in AI circles these days.
Here’s OpenAI’s definition of agents:
A system that independently accomplishes tasks on behalf of a user by actively perceiving their environment, making decisions, and taking actions to achieve specific goals, often with a high degree of autonomy.
Unlike chatting with an LLM, agents are more autonomous — you don’t have to tell them when or how to take an action. That’s built into the agent loop.
Doesn’t this require a lot of code?
You’re not wrong for assuming this, but there are a lot of tools that can help non-coders build “AI-Augmented Workflows.”
You may be familiar with workflow automation tools like Zapier, Make, IFTTT and n8n.
These tools pre-date LLMs and were part of an earlier software movement called No-Code, which allowed you to stitch together different apps and workflows to create automations.
These workflow automation tools still exist, but they now have an additional capability: they can access LLMs.
Here’s a simple way to think of these AI-Augmented Workflows:
Step 1: Trigger
The trigger sets off the automation and can include:
An email is received
A file is saved to Google Drive
A row is added to a Google Sheet
A Zoom call occurs
A new item is added to an RSS feed
Step 2: Add intelligence
The trigger will be accompanied by additional information. For example:
Calendar Invite → sender, date, invitees, description text
Gmail → sender, subject line, body text, attachments
Zoom call → date, attendees, transcript, duration
All of these “variables” can then be passed into the LLM of your choice and incorporated into both your prompt and its context.
Armed with this new prompt, you can create any type of LLM output (and here we can reference our G.R.A.I.L. framework).
Step 3: Output
In this last step, you have the output provided by the LLM, and it needs to go somewhere. Here are some pathways for this LLM output:
Save it as a Google Doc
Push it to a Slack channel
Email it to a specific address
And voila!
Your Agentic Workflow will continue to scan for the trigger so you can sit back, relax and let the LLM do its thang.
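If you were to code it yourself, the three steps above boil down to a simple loop. Here’s a minimal sketch in Python — every function name is a hypothetical stand-in for a real integration (an inbox watcher, an LLM API, a Slack or Drive connector), which is exactly the plumbing these no-code tools handle for you:

```python
def check_for_trigger(events):
    """Step 1: Trigger — return the next unprocessed event, if any."""
    return events.pop(0) if events else None

def add_intelligence(event, prompt_template):
    """Step 2: Add intelligence — fold the event's variables into the prompt.
    In a real workflow, this string would then be sent to an LLM."""
    return prompt_template.format(**event)

def route_output(llm_output, outbox):
    """Step 3: Output — push the result somewhere useful (Doc, Slack, email)."""
    outbox.append(llm_output)

def run_workflow(events, prompt_template, outbox):
    # The agent loop: keep scanning for the trigger, process each event.
    while (event := check_for_trigger(events)) is not None:
        route_output(add_intelligence(event, prompt_template), outbox)

# Example: an email-received trigger carrying two variables.
events = [{"sender": "ana@example.com", "subject": "Q3 review"}]
outbox = []
run_workflow(events, "Summarize the email from {sender} about '{subject}'.", outbox)
print(outbox[0])
# → Summarize the email from ana@example.com about 'Q3 review'.
```

The no-code platforms replace each of these stubs with a point-and-click connector, but the shape of the loop is the same.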
Creating a Zoom call analyzer
Let’s put this into practice with a very simple example: running transcripts from Zoom calls through an LLM to extract key information.
This has been a legit pain for me for two reasons.
First, I do 15-20 Zoom calls a week.
Second, there are a lot of tiny steps to make this happen seamlessly. It got to the point that I hired a freelancer on Upwork to do this on my behalf (especially since the extracted sections of these Zoom calls make for excellent context scaffolding).
These steps include:
Downloading the transcript from Fireflies.ai (my AI notetaker)
Copy-pasting the transcript and the prompt into Claude
Waiting for the output and then saving it to a specific folder
Sending the highlights to me for review
While none of these steps is complex, this is easily a 10-minute workflow.
Here’s how I fully automated this workflow using the workflow automation tool Relay.app. (My preliminary research shows that it’s the simplest and cheapest option compared to Zapier, Make and n8n.)
Step 1: Connect Fireflies to Relay.app
The automation platforms make this process super easy, as they handle all the API connectivity and user authorization. It’s effectively point-and-click to connect any app.
Step 2: Set up the Fireflies.ai trigger
The trigger for Fireflies is simple. After every Zoom call, a new transcript is created — and this sets off the workflow.
Step 3: Apply the Intelligence Layer
This step is akin to taking the transcript from Fireflies, uploading it to ChatGPT and giving it a prompt.
Below you’ll see both the attached transcript (1) and the prompt that I used (2).
Here are some neat outputs from this prompt. First, it provides customary “CRM” information:
Action items that I can quickly scan
A summary of the conversation
But it goes further, providing valuable information for my (quickly growing) AI Training and Strategy business:
Pain points and dream outcomes (for marketing copy)
AI use cases (for product design and positioning)
Any entrepreneur knows that these last two bullets are gold — yet hard to consistently extract and save in a regimented fashion.
(This information btw gets aggregated and pushed back into LLMs for presentations, proposals and marketing materials.)
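To make the intelligence layer concrete, here’s a hypothetical sketch of how a transcript gets folded into an extraction prompt. The section names mirror the outputs described above; the prompt wording is my own illustration, not the author’s actual prompt, and the LLM call itself (which Relay handles behind the scenes) is out of scope:

```python
# Hypothetical extraction prompt — the numbered sections mirror the
# outputs described above (action items, summary, pain points, use cases).
EXTRACTION_PROMPT = """Analyze the sales-call transcript below and extract:

1. Action items
2. A summary of the conversation
3. Pain points and dream outcomes
4. AI use cases mentioned

Transcript:
{transcript}
"""

def build_prompt(transcript: str) -> str:
    # Step 3 of the Relay workflow: merge the trigger's variable
    # (the Fireflies transcript) into the prompt before it goes to the model.
    return EXTRACTION_PROMPT.format(transcript=transcript)

print(build_prompt("Alice: Our team wastes hours writing up call notes..."))
```

In Relay, this templating happens visually — you drop the transcript variable into the prompt field — but the underlying operation is the same string merge.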
In this step you also get to pick the model — and each one has different associated costs.
Step 4: Push the output via email and save a copy
As a Gen Xer, I still live in my email. So I’d love to have a copy of this sent to me so that I can review it quickly and act accordingly.
This is extremely easy inside of Relay (and with a few quick steps, I could send it to others outside of my organization or push it to a Slack channel).
Finally, recognizing the long-term value of this unique context, I want to save the raw transcript into Google Drive.
It’s your turn
These steps are pretty straightforward, but here’s the catch: creating the agent is the easy part.
The hard part is identifying the workflow that needs to be automated, finding the triggers and creating the right prompt.
As prompting and tool usage become table stakes, creative (yet pragmatic) use cases of AI are going to be the special sauce.
I think I could vibe code this using Cursor + APIs, but I'd rather pay the $20/month!