Hey Friends!
I’ve just landed in London ahead of the Self Publishing Formula Live Show in a few weeks, where I’ll be speaking both days on the systems, strategies, and automations we talk about right here. If you’re attending, come say hi—I’ll be the one handing out copies of Indie Author Magazine, brainstorming workflows with fellow nerds, and mocking the tea drinkers by reminding them coffee is life.
Before this week’s good stuff, here’s a quick heads-up about what’s coming:
When I get back, Author Automations is getting a serious upgrade. The monthly rate for new subscribers is going up to $39—but it’ll include way more:
One DIY Automation Kit per month (now available for sale on IndieAuthorTraining.com)
Discounts on courses, live events, and consulting
First dibs on new tools and templates
Founding members will get access to a private community with support and personal consulting from me
But here’s the best part:
If you’re already subscribed, you’re locked in. No price changes. No surprises. Just more value for the same rate.
If you’ve been thinking about jumping in, now’s the moment. Everything’s growing—but I’m keeping the early crew grandfathered in as a thank-you for building this with me.
Subscribe now
Now, back to this week’s newsletter! Last week, we layered logic into our workflows—filters, routers, conditions, and fail-safes. We made our systems smarter, more stable, and way less likely to fall apart when you look away.
This week, we hand over the clipboard.
We’re talking agentic automations—systems that don’t just run when told, but evaluate, choose, and adapt based on context. Workflows that behave more like teammates than task runners. And to do that, we need to give them a brain.
Enter: n8n.
This open-source automation tool doesn’t just let you build drag-and-drop flows (though it does that beautifully). It also gives you control—over memory, logic, branching, and AI integrations that go way beyond “insert summary here.” Think programmable intelligence. Think condition-aware flows. Think: a system that knows what happened yesterday and adjusts today accordingly.
And powering that brain? The LLM layer.
Yes, ChatGPT is great. Claude has manners. Gemini’s trying hard. But the AI universe is bigger than the press cycle. We’re living in an era where you get to choose your model—and in many cases, even run it locally. Models like DeepSeek, Mistral, LLaMA, Mixtral, and Command R, along with open-weight variants from Cohere, Aleph Alpha, and open-source Hugging Face models, are shaping workflows for people who need power without giving up privacy or budget.
This week is about how to:
Introduce n8n as your agentic command center
Connect it to LLMs that don’t require an OpenAI subscription
Structure logic that evaluates, not just triggers
Think like an architect, not just an operator
We’re building beyond buttons. It’s time to build judgment into your systems.
🔧 Why n8n Is the Right Tool for the Job
You already know how Zapier and Make.com work—one trigger, one outcome, maybe a couple branches if you’re feeling fancy. But n8n changes the game. It doesn’t just connect tools. It lets you think like a system architect—visually, dynamically, and on one flexible canvas.
Instead of building isolated workflows in separate silos, you can connect multiple tools and workflows into a single flowchart that thinks across your whole business. Pull from your database, check your CRM, tag someone in your email list, generate a response, and log it to your helpdesk—in one view. Not across four apps and three browser tabs.
Need to split logic into four paths based on subscriber type? Easy. Want to build a webhook that triggers an AI model, sends an email, and updates a spreadsheet if it fails? No problem. You can even build self-healing automations that check themselves and restart if something stalls.
This is where n8n shines—it gives you the power to see and manage everything at once. No clicking back and forth between tools, no wondering where data went. It’s like command central for your workflows, and once you’ve used it, it’s hard to go back.
Even better? You can run it locally.
I’m talking full control. No data leaks. No API token surprises. No third-party limits.
And when it comes to AI integrations, you’re not locked into the usual suspects. I’ve got Ollama running locally with open-source models like Phi (lightweight, fast, shockingly capable), and I’m using WebUI as a front-end to test prompts, run agents, and validate results before I ever wire them into a workflow.
This stack doesn’t just save money—it gives me privacy, flexibility, and speed. And it’s what makes it possible to test and deploy intelligent automation without handing over my data to a cloud black box. It’s all part of the StorytellerOS that runs my business.
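To make the local-model piece concrete, here’s a minimal Python sketch of calling a model through Ollama’s local HTTP API. It assumes Ollama is running at its default address (`http://localhost:11434`) and that you’ve pulled a model like `phi3`; the `post` parameter is injectable so you can test the flow without a live server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "phi3", post=None) -> str:
    """Send a prompt to a locally running model and return its reply.

    `post` is injectable so this can be exercised without a running server.
    """
    payload = build_request(model, prompt)
    if post is None:
        def post(url, body):  # default: real HTTP call to the local Ollama server
            req = urllib.request.Request(
                url,
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())
    return post(OLLAMA_URL, payload).get("response", "")
```

Because the HTTP call is injected, the same function drops into a larger workflow or a test harness unchanged; in n8n you’d typically point an HTTP Request node at the same endpoint.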
You don’t need to start with everything. You just need to know it’s possible. Because this is where automation stops being about convenience and starts becoming infrastructure.
🤖 What Makes an Automation Agentic?
An automation becomes agentic when it stops blindly following instructions—and starts making decisions based on the data around it. It doesn’t just react. It evaluates.
Here’s the big shift: in a traditional workflow, you have to map out every step. “If this happens, then do that.” You’re the one wiring the logic and setting the conditions.
It’s linear.
But in an agentic system, you connect all your tools, define the objective, and let the system decide how to get there.
You’re no longer telling it how to respond—you’re telling it what outcome you want, and it figures out the path using context, logic, and even multiple models.
That means your agent can:
Check your CRM for past interactions
Pull the latest order from Shopify
Cross-reference data in Airtable
Look at whether someone opened your last two emails
Compare that data to trends scraped from the web or pulled from Perplexity
Double-check its own response using a secondary “think” model before executing
In short: it has a 360º view of your ecosystem. And it uses that view to decide what to do next—without you scripting every move.
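The capability list above boils down to one pattern: merge lookups from several sources into a single context, then pick the next step from that picture instead of a hard-wired chain. Here’s a minimal Python sketch of that decision layer; the keys and action names are illustrative placeholders, not a real schema:

```python
def decide_next_action(context: dict) -> str:
    """Pick an action from gathered context instead of a fixed if/then chain.

    `context` merges lookups from your CRM, store, and email stats.
    """
    if context.get("open_rate", 1.0) < 0.2:
        return "re-engagement_sequence"   # reader has gone quiet
    if context.get("recent_order"):
        return "thank_you_with_upsell"    # just bought something
    if context.get("is_new"):
        return "welcome_sequence"
    return "regular_newsletter"

# Each lookup (CRM, Shopify, Airtable, email stats) feeds one merged dict:
context = {"is_new": False, "recent_order": True, "open_rate": 0.6}
```

In an agentic flow, an LLM can build or extend this context before the decision is made; the point is that the decision reads the whole picture, not a single trigger.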
That’s how we’re building discovery workflows on Direct2Readers.com, by the way. We use agentic automation to cross-check genres, pull also-boughts from third-party sources, gather additional data, analyze the cover for mood and alignment with tropes, and match a book to what a reader actually wants—not just what an algorithm wants to promote. Here’s a video that breaks down what’s happening.
Even mainstream tools are getting in the game:
Make.com’s new “agents” let you group multiple scenarios together and execute them conditionally. It’s like an orchestration layer for people with lots of scattered workflows. If you’ve already got a dozen scenarios, this makes them smarter—without having to rebuild from scratch.
Zapier’s beta agent feels more like a souped-up ChatGPT wrapper, but it’s learning. You can give it tasks like “respond to customer inquiries with tone matching” and it’ll make decisions based on text, timing, and CRM tags. Still early, but worth watching.
But if you want the full agentic experience now? That’s where n8n wins.
It already lets you wire together:
Your tools (via APIs, webhooks, or native nodes)
Your logic (conditions, loops, retries, branches)
Your models—including local or private LLMs
🧠 LLMs Are the Brains—But You Get to Choose Which One
When we talk about giving your automation a brain, most people picture ChatGPT. Or Claude. Or maybe Gemini if they’ve been poking around Google’s experimental zone.
Those are the names making headlines. But they’re not your only option. Not by a long shot.
There’s an entire universe of large language models (LLMs) that can power your workflows—many of them smaller, faster, and far more flexible than the big three. And depending on how you structure your automations, you can mix and match them—right inside n8n or any agentic system you’re building.
That flexibility matters. Because the model you choose becomes the brain of your operation. And you don’t always want the same brain for every task.
If you’re self-hosting (like I do with Ollama), you can run models like Gemma 3, Phi-3, or Mixtral directly on your machine or a private server. These models are lightweight, lightning-fast, and excellent for tasks like:
Drafting short content snippets
Evaluating simple customer data
Routing decisions in workflows
Running quiet background processes without eating your API budget
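One simple way to express that “right brain for the right task” idea is a routing table: cheap local models handle background chores, and a bigger hosted model is reserved for the jobs that need nuance. The model names below are examples; swap in whatever you actually run:

```python
# Map task types to models: small local models for background chores,
# bigger hosted models only where nuance justifies the cost.
# These names are illustrative, not a fixed recommendation.
MODEL_ROUTES = {
    "snippet_draft": "phi3",      # lightweight local model via Ollama
    "data_triage": "mistral",     # local, fast, cheap
    "long_summary": "claude-3",   # hosted, long-context
}

def route_model(task_type: str) -> str:
    """Return the model suited to a task, defaulting to the cheap local one."""
    return MODEL_ROUTES.get(task_type, "phi3")
```

In n8n this is just a Switch node (or a Code node) deciding which model node the data flows into next.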
If you’re working with more nuanced content or need long-context handling (say, summarizing a podcast transcript or evaluating user behavior over time), you might pipe that through Claude 3 or Command R+.
And in truly agentic flows? You can use multiple models at once:
• One model does the generation
• A second checks the tone or fact accuracy
• A third decides whether to publish or send
All inside one n8n canvas. All working together.
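That three-model hand-off can be sketched as a small pipeline. Here, `generate`, `review`, and `approve` are stand-in callables for separate LLM nodes on the canvas, not a real n8n API:

```python
def agentic_publish(prompt: str, generate, review, approve) -> dict:
    """Chain three models: one drafts, one critiques, one decides.

    Each callable stands in for a separate LLM node in the workflow.
    """
    draft = generate(prompt)            # model 1: generation
    critique = review(draft)            # model 2: tone / fact check
    ok = approve(draft, critique)       # model 3: publish-or-hold decision
    return {"draft": draft, "critique": critique, "published": ok}

# Stub "models" to show the shape of the hand-off:
gen = lambda p: f"Draft about {p}"
rev = lambda d: "tone: friendly" if "Draft" in d else "tone: off"
app = lambda d, c: c.startswith("tone: friendly")
```

The design choice worth copying is that no single model both writes and approves its own output; the checker sees only the draft, which keeps each step auditable.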
Need to scrape content from the web for research? Add Perplexity or a scraping tool like Apify to your flow.
Need to enrich product data or run an “also bought” logic layer like we do on Direct2Readers.com? Combine your CRM, scraped store listings, and a summarizing model that actually recommends targeted marketing copy based on what’s selling NOW—not just regurgitates keywords.
This is the real magic of agentic automation: your system doesn’t just act. It decides.
And now, you get to decide who’s doing the thinking.
📬 Use Case: Reader Support Automation That Actually Thinks
The Goal:
Reply to incoming reader messages with helpful, tone-aware content—without it sounding like it came from a robot factory.
Here’s how it works, step by step:
Trigger
A reader joins your mailing list with their mailing address. That hits a webhook in n8n.
Lookup + Routing
The system cross-checks their email in your CRM or Google Sheet to see if they’re already tagged (ARC reader, buyer, reviewer, etc.).
LLM #1 (summarizer)
It checks your calendar for signings or upcoming events and compares their locations against the reader’s mailing address.
Logic Branch
If they’re new, send a welcome-style reply. If they’re returning, check for recent interactions or purchases.
LLM #2 (copy generator)
The second model drafts a reply using templated tone guidance. You’ve trained this flow to sound like you—friendly, clear, no fluff. It can send a personalized reply with local information, upcoming events, and other details without you pre-programming anything, because it has access to your calendar, CRM, and past emails.
LLM #3 (tone check or override)
Before it goes out, the response passes through a final model that double-checks for tone alignment. This could be another LLM or a fine-tuned classifier. You decide.
Send + Log
The system sends the email reply and logs the interaction to your CRM, helpdesk, or a shared team dashboard—whichever system you’re using.
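The steps above can be sketched as one function, with every external service injected so the logic maps cleanly onto the corresponding n8n nodes. The field names and tags here are illustrative, not a real webhook schema:

```python
def handle_new_reader(payload: dict, crm_lookup, draft_reply, tone_ok) -> dict:
    """Sketch of the webhook-to-send flow: lookup, branch, draft, tone-check.

    `crm_lookup`, `draft_reply`, and `tone_ok` stand in for CRM, LLM,
    and classifier nodes so the same logic is testable offline.
    """
    tags = crm_lookup(payload["email"])        # lookup + routing
    kind = "returning" if tags else "new"      # logic branch
    reply = draft_reply(kind, payload)         # LLM #2: copy generator
    if not tone_ok(reply):                     # LLM #3: tone check / override
        return {"status": "flagged_for_review", "reply": reply}
    return {"status": "sent", "reply": reply, "tags": tags}

# Stub services to show the flow end to end:
crm = lambda email: ["buyer"] if email == "old@example.com" else []
draft = lambda kind, p: f"{kind} reader welcome"
tone = lambda r: "welcome" in r
```

Note that the tone check is a gate, not a rewrite: anything that fails lands in a review queue instead of a reader’s inbox.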
This is how agentic automation becomes useful—not just impressive.
It doesn’t stop at email, either. You can run this same flow to:
Triage helpdesk tickets and flag only the ones that need human input
Write product descriptions from inventory fields
Draft Facebook ads from your book metadata
Coordinate multi-step campaigns across email, social, and your blog
And if something fails—like an API response gets weird or a model goes offline—your logic catches it, reroutes it, or logs it for manual review. That’s what agentic systems are built for: resilience.
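That catch-and-reroute behavior is easy to sketch: retry a step a few times, then hand the failure to a fallback such as “log for manual review.” This is the generic pattern, not n8n’s built-in retry setting:

```python
import time

def run_with_fallback(step, fallback, retries: int = 2, delay: float = 0.0):
    """Run a workflow step, retrying on failure; reroute to `fallback`
    (e.g. log for manual review) if every attempt fails."""
    last = None
    for _ in range(retries + 1):
        try:
            return step()
        except Exception as err:
            last = err
            time.sleep(delay)   # back off between attempts
    return fallback(last)       # reroute instead of crashing the flow
```

In practice the fallback might write to a review sheet or fire a notification; the key is that a flaky API call degrades gracefully instead of killing the whole workflow.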
🔐 Privacy, Control, and the Case for Going Local
Everything we’ve talked about—agentic workflows, smart orchestration, multiple LLMs talking to each other—it all sounds powerful (because it is). But the real power move isn’t in what these systems can do.
It’s in where they live.
Most automation platforms are built on other people’s infrastructure. Your data runs through their servers. Your usage is capped by their pricing. Your business logic is effectively rented—at any moment, a policy change or outage could pull the rug out from under you.
That’s why I run the entire Author Automations ecosystem—including Chellebot, our customer interactions, and team workflows—on a self-hosted stack.
My LLMs don’t phone home. They’re not training on subscriber emails or ingesting confidential ad copy. They run inside a secure, locked-down environment that I control. My metadata stays mine. My inputs and outputs stay off third-party dashboards.
This isn’t paranoia. It’s planning.
When you self-host tools like:
Ollama (to run models like Mistral, Phi, or LLaMA)
WebUI (for testing and visual prompt design)
Baserow or Supabase (for local databases and customer info)
n8n (to orchestrate all of it)
…you stop building your business on rented space. You own the system. You run the system. And you’re free to scale it however you want—no roadmap dependency, no surprise price hikes, no terms-of-service edits buried in footnotes.
You don’t have to start here. But you should know it’s possible.
This is the future of intelligent automation. Not just smarter systems—but sovereign ones.
📣 Consulting Is Open (And Yes, the Newsletter’s Staying Exactly the Same)
Author Automations started as a personal playground—a place to share what I was building, breaking, and fixing in real time. That part isn’t changing.
This newsletter will always be a space where I unpack tools, give away frameworks, and teach the systems that keep creative businesses running without burning out the people behind them.
But behind the scenes? Things have grown. I’ve built hundreds of automation scenarios—everything from onboarding flows to publishing pipelines, AI-powered content stacks, intelligent reader interactions, and fully private marketing systems. And now, I’m working on making them available.
I’ve opened up consulting slots, plus a full suite of:
• Pre-built automation packages
• System audits and integration walkthroughs
• Custom workflows for authors, publishers, and small creative teams
If you want help turning your business into a machine that actually supports your creative work (instead of draining your time), now’s the time.
🛠️ View the Packages or Book a Consultation Here
If you’re not ready? No pressure. This newsletter stays free, weekly, and focused. Always.
But if you’re building something bigger and you need backup? I’m in. I’m adding more automations as I can (Hey, I’m in Europe, give a girl a break!) and I’ll start using Substack Notes to let everyone know when I do!
Until next week,
Chelle