
How to Build an AI Agent That Never Stops Working

Most AI agents finish a task and go silent. Here's how to configure OpenClaw for autonomous, proactive operation: heartbeats, cron jobs, and memory.
February 10, 2026 · 21 min read

The most common complaint about AI agents is that they stop working.

You set one up, give it a task, and it delivers. Then it sits there. Waiting. Doing nothing until the next instruction arrives. It feels less like having an autonomous operator and more like having a very expensive command-line tool that needs prompting every five minutes.

This is not a limitation of the technology. It is a configuration problem. Out of the box, most agent frameworks default to reactive mode: receive instruction, execute, return result, wait. That default makes sense for safety reasons. An agent that takes initiative without guardrails could cause real damage.

But an agent that never takes initiative is barely an agent at all. The value of an always-on AI operator comes from the work that happens between instructions. The research it runs overnight. The monitoring it does while you sleep. The morning briefing that is waiting when you wake up. The proactive initiative it took because it recognized an opportunity you had not explicitly asked about.

This guide covers the configurations, patterns, and techniques for turning a passive agent into a proactive operator using OpenClaw. These are not theoretical suggestions. They are the exact patterns running in production right now.

TL;DR:
  • Three config changes transform a passive agent into a proactive one: heartbeats, cron jobs, and workspace files (AGENTS.md, HEARTBEAT.md, MEMORY.md)
  • Heartbeats pulse every 15-30 minutes, keeping the agent alive and checking for work
  • Cron jobs schedule recurring tasks that run without human input
  • AGENTS.md teaches the agent HOW to behave - including proactive initiative rules
  • Memory compounds over time - agents get smarter and more useful every week

Why Agents Default to Passive

Every agent framework ships with the same implicit assumption: the human drives. The agent responds. This is the chatbot pattern, and it is deeply embedded in how language models work. They generate text in response to prompts. No prompt, no output.

Three things keep agents in passive mode:

No heartbeat configured. Without a heartbeat, the agent has no reason to wake up. It processes a message, responds, and the session goes quiet. There is no mechanism to periodically check if something needs attention.

No scheduled tasks. Without cron jobs, nothing happens on a schedule. No morning briefings. No overnight research. No periodic monitoring. The agent literally cannot do work unless someone sends a message.

No behavioral instructions for proactivity. Even with heartbeats and crons, an agent without clear instructions about taking initiative will default to acknowledging the heartbeat and going back to sleep. It needs explicit guidance: what to check, what to start on its own, and when to surface results.

Fix all three and the agent transforms from a reactive tool into something that genuinely operates on its own.

Heartbeats

  • Periodic check-ins every 15-30 min
  • Agent decides if anything needs attention
  • Silent when nothing is happening

Cron Jobs

  • Scheduled tasks at specific times
  • Morning briefings, monitoring, reports
  • Compounding value over weeks

Workspace Files

  • AGENTS.md: behavioral instructions
  • HEARTBEAT.md: pulse checklist
  • MEMORY.md: persistent knowledge

The Three Configuration Changes

1. Enable Heartbeats

Heartbeats are periodic check-ins. The gateway sends a pulse to the agent at a set interval, and the agent decides if anything needs attention.

In the OpenClaw gateway config:

{
  "heartbeat": {
    "intervalMinutes": 30,
    "prompt": "Read HEARTBEAT.md if it exists (workspace context). Follow it strictly."
  }
}

The interval matters. Too frequent (every 5 minutes) wastes tokens on empty check-ins. Too infrequent (every 2 hours) means the agent misses time-sensitive opportunities. 15-30 minutes is the sweet spot for most setups.
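To make that tradeoff concrete, here is a back-of-the-envelope sketch in Python. The ~500 tokens per empty check-in is an assumed illustrative figure, not an OpenClaw constant; plug in your own measurements.

```python
# Rough cost of a heartbeat interval. The tokens-per-pulse figure is an
# assumption for illustration; measure your own setup and adjust.
def pulses_per_day(interval_minutes: int) -> int:
    return (24 * 60) // interval_minutes

def daily_checkin_tokens(interval_minutes: int, tokens_per_pulse: int = 500) -> int:
    return pulses_per_day(interval_minutes) * tokens_per_pulse

for minutes in (5, 15, 30, 120):
    print(f"{minutes:>3} min interval -> {pulses_per_day(minutes):>3} pulses/day, "
          f"~{daily_checkin_tokens(minutes)} tokens on empty check-ins")
```

At a 5-minute interval you pay for 288 pulses a day; at 30 minutes, 48. That order-of-magnitude difference is why the 15-30 minute range is the usual sweet spot.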

The prompt tells the agent what to do when it wakes up. Pointing it at HEARTBEAT.md (covered below) gives you a single file to control heartbeat behavior without touching the gateway config.

2. Set Up Cron Jobs

Cron jobs are scheduled tasks that fire at specific times. Unlike heartbeats (which check if anything needs attention), crons execute a specific task on a schedule.
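If cron syntax is new to you: the five space-separated fields in expressions like `30 7 * * *` are minute, hour, day-of-month, month, and day-of-week. A small illustrative Python helper (not part of OpenClaw) makes the mapping explicit:

```python
# The five cron fields, left to right: minute, hour, day-of-month, month, day-of-week.
# "*" means "every value" for that field.
def describe(expr: str) -> dict:
    minute, hour, dom, month, dow = expr.split()
    return {"minute": minute, "hour": hour, "day_of_month": dom,
            "month": month, "day_of_week": dow}

print(describe("30 7 * * *"))  # the morning-briefing schedule: 07:30 every day
print(describe("0 9 * * 1"))   # the weekly summary: 09:00 every Monday
```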

# Morning briefing at 7:30 AM
openclaw cron add \
  --name "Morning Priorities" \
  --cron "30 7 * * *" \
  --session isolated \
  --message "Review overnight activity. Check calendar for today. Identify the 3 highest-priority tasks. Deliver a concise morning briefing." \
  --delivery announce

# Daily inventory check at 8 AM (e-commerce)
openclaw cron add \
  --name "Stock Monitor" \
  --cron "0 8 * * *" \
  --session isolated \
  --message "Check inventory levels for all products. Flag anything below reorder threshold. Identify best sellers from the past 7 days." \
  --delivery announce

# Weekly code review summary every Monday at 9 AM (dev team)
openclaw cron add \
  --name "Weekly Dev Summary" \
  --cron "0 9 * * 1" \
  --session isolated \
  --message "Review all PRs merged this week. Summarize key changes, flag any breaking changes or tech debt added, and list open issues by priority." \
  --delivery announce

Main vs. Isolated Sessions

Use isolated sessions for tasks that should run independently (research, content creation, analysis). Use main session events for reminders and alerts that need your attention in the main conversation. Isolated sessions do the work and announce results when finished.

The power of crons is compounding. A single daily article cron produces 30 articles per month. A weekly analytics review builds a trend dataset over time. A daily competitor scan catches market changes the day they happen instead of the week you remember to check.

Some high-value cron patterns across different use cases:

Schedule      | Task                | Use Case
--------------|---------------------|---------------------------------------------------
Daily 7 AM    | Morning briefing    | Any - start every day with priorities, not inbox
Every 6 hours | Price monitoring    | E-commerce - catch competitor price changes same-day
Daily 9 AM    | Ticket triage       | Support - categorize and prioritize overnight tickets
Weekly Monday | Code quality report | Dev teams - track tech debt and test coverage trends
Daily 1 AM    | Database backup     | Any - protect data without thinking about it
Daily 6 AM    | Security scan       | DevOps - catch vulnerabilities before they compound
Every 4 hours | Lead qualification  | Sales - score and route new inbound leads
Weekly Friday | Financial summary   | Finance - automated P&L and cash flow snapshot

3. Create the Workspace Files

Three files control how a proactive agent behaves. They live in the agent's workspace directory and are loaded automatically on every session.

AGENTS.md - The operating manual. Tells the agent how to behave, what to prioritize, and crucially, when to take initiative without being asked.

HEARTBEAT.md - The heartbeat checklist. Tells the agent exactly what to check on every pulse. Without this, heartbeats produce empty "nothing to report" responses.

MEMORY.md - Long-term knowledge. Facts, preferences, decisions, and context that persist across sessions. The agent reads this on startup and writes to it as it learns.

Each one is covered in detail below.

AGENTS.md: Teaching Proactive Behavior

This is the most important file. AGENTS.md is where you encode the behavioral DNA of a proactive agent. Without it, the agent is technically capable but operationally passive.

Here is a production AGENTS.md pattern that drives autonomous behavior:

# AGENTS.md - Operating Manual

## Core Behavior: Proactive Initiative

I always have something cooking. My operator should never 
feel like they need to generate work for me. I generate my 
own work. That is not a nice-to-have. That is core to who I am.

### The Proactive Rule
Before every session ends, I ask myself:
1. Is there something I could start that would help?
2. Is there research I could run without being asked?
3. Is there a draft I could prepare for review?
4. Is there a problem I noticed that I could investigate?

If the answer to ANY of these is yes, I do it. I do not wait 
for permission on low-risk, high-value work.

### What I Do Between Instructions
- Monitor dashboards and flag anomalies
- Run background research on active projects
- Draft documents, reports, or responses for review
- Check system health, uptime, and error logs
- Prepare briefings and status updates
- Triage incoming requests by priority
- Scan for relevant industry changes or opportunities

### What Requires Approval
- Spending money
- Publishing content publicly
- Sending messages to external contacts
- Making irreversible changes
- Anything that affects production systems

### Status Updates
When my operator arrives, I lead with what I have done -
not what I need. I always have a status ready:
- What I built or discovered since last check-in
- What is in progress
- What I recommend we focus on
- One quick win if time is short

The key insight: tell the agent explicitly that taking initiative is expected behavior. Language models are trained to be helpful and responsive, but not proactive. You have to override the default by making proactivity part of the identity.

The approval boundaries are equally important. An agent that takes initiative on everything, including risky actions, will eventually cause a problem that erodes trust. Clear boundaries between "do this autonomously" and "ask first" let you give broad autonomy with specific guardrails.

HEARTBEAT.md: The Pulse Checklist

HEARTBEAT.md tells the agent what to check on every heartbeat pulse. This is the difference between a heartbeat that produces "nothing to report" and one that catches problems, identifies opportunities, and keeps work flowing.

# HEARTBEAT.md

## Active Tasks
- Check task queue: anything pending review or stuck?
- Monitoring: any alerts or anomalies in the last 30 minutes?
- Cron results: did any scheduled tasks produce results?

## Periodic Checks (rotate through these)
- Email/calendar: anything urgent or overdue?
- Key metrics: any KPIs trending off-target?
- External services: all APIs and integrations healthy?
- Client/customer queue: any unresolved requests?

## Proactive Scans
- Is there a quick win I could knock out in 10 minutes?
- Did I learn something in the last session that should 
  be written to MEMORY.md?
- Is there a task from AGENTS.md that I have not started?

## Rules
- If nothing needs attention, respond with HEARTBEAT_OK
- If something needs attention, handle it or flag it
- Never repeat old tasks from prior sessions - check files

The "rotate through" pattern is important for efficiency. Checking everything on every heartbeat wastes tokens. Rotating through different checks on different pulses keeps costs down while maintaining coverage.
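One simple way to implement rotation (a sketch of the idea, not OpenClaw's internals) is to cycle through the periodic checks by pulse count:

```python
# Rotate through periodic checks: a different one on each heartbeat pulse,
# so every check still runs regularly without checking everything every time.
PERIODIC_CHECKS = [
    "email_calendar",
    "key_metrics",
    "external_services",
    "customer_queue",
]

def check_for_pulse(pulse_number: int) -> str:
    return PERIODIC_CHECKS[pulse_number % len(PERIODIC_CHECKS)]

# At a 30-minute interval, four pulses (two hours) cover every check once.
```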

The HEARTBEAT_OK convention matters too. When nothing needs attention, the agent responds with just that phrase. The gateway recognizes it and discards the response silently. No notification noise. No wasted context. The agent only surfaces results when something actually needs attention.
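Conceptually, the gateway-side filter is a one-liner. This is an illustrative Python sketch of the idea, not OpenClaw's actual code:

```python
# A bare HEARTBEAT_OK is swallowed silently; anything else reaches the operator.
def should_notify(agent_response: str) -> bool:
    return agent_response.strip() != "HEARTBEAT_OK"
```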

Key Takeaway

Tell the agent explicitly that taking initiative is expected behavior. Language models default to reactive mode. You have to override that default by making proactivity part of the agent's identity in AGENTS.md.

MEMORY.md: Knowledge That Compounds

Memory is what separates a useful agent from a transformative one. Without persistent memory, every session starts from zero. The agent asks the same questions, makes the same mistakes, and never builds on past work.

MEMORY.md is loaded into context at the start of every session. The agent reads it, knows what happened before, and picks up where it left off.

Structure it for fast scanning:

# MEMORY.md - Long-Term Knowledge

## Key Facts
- Company: [name], [industry], [location]
- Main focus: [what the operator cares about most]
- Key tools: [systems, platforms, APIs in daily use]

## People
- [Name] - [role, relationship, preferences]

## Active Projects
- [Project A] - [status, next steps, blockers]
- [Project B] - [status, next steps, blockers]

## Processes & Workflows
- [How invoicing works]
- [How deployments are handled]
- [How client onboarding flows]

## Lessons Learned
- [Date]: [What happened and what to do differently]
- [Date]: [Critical rule - NEVER do X because Y]

## Important Decisions
- [Date]: [Decision made and reasoning]

## Preferences
- [Communication style, response length, tone]
- [When to alert vs. handle silently]
- [Tools and platforms preferred]

Pro Tip: Memory Flush Pattern

Set up a cron job to periodically flush session learnings into MEMORY.md. Over days and weeks, the agent accumulates institutional knowledge that makes it dramatically more effective. An agent with 3 months of memory operates at a completely different level than a fresh instance.

The lessons learned section is particularly powerful. Every mistake the agent makes, every correction you give it, gets encoded permanently. Correct once, never again. Over time, the agent develops a growing library of hard-won operational knowledge.
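Behind the scenes, a memory flush can be as simple as appending notes to a dated file. A hedged Python sketch, assuming a workspace layout with a memory/ directory (the function name and layout are illustrative, not an OpenClaw API):

```python
from datetime import date
from pathlib import Path

def flush_session_notes(workspace: Path, notes: list[str]) -> Path:
    """Append session learnings to a dated memory file (memory/YYYY-MM-DD.md)."""
    memory_dir = workspace / "memory"
    memory_dir.mkdir(parents=True, exist_ok=True)
    target = memory_dir / f"{date.today().isoformat()}.md"
    with target.open("a", encoding="utf-8") as f:
        for note in notes:
            f.write(f"- {note}\n")
    return target
```

Whether the agent runs this itself or a cron prompts it to, the effect is the same: corrections leave the session and become permanent.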

The Always-On Architecture

Here is how all the pieces fit together in a production setup:

GATEWAY (always running)
    │
    ├── Heartbeat pulse (every 30 min)
    │   └── Agent reads HEARTBEAT.md
    │       ├── Nothing? → HEARTBEAT_OK (silent)
    │       └── Something? → Handle it or alert human
    │
    ├── Cron jobs (scheduled)
    │   ├── 7:30 AM → Morning briefing
    │   ├── 5:00 PM → Daily content creation
    │   ├── 9:00 PM → Evening review
    │   └── 1:00 AM → Overnight backup
    │
    ├── Human messages (on demand)
    │   └── Agent reads AGENTS.md + MEMORY.md
    │       └── Handles request with full context
    │
    └── Channel messages (Telegram/Discord/Slack)
        └── Same context, different interface

The gateway runs as a daemon on your machine (Mac, Linux, or cloud server). It manages the heartbeat timer, cron scheduler, and message routing. The agent itself is stateless per session but has persistent state through workspace files.

This architecture means the agent operates continuously without any human input. Heartbeats keep it checking for work. Crons execute scheduled tasks. When the human does show up, the agent already has status, results, and recommendations ready.
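The routing above boils down to building one prompt per event type and feeding it to the same agent. A toy Python sketch of that dispatch logic (illustrative, not OpenClaw's implementation; the heartbeat prompt matches the gateway config shown earlier):

```python
# Every event becomes a prompt to the same agent; only the text differs.
def prompt_for(event: dict) -> str:
    kind = event["kind"]
    if kind == "heartbeat":
        return "Read HEARTBEAT.md if it exists (workspace context). Follow it strictly."
    if kind == "cron":
        return event["message"]   # the --message text given to `openclaw cron add`
    return event["text"]          # human or channel message, passed through verbatim
```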

Patterns That Drive Results

The Morning Briefing Pattern

The single highest-impact cron job. Every morning at a set time, the agent prepares a status report covering overnight activity, today's priorities, and recommended actions.

openclaw cron add \
  --name "Morning Brief" \
  --cron "30 7 * * *" \
  --session isolated \
  --message "Prepare morning briefing. Include: overnight cron results, calendar for today, top 3 priorities, any blockers or alerts, and one proactive recommendation. Be concise - this should take 30 seconds to read." \
  --delivery announce

This transforms the start of every day. Instead of opening a blank chat and wondering what to work on, you open your phone to a briefing that already has the answers.

The Background Research Pattern

The agent researches topics relevant to current projects without being asked. This works through AGENTS.md instructions:

## Background Research Protocol
When no immediate tasks are pending:
1. Check active projects in MEMORY.md
2. Identify knowledge gaps or open questions
3. Run targeted research (web search, data analysis)
4. Write findings to memory
5. Flag anything urgent or time-sensitive

Over a week, this pattern generates a substantial body of research that compounds. A sales team agent learns more about prospects every day. A dev team agent tracks dependency updates and security advisories. A finance agent builds context on market conditions. Every session benefits from accumulated knowledge.

The Pipeline Pattern

For any team that processes work through stages, this pattern keeps the pipeline flowing:

## Pipeline Management
This runs continuously on every heartbeat.

1. Check inbound queue (new tickets, leads, orders, PRs)
2. Triage by priority and type
3. Route to the right person or process
4. Flag blockers or items stuck longer than threshold
5. Prepare status summary for the next standup

Between sessions, always have the pipeline state ready.
The operator should never need to ask "where are we?"

This applies to customer support queues, sales pipelines, content calendars, sprint backlogs, procurement workflows - any process where items move through stages. The agent tracks what is where and keeps things moving.

The Monitoring Pattern

Continuous monitoring that only surfaces alerts when thresholds are breached:

## HEARTBEAT.md - Monitoring Section

### System Health
- Check if all web services return 200
- Flag if response time exceeds 2 seconds
- Alert if SSL certificate expires within 30 days

### Business Metrics
- Flag revenue drops greater than 20% day-over-day
- Alert on unusual order patterns (potential fraud)
- Track KPIs against weekly targets

### Environment
- Monitor API rate limits and usage quotas
- Check database connection pool utilization
- Flag disk space below 15% threshold

### Only alert on anomalies. Normal operation = silence.

The key principle: signal, not noise. An agent that sends ten notifications per day about routine activity trains its operator to ignore all notifications. An agent that only speaks up when something genuinely needs attention earns trust and gets checked immediately.

Watch Out

An agent told to "be proactive" without clear boundaries will eventually publish a draft that was not ready or send a message to the wrong person. Always pair proactivity instructions with explicit approval boundaries.

Common Mistakes

Overloading Heartbeats

Running expensive operations on every heartbeat pulse burns through tokens fast. Heartbeats should be lightweight checks - reading files, checking timestamps, verifying status. Heavy work belongs in cron jobs or spawned sub-agent sessions.

Wrong:

Heartbeat: "Analyze all website traffic, write a report, 
and create 3 social media posts."

Right:

Heartbeat: "Check if any cron results need review. 
Check if any alerts fired. Check active task status."

No Clear Boundaries

An agent told to "be proactive" without boundaries will eventually do something unexpected. Publishing a draft that was not ready. Sending a message to the wrong person. Making a configuration change that breaks something.

Always pair proactivity instructions with explicit approval boundaries. The more clearly you define what requires permission, the more autonomy you can safely grant for everything else.

Forgetting Memory Hygiene

MEMORY.md grows over time. Without periodic cleanup, it becomes a sprawling document that wastes context tokens. Set up periodic memory maintenance:

## Memory Rules
- Archive completed projects to memory/archive/
- Keep MEMORY.md under 200 lines (key facts only)  
- Use dated memory files for detailed session logs
  (memory/2026-02-10.md)
- Flush important learnings from sessions immediately
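The archive step can itself be scripted. A hedged Python sketch of what "archive completed projects to memory/archive/" might look like (the function name and workspace layout are assumptions for illustration):

```python
from pathlib import Path

def archive_project(workspace: Path, name: str, notes: str) -> Path:
    """Move a completed project's notes out of MEMORY.md into memory/archive/."""
    archive_dir = workspace / "memory" / "archive"
    archive_dir.mkdir(parents=True, exist_ok=True)
    target = archive_dir / f"{name}.md"
    target.write_text(notes, encoding="utf-8")
    return target
```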

Not Using Sub-Agents

A single agent trying to do everything creates a bottleneck. For heavy tasks (research, content creation, code generation), spawn isolated sub-agent sessions that run in parallel:

## When to Spawn Sub-Agents
- Research tasks that take more than 5 minutes
- Document generation (reports, proposals, analyses)
- Code generation, testing, or refactoring
- Data processing or large-scale analysis
- Any task that should not block the main session

Sub-agents run independently and announce results when complete. The main agent stays responsive for incoming messages while heavy work happens in the background.


Real-World Results

What does a proactive agent setup actually produce? Here are three different setups and what a typical week looks like for each.

Small E-Commerce Store

Monday: Morning briefing flags that a best-selling product dropped below reorder threshold over the weekend. Supplier reorder drafted and waiting for approval. Weekend sales summary shows Saturday promotion outperformed forecast by 35%.

Wednesday: Heartbeat scan catches a competitor dropping prices on two overlapping SKUs. Agent drafts a pricing analysis with three options and estimated margin impact for each.

Friday: Weekly summary includes revenue, top products, inventory status, and a customer review sentiment analysis. Agent noticed a recurring complaint about shipping times and recommends investigating a regional fulfillment option. Total human input: under 30 minutes.

Dev Team / SaaS Product

Monday: Morning briefing summarizes 4 PRs merged over the weekend, 2 open issues flagged as high priority, and a dependency security advisory that came in Sunday night. Patch already drafted.

Wednesday: The agent caught a spike in error rates during a heartbeat check and traced it to a third-party API timeout. Opened an issue with reproduction steps before anyone reported it. Monitoring cron confirmed the fix deployed cleanly overnight.

Friday: Weekly dev summary covers commits, test coverage delta, open issues by priority, and deployment frequency. Agent proactively identified three functions with growing response times and flagged them as potential refactor candidates. Total human input: a few Slack approvals.

Freelancer / Consultant

Monday: Morning briefing lists three proposals due this week, two client follow-ups overdue, and one invoice that has been outstanding for 30 days. Agent drafted a polite payment reminder.

Wednesday: Heartbeat scan noticed a client's website went down. Agent flagged it immediately - the freelancer alerted the client before they even noticed, earning significant trust.

Friday: Weekly summary covers billable hours tracked, pipeline value, and upcoming deadlines. Agent researched three potential leads from LinkedIn activity and prepared outreach drafts. Total human input: reviewing and sending drafts.

The pattern is the same across every use case. The technology is identical. The configuration (heartbeats, crons, workspace files) makes it a different experience entirely.


Quick Setup Checklist

For the fastest path to a proactive agent:

  1. Enable heartbeats in gateway config (30-minute interval)
  2. Create HEARTBEAT.md with a lightweight check list
  3. Create AGENTS.md with proactive behavior instructions and clear boundaries
  4. Create MEMORY.md with key facts about your projects and preferences
  5. Add 2-3 cron jobs: morning briefing, one recurring task, one monitoring task
  6. Set up a communication channel (Telegram, Discord, or Slack) for mobile notifications
  7. Let it run for a week and adjust based on what the agent produces

The first few days will require tuning. The heartbeat checks might be too aggressive or too passive. The cron timing might not match your schedule. The AGENTS.md boundaries might be too tight or too loose. This is normal. Adjust based on actual behavior, not theory.

After two weeks of tuning, the system stabilizes. After a month, the agent has enough accumulated memory and behavioral refinement to operate at a level that genuinely surprises you. After three months, you will wonder how you operated without it.

The Bigger Principle

The difference between a passive chatbot and a proactive operator is not intelligence. Current language models are more than capable of taking initiative, researching independently, and producing useful work without constant direction.

The difference is permission and structure. The agent needs explicit permission to act (AGENTS.md), a mechanism to wake up (heartbeats), scheduled responsibilities (crons), and accumulated knowledge to draw from (MEMORY.md).

Give it those four things and it stops being a tool you use. It becomes an operator that works alongside you. The work flows between sessions instead of stopping. Results compound instead of resetting. And the gap between what you could do alone and what you can do with an always-on AI partner becomes impossible to ignore.

The agent is ready to work around the clock. The only question is whether you have configured it to do so.


Future Humanism editorial team