The standup used to be how you found out what was happening.

Now it’s how you interrupt the people making it happen.

9:00 a.m.
"Yesterday I did this. Today I'm doing this. These are my blockers."

9:05 a.m.
Same script, different person.

9:10 a.m.
Again.

Everyone knows this standup could’ve been done on Slack, but everyone keeps showing up anyway.

The environment that made these standups useful is disappearing. When your engineers can ship 10x more code with AI tools, coordination rituals become the bottleneck. Teams that keep these live syncs pay a context-switching tax just to restate information the system already has.

In teams that build with AI every day, visibility no longer has to be narrated. It can be observed.

We've moved our engineering team to an exception-driven model. We don't sync because it's 9:00. We sync when there’s a decision to be made.

We replaced our standups with this

We got tired of the “status narration” loop, so we built a small replacement inside our dev workflow.

In Claude Code, we keep a shared library of “skills” (markdown workflows) available inside each repo via .claude/skills/. One of them is AI Data Report. At the end of a work block, or right before handing something to reviewers, we invoke it like this:

/ai-data-report

“Generate a session summary of what we accomplished.”

Claude reads the skill instructions, inspects the repo state (diffs, commits, PR context, test signal), and generates a transparency report as a markdown file saved to:

.claude/reports/YYYY-MM-DD-[type].md

It’s deliberately factual and review-friendly:

  • changes made
  • files/diffs and commit links
  • risks + test signal
  • suggested next steps
  • Human vs. Agent time split

Example excerpt:


📝 Session Summary — 2026-02-03

Changes made:
- Hardened auth middleware
- Improved search caching

Files: 8 modified (+214 / -97)
Commits: 3
Bugs fixed: flaky test in search.spec.ts

Next steps:
- Add eval set for "zero results" edge case

Risk flags:
- Touches auth + caching
- Staging tests failing (2/120)

Time: 90% Agent, 10% Human (review + direction)
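Under the hood, a skill is just a markdown file of instructions the agent follows. A minimal, hypothetical sketch of what a report skill file could look like (the section list mirrors the report structure above; treat the exact wording as an assumption, not our production skill):

```markdown
---
name: ai-data-report
description: Generate a factual session summary from repo state.
---

# AI Data Report

When invoked, inspect the current repo state (diffs, commits, PR
context, test results) and write a session report to
`.claude/reports/` with these sections:

1. Changes made
2. Files/diffs and commit links
3. Risks + test signal
4. Suggested next steps
5. Human vs. Agent time split

Stick to facts visible in the repo; do not speculate.
```

Because the skill lives in the repo, every engineer (and every agent session) produces reports with the same shape, which is what makes them skimmable in bulk.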
  

Blockers shouldn’t be self-reported

Once you’re generating session reports from repo reality, the next move is obvious: stop waiting for humans to remember they’re blocked.

Most blockers show up as friction you can measure:

  • PR waiting too long for review
  • tests flaking on a branch
  • deploy failing in staging
  • tickets drifting with no owner or unclear acceptance criteria
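Take the first signal as an example. A minimal sketch of flagging PRs that have waited too long for a first review, assuming you've already pulled PR metadata from your Git host's API into plain dicts (the field names and the 24-hour SLA here are illustrative, not from our setup):

```python
from datetime import datetime, timedelta

# Hypothetical threshold; tune per team.
REVIEW_SLA = timedelta(hours=24)

def stale_prs(prs, now):
    """Return IDs of PRs still waiting past REVIEW_SLA for a first review.

    `prs` is a list of dicts with 'id', 'opened_at' (datetime), and
    'first_review_at' (datetime or None), standing in for whatever
    your Git host's API returns.
    """
    return [
        pr["id"]
        for pr in prs
        if pr["first_review_at"] is None
        and now - pr["opened_at"] > REVIEW_SLA
    ]

now = datetime(2026, 2, 3, 9, 0)
prs = [
    {"id": 101, "opened_at": now - timedelta(hours=30), "first_review_at": None},
    {"id": 102, "opened_at": now - timedelta(hours=2), "first_review_at": None},
    {"id": 103, "opened_at": now - timedelta(hours=40),
     "first_review_at": now - timedelta(hours=20)},
]
print(stale_prs(prs, now))  # → [101]
```

Pipe the flagged IDs into a Slack channel and the "any blockers?" question answers itself, continuously instead of once a day.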

When those signals surface as they happen, the standup's biggest selling point, catching blockers early, is something your tooling already does better.

Could the “Scrum Master” title go extinct?

We think the title is on borrowed time.

When visibility is automatic, there’s less demand for someone to run the visibility ritual.

The good ones don’t disappear. They move closer to the real constraints: decision latency, review bandwidth, quality signal, incentives that quietly create rework and silence.

The role becomes less about meetings and more about making the system harder to lie to.

Building a shared brain for best practices

Automating coordination is only one piece. If you're going to move faster, you need repeatable workflows that don't live in one person's head.

That's why we've been building a shared library of reusable skills for AI coding assistants (Claude Code, Cursor, Copilot, etc.):

  • AI Data Report: generates data-driven reports tracking AI tool usage, response times, and session patterns.
  • Dataset Builder: creates test questions with expected answers from a knowledge base for evaluation.
  • Evaluator: runs queries against an AI agent and scores responses against ground truth.
  • E2E Testing: records and tests features like real user sessions with visual verification.
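To make the Evaluator's job concrete, here's a toy sketch of scoring agent responses against ground truth. Exact-match scoring is a deliberate simplification for illustration; a real evaluator would use semantic similarity or an LLM judge, and the function and field names here are invented for the example:

```python
def score_responses(responses, ground_truth):
    """Score agent responses against ground truth, question by question.

    Both arguments map question text -> answer text. Exact match
    (case/whitespace-insensitive) stands in for a real scoring method.
    Returns per-question pass/fail plus overall accuracy.
    """
    results = {
        q: responses.get(q, "").strip().lower() == a.strip().lower()
        for q, a in ground_truth.items()
    }
    accuracy = sum(results.values()) / len(results)
    return results, accuracy

ground_truth = {"What is the capital of France?": "Paris"}
responses = {"What is the capital of France?": "paris"}
results, acc = score_responses(responses, ground_truth)
print(acc)  # → 1.0
```

The useful part isn't the scoring function; it's that the dataset and the scorer live in the repo, so "is the agent getting better?" becomes a number instead of a feeling.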

We're actively developing these workflows for our internal team, and we're starting to share what we learn.

Join our community

Get access to real artifacts, tutorials, and workflows as we figure out what coordination looks like when AI does most of the coding.

We'll be releasing practical examples you can adapt to your own team's workflow.

Sign up