Automating Changelogs with OpenClaw: From Hours to Minutes
The Problem: Release Management at Scale
As an Engineering Manager, one of my recurring responsibilities is release management. Every quarter, our team of 30+ engineers ships a major release, and someone needs to compile a changelog that makes sense to our customers. That someone is usually me.
The process was straightforward but mind-numbingly tedious: go through three months of Git commits, identify the meaningful changes, look up the associated Jira tickets to understand the user impact, and translate everything from engineering-speak into something a non-technical user could actually understand and care about.
This would easily take a full working day. Every single release. And the quality was inconsistent - sometimes I’d catch all the important changes, sometimes I’d miss things buried in cryptic commit messages. It felt like exactly the kind of repetitive, structured work that AI should be able to handle.
Discovering OpenClaw
I had recently discovered and installed OpenClaw as my personal AI assistant - but I hadn’t really pushed it to do anything complex or workflow-oriented.
Then it hit me: what if I could just ask OpenClaw to generate the changelog for me?
I opened a chat and typed: “I need help automating our release changelog generation. Can you help me build a workflow?”
What happened next was genuinely impressive. Instead of just giving me a script or telling me to use an existing tool, OpenClaw asked me the right questions:
- What’s your repository structure?
- How are your commits formatted?
- Where is the issue tracking data?
- What should the changelog format look like?
- Who’s the target audience?
We were building something custom, and OpenClaw was guiding me through what information it needed to make it work.
Building the Workflow: Iteration by Iteration
The Initial Design
I already had a pretty clear vision of what I wanted: fetch the commits from GitHub between two release tags, parse the semantic commit messages to extract ticket IDs, and then pull the actual context from Jira to make the changelog user-friendly.
Our commits follow a semantic format: type(TICKET-####): description, so parsing them would be straightforward. And I knew we could use the Jira MCP server to fetch ticket data - each ticket has a description field where we write user stories and business value, which is exactly what end users need to see.
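A minimal parser for that commit format might look like the sketch below. The regex and the “PAY” project key are illustrative assumptions - the real workflow would use whatever project keys our Jira instance defines:

```python
import re

# Matches the semantic commit format: type(TICKET-####): description
# e.g. "fix(PAY-1234): handle timeout on webhook retry" (PAY is illustrative)
COMMIT_RE = re.compile(r"^(?P<type>\w+)\((?P<ticket>[A-Z]+-\d+)\):\s*(?P<desc>.+)$")

def parse_commit(subject):
    """Return (type, ticket_id, description), or None for commits
    that don't follow the convention (merge commits, etc.)."""
    m = COMMIT_RE.match(subject.strip())
    if m is None:
        return None
    return m.group("type"), m.group("ticket"), m.group("desc")
```

Returning None for non-conforming subjects makes it easy to filter out merge commits and other noise in a later step.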
I explained the approach to OpenClaw and asked it to implement the workflow. It created a set of Python scripts:
- One to fetch commits from GitHub between two refs
- One to parse the semantic commit format and extract ticket IDs
- One to connect to the Jira MCP server and enrich the data with ticket context
- One to generate the final changelog
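The commit-fetching step boils down to a thin wrapper around a revision range. Here is a sketch using a local git checkout and git log rather than the GitHub API the actual workflow uses - the idea is the same:

```python
import subprocess

def commits_between(base, head, repo_path="."):
    """Return the subject lines of commits reachable from `head`
    but not from `base` - i.e. everything since the last release tag."""
    result = subprocess.run(
        ["git", "log", "--pretty=format:%s", f"{base}..{head}"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]
```

Calling commits_between("v1.2.0", "v1.3.0") would return every commit subject shipped in that release, ready to be parsed.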
The First Version: Good but Not Great
The first run worked. It fetched 29 commits from our test release, parsed them into features, fixes, and improvements, and pulled the corresponding Jira tickets. Already way better than doing it by hand.
But when I reviewed the output, it was still too technical. Entries like “Fix speed calculation after sleep cycle” or “Webhook URL Validation Enhancement” might make sense to an engineer, but they didn’t tell our customers what actually changed or why they should care.
The Jira data was there, but the workflow was just pulling the ticket title, not the actual user-facing description from the ticket body.
Refining the Context Extraction
I gave OpenClaw feedback: “This is pulling the Jira ticket titles, but those are too vague. Can you extract the actual user-facing descriptions from the ticket body?”
OpenClaw updated the script to parse the Jira description field more intelligently. Instead of just using the title, it would extract the first few sentences from sections like “Summary” or “Goal” - the parts that explained what users would actually experience.
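Roughly, the extraction logic looks like this sketch. The section names and the two-sentence limit are assumptions about our ticket template, not a fixed rule:

```python
import re

# Headings we treat as user-facing; these names are an assumption
# about the team's Jira ticket template.
USER_SECTIONS = {"summary", "goal"}

def extract_user_context(description, max_sentences=2):
    """Return the first few sentences under a Summary/Goal heading,
    falling back to the start of the description."""
    collected, capture = [], False
    for raw in description.splitlines():
        line = raw.strip()
        if line.rstrip(":").lower() in USER_SECTIONS:
            capture = True
            continue
        if capture and line:
            collected.append(line)
        elif capture and collected:
            break  # blank line after captured text ends the section
    text = " ".join(collected) or description.strip()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(sentences[:max_sentences])
```

The fallback matters in practice: not every ticket follows the template, and an imperfect description still beats an empty changelog entry.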
The next run was dramatically better. Entries like “Webhook configuration now accepts internal service URLs without requiring full domain names” actually explained what changed in a way that made sense.
Polishing the Output
But we weren’t done. I reviewed the generated changelog and noticed inconsistencies:
- Some entries started with lowercase, others with uppercase
- Technical jargon like “FE” and “UI” wasn’t translated
- Spelling was inconsistent (“cancelled” vs “canceled”)
- Some descriptions were verbose and unclear
I gave OpenClaw feedback: “This is much better! But we need to polish the grammar and make it consistent.”
OpenClaw added a final “grammar polish” pass to the workflow. It would:
- Ensure consistent capitalization
- Replace technical terms (while preserving domain-specific words our customers understand)
- Fix common spelling issues
- Remove awkward constructions
- Format everything uniformly
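That pass is essentially a series of string normalizations. A simplified sketch - the jargon and spelling tables here are illustrative, not the full lists the workflow uses:

```python
import re

# Illustrative lookup tables; the real pass would carry longer lists.
JARGON = {"FE": "frontend", "UI": "interface"}
SPELLING = {"cancelled": "canceled", "behaviour": "behavior"}

def polish(entry):
    """Expand jargon, normalize spelling, and fix capitalization
    for a single changelog entry."""
    for abbr, full in JARGON.items():
        entry = re.sub(rf"\b{abbr}\b", full, entry)
    for variant, preferred in SPELLING.items():
        entry = re.sub(rf"\b{variant}\b", preferred, entry, flags=re.IGNORECASE)
    entry = entry.strip()
    return entry[:1].upper() + entry[1:] if entry else entry
```

Running every entry through one function like this is what makes the final document read as if one person wrote it.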
The final output looked professional, consistent, and most importantly - readable for our customers.
The Final Workflow
After several iterations, we ended up with a complete workflow:
- Fetch commits from GitHub between two release tags
- Parse semantic commits and filter out placeholder tickets
- Enrich with Jira data by fetching ticket descriptions via the Cloud API
- Extract user-facing context from Jira’s description field
- Simplify technical language while preserving domain terms
- Polish grammar and consistency for a professional finish
- Output formatted markdown in a code block, ready to copy/paste
All of this happens with a single command: “Run release preparation for version X.Y.Z based on version A.B.C”
The workflow takes about 30 seconds to run. What used to be an 8-hour task is now literally a one-minute operation.
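To give a flavor of the final rendering step, here is a hypothetical version of it - the section titles and the mapping from commit types are invented for illustration:

```python
from collections import defaultdict

# Hypothetical mapping from semantic commit types to changelog sections.
SECTION_TITLES = {"feat": "New Features", "fix": "Bug Fixes", "chore": "Improvements"}

def render_changelog(entries, version):
    """Group (commit_type, description) pairs by type and emit markdown."""
    grouped = defaultdict(list)
    for ctype, desc in entries:
        grouped[ctype].append(desc)
    lines = [f"# Release {version}", ""]
    for ctype, title in SECTION_TITLES.items():
        if grouped.get(ctype):
            lines.append(f"## {title}")
            lines.extend(f"- {d}" for d in grouped[ctype])
            lines.append("")
    return "\n".join(lines)
```

Sections with no entries are skipped entirely, so a bugfix-only release doesn’t end up with an empty “New Features” heading.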
What’s Next
I’ve saved the workflow as a custom OpenClaw skill, which means it’s reusable and can be refined over time. As our commit format or Jira structure evolves or tools change, I can iterate on the workflow to keep it aligned with our processes.
I’m also exploring what other repetitive tasks in my role could benefit from this kind of custom AI automation. The possibilities feel endless once you realize AI assistants can do more than just answer questions - they can be collaborative partners in building real, useful tools.
If you’re using OpenClaw or similar AI assistants for more than just chat, I’d love to hear what workflows you’ve built. The automation possibilities here are genuinely exciting.