What 15 hackathon apps taught us about the future of customer communication

One of my first indications that AI was “happening” inside of real companies was when a customer of ours recently told us her company ran an internal AI hackathon. 48 hours, real prizes, and full participation across the org.

When the dust settled, 15 of the projects were tools for customer success and support, which she found funny given that many of my past products have aimed to make the lives of success, support, and operations teams 10x easier.

Also… Fifteen! Out of everything people could have built, nearly every team gravitated to the same problem: how do we communicate better with our customers?

Everyone has the same problem

What’s remarkable isn’t that people built customer communication tools. It’s that they built them independently. Nobody coordinated. There was no brief saying “build something for support.” These were developers, product managers, and success teams all looking at their day-to-day frustrations and arriving at the same conclusion.

One team built a tool that checks incoming support requests against historical tickets to surface similar issues. Another built an app that turns closed bug tickets into customer notifications. And yet another built a tool to identify at-risk customers based on recent communication trends.
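To make the first idea concrete, here's a minimal sketch of "surface similar tickets": score an incoming request against historical tickets with bag-of-words cosine similarity. The ticket texts and function names are illustrative assumptions, not details from the hackathon projects, and a real version would use embeddings rather than raw word counts.

```python
# Hypothetical sketch: rank historical tickets by similarity to an incoming request.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Naive bag-of-words; a production tool would use embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def similar_tickets(incoming: str, history: list[str], top_n: int = 3) -> list[tuple[float, str]]:
    query = vectorize(incoming)
    scored = [(cosine(query, vectorize(t)), t) for t in history]
    return sorted(scored, reverse=True)[:top_n]

# Illustrative ticket history.
history = [
    "export to csv fails with large datasets",
    "password reset email never arrives",
    "csv export times out on big accounts",
]
for score, ticket in similar_tickets("csv export failing for a large account", history):
    print(f"{score:.2f}  {ticket}")
```

Even this toy version illustrates why the idea is so appealing as a hackathon project: the core loop is a few dozen lines, and all the real difficulty lives in the data quality and edge cases.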

The specifics were different, but the theme was identical: there’s a gap between our teams (engineering, marketing, support) and our customers, and the gap is big enough to be both noticeable and painful.

The problems were always there

After chatting about this for a while, I realized none of these problems are new. Every team at that hackathon has been living with these gaps for years. The developer who built the bug-to-update tool has always known that customers don't hear about fixes. The support team has been answering the same questions over and over for the past 10 years.

The problem hasn’t changed, but now people are empowered to do something about it.

The AI tools themselves are a bit irrelevant. They didn’t create a new category of problems, and they really haven’t created a new category of solutions (for example, I built the “identify at-risk customers” tool over two years ago). What the recent wave of AI tooling has done is lower the floor on what’s worth solving.

Small operational inefficiencies that always existed, but weren’t worth the time of an expensive engineer, are suddenly tractable, if not fully solvable. A non-technical person can prototype a solution in a few hours instead of writing a spec that sits in a backlog for six months.

That matters not because the prototypes are production-ready (they’re often not, and the team usually has no idea how to get them there), but because they shine a light on problems that have always existed.

Shining the light

We talk to SaaS companies every day who know they have customer communication gaps but have never prioritized fixing them. The changelog is stale. Support tickets get resolved but customers never hear about it. New features ship and the only people who know are the ones who read commit messages.

These aren’t dramatic problems; nobody’s hair is on fire. But they do compound.

Customers churn because they think the product isn’t improving. Prospects don’t convert because there’s no visible proof of momentum. Support handles the same questions because updates never reached the people who reported the issue.

When a hackathon team builds a tool to solve one of these problems in 48 hours, it doesn’t mean the problem is easy. It means the problem is obvious, or at least clear enough that someone with no engineering background can articulate the problem, prototype a workflow, and demo it to a room full of people who all nod and say “yeah, we get it.”

The gap between demo and done

The hackathon got everyone experimenting with these AI tools and succeeded in rallying the teams, but there were a few implementation issues.

For example, the commission-checking product looked great, but there was one small issue: the math wasn’t always correct. The app connected to real APIs and data sources, but struggled with the things we’ve been wrestling with in software forever: boundaries and edge cases.

This is the stock experience of building with AI right now. You can get 80% of the way to something impressive in a few hours. The remaining 20% (accuracy, reliability, edge cases, maintenance) is where years of engineering and domain expertise live.

One team built an AI tool that posted on their behalf, and in their demo they shipped an update live… without reviewing what it wrote! Works for the demo, looks awesome, but will it work for the 50th update? The 200th? I mean… the 2nd?

That’s where purpose-built products earn their keep.

What this means for SaaS

If 15 teams at one company independently identify customer communication as their biggest pain point, that’s a signal.

The companies that win in the next few years won’t be the ones with the flashiest AI demos. They’ll be the ones that take these real, obvious, painful gaps in customer communication and solve them reliably. Not as a weekend project, but as a team focused on solving the problem, and completing that “final 20%” that takes 80% of the time.

A hackathon participant put it well: the hard part isn’t producing an output. The hard part is making that update good. Making it accurate, making it speak to the right audience, making it happen consistently, automatically, every time you ship.

That’s the actual work that will separate the successful SaaS products from 2026 on.


If the hackathon energy has your team thinking about how to close the loop on customer communication, let’s have a chat about how Changebot turns your shipping activity into customer-facing updates automatically.