As you consider deploying AI in your business, the number of options can be overwhelming. There are prebuilt tools, autonomous agents, automation platforms, and custom-built applications, but those are just the ways AI capabilities can be delivered, not where AI can solve real problems in your business. Over the next six posts, I'm going to dive into four common areas where most businesses can put AI to use today.
Before I name the four categories I want to walk through, an honest caveat is in order. These are common problem patterns that show up across most businesses, but that does not make them a checklist of where you should start. Your starting point is still the strategic problem your business is solving, whether that's cutting cost, building revenue, strengthening retention, or attracting new customers. Once you've named that goal, walked it down to the specific operational problems in the way, and prioritized them by impact, then you're in a position to decide what category of solution is appropriate. The earlier posts on defining and prioritizing business problems still come first; this series sits on top of that work, not in place of it.
Honest Framing Up Front
A few weeks back I wrote about my own AI stack. This series is the next layer down. Not the tools I use in my own work, but the categories of business problems I've actually pointed AI at, on behalf of clients or in the products my company builds and sells. The point of using my own work as the worked example is not that my applications are the only options on the market, because they aren't. There are good vendors and serviceable open source projects in each of these categories. The reason I'm using my own work is that I know where it breaks, what I've learned, and what I'd build differently if I were starting over today. That's a different kind of value than someone listing options they've only read about, and you should know going in that this is the lens you're getting.
The Four Categories
1. Knowledge Access (When AI Reads Your Documents)
Two distinct problems sit underneath this category, both solved by the same broad class of technology. The first is internal. Knowledge ends up buried in documents that nobody can find, or that everyone has to interrupt your local expert to access. This includes areas like policies, procedures, technical manuals, prior project documentation, or customer history. The same people end up answering the same questions over and over because they’re often the only ones who know where the answers actually live. The second is external. Customers are looking for information about your products or services and the existing options are inadequate. They face an oversized website that nobody has the patience to navigate, or a phone number and email address that most of them won’t bother to use. The customer who can’t find what they need in thirty seconds usually leaves.
Next week, I'm going to dive into the details of this category of solutions, demonstrating how RAG (retrieval-augmented generation, the technical name for the document-grounded approach to AI) pays off. I'll use concrete examples from Docora, the document assistant we built. You can think of it as a highly customized version of ChatGPT constrained to your documents and your documents only, designed to sharply reduce the risk of invented answers from outside that knowledge base.
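For readers who want a feel for the mechanics before the deep dive, here is a deliberately minimal sketch of the RAG pattern. This is not Docora's implementation: real systems retrieve with embeddings rather than word overlap, and the function names here are illustrative only. What it shows is the two-step shape of every RAG system: retrieve the relevant document chunks, then constrain the model's prompt to that retrieved text.

```python
# Minimal RAG sketch. Word-overlap scoring stands in for the
# embedding-based retrieval a production system would use.

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the chunks whose words overlap most with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str, chunks: list[str]) -> str:
    """Constrain the model to retrieved text; this is the grounding step."""
    context = "\n".join(retrieve(question, chunks))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

docs = [
    "Refund policy: customers may return items within 30 days of purchase.",
    "Shipping: standard delivery takes 5-7 business days.",
]
prompt = build_grounded_prompt("How long do I have to return an item?", docs)
```

The instruction to say "I don't know" when the context is silent is where the "your documents and your documents only" behavior comes from; everything else in a RAG deployment is refinement of these two steps.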
2. Demand Generation (When AI Finds Your Next Customer)
This problem is straightforward: Finding qualified prospects is one of the most time-consuming activities in any sales operation, and most of those hours should really be going to closing rather than sourcing. Manual prospecting eats most of a sales team’s time and produces output that varies wildly in quality from one week to the next.
This category fits when you have a definable ideal customer profile, when public signals about prospect fit are actually available, when sales bandwidth is the genuine bottleneck on your growth, and when you're willing to invest in the prospecting layer rather than expecting it to be free. It doesn't fit relationship-driven enterprise sales where the qualification work is intrinsic to the relationship, vague targeting where the AI has nothing to filter against, or low deal volumes where the per-prospect economics simply don't work out.
I'll show how this works with concrete examples from the Architected Intelligence prospecting platform, the same one I wrote about in the From Framework to Field posts a few weeks ago. The deep dive will go further into how it actually works, where we've seen it succeed, and the situations where I've recommended against it.
3. Customer Interaction (When AI Picks Up the Phone)
The problem is one most service businesses know well. Scheduling, intake, routine inquiries, and after-hours coverage end up chewing up phone hours that don’t add margin. The pattern is familiar in any service business with high call volume and relatively low call complexity.
This category fits when you have high-volume, routine interactions with a narrow band of legitimate variation, when your business has seasonal load that's hard to staff against, or when after-hours coverage is the limiting factor on customer satisfaction. It doesn't fit low-volume or relationship-heavy interactions, complex troubleshooting, or any context where the brand experience genuinely requires a recognizable human voice.
This is the one category where I’m not the primary builder. I work with partners who specialize in conversational AI for telephony, and they’re the ones bringing the phone platform itself. What I do is the integration layer, which means connecting the AI phone system to a RAG service like Docora so it can answer with real institutional knowledge rather than canned responses, and hooking it into help desk and CRM systems so a phone interaction creates the right downstream record (a support ticket, an updated customer history, a routed escalation). The integration layer is where most phone deployments quietly underperform, and it’s the part I can speak to from direct experience. The deep dive will cover both sides: what to ask AI phone vendors before signing, and what the integration work actually looks like once you have a vendor in place.
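To make the integration layer concrete, here is an illustrative sketch of the routing decision described above. The types and field names are stand-ins I've invented for this post, not real client code or any vendor's API; the point is only that a finished AI phone call has to become exactly one downstream record, and the rules deciding which one are where the integration work lives.

```python
# Hypothetical routing sketch: turn a finished AI phone call into the
# right downstream record (ticket, CRM update, or human escalation).

from dataclasses import dataclass

@dataclass
class CallOutcome:
    caller_id: str
    summary: str
    resolved: bool
    needs_human: bool = False

def route_call(outcome: CallOutcome) -> dict:
    """Decide which downstream system owns the record of this call."""
    record = {"customer": outcome.caller_id, "notes": outcome.summary}
    if outcome.needs_human:
        record["action"] = "escalate"       # hand off to a person
        record["queue"] = "support-callback"
    elif outcome.resolved:
        record["action"] = "log_history"    # CRM update, no ticket needed
    else:
        record["action"] = "open_ticket"    # unresolved -> help desk ticket
    return record

# Example: a routine question the AI answered from the knowledge base.
routine = route_call(CallOutcome("c-1042", "asked store hours", resolved=True))
```

In practice each branch calls out to a different system (help desk API, CRM API, callback queue), and the quiet underperformance I mentioned usually comes from deployments that skip these rules entirely and log every call the same way.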
4. Code Generation (When AI Writes Your Code)
The problem in this category is more pervasive than most leadership teams realize. Internal development teams are using AI coding tools poorly, inconsistently, or not at all, and the variance in productivity, quality, and security implications that follows is significant. This is a category most companies stumbled into rather than chose deliberately, and the lack of a deliberate adoption plan shows up in the inconsistent results.
This category fits any internal team writing software for their own organization. The tools have matured to the point where the question isn’t whether to use them. The question is how to use them well. There are very few “doesn’t fit” cases here, and plenty of “doing it badly” cases. Most of my consulting work in this category is getting teams off the bad pattern rather than arguing about whether to start.
The credibility frame here is different from the other three categories, because I don't have a product to sell in this space. What I do have is daily, hands-on experience with these tools going back to before they were widely available. I was experimenting with AI-assisted coding before Claude Code or Codex existed as products, and I adopted both the day they launched. That kind of sustained day-to-day usage on real client and product work is the credibility I'm offering here. The deep dive will walk through standards and governance for internal teams, including tool selection, prompt and review practices, and the most common failure modes I see when teams pick up these tools without a deliberate plan.
A Note on Bias
It's worth emphasizing that I have a commercial interest in all four of the categories above, each of a different shape. Two are products I build and sell directly: Docora for the RAG category, and the Architected Intelligence platform for prospecting. One, the AI phone systems space, is a category I work in through partner relationships, where I lead the integration. And one is a pure consulting practice with no product attached: AI-assisted coding for internal dev teams.
What I won’t do is bend the recommendations to fit what I sell. When one of my own products isn’t the right fit for a given context, I’ll say so. The purpose of these deep dives is to walk through what actually works in each category, with the kind of authority that comes from having built, integrated, and operated real systems rather than just read about them. If you read these posts and conclude that the category is right for your business but the worked example I’ve used isn’t a fit, you’ve still got the value the series is offering.
Closing
Most of the leaders I talk to know they should be doing something with AI but don’t actually know where to start. The honest answer to “where” isn’t a framework. It’s a short list of categories where the technology has matured enough to deliver business value reliably, paired with the discipline of pointing it at your actual strategic problems rather than at whatever happens to be in front of you on any given day. Four of those categories are above. There are others, but these are the ones I’ve worked in directly and can speak to with depth. The deep dives that follow will all share the same shape: the problem the category addresses, when AI fits and when it doesn’t, the risks worth knowing about, and what we’ve actually built, integrated, or evaluated in each space. The goal isn’t to convince you to use any of these specific applications. It’s to give you a more grounded answer to the “where should we start” question than most leaders have today, once you’ve already done the harder work of naming what you’re actually trying to fix.