Shipping AI Features Inside a SaaS Platform
The situation
Teamwork.com is a project management platform used by thousands of teams. They wanted to add AI capabilities to the product — not as a side experiment, but as core features that users would interact with daily. The challenge was that they didn't have in-house AI engineering expertise to take these features from concept to production.
They brought me in as a contract Principal AI Engineer to own the AI features end-to-end.
What I built
AI Meeting Bot
The first feature was an AI-powered meeting assistant. The bot joins a user's meeting, captures the full transcript, then generates an AI summary of what was discussed. But it goes beyond summarization — it extracts action items from the conversation and automatically creates tasks inside Teamwork, assigned to the right people. The summary and tasks are then shared with all meeting participants.
This turned meetings from "we talked about stuff" into "here's exactly what we agreed to do, already tracked in your project management tool."
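The pipeline behind this can be sketched as a few typed stages. Everything below is illustrative: the names, the task-payload shape, and especially the extraction step, which in the real feature is an LLM call with a structured-output schema rather than the keyword heuristic used here.

```typescript
// Sketch of the meeting-bot data flow: transcript -> action items -> tasks.
// All names and shapes here are assumptions for illustration only.

interface TranscriptLine { speaker: string; text: string }
interface ActionItem { description: string; assignee: string }
interface TaskPayload { title: string; assigneeEmail: string; projectId: number }

// Stand-in for the LLM extraction step: picks out lines that state a commitment.
// In production this is a model call constrained to a schema, not a regex.
function extractActionItems(transcript: TranscriptLine[]): ActionItem[] {
  return transcript
    .filter((line) => /\bwill\b/i.test(line.text))
    .map((line) => ({ description: line.text, assignee: line.speaker }));
}

// Map extracted items onto project-management task payloads (shape assumed),
// resolving speaker names to account emails so tasks land on the right person.
function toTaskPayloads(
  items: ActionItem[],
  projectId: number,
  emails: Record<string, string>,
): TaskPayload[] {
  return items.map((item) => ({
    title: item.description,
    assigneeEmail: emails[item.assignee] ?? "unassigned@example.com",
    projectId,
  }));
}

const transcript: TranscriptLine[] = [
  { speaker: "Ana", text: "I will draft the launch email by Friday." },
  { speaker: "Ben", text: "Sounds good to me." },
];
const tasks = toTaskPayloads(extractActionItems(transcript), 42, {
  Ana: "ana@example.com",
});
```

The important design point is that each stage has a concrete type on both sides, so the only "fuzzy" boundary is the single extraction call.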
AI Plan My Week
The second feature helps users organize their week. It takes your existing tasks, deadlines, and priorities, then uses a combination of AI and rule-based algorithms to suggest an optimal schedule — placing tasks into your calendar in a way that accounts for deadlines, energy levels, and meeting conflicts.
The hybrid AI + rules approach was intentional. Pure LLM scheduling is too unpredictable for something people rely on daily. The rules handle hard constraints (deadlines, working hours), and the AI handles the softer judgment calls (priority, task grouping, context switching).
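The split can be sketched as a scheduler where rules gate which slots are even considered, and a pluggable scoring function stands in for the model's soft judgment. This is a minimal sketch under assumed names, one-hour tasks, and a simple earlier-is-better heuristic in place of the AI signal:

```typescript
// Hybrid scheduling sketch: rules enforce hard constraints (working hours,
// deadlines, no double-booking); a scoring function stands in for the AI's
// softer judgment (priority, grouping). All names are illustrative.

interface Task { id: string; deadlineDay: number; priority: number }
interface Slot { day: number; hour: number } // day 0-4, hour 9-16

// Hard constraints: never violated, no matter what the scorer prefers.
function isValid(slot: Slot, task: Task, taken: Set<string>): boolean {
  const inWorkingHours = slot.hour >= 9 && slot.hour < 17;
  const beforeDeadline = slot.day <= task.deadlineDay;
  return inWorkingHours && beforeDeadline && !taken.has(`${slot.day}:${slot.hour}`);
}

// Soft judgment: higher is better. In production this signal came from the
// model; here it's a placeholder that prefers earlier slots, weighted by priority.
function score(slot: Slot, task: Task): number {
  return task.priority * 10 - (slot.day * 8 + (slot.hour - 9));
}

function planWeek(tasks: Task[], slots: Slot[]): Map<string, Slot> {
  const taken = new Set<string>();
  const plan = new Map<string, Slot>();
  // Place high-priority tasks first so they get the pick of valid slots.
  for (const task of [...tasks].sort((a, b) => b.priority - a.priority)) {
    const candidates = slots.filter((s) => isValid(s, task, taken));
    if (candidates.length === 0) continue; // unschedulable: surface to the user
    const best = candidates.reduce((a, b) => (score(b, task) > score(a, task) ? b : a));
    taken.add(`${best.day}:${best.hour}`);
    plan.set(task.id, best);
  }
  return plan;
}

const slots: Slot[] = [];
for (let day = 0; day < 5; day++)
  for (let hour = 9; hour < 17; hour++) slots.push({ day, hour });

const plan = planWeek(
  [
    { id: "write-spec", deadlineDay: 1, priority: 3 },
    { id: "review-pr", deadlineDay: 4, priority: 1 },
  ],
  slots,
);
```

Swapping the model in or out only changes `score`; the constraint layer never moves, which is what makes the output predictable enough to trust daily.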
What I fixed along the way
When I joined, several existing AI features in the platform were using older API patterns with no structured outputs, which meant inconsistent responses and occasional hallucination in user-facing interactions. One of my first moves was migrating these to structured outputs. The effect was immediate: more consistent responses, fewer edge cases to handle, and less post-processing, since we no longer had to scrape and repair free-form LLM output.
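To make the difference concrete, here is a sketch of the validation boundary that structured outputs give you. The field names are assumptions, and the type guard below mimics what a structured-output client enforces: the response either matches the declared shape or is rejected up front, instead of being scraped out of free text downstream.

```typescript
// Illustration of the structured-output boundary. Field names are assumed;
// the guard mirrors the schema check a structured-output client performs.

interface MeetingSummary { summary: string; actionItems: string[] }

function isMeetingSummary(value: unknown): value is MeetingSummary {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.summary === "string" &&
    Array.isArray(v.actionItems) &&
    v.actionItems.every((item) => typeof item === "string")
  );
}

// Parse a raw model response; fail loudly rather than pass bad data downstream.
function parseSummary(raw: string): MeetingSummary {
  const parsed: unknown = JSON.parse(raw);
  if (!isMeetingSummary(parsed)) throw new Error("response does not match schema");
  return parsed;
}

const ok = parseSummary(
  '{"summary":"Planned the launch.","actionItems":["Draft email"]}',
);
```

The win is that malformed output becomes a single, well-defined error path instead of a family of regex edge cases scattered through the feature code.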
The tech
The backend was Go and TypeScript, deployed on Kubernetes. For AI orchestration I used LangGraph with both OpenAI and Anthropic models, choosing the right model for each task (summarization vs. extraction vs. planning). Everything ran in Docker containers with proper CI/CD pipelines.
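The per-task model choice boils down to a plain lookup the orchestration layer consults before each call. A minimal sketch, with placeholder model names (the actual models used in production are not listed here):

```typescript
// Per-task model routing sketch. Model names are placeholders, not the
// production choices; the point is that routing is an explicit, testable table.

type TaskKind = "summarization" | "extraction" | "planning";
interface ModelChoice { provider: "openai" | "anthropic"; model: string }

const ROUTES: Record<TaskKind, ModelChoice> = {
  summarization: { provider: "anthropic", model: "example-claude-model" },
  extraction: { provider: "openai", model: "example-gpt-model" },
  planning: { provider: "openai", model: "example-gpt-model" },
};

function pickModel(kind: TaskKind): ModelChoice {
  return ROUTES[kind];
}

const choice = pickModel("summarization");
```

Keeping the routing in one table means swapping a model for one task type is a one-line change, with no orchestration code touched.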
Need AI features built into your product?
Book a free call