Start Small, Design for Scale

Jordan Gurrieri, Co-Founder & CEO of BlueLabel, on scaling AI through workflow-fit and change management, plus a member-only offer for PE & Private Credit teams prioritizing their first AI investments.

This is the newsletter of Just Curious, the world’s leading network of applied AI experts. We connect operators and investors with proven partners to design, build, and deploy AI solutions that drive measurable growth, efficiency, and innovation.

You can submit your needs here; we’ll anonymize them and return 5–15 expert perspectives with pricing from our network. Think of it like a “Mini RFP.” And it’s free.

Jordan Gurrieri, Co-Founder & CEO @ BlueLabel

AI inside mid-market companies rarely collapses because the models are weak. It breaks because the workflow around them can’t hold the weight. That’s the tension Jordan Gurrieri has spent fourteen years solving as Co-Founder & CEO of BlueLabel, where his team has shipped more than 300 production-grade systems for operators who need consistency more than novelty.

When we talked, Jordan kept returning to a simple idea: AI only works when the underlying process works. In both of our conversations, he walked through how his teams map real-world workflows, rebuild brittle prototypes into reliable systems, and design deployments that technicians, analysts, and sales teams can trust on Monday morning, not just in a demo.

Jordan’s focus on workflow-fit and disciplined sequencing mirrors what we’re hearing across the PE and private credit ecosystem: teams don’t need more AI ideas. They need clarity on which opportunities drive real leverage.

In that spirit — and as a way to provide more hands-on value — we’re introducing our first-ever Just Curious member offer: a focused, two-week Data & AI Opportunity Strategy sprint from a leading Data & AI Consultancy. We’re making it available at 50% off for the first three firms who reply.

More on that toward the end of today’s issue.

What to expect:

  • Why workflow fit — not modeling — determines 80% of AI success

  • The 4-week rebuild that replaced a shaky custom GPT with a consistent API system

  • How a telecom cut $10K per month per dispatcher and rolled out AI to 150 technicians

  • A field-tested process for identifying which workflow to automate first

  • Why agentic systems (not dashboards) are the next competitive moat

  • A myth-busting look at timelines inside legacy enterprises

  • Practical guidance for execs feeling pressure to “do AI” but unsure where to start

Recently / Coming Up on Just Curious

Watch the full interview with Jordan Gurrieri

Expert Q&A: Jordan Gurrieri, Co-founder & CEO, BlueLabel

Each week, we ask our applied AI experts five questions to surface their frameworks, lessons, and operator insights. Jordan’s answers follow, paired with Stu’s thoughts from our full conversation.

Jordan specializes in the hard part of AI: rebuilding clever demos and POCs into consistent, scalable workflows that survive contact with real operations.

1. Describe who you are, what you do, who you do it for, and what makes your approach unique.

“I’m Jordan Gurrieri, Co-founder and CEO of BlueLabel, an AI consulting and development partner for mid-market and enterprise companies. At BlueLabel, I’m leading our mission to help companies move beyond AI pilots and into production-ready deployments that deliver measurable business impact. What drives me is transforming how businesses interface with technology by making AI practical, powerful, and human-first.

Our collaborative approach via our proprietary BlueLabel SPRINT™ Framework combines business alignment, design thinking, and engineering precision to identify high-value opportunities, validate results quickly, and integrate AI securely into existing systems.

Four things set us apart:

  1. Speed with discipline — value checkpoints in which we show real business impact every 2–4 weeks.

  2. Human-first design — we fit AI to people and process, not the other way around.

  3. ROI-driven — every phase lives or dies on business outcomes and how we demonstrate that to your team.

  4. Long-term partnership — we start with a single project, build a vision and roadmap, and earn our way to transformation.”

Stu’s Thoughts:

Jordan’s “speed with discipline” claim isn’t posturing. It’s pattern recognition he and BlueLabel developed over 300+ digital product deployments. In our conversation, he repeatedly came back to cadence. His teams don’t promise transformation; they commit to value every 2–4 weeks. That structure alone forces leaders to choose business outcomes instead of ambition decks.

In our chat, he described the typical enterprise pattern: teams “run pilots to identify use cases,” but when they try to scale, the solution breaks the moment more than five employees touch it. That’s the pain BlueLabel is engineered around. Not building shiny demos but building systems that behave the same way on Monday morning as they did in the sprint review.

Aside: We hear this pattern constantly: agencies can spin up a flashy pilot, but the thing collapses the second five people try to use it the same way. That’s not a tooling issue but a talent and experience issue. BlueLabel has both, which is why their builds hold up when others don’t.

The real differentiator is his human-first posture. Jordan has a simple rule: “We fit AI to people and process — not the other way around.” It’s a small sentence, but it’s the philosophical opposite of most enterprise AI deployments. Start with people, not models. Engineer reliability before anything else. Make the system boring enough that operations can trust it. That’s how you get from pilot to production.

2. What problem are you most focused on solving right now, and how are you anticipating solving this problem with AI?

“Right now, the biggest challenge we’re focused on solving is helping companies move from AI pilots to production-ready solutions. Many organizations are stuck in “pilot purgatory”, running multiple experiments that never quite translate into measurable business value. Others find that scaling a validated pilot into a reliable, organization-wide tool is far harder than expected. The hype around AI often makes it seem simple, but turning a model into something that hundreds of employees can depend on every day is complex, technical, and requires deep alignment with business goals.

A good example is our work with a manufacturing client that’s using computer vision on satellite imagery to streamline a very manual piece of their sales prospecting process. The solution works, but our job now is to make it reliable, cost-effective, and seamlessly integrated into the sales team’s existing tools and workflows so it becomes something the entire organization can trust and adopt.

We’re solving this through a careful balance of LLM-based and ML-based model training and optimization, iterating between them to improve accuracy while keeping training costs within budget. This kind of disciplined, production-minded approach is what ultimately turns AI from an experiment into an everyday business advantage.”

Stu’s Thoughts:

Jordan kept coming back to the same pattern: most early AI efforts work for one person but collapse when the rest of the organization tries to use them. Someone builds a custom GPT or a clever agent, it behaves just well enough in a demo, and everyone assumes it will scale unchanged. It does not.

The telecom example is a good illustration. Their internal pilot worked for one or two technicians, but when they tried to roll it out to 180 people, accuracy fell apart. Output drifted. Costs ballooned. Jordan’s team tore it down and rebuilt it at the API layer in four weeks, integrating directly with dispatch, rewriting the workflow logic, and normalizing outputs so every technician got the same result every time.

That rebuild turned a clever demo into infrastructure. And the payoff was massive: a $10K monthly reduction per back-office dispatcher, 75% more field visits per day, and a deployment that grew from 5 technicians to 150. 

Pilots don’t need more creativity. They need engineering, governance, and reliability. The ROI comes only after the system behaves the same way for everyone.

3. If you had to pick one process every company should automate with AI today, what would it be?

“I don’t think there’s a single process EVERY company should automate with AI; that’s the wrong way to look at it. If there were a process that’s 95% the same across all businesses, someone would have already built a great off-the-shelf solution for it, and every company would just buy licenses and move on. The truth is, those “perfect universal” use cases are rare.

Instead, every company should start by getting crystal clear on their medium- and long-term business objectives, independent of AI. Then, identify the mission-critical SPECIFIC process that absolutely must run flawlessly for those objectives to be met. When you unpack that specific process, you’ll find the pain points — waste, friction, repetition, human error, etc. That’s the process to automate first. Then, once that’s running smoothly, move to the next one.”

Stu’s Thoughts:

Jordan pushed back on the idea that there’s a “universal” workflow every company should automate. In his view, the work inside most organizations is too specific, too idiosyncratic, and too shaped by real people to be solved with a one-size-fits-all playbook.

On our call, he was clear that every company’s workflows contain their own quirks, handoffs, and pockets of tribal knowledge. That’s why his team starts with workflow-fit: mapping how the work actually gets done, listening for friction, delay, and inconsistency, and only then deciding where AI belongs.

The telecom story is the perfect example. Once the team mapped the end-to-end process, the real bottleneck wasn’t the whole workflow, but a single step. Technicians were sitting idle for 30–45 minutes waiting for dispatch to validate equipment checks. Fixing that one step improved throughput across the entire field organization. Automating everything wasn’t required; automating the chokepoint was.

This is the operator mindset: don’t chase AI, chase bottlenecks. Identify the step that reliably burns margin or time. Validate it with the people who live inside it. Automate that step first — not the whole process — and scale only after the system proves it can deliver the same outcome every time. That’s how you go from “we’re experimenting” to 75% more jobs completed per technician per day.
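The prioritization Jordan describes can be sketched as a simple scoring pass over mapped workflow steps: quantify the time each step burns, then automate the worst offender first. The step names and numbers below are illustrative (loosely based on the telecom dispatch-wait example), not BlueLabel’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    minutes_per_job: float   # hands-on or idle time this step costs
    jobs_per_day: int
    error_rate: float        # share of jobs needing rework at this step

    def daily_cost_minutes(self) -> float:
        # Time burned per day: base time plus rework overhead.
        return self.minutes_per_job * self.jobs_per_day * (1 + self.error_rate)

def first_automation_target(steps: list[Step]) -> Step:
    """Pick the single step with the highest daily time cost."""
    return max(steps, key=lambda s: s.daily_cost_minutes())

# Hypothetical mapped workflow for a field-service operation.
workflow = [
    Step("schedule visit", 5, 40, 0.02),
    Step("on-site diagnostics", 25, 40, 0.05),
    Step("wait for dispatch validation", 38, 40, 0.10),  # the 30-45 min idle wait
    Step("close work order", 8, 40, 0.03),
]

target = first_automation_target(workflow)
print(target.name)
```

Even a crude score like this makes the point: the chokepoint (the dispatch-validation wait) dominates the daily cost, so it is the step to automate first, before touching the rest of the process.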

4. Where do companies waste the most effort because they’re not using AI yet?

“Companies waste the most effort when they’re not using AI to supercharge people in their organization with deep domain expertise. 

AI is at its best when it amplifies human judgment, not replaces it. The real opportunity is to put AI tools directly in the hands of experts who already understand the nuances of their field and can use that knowledge to get exponentially more done with fewer resources and with a faster turnaround.

This is clearest in technical roles. A skilled software architect who understands how to design systems and select the right technologies can now build in days what used to take a full team and months of development cycles. Similarly, a seasoned salesperson who knows exactly what signals define a high-quality lead can use AI to surface and prioritize only the most promising prospects, freeing them from hours of manual research and letting them focus entirely on closing deals from a narrower list of higher-quality leads.”

Stu’s Thoughts:

Jordan’s point about amplifying experts came alive when he described how they involved senior field technicians early in the telecom project. “We studied how they actually do the work,” he said.

They weren’t looking to replace expertise. They wanted to capture and use it. The expert voices shaped the troubleshooting agent, the validation flow, and the decision steps the system automated.

That design choice solved a core adoption issue: trust. When experts see their own judgment embedded in the tool, they use it. When they don’t, they route around it. Jordan even noted that the system improved through field feedback. Technicians pushed corrections and refinements directly into the agent, tightening accuracy week after week.

For operators, the punchline is simple: AI accelerates expertise but can’t invent it. The fastest ROI isn’t replacing knowledgeable people, it’s augmenting and scaling them. Capture their judgment, encode their heuristics, and turn tribal knowledge into a repeatable asset. That’s how five experts become fifty.

5. What’s an AI capability that’s just around the corner that businesses should prepare for now?

“Agentic systems and Mesh AI are right around the corner. Businesses should start preparing for a world where AI agents don’t just support human teams but interact directly with each other, even across organizations. It might sound theoretical now, but we’re not far from seeing AI agents from different companies collaborating on high-volume, repeatable tasks with minimal human involvement.

Imagine the logistics space. A supplier’s AI agent and a shipping company’s AI agent negotiating last-minute delivery routes. The shipping agent spots open truck capacity and automatically contacts the supplier’s agent, offering discounted pricing to fill that space. The two systems exchange data, negotiate within set guardrails, and finalize an agreement, all in seconds, while the truck is still en route.

Most companies underestimate how quickly this future is approaching. Given how long it takes organizations to align on use cases and begin implementation, the time to start preparing for agent-to-agent ecosystems is now. The businesses that begin experimenting early will have a serious advantage when this next wave of AI maturity hits.”

Stu’s Thoughts:

(For what it’s worth, “Mesh AI” was new to me too, but the underlying idea wasn’t.)

Most teams still use AI the same way: personal productivity boosters, ad-hoc drafting, the occasional analysis shortcut. That’s 99% of how AI is deployed today. Jordan’s view is that this phase is already table stakes. Everyone has access to the same frontier models.

Where the advantage emerges, he argued, is in embedding agents directly into the workflows that make a business unique.

His telecom example is an early version of this future: a voice-enabled technician assistant, an automated dispatch agent, and a workflow that lets those agents execute checks, validate jobs, and close work orders without human bottlenecks, running 150 technicians on a consistent, API-level system.

Agentic systems aren’t a leap of faith. They should be an extension of reliable processes you already trust. Start with one agent tied to one workflow. Then link agents together. Then integrate them with external systems. The technical jump is smaller than the organizational one, which is exactly why Jordan warns leaders to start now. Alignment takes quarters. The capability will arrive much sooner.
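The guardrailed agent-to-agent negotiation Jordan imagines for logistics can be sketched in a few lines: each side concedes toward the other’s offer but never crosses its preset floor or ceiling, and a deal closes once the gap is small enough. Every class name, price, and rule here is a hypothetical illustration of the pattern, not any real system.

```python
from dataclasses import dataclass

@dataclass
class Guardrails:
    min_price: float   # lowest rate the shipping agent may accept
    max_price: float   # highest rate the supplier agent may pay

class ShippingAgent:
    def __init__(self, list_price: float, guardrails: Guardrails):
        self.ask = list_price
        self.g = guardrails

    def counter(self, bid: float) -> float:
        # Concede halfway toward the bid, but never below the floor.
        self.ask = max(self.g.min_price, (self.ask + bid) / 2)
        return self.ask

class SupplierAgent:
    def __init__(self, opening_bid: float, guardrails: Guardrails):
        self.bid = opening_bid
        self.g = guardrails

    def counter(self, ask: float) -> float:
        # Concede halfway toward the ask, but never above the ceiling.
        self.bid = min(self.g.max_price, (self.bid + ask) / 2)
        return self.bid

def negotiate(shipper: ShippingAgent, supplier: SupplierAgent,
              rounds: int = 10, tolerance: float = 1.0):
    """Alternate offers until ask and bid are within tolerance, else no deal."""
    ask, bid = shipper.ask, supplier.bid
    for _ in range(rounds):
        if ask - bid <= tolerance:
            return round((ask + bid) / 2, 2)  # deal price
        ask = shipper.counter(bid)
        bid = supplier.counter(ask)
    return None  # no agreement within the guardrails

g = Guardrails(min_price=800.0, max_price=1200.0)
deal = negotiate(ShippingAgent(1500.0, g), SupplierAgent(600.0, g))
print(deal)
```

The interesting part isn’t the arithmetic; it’s that the guardrails are set by humans in advance, which is exactly the governance question Jordan says organizations should start aligning on now.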

Key takeaways

  1. Map the real workflow, not the imagined one. Identify the sub-step where delay, handoffs, or tribal knowledge create measurable cost.

  2. Pilot with a production mindset. Validate workflow fit, define consistency requirements, and decide early whether the pilot must be rebuilt at the API layer.

  3. Design adoption like a product. Bring experts into the build, capture their heuristics, and measure reliability, trust, and throughput as aggressively as accuracy.

Member Offer: Data & AI Opportunity Strategy (50% Off)

Are you a private equity or private credit leader trying to determine where data and AI can actually drive meaningful leverage in sourcing, underwriting, portfolio oversight, or fund operations?

We’re launching the first offer from the Just Curious Network, a two-week engagement delivered by a leading Data & AI consulting firm that works exclusively with private equity and private credit.

The goal: help your team identify the highest-value, lowest-friction ways to apply data and AI, and develop a plan you can execute in months, not quarters.

This introductory offer is available at 50% off for the first three firms who reply.

What you’ll gain:

  • A prioritized roadmap of 3–5 high-impact AI + data use cases tied directly to your workflows

  • A buy-vs-build analysis showing what should be built internally vs. sourced from vendors

  • A phased 3–6 month implementation plan with clear sequencing, dependencies, and org enablers

  • A final presentation + written report for partners, CIOs, and portco leadership

Valued at $15,000. Offered for $7,500.

Available only to the first three qualified firms that respond in the next two weeks.

If you'd like details, just reply “Strategy.”

Connecting with Jordan