Ben Bausili | InterWorks (https://interworks.com/people/ben-bausili/)

5 Ways to Fail with AI
https://interworks.com/blog/2025/10/23/5-ways-to-fail-with-ai/ (Thu, 23 Oct 2025)


You’ve seen the stat: 95% of generative AI pilots deliver zero measurable business returns. According to MIT’s 2025 “GenAI Divide” report, only 5% of companies are finding success. This isn’t a complete surprise: They are called pilots for a reason, and new technologies often take time to find their footing. Nevertheless, there’s something to be learned here.

What separates the winners from the other 95% stuck in “pilot purgatory”? It’s not the AI models. It’s not a lack of tech talent. It’s something basic and at the foundation of each project: Flawed strategy, broken workflows and organizational design failures.

Here’s how to ensure your glorious AI failure:

1. Fall in Love with the Magic

The key to failure: Treat your shiny new LLM like a plug-and-play miracle. Get mesmerized by GPT demos and assume that buying access to a powerful model is the finish line. Hand it to your IT team, have them plug it in and wait for the magic.

Why this tanks: The MIT report is blunt: 95% of pilots fail because of “flawed enterprise integration” and “lack of fit with existing workflows.” One CIO put it perfectly after seeing dozens of AI demos: “Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

Pretend that you’re building a car in your garage. You’ve just bought a powerful engine, and it’s mounted on an engine hoist dangling over your bare chassis. Clearly, you aren’t ready to race just yet. You’ve bought a brilliant motor, but you don’t have the transmission, wheels or steering column. The model is powerful, sure, but it needs context management and tools. Without the unglamorous work of APIs, data pipelines, security protocols and process redesign, it’s just an expensive noise machine.

What the 5% do instead: They narrow the scope so they can obsess over the required plumbing first. They start with a departmental or use-case-specific solution, map exact workflows, identify friction points and design integrations. They solve business problems where AI happens to be the best tool, and they don’t just slap AI on large, vague problems.

2. Slap AI on Your Existing Roadmap

The key to failure: Our friends at Hex recently posted about their bitter lessons from building with AI. Tell me if you’ve heard this story before: Pour money into customer-facing AI projects in sales and marketing. Prioritize initiatives that generate great press releases and excite the board. Bonus points if success is nearly impossible to measure.

Why this tanks: The MIT report shows a clear “investment bias” where companies allocate over 50% of AI budgets to high-visibility, top-line functions that consistently fail. Meanwhile, “successful projects focus on back-office automation.”

A great success example is in the legal field. Law is one of the few areas delivering consistent ROI because it’s text-based (perfect for LLMs), back-office focused and brutally simple to measure: Fewer review hours equals immediate savings.

The 95% are performing “Innovation Theater,” where AI pilots are more marketing tools than transformative operational investments or meaningful user enablement.

What the 5% do instead: They mine the back office for gold. They start with legal, finance, compliance and admin. These highly structured processes are perfect for building new AI workflows where automation delivers immediate, quantifiable savings. Less sexy, infinitely more profitable.

3. Build a Tool That Never Learns or Evolves

The key to failure: Deploy your AI like traditional enterprise software. Plan, build and launch as a finished, static product. Walk away and expect it to keep working without any feedback loops, user training or continuous improvement.

Why this tanks: It’s treating AI as a project instead of a product. This is the heart of the GenAI Divide. As MIT puts it: “The core barrier to scaling is not infrastructure, regulation or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context or improve over time.”

You’re applying an outdated mental model. Not only is this a new technology where it is pure hubris to think you’ll get it right the first time, it’s also a non-deterministic one. AI is not predictable, and the underlying models themselves change over time. AI systems are dynamic engines that need continuous learning from user interactions, feedback and organizational data. Without that, they stay stuck at day-one performance while user needs evolve.

No feedback collection. No AI trainers. No human-in-the-loop reviewers. No monitoring for model drift. You’ve built a static masterpiece that can’t adapt — and users will abandon it faster than you can say “ChatGPT is better.”

What the 5% do instead: They build for learning from day one. They budget for feedback loops, prompt evaluations, observability, data curation and user interviews. They measure rate of improvement, not just launch dates. They create the operational structure that will help the AI tools improve over time.

4. Build Everything Yourself

The key to failure: Embrace a “Not Invented Here” mentality. Insist on building proprietary AI systems in-house, especially if you’re in a regulated industry. Cite compliance and security concerns while embarking on an 18-month, multimillion-dollar journey to reinvent wheels.

Why this tanks: Take a moment to absorb this stat: Externally procured AI tools and partnerships succeed 67% of the time. That’s twice the success rate of internal builds. Yet companies, especially in regulated sectors, keep choosing the path that’s statistically twice as likely to fail.

By betting on your own custom solutions for everything, you’re trading proven expertise, accumulated experience and focused R&D from specialized vendors for a low-probability shot at imagined perfection. Meanwhile, your competitors partner with vendors and go from pilot to production in 90 days while you’re still in month six of requirements gathering.

What the 5% do instead: They default to partnerships with specialized vendors. These companies have already solved the integration challenges, compliance hurdles and learning gaps across dozens of implementations. More importantly, the 5% save the custom work for the truly impactful and unique aspects of their business.

5. Crush Your Employees’ Grassroots AI Experiments

The key to failure: When you discover that your employees are using ChatGPT and Claude to get work done, shut it down. Label this “Shadow AI” as an internal rebellion that is nothing more than a security threat. Block the tools, write stern policy memos and discipline anyone caught using personal AI subscriptions for work.

Why this tanks: I’ve seen this story before. Tableau gained popularity as a true “land and expand” product. Some of my earliest customers had simply put a license on their Amex, and one was running a secret Tableau Server under their desk. For a long time, Tableau was seen as a threat to data security and the “one source of truth” companies have sought for decades. It turned out that empowering users to answer questions with data was extremely powerful and got more results than simple report factories with months of backlogged requests. The same is happening with AI.

Far more employees use AI for work than have work-supplied access: 90% of employees regularly use LLMs, while only 40% of companies have purchased official AI subscriptions. This massive gap reveals widespread use of personal tools for work tasks. And here’s the key insight: This unsanctioned “Shadow AI” often delivers “better ROI than formal initiatives” and “reveals what actually works.”

Your employees are running hundreds of free, real-time micro-pilots every day. They’re validating use cases, identifying high-value workflows and pinpointing exactly where formal AI solutions could deliver the most impact. They’re doing your R&D for free, and you’re shutting it down.

Think of Shadow AI like desire paths, those dirt trails people create by walking the most efficient route instead of using the planned sidewalks. They’re a user-generated map of efficiency. Paving them over is organizational self-sabotage.

What the 5% do instead: They embrace Shadow AI as strategic intelligence. They provide secure, enterprise-grade tools so employees can experiment safely. Some provide clear “AI stipends” that fund access to a wide range of tools. Some offer platforms like OpenRouter, which exposes all major AI models with access, data retention and security controls. They provide clear guidelines and playgrounds for deploying solutions, accessing data and experimenting safely. Then, they obsessively study usage patterns to understand which tasks are being automated, which prompts solve real problems and where formal AI investments should go. The 5% follow the desire paths instead of destroying them.

The Bottom Line: It’s a Leadership Gap, Not a Tech Gap

The GenAI Divide isn’t about having better models or more data scientists. It’s about having better strategy and organizational alignment.

The 5% who succeed understand they’re building a new organizational capability in an emerging technology field. This requires workflow integration, continuous learning, smart partnerships and grassroots insights.

So, here’s your choice: You can follow the natural path by embracing these five keys to failure and join the 95% with expensive science projects and nothing to show for them. Or flip the script and build something that can empower your employees, make lives better and create real value. Just remember it takes more than technology. It takes leadership, too.

Why Your AI Agent Can Write Code but Fumbles Data
https://interworks.com/blog/2025/10/03/why-your-ai-agent-can-write-code-but-fumbles-data/ (Fri, 03 Oct 2025)


As I work with AI agents like Claude Code on new problems, I’ve noticed a fascinating pattern: Ask the AI to code a web page and it can make one appear like magic. Ask the same AI to clean up a messy dataset? Suddenly I’m the one doing most of the heavy lifting, carefully guiding my silicon partner through every step.

There’s a growing gap in outcomes and perspectives between software engineers and data engineers that comes down to the fundamental difference in their work. This gap reveals something profound about where we’re headed with AI in the enterprise.

AI is Great at Code

AI thrives in software engineering because the work is fundamentally about translating clear intentions into code. The problems are well-defined. The patterns are established. And most importantly, the code itself is the product.

I’m not saying that there aren’t challenges around context and complexity, but think about it: When you write a sorting algorithm, the context you need is minimal. You have inputs, outputs and a clear definition of success. The AI has seen millions of these patterns. It knows the dance.

But data? Data is different. Data is messy. Data has stories to tell that aren’t visible in its schema.

AI Hasn’t Figured Out Data Yet

Last week, I threw some survey data at Claude Code, hoping for the same magical experience. Its first instinct? Jump straight into counting and aggregating — technically correct, completely useless. It generated metrics like “% of team members who responded” without knowing how many people were on each team. It was like watching a chef begin cooking without knowing what ingredients were in the fridge.

Here’s what the AI didn’t ask:

  • “What story are you trying to tell with this data?”
  • “Are those open-ended responses hiding gold we should mine first?”
  • “What biases might be lurking in how this was collected?”
  • “How does this connect to your actual business problem?”
  • “What are useful ways to make these responses actionable?”
The AI wanted to count. I needed it to think.

The Interrogation Gap

I call this the “Interrogation Gap.” It’s the space between what AI agents can execute and what they should explore. Current AI hasn’t been trained to be suspicious of data in the right ways. It doesn’t know to poke at it, question it, turn it upside down and shake it to see what falls out.

I had to be the interrogator, so I developed a process the AI would never have suggested but executed brilliantly once directed:

First, I made it slow down and look. I asked it to write a loop in Python to read each user’s response. As output, it would create several note files that it would read and update iteratively. What tools are users mentioning? What problems do they face? What are surprising or useful suggestions? Suddenly, patterns emerged, and the picture of what the analysis should be became clearer.
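
Here’s a minimal sketch of that first pass, assuming a hypothetical llm() helper that wraps whatever model API you’re using. The note file names mirror the questions above and are purely illustrative:

```python
from pathlib import Path

def llm(prompt: str) -> str:
    """Stand-in for your model API call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError("wire this up to your provider's SDK")

# One note file per question we want the model to track across responses.
NOTE_FILES = ["tools_mentioned.md", "problems_faced.md", "useful_suggestions.md"]

def take_notes(responses: list[str], notes_dir: Path = Path("notes")) -> None:
    notes_dir.mkdir(exist_ok=True)
    for response in responses:
        for name in NOTE_FILES:
            note_path = notes_dir / name
            current = note_path.read_text() if note_path.exists() else ""
            # The model re-reads its accumulated notes, then folds in one
            # new response: slowing down and looking at every answer.
            note_path.write_text(llm(
                f"You are analyzing survey responses, tracking: {name}.\n"
                f"Current notes:\n{current}\n\nNew response:\n{response}\n\n"
                "Return the full notes, updated with anything new and relevant."
            ))
```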

Then, we built a model from those observations. Those themes became categories. We could loop back through the dataset and annotate each row with what tools were mentioned or issues encountered. This turned the categories into something that could be quantified.
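
That second pass might look like the sketch below, reusing the hypothetical llm() helper from above. The category names are placeholders for whatever themes your notes actually surface:

```python
import pandas as pd

# Themes distilled from the note files in the first pass (placeholders here).
CATEGORIES = ["tools_mentioned", "issues_encountered"]

def annotate(df: pd.DataFrame, response_col: str = "response") -> pd.DataFrame:
    for category in CATEGORIES:
        # One focused question per category keeps the labels quantifiable.
        df[category] = [
            llm(
                f"Category: {category}. Read this survey response and reply "
                f"with a short label, or 'none' if it doesn't apply:\n{text}"
            )
            for text in df[response_col]
        ]
    return df
```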

Finally, we connected it to the business context. Not just “45% of respondents mentioned X,” but “This cluster of power users has a workflow we’re not supporting, and here’s what they’re doing instead.”

The speed improvement was still there, but more than time saved, my analysis was better for having an LLM pay attention to every response and take notes. What I achieved was better than keyword matching and sentiment analysis, even if I didn’t save a huge amount of time. To get there, I had to bring the strategy. I had to be the detective.

Why This Matters for Your Organization

This gap is a critical insight for anyone implementing AI in their organization. It speaks to the type of issues we encounter applying AI to solutions and where we need to put our own efforts to get good results.

If you’re in software development, you’re probably already seeing massive productivity gains. In my teams, we’ve been able to tackle technical debt, like upgrading PHP versions or enhancing our code coverage, in ways we wouldn’t have attempted previously. Your developers are probably shipping faster, and your backlog is shrinking too. Software life is good. The AI can handle more of the “what” because the “why” is often embedded in the requirements. Our software experts are more focused on architecture and quality than ever before.

But if you’re in data, analytics or any field where context is king? You need a different playbook. Your experts are more valuable than ever. AI can help them build Python scripts or bar charts faster, but they are the ones who know which questions to ask, which assumptions to challenge, which threads to pull.

Right now, we see AI enhancing experts, not replacing them. It’s replacing the mechanical parts of analysis: The writing of SQL queries, the generation of charts, the formatting of reports. It can’t replace the human who knows that last quarter’s spike was due to a one-time event. The analyst who remembers that this dataset excludes your biggest customer. The expert who can spot when the AI is confidently analyzing the wrong thing.

Those experts are irreplaceable.

Your Next Challenge

Here’s my challenge to you: Next time you hand data to an AI agent, don’t start with “analyze this.” Start with “help me understand this.” Make it show you the data through different lenses before it starts calculating anything.

You might be surprised what stories emerge when you slow down the robot and speed up the detective work.

Because in the end, software engineering is about building things right. But data work? That’s about building the right things. It’s a distinction that makes all the difference.

Perfecting Your Craft – AI, Competitive Gaming and the Joy of Being Human
https://interworks.com/blog/2025/09/26/perfecting-your-craft-ai-competitive-gaming-and-the-joy-of-being-human/ (Fri, 26 Sep 2025)


One of my kids recently dragged me back into fighting games. Some from my childhood (Street Fighter) and others completely new to me, like Guilty Gear. Every button press can mean the difference between glory and getting absolutely destroyed. It’s been a blast.

At InterWorks, we often talk about “perfecting your craft.” It’s one of the core cultural values that runs deep in the people here and makes working together a joy. Whether you’re gaming, coding, writing or building dashboards, that drive to master something is universal. So, let’s use the lens of competitive gaming to look at some surprising lessons about excellence, even when AI can seemingly do everything better than us.

The Grind, the Flow and the Human Edge

Mash and pray. This is where most people start when picking up a fighting game, a genre filled with complicated button combinations to memorize and execute. It can feel overwhelming. But then you learn one basic combo. Then two. You practice and learn, figuring out defense and counters. You may even go so deep that you start using terms like “frame advantage” that would be meaningless to people outside of fighting games.

Some call it a grind, others mindless repetition, but it’s really deliberate practice to grow your skills. It all leads into these magical moments where everything clicks, that flow state where your actions feel effortless. Sound familiar? It’s the same state whether you’re learning Python, mastering an instrument or building your first Tableau dashboard. Flow is magical.

We see that magic in beautiful moments of human brilliance. It’s why we follow sports and play games: Seeing what seems impossible become possible with the right moment and years of preparation. Take EVO Moment 38 from last year. Even if you’ve never played Street Fighter, in the short 25-second video you’ll see true human joy erupt from doing the impossible. In it, a player known as Hayao uses a character thought of as unplayable, and a move thought of as useless, to execute a beautiful comeback from what looked like certain death. The move was so perfectly executed it became legendary.

The Joy of Being Human – Beyond “The End”

Perfection isn’t the point. We’ve seen in many other types of games that computers will consistently beat us. Computers have long outclassed the best human chess players, and yet the game is more popular than ever. DeepMind’s AlphaGo conquered the world’s best Go players. OpenAI Five dominated Dota 2. And yet, computers haven’t replaced us — they’ve shown us new ways to be human.

My son’s also into video game speedrunning — completing games as fast as humanly possible by exploiting glitches, finding optimal routes and executing pixel-perfect moves. It’s dedication and finger dexterity at its finest.

Often, these routes are discovered using computers in what’s called TAS (Tool-Assisted Speedruns). Software performs these runs frame by frame, achieving impossible precision. A TAS bot executes movements no human ever could, crushing human records. While AI and TAS show ultimate perfection, our joy isn’t just in the outcome. It’s in:

  • The process and struggle
  • Those incremental improvements
  • Creative problem-solving
  • Sharing the experience with others

AI hasn’t ended human competition — it’s expanded it. AlphaGo revealed strategies that human Go players now study. TAS videos inspire speedrunners to ask “what if a human could do that?” They push boundaries we didn’t know existed.

Look at what happened with Tetris recently. For decades, we thought the game went on forever. Then 13-year-old Willis Gibson hit the first “kill screen” — a point where the game’s code literally couldn’t continue.

Did that stop Tetris players? No way. It became a new frontier. Players now explore how to reach that kill screen earlier and more consistently, developing techniques to break the game in ways we never imagined. The “end” just opened a new chapter.

Skills That Transfer – A Universal Language

So, what does this mean for our work? Most of us don’t get paid to compete and never will. However, beyond the intrinsic value of developing your skills and belief in yourself, I believe learning to perfect a craft offers you something of great value: Meta-skills for going deep and becoming great. Learning timing in Street Fighter might seem useless for your day job, but these meta-skills of perfecting your craft are gold:

Problem Decomposition: No one learns a fighting game by thinking, “I will now master everything.” You break it down. You learn one character’s basic moves, then one simple combo, then how to counter one specific opponent. This skill — breaking a massive, intimidating goal into small, manageable chunks — is the foundation of every successful project, from software development to writing a book.

Failure as Data, Not Defeat: Every lost match in competitive gaming is a replay you can study. You learn to see failure as information rather than judgment. This mindset shift — from “I failed” to “I found something that doesn’t work” — is the difference between people who plateau and those who continuously improve.

Deliberate Practice and Isolation: Gamers don’t just play matches; they go into “training mode.” They practice the same difficult combo 100 times in a row, isolating a single variable until it becomes muscle memory. This is the essence of deliberate practice: Identifying a weakness and focusing on it relentlessly, a technique that applies directly to mastering a musical instrument, a coding language or a public speaking engagement.

Rapid Feedback Loops & Iteration: In a game, feedback is instant. You try a strategy, and you either win or lose the exchange. You immediately learn, adjust and try again. This builds an intuitive comfort with rapid iteration — trying something, seeing the result and immediately applying the lesson. This is the core principle behind agile development, A/B testing, and any modern creative process.

Emotional Regulation (The “Mental Game”): Competitive gamers talk endlessly about the “mental game” and avoiding “tilt,” where frustration from one mistake cascades into more mistakes. The ability to stay calm under pressure, reset after a failure and maintain focus is a critical life skill, essential for everything from high-stakes negotiations to simply having a productive, stress-free day.

The Long Game vs. Short Game Balance: You’re simultaneously thinking about winning this round, this match and improving for next month’s tournament. This multi-timeline thinking — balancing immediate needs with long-term growth — is exactly what separates strategic thinkers from tactical executors.

Community Learning and Analysis: The modern player learns in public. They watch top players stream, they consume video guides breaking down complex topics, and they ask for feedback on their own gameplay videos. This habit of seeking out experts, analyzing the work of others and engaging with a community to accelerate your own growth is exactly how top performers in any field stay ahead of the curve and what makes communities around tools like Tableau (The Data Fam!) so special and important.

The discipline I’m relearning in fighting games — focusing on fundamentals, practicing iteratively, analyzing mistakes — makes me better at work. That “perfecting your craft” mindset transfers everywhere.

Embrace the Journey

Whether you’re gaming, coding, designing or anything else, remember that perfecting your craft is deeply human and rewarding. Don’t let AI or some “perfect” ideal diminish your pursuit. See these tools as guides and inspiration instead.

Embrace the grind. Celebrate small victories. Learn from defeats. Push your boundaries. It’s what drives us to optimize that dashboard one more time, refactor that code until it’s elegant, or find the perfect insight in a messy dataset. That drive, that spark when facing a challenge — that’s the joy of being human.

Stop Waiting on Data: AI Projects Should Start Today
https://interworks.com/blog/2025/09/12/stop-waiting-on-data-ai-projects-should-start-today/ (Fri, 12 Sep 2025)


I’m seeing so many people in the data space say that companies need to get their data right before they can do AI (we may even be guilty at times). It is simply wrong.

Look, I’m not saying data hygiene isn’t important, but telling people they must eat their vegetables before they can leverage exciting new technology isn’t just incorrect, it’s counterproductive. This “data first” mentality is creating unnecessary friction at exactly the moment when organizations should be moving fast to capture AI’s competitive advantages.

The Perfect Data Fallacy

The reality is that you can start AI projects now. They just need to have the right scope and they need to plan for proper data prep. This sort of iterative process is how you get to clean, useful data in the first place — by using organizational excitement to take on projects that matter and solve individual slices of the data problem.

Can you think of any organization that ever achieved “perfect” data? I’ve been doing this for almost two decades and can’t name one. Not even us, by the way! The companies that are winning with AI today aren’t the ones who spent years perfecting their data infrastructure first. They’re the ones who identified high-value use cases, scoped them appropriately and built data solutions incrementally as part of delivering business value. AI projects are failing because of overly ambitious, ill-defined scope that treats AI like magic, not because the data wasn’t in a perfect state.

Learning from the Tableau Revolution

This iterative process is exactly what we saw happen with Tableau in the early years. People were excited to have a tool that made it easy to see and understand their data (and those shiny new interactive dashboards certainly didn’t hurt when it came to impressing the C-suite). The data was often not ready, so project after project, we went in to enable people to use Tableau, discovering and solving the data problems at hand as we went.

We didn’t launch huge, boondoggle data warehouse projects. Instead, we delivered iterative slices that built up into a shared data platform everyone could benefit from, with a wake of useful dashboard projects along the way. Each project made the data a little cleaner, a little more accessible, a little more valuable.

One of the key lessons I’ve learned is that you should leave a wake of finished projects. Small, important, iterative projects that build up into a more meaningful whole. You’ll not just deliver value quickly, but you’ll discover new value along the way that your monolithic plan would never have contained.

The AI Opportunity Is Now

The same dynamic applies to AI, but with even higher stakes. While your competitors are waiting for their data to be “ready,” you could be:

  • Streamlining communication by summarizing details by product line or business area
  • Building customer service chatbots to do early research, enabling your support team to focus on client relationships
  • Enabling citizen developers with coding agents like Claude Code to build small apps that solve your business problems

Each AI project becomes a catalyst for better data practices. When stakeholders see the value of AI-driven insights, they suddenly become much more interested in data quality initiatives.

Starting Smart, Not Perfect

The key is starting smart. Choose AI projects that:

  • Have clear business value even with imperfect data
  • Can be scoped to work with your current data capabilities
  • Include data improvement as part of the project deliverables
  • Generate enough excitement to fuel continued investment

Don’t let the perfect be the enemy of the good. Your data will never be perfect, but your AI projects can still be transformative. The organizations that understand this are the ones that will lead their industries into the AI-driven future.

Your data isn’t ready for AI, but you should be. Be ambitious, take on projects that matter and use AI as the forcing function that finally gets your data strategy right.

AI and The Elephant
https://interworks.com/blog/2025/09/10/ai-and-the-elephant/ (Wed, 10 Sep 2025)


Change management is hard. With AI changing our businesses, it’s also crucial. I’ve been thinking about change management lately through the lens of “Switch” by Chip and Dan Heath. If you haven’t read it, the book uses a powerful metaphor: Our rational mind is the Rider, and our emotional side is the Elephant. The Rider can guide the elephant, but if there’s a disagreement, the Elephant will always win.

And right now, when it comes to AI, we have a very anxious Elephant on our hands.

The Elephant in the Room

The leaders of Silicon Valley seem to be tripping over themselves to claim credit for the number of jobs AI has replaced. It doesn’t matter that these claims don’t have a lot of merit and seem more like cover for profit-driven layoffs than evidence of the AI agents they’d like to sell you. The message employees are hearing loud and clear? “We’re coming for your job, and we’re excited about it.”

Is it any wonder the Elephant is spooked?

We try to calm things down with phrases like “AI won’t take your job, someone who uses AI will.” But honestly? That’s such a shallow appeal. It’s fear-based motivation, increasing the fear they are already feeling. It’s telling people they’re in a race against their coworkers rather than working together toward something meaningful. It’s Rider language trying to motivate an Elephant, and it’s just not working.

Why This Matters More Than You Think

The truth is your AI initiative will fail without employee buy-in. An MIT study recently found that 95% of company AI projects are not seeing any ROI. There are many reasons for the failures, but it’s important to remember that successful AI adoption depends entirely on employees sharing their knowledge and expertise. In fact, that same report showed that “shadow AI,” where employees use their personal AI tools, is delivering substantial productivity gains. We need people to document what they know, to train these systems and to make their hard-won insights visible and shareable if we want them to work on a company level.

But think about what we’re asking: When someone sees AI as a threat to their livelihood, we’re essentially asking them to train their replacement. The Elephant is nervous because it’s being asked to dig its own grave. No wonder we’re seeing resistance and use of AI primarily for personal productivity.

A Different Way Forward

What if we completely reframed this conversation?

Instead of “adapt or die,” what if we showed how AI could increase the time you have to focus on relationships, build things you care about and move your goals forward?

Think about it. Nobody got into data because they loved data entry, analytics because they loved formatting Excel sheets, data science because they loved cleaning data, or programming because they liked writing boilerplate. Yet, that’s what a lot of the day in those jobs looks like. But relationships? Creative problem-solving? Mentoring? Strategic thinking? That’s the stuff that makes work meaningful.

The organizations that will thrive with AI are the ones that get this emotional reality. They’re being specific about how AI augments human work rather than replacing it.

It’s About Relationships (It Always Is)

This all comes down to trust. Employees need to believe that investing in AI capability won’t backfire on them. You have to mean it. Vague promises about upskilling aren’t enough. AI can process data, generate text and identify patterns. But it can’t build trust with a nervous client. It can’t mentor someone through a career crisis. It can’t read the room in a tense negotiation or celebrate with a team after landing a huge project.

These human capabilities are more important than ever. And the organizations that understand this will use AI to amplify these strengths.

Moving the Elephant

The path forward isn’t about conquering our emotional resistance to AI. It’s about honoring those emotions and addressing them honestly. Your Elephant isn’t being irrational when it’s nervous about AI. It’s pattern-matching based on what it’s seeing and hearing. As leaders, we can’t override emotion with logic; we must create an environment where both the Rider and the Elephant want to move in the same direction.

Most of all, it’s the right thing to do. We can make AI about human flourishing, not human replacement. We can turn people into our competitive advantage by remembering that business is about relationships between people. That’s something no AI can replace.

The question isn’t whether we’ll adopt AI. It’s whether we’ll do it in a way that honors the Elephant, and the humans, in the room.

InterWorks Believes in People, Not Just AI
https://interworks.com/blog/2025/09/08/interworks-believes-in-people-not-just-ai/ (Mon, 08 Sep 2025)


Salesforce just made headlines for cutting 4,000 customer service jobs, with CEO Marc Benioff proudly declaring he “needs less heads” now that AI agents handle half their support interactions. Microsoft chopped 15,000 roles this year. Meta cut 3,600. The message from Silicon Valley is clear: Humans are becoming optional in customer support.

We respectfully disagree.

Don’t get us wrong, we love technology. We’ve built our entire business around helping organizations get the most from their data tools. But when it comes to supporting people who use complex platforms like Tableau, we believe human expertise is irreplaceable.

The Problem with AI-First Support

Benioff’s comment about “needing less heads” reveals a fundamental misunderstanding about what good support looks like. Sure, AI can handle basic questions and routine requests. But Tableau users are seeking more than answers: They want understanding.

When someone’s struggling with a complex calculated field, trying to optimize dashboard performance or navigating governance challenges across their organization, they need more than a chatbot. They need someone who’s been there, solved similar problems and can guide them through not just the “what” but the “why” and “how.”

AI might tell you the steps to create a parameter. A human expert helps you understand when parameters are the right solution and when they’re not.

Our Human-Centric Approach

While others are cutting support staff, we’re investing in ours. Here’s what makes our approach different:

Assist by InterWorks pairs you with real Tableau experts who understand your unique challenges. No phone trees, no chatbots, no generic responses. Just knowledgeable humans who can troubleshoot, teach and guide you toward better solutions.

KeepWatch by InterWorks provides managed Tableau Server services with actual people monitoring your environment. When something goes wrong (and in complex data environments, something always does), you get proactive support from experts who know your setup inside and out.

Why Human Expertise Matters More Than Ever

The irony is that as technology becomes more sophisticated, the need for human expertise increases. Today’s Tableau environments are more complex than ever:

  • Multi-cloud deployments
  • Advanced security requirements
  • Integration with dozens of data sources
  • Governance across global teams
  • Performance optimization at scale

These challenges require judgment, creativity and experience. These qualities come from years of hands-on work, not reading the manual in your training data.

The Real Cost of “Efficiency”

Companies like Salesforce are celebrating 17% cost reductions from cutting support staff. That tells you a lot about where their values are and who they are talking to. Certainly, for customers wrestling with inadequate support, this is not welcome news. So, what’s the real cost to users?

  • Longer resolution times for complex issues
  • Generic solutions that don’t fit specific use cases
  • Lost opportunities for optimization and best practices
  • Frustrated users who can’t get the help they need

At InterWorks, we measure success differently. We track how quickly we can get you unstuck, how much we can teach you in the process, and how much more effective you become with Tableau as a result.

A Partnership, Not Just a Service

When you work with our support teams, you’re gaining a partner. Our experts become extensions of your team, understanding your goals, your constraints and your ultimate destination.

We’ve seen this approach work thousands of times. Users don’t just get their immediate problems solved: They become more confident, more capable and more strategic in how they use Tableau.

The Future Is Human + Technology

We’re not anti-AI, we’re huge fans. We use technology to make our entire company a small giant. We use AI to augment human expertise, not replace it.

While other companies are betting everything on automation, we’re betting on the power of combining great technology with great people. Technology should further our relationships, not eliminate them.

Ready to experience support that puts humans first? Learn more about Assist by InterWorks and KeepWatch by InterWorks, or reach out to see how real experts can make a real difference for your Tableau environment.

You Are Not The Average Customer
https://interworks.com/blog/2025/08/19/you-are-not-the-average-customer/ (Tue, 19 Aug 2025)


Software is designed for the average customer, but the truth is that no customer is the actual average. That’s why we’re always dealing with trade-offs. Our tool process flow doesn’t quite match the reality of our business. The analytics dashboard doesn’t quite capture the right metrics. The CRM stages don’t fit the way we think about selling. So often, we conform to our tools and deal with the rough edges.

The future of tools, however, looks different.

Over the past few weeks, I’ve been spending my free time coding with Claude, and it’s more obvious to me than ever that the cost of building a tool that truly fits your business is lower than ever. I’m not talking about a one-sentence prompt that magically generates the perfect tool — that’s not AI, that’s magic. But I do think that Claude and tools like it are making it so that the people who best understand the problem can engage with solving it using code, like never before.

I’ve Felt This Before

I’m long past the days when I could dedicate the majority of my week to programming. Despite a schedule that included a conference and a family vacation, I’ve built three different applications that each fill me with excitement. I can build things again. Good things. I don’t think I’m special in this regard.

At the start of my career, I was building dashboards using C# and SQL. When Tableau entered the scene, my projects took 10% of the time. It was an accelerator in so many impressive ways, and it changed how I approached business intelligence and the trajectory of my career. It was worth the cost of the licenses, the decrease in performance and the rough edges that appeared when it didn’t quite match my needs. I feel that same excitement now that I first felt when I began building interactive dashboards that wowed executives with capabilities they had never seen. This moment with AI feels similar, except I’m back to working in code, with all the benefits of flexibility and fit, without giving up that wonderful sense of acceleration.

Of course, as Pedro Tavares points out in his post, “Writing Code Was Never The Bottleneck.” Deploying code, managing servers — all of these things are challenging and not quickly solved by AI alone. That is why I think businesses need to spend more time focusing on enabling this upcoming wave of citizen developers by making a paved path to deploying these new AI-developed solutions. We don’t have to make the trade-off of tools built for the average user when we can build them rapidly, but that also means we need to enable being able to deploy rapidly as well.

We Can Just Do Things

This reminds me of Aaron Francis’s Talk at Laracon, where he speaks about the spirit of being able to choose to “just do things” and build what you want. While he’s not talking about AI specifically, his philosophy about being empowered to build what you want resonates deeply with this moment. He sees Laravel as giving you the tools to just build — and I see AI as part of that same journey.

The convergence is already happening. The combination of Laravel and AI, with tools like Laravel Boost, is a great example of this synergy. It’s not just about generating code faster — it’s about creating an environment where the friction from idea to production is as low as possible.

One of the great insights of lean methodology is that single-piece flow unlocks efficiency. Whenever we can build something without constant handoffs, we’re going to be faster and more efficient. AI gives us new ways to do this, and we should leverage them.

Faster Cars != Faster Traffic

Faster code doesn’t mean faster outcomes if we solely focus on the ability to generate code. We need to pay attention to the entire process — from code review to deployment, from prototype to production.

If we make the right investments in developer experience, we’re going to see things transformed. Not because we can generate code faster, but because we can reduce the friction at every step of the process. When the people who understand the business problems can directly engage with solving them through code, when deployment is as smooth as development, when AI assists with the entire lifecycle rather than just the typing — that’s when we’ll truly move beyond the compromises of average.

The tools we need aren’t just better code generators. They’re comprehensive ecosystems that understand our frameworks, integrate with our workflows and help us navigate the real bottlenecks: Understanding existing code, coordinating changes and safely deploying to production.

Better Than the Average

We’re at an inflection point. The question isn’t whether AI can help us code faster (it does) — it’s whether we’re ready to reimagine how we build software entirely.

When we combine AI’s acceleration with thoughtful investment in developer experience, we can finally build the tools that fit our actual needs. Not the average customer’s needs. Not close-enough compromises. But tools that work the way we work, measure what we need to measure, and flow the way our businesses actually flow.

The future isn’t about conforming to tools built for everyone else. It’s about building exactly what we need, rapidly and confidently. And for the first time, that future feels achievable for all of us — not just the companies with massive engineering teams.

We’re entering an era where custom is the new standard. Where fitting your business perfectly is more achievable than settling for average. Where the people who understand the problems can build the solutions.

And I, for one, can’t wait to see what we build together.

Practical AI: The Types of AI Workloads
https://interworks.com/blog/2025/06/05/practical-ai-the-types-of-ai-workloads/ (Thu, 05 Jun 2025)


Generative AI continues to capture our collective imagination, and if my feeds are any indication, we’re finding new words to describe them all the time — chatbots, copilots, agents and more. But what’s the reality here? What do practical AI workloads actually look like?

Single-LLM Features

These are straightforward, linear flows — the kind you’ll see in basic product features or data integration use cases:

  • Text summarization
  • Concept extraction
  • Text transformations (adding structure or combining text)
  • Q&A

These use cases are the easiest to get started with. If you’re a Snowflake customer, you already have access to Cortex, which gives you LLM capabilities through simple functions. Tools like Sigma can take these functions from Snowflake (or Databricks) and bring them directly into your analytics workflow.

This opens powerful new ways to work with unstructured data. A common example is survey data — you can use these functions to add structure to free-form responses, making them easier to analyze. LLMs are a powerful tool for imposing structure on our world of unstructured and semi-structured datasets.
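
As a hedged sketch, here’s what calling a Cortex function over survey responses can look like from Python via the Snowflake connector. The connection details, table, column and model name are all placeholders; check Snowflake’s Cortex documentation for the models available in your region.

```python
import snowflake.connector

# Placeholder credentials: use your own account, warehouse and database.
conn = snowflake.connector.connect(
    account="...", user="...", password="...",
    warehouse="...", database="...", schema="...",
)

# SNOWFLAKE.CORTEX.COMPLETE calls an LLM directly from SQL, letting us
# impose structure (a fixed set of themes) on free-form survey text.
sql = """
SELECT
    response_id,
    SNOWFLAKE.CORTEX.COMPLETE(
        'llama3.1-8b',
        CONCAT('Classify this survey response as one of: pricing, usability, ',
               'performance, other. Reply with one word only. Response: ',
               response_text)
    ) AS theme
FROM survey_responses
"""
for row in conn.cursor().execute(sql):
    print(row)
```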

In their simplest form, chatbots work this way too, with each output becoming part of the next input to continue the conversation. Just like chatbots, the key to success with any AI feature is your initial input (or prompt), which enables you to provide context and instruction for your request.
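
In code, that conversational loop is surprisingly small. A minimal sketch, assuming a hypothetical llm_chat() wrapper around your provider’s chat endpoint:

```python
def llm_chat(messages: list[dict]) -> str:
    """Stand-in for your provider's chat-completion endpoint."""
    raise NotImplementedError

def run_chatbot() -> None:
    # The initial system prompt supplies the context and instruction that
    # every later turn builds on; this is the key input mentioned above.
    history = [{"role": "system",
                "content": "You are a concise, friendly support assistant."}]
    while True:
        user_msg = input("You: ")
        history.append({"role": "user", "content": user_msg})
        reply = llm_chat(history)  # each output joins the next input
        history.append({"role": "assistant", "content": reply})
        print("Bot:", reply)
```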

AI Workflows

This is where things get more complex. You still have defined inputs and outputs, but the path between them can branch based on conditions. Think of LLMs being orchestrated through code or visual ETL-like flows — something many data professionals are already familiar with. Traditional ETL tools are adding AI features, and we’re also seeing AI-focused workflow tools emerge, like n8n.

With AI workflows, you can tackle problems like:

  • Support response automation
  • Lead management and enrichment
  • Research tasks
  • Data enrichment
  • Communications integration and summarization

The power here is in handling more nuanced tasks that require some deterministic decision-making along the way.
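
To make that concrete, here’s a minimal sketch of support-response automation, assuming the same hypothetical llm() helper and a made-up route_to_queue() function standing in for your ticketing system:

```python
def route_to_queue(queue: str, payload: str) -> str:
    """Placeholder for handing work off to your real ticketing system."""
    return f"routed to {queue}: {payload[:60]}..."

def handle_ticket(ticket_text: str) -> str:
    # Step 1: A cheap LLM call classifies the ticket.
    category = llm(
        "Classify this support ticket as 'billing', 'bug' or 'other'. "
        f"Reply with one word.\n{ticket_text}"
    ).strip().lower()

    # Step 2: The path branches on that condition, just like an ETL flow.
    if category == "billing":
        draft = llm(f"Draft a polite billing support reply:\n{ticket_text}")
        return route_to_queue("billing", draft)
    if category == "bug":
        summary = llm(f"Summarize this bug report for engineering:\n{ticket_text}")
        return route_to_queue("engineering", summary)
    return route_to_queue("general", ticket_text)
```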

AI Agents

Agents are different. They decide their own path and work mostly independently. They have access to multiple tools and choose which ones to use (or not use) through technologies like MCP (Model Context Protocol). This autonomy makes agents powerful but also introduces new considerations around cost, processing time and error handling.

A few practical tips about AI agents:

  • Start simple. Don’t jump straight to agents. They’re best for complex, valuable workflows where you can wait for results.
  • Focus on the basics. Agents are just models using tools in a loop. Your job is figuring out what tools they need and what prompts will guide them effectively.
  • Manage context carefully. The model only knows what’s in its prompt. You need to help it track what it’s done and what comes next — this can be as simple as a text file the agent can read and update.

Human-in-the-loop review becomes important here, allowing for guidance and quality control when needed.
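
Put together, a bare-bones agent really is just a model using tools in a loop. The sketch below assumes the hypothetical llm() helper again and uses a plain text file as the context-tracking trick described above; a production agent would add real tools, error handling and cost limits.

```python
import json
from pathlib import Path

NOTES = Path("progress.txt")  # simple external memory the agent reads and updates

def read_notes(_: str) -> str:
    return NOTES.read_text() if NOTES.exists() else "(no notes yet)"

def write_notes(text: str) -> str:
    NOTES.write_text(text)
    return "notes saved"

TOOLS = {"read_notes": read_notes, "write_notes": write_notes}

def run_agent(goal: str, max_steps: int = 20) -> str:
    context = f"Goal: {goal}\nAvailable tools: {list(TOOLS)}"
    for _ in range(max_steps):
        # The model chooses its own next step, or declares the goal met.
        decision = json.loads(llm(
            context + '\nReply as JSON: {"tool": "<tool name or done>", "input": "<text>"}'
        ))
        if decision["tool"] == "done":
            return decision["input"]
        result = TOOLS[decision["tool"]](decision["input"])
        context += f"\n{decision['tool']} -> {result}"  # track what has been done
    return "Step limit reached; flag for human-in-the-loop review."
```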

Looking Ahead

It’s hard to predict exactly what’s next, but current trends suggest we’ll see multiple agents working together, with humans assigning and managing their workloads. Whether the future brings one powerful agent or a fleet of specialized ones working in concert, the key is starting with practical applications today.

The path forward is clear: Begin with simple LLM features, experiment with workflows as your needs grow more complex, and consider agents only when you have high-value problems that justify their complexity. Each step builds on the last, creating a foundation for whatever comes next.

A Big Week in AI
https://interworks.com/blog/2025/05/27/a-big-week-in-ai/ (Tue, 27 May 2025)


What Happened?

Last week brought a burst of AI announcements from the major players, each positioning their latest capabilities as breakthrough advances:

  • OpenAI launched Codex research preview for parallel agent orchestration in coding.
  • Microsoft Build delivered MCP Registry support for Windows, GitHub’s evolution from “pair programming” to “peer programmer” agents and open-sourced Visual Studio Code with Copilot.
  • Google I/O unveiled AI Mode for search with data visualization, Gemini 2.5 Flash pricing, Jules asynchronous coding agent and MCP integration across their platform.
  • Anthropic’s Code with Claude Event introduced Claude 4 (Opus and Sonnet), comprehensive agent tooling, enhanced prompt caching and deeper Claude Code integrations.

Amid the typical AI hype that defines our industry, some genuinely useful developments emerged. The common thread wasn’t revolutionary AI; it was practical improvements to how AI tools integrate into existing workflows, particularly around code and data tasks.

The Agent Revolution Accelerates

What stood out wasn’t any single announcement, but the convergence around similar capabilities. OpenAI’s Codex research preview showed parallel agent orchestration for coding. Google’s Jules introduced asynchronous coding agents that can juggle multiple tasks simultaneously. Anthropic positioned their new Claude 4 models as the foundation for “true agents” capable of “hours of tasks” without losing context, adding memory capabilities with their new Files API and new improvements to their leading Claude Code tool. Microsoft’s GitHub Copilot now includes an agent that they describe as an evolution from “pair programming” to “peer programming,” meaning you can assign tasks directly to agents within GitHub workflows.

Strip away the marketing language, and you see companies solving similar problems: How to make AI assistance less conversational and more task-oriented. For data leaders, the practical question is whether these tools can actually handle the messy, context-heavy work that dominates data operations. Early indicators suggest some can, though crossing the gap between demo and daily reality will take significant engineering work. Given clean datasets, AI can successfully build dashboards in languages like Python, but our businesses are often missing those clean datasets and the surrounding context. Investment in your knowledge repositories, not just your data warehouses, is more important than ever.

Search and Analytics

Google’s I/O event included AI Mode for search, which generates data visualizations and tables to answer questions directly. While the demos looked polished, and I love seeing data visualizations in more places, the real test is whether this works reliably with complex data questions or just simple chart generation. We’ve seen (and still see) Google’s AI summaries at the top of search provide completely wrong information. AI hallucination is not a solved problem, so double-check anything that comes out of AI Mode.

Anthropic’s Code Execution Tool demonstration showed Claude loading datasets, generating charts and analyzing results in real time. In my own testing, I was able to ask very general questions of Claude Code, and it would perform analysis in Python, then create another app (Streamlit in this case) to present the data to me. The first time I did it, I was blown away. With additional requests and time, I still encounter the occasional weird error, like the inability to change a specific font color for some reason, even while more complex work around layout succeeded with no problem. Claude Code still fails in interesting ways, but it also seems like it will save hours and hours of work for any analyst working in code. There’s a strong argument that we should be doing more and more of our analytics work in code as AI continues to improve the efficiency gains in this area.

While the capabilities are impressive on clean data, the practical question for data teams is whether this works consistently with real datasets that have missing values, schema inconsistencies and other common messiness. In practice, I’ve found that AI can help with these tasks too, but I often still have to break down the tasks like a senior data worker would. If I ask questions that assume the data is good, the model will assume it is good. If I guide it towards checking the data, it’ll do the appropriate checks.

One last warning on using these agents: I’ve seen several times now where the AI will hard-code data. This is sometimes called “reward hacking,” where the AI will cheat a bit to get the answer it needs. This disappears if you are very specific about using a dataset, and only that dataset, to generate numbers, but it’s a precaution that’s well worth mentioning.

Here’s one of my test runs using the SuperStore dataset:

The Rise of MCP

Behind the demos lies a more practical development: The emerging standards of AI agent infrastructure. MCP (Model Context Protocol) is a standard first proposed by Anthropic that has since been adopted iteratively across the ecosystem. In short, MCP is a standard way to provide access to tools and other resources to AI models. Microsoft’s addition of MCP support to Windows and GitHub, Google’s integration of MCP into the Gemini SDK and Chrome, and Anthropic’s API-level MCP support all point to the same reality: MCP appears to be gaining traction as a standard for AI-tool integration.
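
To give a feel for what this looks like in practice, here’s a small sketch using the FastMCP interface from the official Python SDK (the mcp package). The tool name and its stubbed data are made up for illustration.

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("warehouse-tools")

@mcp.tool()
def row_count(table: str) -> int:
    """Return the row count for a warehouse table (stubbed for illustration)."""
    counts = {"orders": 12_042, "customers": 1_871}  # stand-in for a real query
    return counts.get(table, 0)

if __name__ == "__main__":
    mcp.run()  # any MCP-capable client can now discover and call row_count
```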

For data teams, this standardization could mean agents eventually integrate more seamlessly with existing tools — your lakehouse, orchestration platform, monitoring systems. The operative word is “could.” It’s still early days for this standard and things like row-level security don’t have standard solutions yet.
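
For a feel of what’s involved, here’s roughly what a minimal MCP server looks like using the FastMCP helper from the official Python SDK. The query tool’s body is a placeholder, not a real lakehouse client:

    # A minimal MCP server exposing one tool via the official Python SDK.
    # Any MCP-aware agent can then discover and call run_query.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("lakehouse-tools")

    @mcp.tool()
    def run_query(sql: str) -> str:
        """Run a read-only SQL query against the warehouse and return rows as text."""
        # Placeholder: call your real engine here, and enforce read-only
        # access yourself; row-level security is still on you, not the protocol.
        return f"pretend results for: {sql}"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default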

The Economics of Intelligence

Pricing LLMs is hard. There are tables listing each model’s per-million-token costs, but those rarely give you the full picture. Features like prompt caching can change the economics, especially for agent workflows, and so can how many tokens each model typically generates. For instance, Gemini 2.5 Pro tends to be extremely wordy, even if it is cheaper than comparable models.

All that said, Google’s pricing strategy with Gemini 2.5 Flash offers concrete value. At roughly 25% of the cost of comparable models while delivering competitive performance, it makes sense as a general workhorse when top-of-the-line performance isn’t required.
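
A quick back-of-the-envelope model makes the point. The prices and token counts below are illustrative assumptions, not quoted rates; the takeaway is that output verbosity moves the bill almost as much as the sticker price:

    # Back-of-the-envelope LLM cost comparison. All numbers are
    # illustrative assumptions, not real price sheets.
    def cost_per_call(in_tokens, out_tokens, in_price, out_price):
        """Prices are per million tokens."""
        return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

    # Hypothetical: a cheaper model that writes twice as many output tokens
    terse = cost_per_call(in_tokens=8_000, out_tokens=1_000, in_price=3.00, out_price=15.00)
    wordy = cost_per_call(in_tokens=8_000, out_tokens=2_000, in_price=1.25, out_price=10.00)

    print(f"terse, pricier model: ${terse:.4f} per call")   # $0.0390
    print(f"wordy, cheaper model: ${wordy:.4f} per call")   # $0.0300, gap narrows fast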

On the more expensive end, but with clear strengths in programming and agentic flows, Anthropic has further enhanced prompt caching by extending the cache lifespan from five minutes to one hour. This should address a real limitation and save real money for solutions that use it. Long-running data processes often need to maintain context across extended operations, and this improvement makes it more practical for agents to manage complex, multi-step workflows without losing track of what they’re doing.
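
For reference, here’s roughly what opting into prompt caching looks like with Anthropic’s Python SDK: You mark the large, stable prefix with cache_control, and later calls reuse it at a discount. Treat the model name and the one-hour “ttl” field as assumptions to verify against the current docs:

    # Prompt caching sketch with Anthropic's SDK. The model name and the
    # "ttl" value are assumptions; check current documentation before use.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    big_context = open("warehouse_schema.md").read()  # stable text reused every call

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": big_context,
                # Mark the prefix cacheable; the extended one-hour lifespan is opt-in
                "cache_control": {"type": "ephemeral", "ttl": "1h"},
            }
        ],
        messages=[{"role": "user", "content": "Which tables feed the revenue mart?"}],
    )
    print(response.content[0].text)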

From Programmer to Manager

Perhaps the most intriguing development was the asynchronous agents shown by seemingly everyone. The message is clear: The industry sees programmers moving into management. In Anthropic’s words, their own programmers are “moving from individual contributors to managing multiple concurrent agents running tasks.” The same theme ran through OpenAI’s Codex preview and Microsoft’s GitHub Copilot Agent.

Not a headline feature, but a point I found interesting: the onboarding improvements Anthropic mentioned. They claim new-employee onboarding went from two or three weeks to two or three days. This provides a glimpse at one of the underappreciated aspects of LLMs: They are tools for understanding and breaking down complexity in our language, programming or not. With the right knowledge infrastructure, we should be able to help new team members understand our complex data architectures through AI assistance.

The Diffusion Experiment

Lastly, this one doesn’t have direct data implications, but Google’s announcement of Gemini Diffusion is cool. It breaks from the approach behind all of our current LLMs: Instead of predicting the next word (or token) one at a time, diffusion models generate and refine whole blocks of text at once.

The benefit? It’s fast. 5x faster than our fastest models. That has real implications if it gets adopted, allowing more work to be done and iterated on. While it’s unclear whether this approach will gain traction, it suggests the industry isn’t settling into a single architectural pattern and that there’s plenty of innovation still to come.

What This Means for Data Leaders

The convergence around agent capabilities, standardized protocols and code-centric AI suggests we’re approaching an inflection point. The question isn’t whether AI will transform data work, but how quickly and in what form. Your best and brightest should be working with agent tools, such as Claude Code, to see how far they can push them and to understand their current limitations. You should be building out the surrounding infrastructure, such as MCP Servers, to allow these tools to impact your business. Most of all, you should be working on your knowledge repositories.

The rise of data visualization tools like Tableau pushed companies to get their data infrastructure in order, and we saw the rise of new database giants like Snowflake and Databricks. AI benefits enormously from those investments, but it also needs context about your business. Just as BI tools exposed the gaps in our data, AI is making it clear that we have gaps in our knowledge. Undocumented systems, stale information, broken processes and tribal knowledge are all barriers that will keep these AI systems from thriving at your company.

The most successful data organizations over the next 12 months will likely be those that experiment thoughtfully with these emerging capabilities while investing in the cultural, technical and informational foundations required. The tools are rapidly becoming capable enough for production use — the limiting factor is increasingly organizational readiness rather than technical maturity.

The post A Big Week in AI appeared first on InterWorks.

]]>
Devs on Stage Shines at Tableau Conference 2025 https://interworks.com/blog/2025/05/01/devs-on-stage-shines-at-tableau-conference-2025/ Thu, 01 May 2025 19:40:13 +0000 https://interworks.com/?p=67381 Tableau Conference 2025 in San Diego has come and gone! To anyone who attended or were watching from afar, it was clear this conference was about vision of the future – AI-infused features, AI agents and the introduction of the new Tableau Next platform. Amidst...

The post Devs on Stage Shines at Tableau Conference 2025 appeared first on InterWorks.

]]>

Tableau Conference 2025 in San Diego has come and gone! To anyone who attended or watched from afar, it was clear this conference was about a vision of the future: AI-infused features, AI agents and the introduction of the new Tableau Next platform. Amidst this hype, one session stood out, offering a refreshing and appreciated look at the present: Devs on Stage.

Year after year, this session is a highlight, offering a glimpse into upcoming innovations directly from the developers building them. This year, however, it felt particularly significant as it shifted the focus back to the core Tableau tools – Prep, Desktop, Server and Cloud – the very tools that analysts and data professionals rely on every single day.

Powering Up Prep and Desktop

With a crowd full of analysts, the session kicked off with significant enhancements aimed at the tools they use most: Prep and Desktop.

Tableau Prep Enhancements

Tableau Prep Builder received key updates designed to extend its power and flexibility:

  • Custom Python Scripts in Tableau Cloud: A major step forward is the addition of support for custom Python scripts directly within Tableau Prep flows running in Tableau Cloud. This lets users move beyond the built-in transformations and leverage the vast Python ecosystem for complex data manipulation, statistical modeling or custom cleaning operations (see the sketch after this list).
  • Publish to Google Drive: Adding to its output options, Tableau Prep can now publish flow outputs directly to Google Drive. This addresses a real need for users who want data delivered straight to a file, so you no longer have to send people to a dashboard just to download the data manually.
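
As promised above, here’s a sketch of the script shape Tableau Prep expects for Python steps, assuming Tableau Cloud keeps the TabPy conventions Prep Builder uses today. Prep hands your function a pandas DataFrame and takes one back:

    # Sketch of a Tableau Prep Python script (TabPy conventions; an
    # assumption for the Cloud version). Not runnable standalone: the
    # prep_* schema helpers are provided by the Prep runtime.
    import pandas as pd

    def add_margin(df: pd.DataFrame) -> pd.DataFrame:
        """Custom transformation: compute profit margin per row."""
        df["Margin"] = df["Profit"] / df["Sales"]
        return df

    def get_output_schema():
        # Tells Prep the shape of the returned data
        return pd.DataFrame({
            "Sales": prep_decimal(),
            "Profit": prep_decimal(),
            "Margin": prep_decimal(),
        })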

Tableau Desktop Gets Major Love

Tableau Desktop, the heart of Tableau authoring, saw a wealth of updates:

  • Connect to Tableau Semantics: Tableau Semantics is one of the first major experiences released from Tableau Next. It provides a modern, centralized place to standardize information about your data, including relationships between tables and important calculations. This provides the engine that will drive the analytics and AI answers in Tableau Next, but will also allow for more standardized datasets for your core Tableau experiences.
  • Maps — Viewport Parameters: Building on the foundation of Spatial Parameters introduced in late 2024, Tableau is adding “viewport parameters.” This allows a spatial parameter’s value to be set to the currently visible area (the viewport) of a map. In the demo, they showed some great use cases, including having multiple maps in sync, or having a second map serve as a “zoomed” view to allow for greater detail, while still having the big picture at hand. I can’t wait to see what the Tableau community does with this one.
  • Custom Overlays via Analytic Pane Extensions: The familiar Analytics pane gets an upgrade with the introduction of Analytic Pane Extensions. This allows developers to create custom analytic objects that users can simply drag and drop onto their visualizations. This could be used to show things like elevation or drive-time on a map with other visualizations.
  • Show Me Upgrade (“Choose for me”): I’ve always loved Show Me as an easy way to get started, and now it’s becoming even smarter. Previously, it recommended chart types based on selected data. Now, with the “Choose for me” option, users can select a desired visualization type and have it generated with suggested dimensions and measures. It’s a great way to lower the barrier to entry for initial data exploration.
  • Custom Color Palette Creation: A long-standing request from the community has been addressed: creating custom color palettes no longer requires manually editing the Preferences.tps file. Tableau Desktop is adding a built-in interface for custom palette creation. In one of my favorite uses of AI I saw at the conference, you can also describe the type of palette you’d like and have it generated for you automatically.
  • Dynamic Color Palette Ranges: With this update, the color ramp on legends can adjust based on the data currently in view (e.g., after filtering), providing more nuanced visual differentiation within specific data subsets.
  • Rounded Corners: Sometimes, it’s the little things that get the biggest cheers. Users can finally apply rounded corners to dashboard objects (containers, worksheets, etc.). This seemingly small aesthetic tweak has been a long-time wishlist item and reflects a move toward more modern UI design possibilities within Tableau dashboards, clearly resonating with the community’s desire for more formatting control.

These Desktop enhancements represent a significant investment in the core authoring experience, blending quality-of-life improvements with powerful new analytical and data connection capabilities.

Boosting the Platforms: Pulse, Server and Cloud

Beyond the authoring tools, Devs on Stage showcased important updates across the broader Tableau ecosystem, focusing on AI integration, administration and connectivity.

Tableau Pulse Gets Chatty

Tableau Pulse has had a promising start with its focus on metric-first business intelligence. It provides a simpler experience for users who really need to focus on their core KPIs and metrics. Now, it’s adding Conversational Analytics to its capabilities. This unlocks more natural ways to interact with metric digests, allowing follow-up questions in plain language to dig deeper into the “why” behind the numbers. If executed well, this opens better self-service capabilities to a wide range of business users.

Tableau Server Improvements

Administrators managing Tableau Server deployments received welcome news with several key governance and usability features:

  • SCIM Support: Tableau Server will now support the System for Cross-domain Identity Management (SCIM) protocol. This is a significant enhancement for enterprise deployments, allowing administrators to automate user and group provisioning directly from their central Identity Provider (IdP), like Microsoft Entra ID (Azure AD) or Okta. It should eliminate manual user additions and custom provisioning scripts, saving time and money (see the sketch after this list).
  • Recycle Bin: Finally! A way to save those accidentally deleted dashboards! Addressing another common administrative pain point, both Tableau Server and Tableau Cloud are getting a Recycle Bin. This feature allows administrators or content owners to easily restore accidentally deleted projects, workbooks, or data sources for up to 30 days.
  • Admin Logs (User Interactivity): The Activity Log, particularly valuable for Advanced Management users, is being enhanced to capture detailed user interactivity events. Another long-requested feature, this granular data will show companies how users actually leverage their dashboards, providing invaluable insight for auditing, understanding content usage patterns, identifying performance bottlenecks and optimizing dashboard design for a better user experience.
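
As referenced in the SCIM item above, here’s a sketch of what SCIM provisioning looks like on the wire. The endpoint URL and token are hypothetical placeholders; the payload shape follows the SCIM 2.0 core user schema (RFC 7643):

    # SCIM provisioning sketch: the IdP (or a script) POSTs a standard
    # user payload to the server's SCIM endpoint. URL and token are
    # hypothetical placeholders, not documented Tableau values.
    import requests

    SCIM_BASE = "https://tableau.example.com/scim/v2"  # placeholder endpoint
    TOKEN = "YOUR-TOKEN"  # placeholder credential

    new_user = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "ada.lovelace@example.com",
        "name": {"givenName": "Ada", "familyName": "Lovelace"},
        "active": True,
    }

    resp = requests.post(
        f"{SCIM_BASE}/Users",
        json=new_user,
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # server-assigned SCIM user id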

Tableau Cloud Update

Tableau Cloud users are, of course, getting many of the features mentioned above, but another key connectivity enhancement was announced: dbt support. Tableau Cloud is adding native support for connecting to dbt (data build tool) via Tableau Bridge. This is a huge win given the popularity of dbt. Users can now directly access and leverage dbt models and metrics within Tableau Cloud, ensuring consistency and reusing the valuable transformation logic and semantic definitions already established in dbt.

Other Notable Updates

Several other announcements rounded out the ecosystem enhancements:

  • Google Workspace Add-ons: Out now, Tableau insights can be brought directly into Google Docs and Google Slides. These new add-ons allow embedding dashboards and metrics into your documents and presentations, and Google Sheets support is planned for the near future. This solves a very real problem for customers who need to get data into other places. It’s important to note, however, that while it’s easy to update the metrics and dashboards in these documents, updating is still a manual process requiring a user to click.
  • Accessibility: Tableau continues its commitment to accessibility, introducing new keyboard shortcuts and actions for navigation and interaction within visualizations on both Desktop and Server/Cloud.
  • Published Data Sources in Tableau Semantics: As mentioned earlier, the ability to connect and leverage existing Tableau Published Data Sources within the new Tableau Semantics layer is a key integration point, ensuring that current data assets can participate in the future agentic AI experiences powered by Tableau Next.

Labs Sneak Peek: Voting on the Future

New to Devs on Stage was the “Labs” section, where the developers showcased experimental features and got the Tableau community involved by voting on which ones they’d most like to see prioritized. Three intriguing possibilities were featured:

  • Authoring API: The proposed Authoring API would allow developers to programmatically interact with the Tableau Desktop authoring environment itself. The demos showcased compelling use cases, including generating reports via AI, automating dashboard layouts and formatting, creating interactive walkthroughs (similar to features in tools like InterWorks’ own Curator), and even automating the translation of dashboard content into different languages.
  • Tableau Pulse Research Agent: The concept was described as similar to OpenAI’s “Deep Research” tools, but applied directly to the user’s specific data context within Pulse. This experimental AI agent aims to answer the crucial “why” behind metric changes.
  • Tableau Sketch: This presented a novel interaction paradigm: filtering data by drawing a shape or pattern. Tableau Sketch would use fuzzy matching to find data points that follow the user-drawn curve. Novel as it is, I struggled to think of many ways I’d apply this capability in my own day-to-day use of data.

The Authoring API won by a pretty large margin. It could unlock substantial productivity improvements for developers and analysts and enable entirely new kinds of programmatic or embedded analytics solutions. Tableau is planning further discussion of this API at the upcoming DataDev Day.

Final Thoughts: Celebrating Progress

Overall, the Devs on Stage session at Tableau Conference 2025 was a resounding success, providing a much-needed focus on the core Tableau platform amidst the broader emphasis on AI and Tableau Next.

The announcements showcased a healthy mix: highly anticipated wishlist items like rounded corners and built-in custom color palettes, significant workflow enhancers like Python in Prep and viewport parameters, crucial enterprise features like SCIM support and the Recycle Bin, and strategic integrations bridging the present and future, such as the Tableau Semantics connector.

While the experimental features, particularly the Authoring API, offer exciting glimpses into potential future directions driven by community feedback, the real strength of Devs on Stage this year was its grounding in practical, tangible improvements. It celebrated the ongoing evolution of the Tableau platform, demonstrating a commitment to making the tools analysts know and love even better. As organizations look to leverage these new capabilities, finding the right path to integrate them effectively will be key — a challenge where expert guidance can make all the difference.

The post Devs on Stage Shines at Tableau Conference 2025 appeared first on InterWorks.

]]>