Connecting LLMs to Tableau: A Practical Guide for Using Tableau MCP https://interworks.com/blog/2025/10/28/connecting-llms-to-tableau-a-practical-guide-for-using-tableau-mcp/ Tue, 28 Oct 2025 15:22:53 +0000

What is MCP?

To set the stage, we’ll begin with a quick overview of MCP: What it is, why it matters and how it can be used with Tableau. At its core, the Model Context Protocol (MCP) is a standard that gives large language models (LLMs) a universal way to access external tools and data sources. Whether it’s retrieving data from a third-party database, calling an external API or running a complex analytics workflow, MCP makes these tools directly accessible to the LLM. By building tools and servers based on the MCP standard, service providers ensure that any compatible LLM can use them, allowing seamless integration across different AI systems.

In short, MCP acts as a common language for LLMs, exposing tools and data in a way that any compatible LLM can understand and use.

Earlier this year, Tableau introduced its own Tableau MCP server, making it possible for AI models to utilize Tableau-specific tools tied to a Tableau Cloud or Tableau Server site. In practice, this means you can chat with an LLM and ask it questions about your own Tableau environment and data directly.

While you can’t use this method to develop or publish new Tableau content, it is already quite powerful for surfacing insights about your Tableau environment. This post walks you through the Tableau MCP setup process and explores a few example use cases.

Demo Prerequisites

What you will need to get started:

  • Access to Tableau Cloud or Tableau Server with a Personal Access Token (PAT)
    • Note: If using Tableau Server, Metadata API access will need to be enabled for certain tools to work.
    • Additionally, tools related to Tableau Pulse require that Pulse is enabled on your Tableau Cloud site (applies only to Tableau Cloud).
  • Container engine, such as Docker or Podman
    • This is optional. Tableau MCP doesn’t need to run in a container, but for simplicity and ease of use, we’ll set it up in a container using Podman. The commands translate to Docker as well.
  • AI tool that can interact with MCP servers
    • We will be using Claude Code, but you can also use other tools such as Claude Desktop, VS Code or Cursor.
    • Important: Make sure using Tableau MCP and AI tools aligns with your organization’s policies on AI and data usage.

Architecture

The main components in the MCP architecture are the MCP Server, MCP Host and MCP Client. As a user, we interact with the MCP Host (Claude in this example) by chatting with an LLM. When needed, Claude can invoke its built-in MCP Client to facilitate connections to the MCP Server, which in our case is the container-hosted Tableau MCP server. The MCP Server, in turn, connects to our Tableau Server to fetch the relevant data. For example, suppose we want to retrieve a View from Tableau Server. The high-level sequence of steps would be:

  1. We ask Claude a question about our Tableau environment.
  2. Claude identifies that our question is about Tableau and recognizes that it has access to Tableau-related tools via MCP.
  3. Claude initiates a tool call for one of the specific Tableau MCP tools, such as retrieving Tableau Views.
  4. Claude leverages its built-in MCP Client to connect to the Tableau MCP server.
  5. The Tableau MCP Server performs the requested action by reaching out to our Tableau Server or Tableau Cloud site.
  6. Tableau MCP Server returns the Tableau View data back to Claude’s MCP Client.
  7. Claude receives the MCP Server’s response and incorporates it into its own output, giving us a reply to our initial question about the View.

Setting up Tableau MCP

1. MCP Server steps:

In this walkthrough, we’ll cover how to run Tableau MCP locally in a container using the streamable-http transport method, which enables remote access by multiple clients. You can also run it with stdio transport and/or outside of a container if you prefer. Keep in mind that production environments require proper hardening, including deploying the MCP server with secure transport layers, end-to-end encryption, robust authentication and strict access controls to safeguard data and prevent unauthorized access. The commands below use Podman, but they translate directly to Docker as well.

We’ll start by cloning the code from Tableau’s tableau-mcp GitHub repository and then build a Tableau MCP container image via the following commands:

git clone https://github.com/tableau/tableau-mcp.git
cd tableau-mcp
podman build -t tableau-mcp .

This creates our Tableau MCP container image. To confirm the image has been created successfully, we can list our existing images:

podman image list

With the container image ready, the next step is to launch a container from it. This will run the Tableau MCP server which the LLM can connect to. To start the container, run the following command:

podman run -d --name tableau-mcp --env-file <path_to_env_file>/env.list -p 127.0.0.1:3927:3927 tableau-mcp

Let’s break down what the command above is doing:

  • podman run -d --name tableau-mcp starts a container named “tableau-mcp” running in detached mode.
  • --env-file <path_to_env_file>/env.list passes in the env.list file, which contains the environment variables Tableau MCP requires to facilitate the connection to Tableau Server. More details on the env.list file are included below.
  • -p 127.0.0.1:3927:3927 binds the container to the local loopback interface and exposes the Tableau MCP service to our host machine at port 3927; this is where our MCP server will be reachable.

In order for Tableau MCP to connect to our Tableau Cloud or Tableau Server site, we need to specify an env.list file when running the container. Tableau includes a template for the env.list file in the tableau-mcp GitHub repository (as env.example.list), which contains the environment variables below:

TRANSPORT=http <Can also be stdio>
SERVER=<Tableau Cloud/Server URL>
SITE_NAME=<Tableau site name>
PAT_NAME=<PAT name>
PAT_VALUE=<PAT value>
DATASOURCE_CREDENTIALS=<Optional - JSON with data source credentials where required >
DEFAULT_LOG_LEVEL=debug
INCLUDE_TOOLS=<Optional - List of MCP tools to include>
EXCLUDE_TOOLS=<Optional - List of MCP tools to exclude>
MAX_RESULT_LIMIT=<Optional - Limit on number of returned items>
DISABLE_QUERY_DATASOURCE_FILTER_VALIDATION=<Optional - Disable MATCH and SET validation>
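For reference, a filled-in env.list for a local test could look like the sketch below. Every value here is a placeholder; substitute your own Tableau site details and PAT, and omit or adjust the optional entries as needed.

TRANSPORT=http
SERVER=https://tableau.example.com
SITE_NAME=marketing
PAT_NAME=tableau-mcp-demo
PAT_VALUE=<your-pat-secret>
DEFAULT_LOG_LEVEL=debug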

After running the podman run command, the container should be started and the MCP server should be listening on port 3927. We can confirm this by checking the container logs:

podman logs tableau-mcp
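If you want an extra sanity check beyond the logs, the following standard Podman and curl commands confirm that the container is running and that the mapped port answers. A bare GET won’t speak the MCP protocol, so don’t read anything into the response body; any HTTP status code simply indicates the endpoint is reachable.

# Confirm the container is running and the port mapping is in place
podman ps --filter name=tableau-mcp

# Print only the HTTP status code returned by the endpoint
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3927/tableau-mcp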

2. Client-side steps:

With the MCP server up and running, the next step is to connect to it from an AI tool. This example uses Claude Code, though any Tableau MCP-compatible AI application will work.

Let’s start Claude Code and check its current MCP settings using the /mcp command:

As expected, no MCP servers are listed yet. To link Claude Code with Tableau MCP, we need to add the Tableau MCP configuration details to Claude’s settings. From the container logs output (podman logs tableau-mcp), we see that Tableau MCP server is running locally at http://localhost:3927/tableau-mcp. Let’s run the following to register the MCP server with Claude:

# Add the MCP server config to Claude Code
claude mcp add --transport http tableau http://localhost:3927/tableau-mcp
# List Claude's MCP servers to confirm the addition
claude mcp list
# Start Claude and check the available MCP Servers and their tools
claude
/mcp

With this addition, Claude should be able to see Tableau MCP and the exposed tools. Once we start Claude, we can run the /mcp command again and inspect the tools available to Claude:

Note on the client-side Claude configuration: If using Claude Desktop instead of Claude Code, you would modify the claude_desktop_config.json file with the MCP server details. This configuration file can be accessed through Claude Desktop’s Developer settings; alternatively, the MCP server can be added as a Claude Desktop extension.
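As a rough illustration only, an entry in claude_desktop_config.json follows the standard mcpServers structure. One common pattern for reaching an HTTP-based server from Claude Desktop is to bridge it through the mcp-remote package; treat the snippet below as an assumption to verify against the current Claude Desktop and tableau-mcp documentation rather than a definitive recipe.

{
  "mcpServers": {
    "tableau": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "http://localhost:3927/tableau-mcp"]
    }
  }
}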

Using the MCP Server

Now that we’ve got everything set up, we can explore a few basic examples. In particular, our focus is on retrieving metadata and admin insights about the environment, and on querying data and dashboard content.

1. Using MCP to Get Metadata About Tableau Content

To start, we can use MCP to retrieve basic metadata about Tableau content. This operation provides a clear view of available dashboards, workbooks and data sources, which, although simple, can be extended into more sophisticated workflows. To do this, we’ll have Claude list our Tableau workbooks, serving as a proof of concept that Claude can indeed retrieve data from our Tableau site. When we query our workbooks, Claude leverages the List Workbooks tool to fetch and present the results:

Next, we ask a question that requires more than one tool call — we might be curious about which dashboards receive the most views, asking Claude to output the most popular dashboards. To answer this question, Claude makes use of multiple tools and displays a list of Views for each Workbook, along with the total number of views to indicate popularity. When we enter a prompt, Claude interprets the request, determines which MCP tools to invoke and executes the necessary operations. Unless prompted to use specific tools, Claude will assess which ones are needed to fully answer the user’s prompt, as shown in the chat below:

Switching gears, let’s say we are interested in our Published Data Sources and want a breakdown of our Tableau environment’s data connections. MCP can provide an inventory with information such as data connection types (e.g., Snowflake, Excel, SQL Server), the number of workbooks connected to each published data source or ownership. One can highlight which databases are most used, how they are connected and where dependencies lie. The output indicates which tool was used, providing more transparency on what requests are made to the MCP Server. In this case, Claude invokes the “Search Content” tool to pull the data about our Published Data Sources:

2. Analyzing Tableau Dashboards via MCP

So far our prompt examples have dealt with metadata and administrative insights — in other words, data about what types of objects exist in our environment. But, beyond this, Tableau MCP also exposes tools to analyze and query our actual data. For example, we can ask what our dashboards are showing to help surface new insights or summarize findings.

Say we have a Workbook about travel preferences, including data on destinations, travel mode and trip ratings. Tableau MCP has tools to get View data or even to retrieve View images — basically, Claude can “look” at the dashboard images and summarize stand-out points based on the image data. Below is an example of a chat output where Claude leverages this “Get View Image” MCP tool to tell us about the top travel destinations and travel modes in the Travel Dashboards Workbook:

Depending on the AI tool you’re using, you can leverage these findings to create new content and documents as well, such as having the LLM pipe these findings into a new visualization that showcases the main points:

Looking Ahead

The travel dashboard is a very basic example, but with just a set of prompts, we’ve extracted key trends from the Tableau views and created a visual high-level overview of the results. It’s worth noting that Tableau MCP isn’t adding brand-new features. The tools it provides map to actions you could already perform in Tableau — only now, they are also accessible through an LLM, so you can ask personalized questions about your Tableau environment without leaving the AI chat. As it stands, Tableau MCP can be a useful tool for summarizing, exploring and generating new ideas.

The functionality is still fairly limited — currently, you can mainly retrieve content such as Workbooks, Views and Data Sources — but the potential for AI to interact directly with Tableau content and workflows is substantial, and we expect these capabilities to grow over time.

We’ll be watching closely as new features are added; if you’d like to explore how AI and LLMs can work with your Tableau environment, reach out! We’d love to chat.

Managing Human Risk and Preventing Data Leaks with Mimecast https://interworks.com/blog/2025/10/24/managing-human-risk-and-preventing-data-leaks-with-mimecast/ Fri, 24 Oct 2025 18:12:55 +0000

Author’s note: This is an AI-generated summary of a webinar InterWorks hosted on May 29, 2025. The main presenter was Benjamin Darsigny, Regional Sales Manager — SMB. If you want to watch the whole webinar we summarized for this piece, feel free to watch it here!

The cybersecurity landscape has undergone a fundamental transformation over the past five years. With the shift to hybrid work models and the explosion of collaboration tools like Slack, Teams and Zoom, the way employees communicate and share data has evolved dramatically. Unfortunately, so have the risks. While organizations have invested heavily in sophisticated defenses for networks, devices and applications, one critical vulnerability remains largely unaddressed: people.

According to industry research, 68% of breaches involve a non-malicious human element. Yet despite this alarming statistic, over 80% of security spending focuses on protecting devices, networks and applications rather than the people using them. This disconnect leaves organizations dangerously exposed. Attackers know that no matter how advanced your technology is, it only takes one person making one mistake for them to be successful.

The Changing Nature of Work

The transition to hybrid and remote work hasn’t just changed where people work. It’s fundamentally altered how they collaborate and share information. Email, once the primary method of communication, now accounts for only about 28% of an employee’s workday. The rest is spent in other applications, collaborating through tools like SharePoint, OneDrive and various messaging platforms.

These new communication channels come with different contexts and conventions. Slack messages don’t read like email messages. They include slang, emojis and reactions that create great opportunities for collaboration but also represent new avenues for compromise. People tend to let their guard down more in these informal channels than they do with traditional email, creating additional security risks that organizations must address.

Enter Human Risk Management

Recognizing these evolving challenges, Mimecast has re-engineered its platform to focus entirely on what it calls the “human layer.” The company has made several strategic acquisitions over the past 18 to 24 months to build out a comprehensive human risk management platform. These include Elevate Security for awareness training and risk management, Code42’s Incydr solution for insider risk management, and AWARE for AI-driven data governance and compliance across collaboration platforms.

This expanded platform delivers three core pillars of value. First, it measures human cyber risk to provide visibility into risky behaviors and targeted attacks. Second, it empowers people through real-time training and feedback to help users make better decisions. Third, it protects what matters most with adaptive policies and advanced detection to prevent breaches before they happen.

At the heart of this approach is the Human Risk Command Center, a consolidated view of all risk signals gathered from Mimecast tools and other cybersecurity tools in an organization’s environment. The platform doesn’t just rely on Mimecast data. It pulls signals from endpoint solutions like CrowdStrike and Microsoft Defender, creating a comprehensive risk score for each user based on their actions, the attacks targeting them and the access they have.

A Different Take on Training

Traditional annual compliance training has proven insufficient for addressing modern threats. Mimecast takes a different approach, aiming for short, engaging content that people actually enjoy so they retain relevant security information almost by accident. The company has developed hundreds of TV-quality micro-learning videos, some as short as 10 seconds, that deliver targeted content at the right time.

These behavioral nudges engage users when risky actions are actually taking place rather than weeks later during a scheduled training session. The system is fully automated based on what users are doing, providing consistent education perfectly tailored to individual behavior. Additionally, personalized risk scorecards give users visibility into how their actions impact the organization as a whole, empowering them to take ownership of their role in keeping the organization safe.

One Mimecast customer, CrowdStrike, saw impressive results from this approach. Within just three months of implementing insider micro-training videos, they experienced a 13% drop in personal Google Drive usage and a 36% reduction in low to moderate risks, all without any hands-on management from their security team.

Rethinking Data Loss Prevention

Perhaps the most innovative aspect of Mimecast’s platform is its approach to data loss prevention through the Incydr solution. Traditional DLP tools have earned a reputation for being overly complex, resource-intensive and frankly not worth the effort. They typically require significant time and resources to deploy, and the return on investment can feel underwhelming because organizations must define perfect policies upfront, essentially making assumptions about their data before they can gain any visibility.

Incydr was designed from the ground up to address these challenges. Unlike legacy solutions, it doesn’t require complex policies before deployment. Instead, it can be implemented in just three to four weeks and provides complete visibility out of the box. The solution monitors all file movements across endpoints, browsers, email and cloud applications, collecting around 180 billion data points every 90 days to provide a comprehensive view of the data protection landscape.

The platform uses scenario-based analysis to identify both known and unknown risks. For known risks, organizations can create simple rules with just a few clicks. But for unknown risks, Incydr’s prioritization model surfaces hidden threats in a transparent, actionable way. Because the system watches every file movement and understands what sanctioned versus unsanctioned behavior looks like, it can make educated guesses backed by AI about what should and shouldn’t be happening.

Flexible Response Options

One of the most compelling aspects of Incydr is its flexible approach to incident response. Rather than defaulting to aggressive blocking that can disrupt legitimate workflows and create friction between security teams and employees, the platform enables tailored responses based on risk severity.

For low-risk incidents, automated micro-training videos educate users and correct behavior without involving the security team. Moderate-risk incidents can be documented and investigated with detailed information about file movements to support in-depth analysis. Only for high-risk incidents does the system take immediate action like blocking data transfers or revoking access.

This graduated approach prevents the common problem of organizations running DLP tools in monitor-only mode because false positives are too disruptive. It also addresses modern data exfiltration methods that go far beyond USB drives, including AirDrop, GitHub, Salesforce and generative AI platforms.

Looking Ahead

As AI continues to evolve, so does the cybersecurity arms race. Attackers are already using generative AI to improve efficiency in creating phishing attacks and other threats. Organizations must ensure their defenses incorporate AI in tangible, effective ways rather than just stamping AI on existing solutions. Mimecast continues to invest in AI-driven protections, including natural language processing for email analysis and cross-platform analysis capabilities that use AI to determine which filters to apply and when.

The modern work environment demands a new approach to security, one that recognizes people as both the greatest vulnerability and the strongest defense. By providing visibility into human risk, empowering users with timely education and protecting data across the entire work surface, Mimecast’s human risk management platform helps organizations turn targeted users into proactive defenders, accidental users into safe operators and risky behaviors into secured operations. In an era where 68% of breaches involve human error, securing the human layer isn’t optional. It’s essential.

If you want to see the webinar that inspired this post, check it out here!

5 Ways to Fail with AI https://interworks.com/blog/2025/10/23/5-ways-to-fail-with-ai/ Thu, 23 Oct 2025 18:59:41 +0000

You’ve seen the stat: 95% of generative AI pilots deliver zero measurable business returns. According to MIT’s 2025 “GenAI Divide” report, only 5% of companies are finding success. This isn’t a complete surprise: They are called pilots for a reason, and new technologies often take time to find their footing. Nevertheless, there’s something to be learned here.

What separates the winners from the other 95% stuck in “pilot purgatory”? It’s not the AI models. It’s not a lack of tech talent. It’s something basic and at the foundation of each project: Flawed strategy, broken workflows and organizational design failures.

Here’s how to ensure your glorious AI failure:

1. Fall in Love with the Magic

The key to failure: Treat your shiny new LLM like a plug-and-play miracle. Get mesmerized by GPT demos and assume that buying access to a powerful model is the finish line. Hand it to your IT team, have them plug it in and wait for the magic.

Why this tanks: The MIT report is blunt: 95% of pilots fail because of “flawed enterprise integration” and “lack of fit with existing workflows.” One CIO put it perfectly after seeing dozens of AI demos: “Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

Pretend that you’re building a car in your garage. You’ve just bought a powerful engine, and it’s mounted on an engine hoist dangling over your bare chassis. Clearly, you aren’t ready to race just yet. You’ve bought a brilliant motor, sure, but you don’t have the transmission, wheels or steering column yet. The model is powerful, sure, but it needs context management and tools. Without the unglamorous work of APIs, data pipelines, security protocols and process redesign, it’s just an expensive noise machine.

What the 5% do instead: They narrow the scope so they can obsess over the required plumbing first. They start with a departmental or use-case-specific solution, map exact workflows, identify friction points and design integrations. They solve business problems where AI happens to be the best tool and don’t just slap AI on large, vague problems.

2. Slap AI on your Existing Roadmap

The key to failure: Our friends at Hex recently posted about their bitter lessons from building with AI. Tell me if you’ve heard this story before: Pour money into customer-facing AI projects in sales and marketing. Prioritize initiatives that generate great press releases and excite the board. Bonus points if success is nearly impossible to measure.

Why this tanks: The MIT report shows a clear “investment bias” where companies allocate over 50% of AI budgets to high-visibility, top-line functions that consistently fail. Meanwhile, “successful projects focus on back-office automation.”

A great success example is in the legal field. Law is one of the few areas delivering consistent ROI because it’s text-based (perfect for LLMs), back-office focused and brutally simple to measure: Fewer review hours equals immediate savings.

The 95% are performing “Innovation Theater,” where AI pilots are more marketing tools than transformative operational investments or genuine user enablement.

What the 5% do instead: They mine the back office for gold. They start with legal, finance, compliance and admin. These highly structured processes are perfect for building new AI workflows where automation delivers immediate, quantifiable savings. Less sexy, infinitely more profitable.

3. Build a Tool That Never Learns or Evolves

The key to failure: Deploy your AI like traditional enterprise software. Plan, build and launch as a finished, static product. Walk away and expect it to keep working without any feedback loops, user training or continuous improvement.

Why this tanks: It’s treating AI as a project instead of a product. This is the heart of the GenAI Divide. As MIT puts it: “The core barrier to scaling is not infrastructure, regulation or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context or improve over time.”

You’re applying an outdated mental model. Not only is this a new technology where it is pure hubris to think you’ll get it right the first time, it’s also a non-deterministic technology. AI is not predictable, and the underlying models often change over time. AI systems are dynamic engines that need continuous learning from user interactions, feedback and organizational data. Without that, they stay stuck at day-one performance while user needs evolve.

No feedback collection. No AI trainers. No human-in-the-loop reviewers. No monitoring for model drift. You’ve built a static masterpiece that can’t adapt — and users will abandon it faster than you can say “ChatGPT is better.”

What the 5% do instead: They build for learning from day one. They budget for feedback loops, prompt evaluations, observability, data curation and user interviews. They measure rate of improvement, not just launch dates. They create the operational structure that will help the AI tools improve over time.

4. Build Everything Yourself

The key to failure: Embrace a “Not Invented Here” mentality. Insist on building proprietary AI systems in-house, especially if you’re in a regulated industry. Cite compliance and security concerns while embarking on an 18-month, multimillion-dollar journey to reinvent wheels.

Why this tanks: Take a moment to absorb this stat: externally procured AI tools and partnerships succeed 67% of the time. That’s twice the success rate of internal builds. Yet companies, especially in regulated sectors, keep choosing the path that’s statistically twice as likely to fail.

By betting on your own custom solutions for everything, you’re trading proven expertise, accumulated experience and focused R&D from specialized vendors for a low-probability shot at imagined perfection. Meanwhile, your competitors partner with vendors and go from pilot to production in 90 days while you’re still in month six of requirements gathering.

What the 5% do instead: They default to partnerships with specialized vendors. These companies have already solved the integration challenges, compliance hurdles and learning gaps across dozens of implementations. More importantly, they save the custom work for the truly impactful and unique aspects of your company.

5. Crush Your Employees’ Grassroots AI Experiments

The key to failure: When you discover that your employees are using ChatGPT and Claude to get work done, shut it down. Label this “Shadow AI” as an internal rebellion that is nothing more than a security threat. Block the tools, write stern policy memos and discipline anyone caught using personal AI subscriptions for work.

Why this tanks: I’ve seen this story before. Tableau gained popularity as a true “land and expand” product. Some of my earliest customers had simply put a license on their Amex, and one was running a secret Tableau Server under their desk. For a long time, Tableau was seen as a threat to data security and the “one source of truth” companies have sought for decades. It turned out that empowering users to answer their own questions with data was extremely powerful and got more results than simple report factories with months of backlogged requests. The same is happening with AI.

More employees, by far, use AI for work than have work-supplied access: 90% of employees regularly use LLMs, while only 40% of companies have purchased official AI subscriptions. This massive gap reveals widespread use of personal tools for work tasks. And here’s the key insight: this unsanctioned “Shadow AI” often delivers “better ROI than formal initiatives” and “reveals what actually works.”

Your employees are running hundreds of free, real-time micro-pilots every day. They’re validating use cases, identifying high-value workflows and pinpointing exactly where formal AI solutions could deliver the most impact. They’re doing your R&D for free, and you’re shutting it down.

Think of Shadow AI like desire paths, those dirt trails people create by walking the most efficient route instead of using the planned sidewalks. They’re a user-generated map of efficiency. Paving them over is organizational self-sabotage.

What the 5% do instead: They embrace Shadow AI as strategic intelligence. They provide secure, enterprise-grade tools so employees can experiment safely. Some provide clear “AI Stipends” to fund access to a wide range of tools. Some give access to platforms like OpenRouter, which provide all major AI models with access, data retention and security controls. They provide clear guidelines and playgrounds for deploying solutions, accessing data and experimenting safely. Then, they obsessively study usage patterns to understand which tasks are being automated, which prompts solve real problems and where formal AI investments should go. The 5% follow the desire paths instead of destroying them.

The Bottom Line: It’s a Leadership Gap, Not a Tech Gap

The GenAI Divide isn’t about having better models or more data scientists. It’s about having better strategy and organizational alignment.

The 5% who succeed understand they’re building a new organizational capability in an emerging technology field. This requires workflow integration, continuous learning, smart partnerships and grassroots insights.

So, here’s your choice: you can follow the natural path by embracing these five keys to failure and join the 95% with expensive science projects and nothing to show for them. Or flip the script and build something that can empower your employees, make lives better, and create real value. Just remember it takes more than technology. It takes leadership too.

The New Age of Microsoft 365 Data Protection https://interworks.com/blog/2025/10/23/the-new-age-of-microsoft-365-data-protection/ Thu, 23 Oct 2025 15:42:09 +0000

Author’s note: This is an AI-generated summary of a webinar InterWorks hosted on May 29, 2025. The main presenter was Jon Nash, VDC Microsoft 365 Solution Engineer. If you want to watch the whole webinar we summarized for this piece, feel free to watch it here!

When it comes to protecting Microsoft 365 data, many organizations operate under a dangerous misconception. They assume that because Microsoft hosts their email, OneDrive files and SharePoint sites in the cloud, those assets are automatically protected. The reality is quite different, and it’s spelled out clearly in Microsoft’s shared responsibility model: Microsoft is responsible for the service, but you’re responsible for your data.

This distinction matters more than many IT teams realize. While Microsoft ensures the platform stays online and operational, the responsibility for backing up and protecting that data falls squarely on the customer. That means having a backup solution isn’t just good practice. It’s essential insurance against accidental deletions, malicious attacks and compliance violations.

The Evolution of Cloud Backup

Veeam has been in the Microsoft 365 backup space for nearly a decade, but the company’s vision extends far beyond just protecting email and files. The Veeam Data Cloud platform initially focused on Microsoft 365 and Azure virtual machines, but today it encompasses a much broader ecosystem. The platform now includes Veeam Vault for backup and replication storage, Entra ID and Salesforce backups, and soon will add Kubernetes backup capabilities through Kasten.

This expansion reflects a fundamental shift in how organizations work. The platform aims to become a comprehensive cloud data protection solution, not just another point product. It’s an ambitious goal backed by significant investment, but one that addresses the increasingly complex reality of modern IT environments.

Why Native Tools Aren’t Enough

The existence of recycle bins, retention policies and Microsoft Purview leads some organizations to question whether they really need third-party backup solutions. But these native tools weren’t designed for comprehensive data protection. Recycle bins have limited retention periods. An employee cleaning up storage space might accidentally delete compliance-critical data and empty the recycle bin, leaving no recovery option. Someone reducing storage costs could eliminate information that becomes necessary months later for legal or regulatory purposes.

The financial stakes are substantial. Over the past decade, business email compromise attacks alone have cost organizations $55 billion. Insider incidents, whether malicious or accidental, average between $15 million and $16 million per event. These aren’t abstract statistics. They represent real organizations facing real consequences from data loss they couldn’t recover.

A Modern Approach to Protection

Veeam Data Cloud for Microsoft 365 takes a distinctly modern approach to backup, one built with security as a foundational principle rather than an afterthought. The platform has achieved SOC 2 Type 2 certification and maintains multiple ISO certificates, with FedRAMP certification in progress. These aren’t just compliance checkboxes. They represent rigorous third-party validation of security practices that organizations can review at Veeam’s trust center.

The platform currently protects over 23.5 million users across both the cloud platform and traditional on-premises offerings. But perhaps more importantly, it’s designed to eliminate infrastructure headaches. Organizations don’t need to deploy servers, configure proxies, provision storage or manage updates. Veeam handles all of that, allowing IT teams to focus on backup policies and recovery procedures rather than infrastructure maintenance.

Express and Flex: Purpose-Built Solutions

At the heart of Veeam’s offering are two complementary technologies. Express, built on Microsoft’s Backup Storage API, functions as a disaster recovery solution designed for speed. It can restore data at rates between one and three terabytes per hour, leveraging what amounts to a Microsoft superhighway that bypasses normal throttling limitations. Currently, Express covers Outlook, OneDrive and SharePoint sites, with Teams support on the roadmap. The tradeoff for this speed is granularity. Express recovers entire mailboxes, entire drives or complete sites rather than individual items.

That’s where Flex comes in. Flex, Veeam’s traditional backup technology, provides the flexibility its name suggests. Organizations can configure retention periods from days to centuries, recover individual emails or files and conduct granular searches across backup data. Flex also allows customers to choose their Azure storage region and provides a unique exit strategy. If an organization decides to leave Veeam, they can assume ownership of their Azure storage account and continue using Veeam’s free community edition explorers to access backup data for the entire retention period they’ve maintained.

This exit strategy sets Veeam apart in a market where many vendors effectively lock customers into their platforms. The ability to leave without losing access to historical backup data provides peace of mind that’s rare in the SaaS world.

Intelligent Recovery Options

The platform’s recovery capabilities reflect thoughtful design around real-world scenarios. For bulk disasters, the purpose-built recovery tool leverages Express to restore massive amounts of data quickly. For day-to-day operations, administrators can recover at various levels of granularity from entire mailboxes down to individual emails or files.

The system includes practical touches that acknowledge how people actually work. Background download options let administrators start a recovery late Friday afternoon and retrieve the results Monday morning without watching progress bars. Flexible targeting means recovering a departed employee’s mailbox into their replacement’s account or a manager’s mailbox for review. Advanced options control everything from version handling to sharing permissions, giving administrators fine-tuned control when they need it without cluttering the interface for simple operations.

Security and Governance Built In

Access to backup data requires more than just passwords. The platform mandates multifactor authentication for all users, including Veeam personnel. Every action creates auditable events, from browsing backup data to previewing emails. This granular auditing addresses a real concern: backup administrators can potentially access sensitive information, and organizations need visibility into who’s viewing what.

Role-based access controls allow organizations to create tiered access models. Help desk teams might access most backup data but not executive mailboxes. Self-service capabilities can let end users recover their own emails and OneDrive files without involving IT. Group-based role assignments make permission management scalable and maintainable. Changes take effect immediately, even for users already logged into the platform.

The SaaS Advantage

Perhaps the most underappreciated aspect of SaaS backup solutions is time savings. Traditional backup systems require ongoing maintenance: patching, updating, monitoring infrastructure and troubleshooting issues. These tasks consume hours that could be spent on strategic projects. With a SaaS platform handling infrastructure, updates and monitoring, administrators reclaim that time. The difference might not feel dramatic initially, but it compounds over months and years into capacity for new projects and process improvements.

One real-world example illustrates the stakes clearly. A SharePoint administrator needed to create a new site but had no available storage quota. They found the oldest SharePoint site, assumed the data was obsolete and deleted it from both the primary and secondary recycle bins using PowerShell. A week later, they discovered the site contained active client files. Without backup, the organization faced legal exposure that likely exceeded the cost of implementing a backup solution many times over.

Looking Forward

The modern workplace depends on cloud collaboration tools in ways that would have seemed impossible a decade ago. Email represents only about 28% of most employees’ workdays, with the rest spent in Teams, SharePoint, OneDrive and various other platforms. Protecting this distributed work environment requires solutions designed specifically for cloud services, not retrofitted from on-premises thinking.

As organizations continue migrating to cloud platforms, the shared responsibility model becomes increasingly important to understand and act upon. Microsoft will keep the lights on, but protecting the data that powers your business remains your responsibility. The question isn’t whether to implement backup for Microsoft 365. It’s whether you’ll implement it before you need it or after you’ve learned an expensive lesson about the true cost of data loss.

How I Made Poker in Sigma https://interworks.com/blog/2025/10/21/how-i-made-poker-in-sigma/ Tue, 21 Oct 2025 19:08:23 +0000

How I Made Poker in Sigma

One day, after exploring Sigma’s app builder features for about a month, I had a eureka moment. I walked up to my coworker Josias and said, “I am going to try to make a game in Sigma.” I could not think of a more perfect way to test the tool.

When I found a deck of cards dataset in our company Snowflake Sandbox, I knew I had to try to make a Texas Hold’em simulator. By leveraging Input tables, dynamic text and conditional action sequences, I quickly transformed my idea into a functional simulation.

[Embedded Sigma dashboard: the Texas Hold’em simulator]

Call Stored Procedure 

The first step in any card game is dealing the cards. For our poker simulator, that means randomly generating a set of cards from a 52-card deck without replacement. While Sigma doesn’t have a native function for this, we can leverage the power of a stored procedure from the data warehouse. I wrote a simple SQL procedure to handle the randomization and made sure it had the correct permissions to be accessed from Sigma. Then I used the “Call Stored Procedure” action to simulate dealing all the cards necessary for the game.  

Call Stored Procedure in Sigma

It is important to make sure the stored procedure returns its result as an array so it can be placed into a single cell.
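To make this concrete, here is a minimal sketch of what such a procedure could look like in Snowflake SQL. The names here are hypothetical and the actual procedure may differ, but the sketch shows the key idea: shuffle the 52 card IDs, deal without replacement and return the result as a single array.

-- Hypothetical sketch of a card-dealing procedure (Snowflake SQL)
-- Shuffles the 52 card IDs and returns the first num_cards as one array
CREATE OR REPLACE PROCEDURE deal_cards(num_cards NUMBER)
RETURNS ARRAY
LANGUAGE SQL
AS
$$
DECLARE
  dealt ARRAY;
BEGIN
  SELECT ARRAY_SLICE(
           ARRAY_AGG(card_id) WITHIN GROUP (ORDER BY RANDOM()),
           0, :num_cards)
    INTO :dealt
    FROM (SELECT SEQ4() + 1 AS card_id
          FROM TABLE(GENERATOR(ROWCOUNT => 52)));
  RETURN dealt;
END;
$$;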

Now that we have an array with randomly generated card IDs (in this case just numbers 1 – 52), we can start building the rest of the game. If you are confused as to how this works or want to learn more about how to use Sigma actions, check out my guide.  

Clean the Array

Before we can actually use the information generated from the stored procedure, we need to parse the array and convert it into a tabular format.  

Sigma formula for array

I used a combination of SplitPart (which functions the same as Split in Tableau) and RowNumber to split the string and pivot it automatically. The Replace() functions help clean up extraneous text from the array.
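As an illustration (with a hypothetical column name, not the exact formula from the workbook), the parsing logic looks roughly like this: strip the array brackets with Replace, then use RowNumber as the position argument to SplitPart so that each row pulls out a different card ID.

SplitPart(Replace(Replace([Dealt Cards], "[", ""), "]", ""), ",", RowNumber())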

Leverage Dynamic Text and Image URLs

Based on the cards drawn from the stored procedure and a data table containing information on all 52 cards, we can create a table outlining the information on the cards in play.

Table creation in Sigma

With our cards now in a usable table format, it’s time to bring the game to life visually. I started by finding some open-source poker card images hosted on public URLs. This was a critical step, as one of Sigma’s coolest features is the ability to integrate dynamic text directly into an image object. This lets you assign a formula to the image itself, allowing it to change dynamically based on the data. 

Sigma dynamic text

Now we can have a single image element display either the back of a card or a specific card from the deck, dynamically based on the player or phase of the game. This gives us the ability to reveal cards based on the current phase of the game and also to “deal” new cards each time we refresh the results from the stored procedure.

All the Controls 

As BI practitioners, we usually work with traditional “tall” data tables, but in Sigma, I often find myself gravitating towards using a few “wide” input tables. I use these as dynamic reference tables, similar to how I use controls, but with a little more flexibility. I created a table to track chip counts for each player and the pot using this technique.

Dynamic reference tables Sigma

By configuring this as a table, I can easily visualize how many chips each player and the pot have based on the data in this element, which would not be possible if this was a control and would be a hassle if this was a vertical table.   

When using “wide” tables like this, I’ve learned a best practice: assign a unique ID column to the single row of these input tables. Then, create a control that holds this ID. This method makes writing and managing action formulas much more consistent and reliable. 

On the other hand, to manage the flow of the game, I created a segmented control. Now we can model and reference the active phase of the game. Additionally, by using the “Set Control” action, we can easily update the currently active phase.

Sigma Set Control

Conditional Action Sequences 

Conditional action sequences are easily one of Sigma’s most powerful features, and they were the key to making this poker simulation effective. I used a series of conditional action sequences that update the status of the chips table based on who is betting, and a similar system to update the chips based on who won. The core logic of the game would not be functional without conditional action sequences. Notice how the top bar indicates a custom formula for the condition.

Sigma conditional action sequences
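As a simple illustration, the condition for a sequence that awards the pot might be a custom formula checking the phase control, something like the hypothetical example below (the actual names depend on how your controls are set up):

[Game Phase] = "Showdown"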

Using Customizable Page Visibility to Keep Hands Hidden 

Of course, a poker game isn’t a game if everyone can see the cards! To prevent players from cheating, I used Sigma’s custom page visibility feature. I created separate, private pages for each player and configured the visibility to only show a player’s hand on their designated page. In this demo, I made a private page for Player 2 (my friend, Josias) where he can secretly peek at his cards without anyone else seeing them. 

Customize page visibility in Sigma

Potential for More 

This poker simulator is simply a proof of concept to showcase some of the insane capabilities Sigma allows. As such, I did not add every possible feature one would expect to fully automate the poker playing process. That being said, here are some additional features which Sigma would certainly support:  

Betting Phases, Check and Call 

It would be relatively easy to add more phases for each player’s bets and prompt them to complete a modal before moving to the next player.

Folding 

This would be pretty easy to implement but would take a bit of work to add player fold indicator controls and then change which cards are shown based on whether a player has folded.

Automatically Determining the Winner 

This one would be more complicated, involving a folding indicator and a series of lookup tables. Also, ties are technically possible in poker, and my proof of concept does not support them.

Post-Game Report 

If we kept track of the changes to each player’s chip pool, instead of or in addition to updating the main row, we could visualize how the chip distribution changed throughout the rounds of the game.

Honestly, it is difficult to think of features that Sigma would not be able to handle.  

I was so impressed by how quickly this came together, how effective it was and how good it looked. Sigma is a powerful tool if you know how to use it. What features do you think would make this app even cooler? What project should we make in Sigma next?

Smarter Cybersecurity with InterWorks and ArmorPoint https://interworks.com/blog/2025/10/21/smarter-cybersecurity-with-interworks-and-armorpoint/ Tue, 21 Oct 2025 14:56:16 +0000

Author’s note: This is an AI-generated summary of a webinar InterWorks hosted on May 29, 2025. The main presenter was John Crowley, Partner Development Manager. If you want to watch the whole webinar we summarized for this piece, feel free to watch it here!

Modern cybersecurity has reached a critical juncture. The old playbook of deploying security tools and hoping for the best no longer works. Today’s attackers leverage artificial intelligence to craft convincing phishing campaigns, mimic legitimate user behavior, and stay hidden in networks for an average of 277 days before detection. That’s more than nine months of undetected access to your systems, data, and operations.

For organizations already managing cybersecurity, the challenge extends beyond just identifying threats. Security teams face overwhelming alert volumes, disconnected tool sets that don’t communicate effectively, and the constant pressure to maintain compliance while keeping pace with evolving threats. The question becomes less about whether to invest in security and more about how to build something that actually works.

The Real Cost of Modern Threats

The statistics around modern cyber attacks paint a sobering picture. Breaches cost organizations millions of dollars in direct remediation costs, but the hidden costs often prove more damaging: reputation damage, customer trust erosion, and in some cases, business closure. Organizations that have already experienced breaches understand these costs intimately. Those that haven’t are playing a dangerous game of when, not if.

What makes the current threat landscape particularly challenging is the sophistication of attacks. Threat actors use AI to bypass anomaly detection systems. They craft phishing emails convincing enough to fool even trained employees. Once inside a network, they move laterally, mapping systems and escalating access privileges while remaining undetected for months.

The most concerning statistic: attackers can remain hidden in your environment for an average of 277 days before detection. During that time, they’re not idle. They’re learning your systems, identifying valuable data, and positioning themselves for maximum impact when they strike.

Where Security Programs Fall Short

Most organizations face similar challenges in their cybersecurity programs, regardless of industry or size. These gaps create vulnerabilities that sophisticated attackers are quick to exploit.

Overwhelmed Security Teams: Even organizations with dedicated security personnel struggle with excessive alert volumes. When teams receive hundreds or thousands of alerts weekly, alert fatigue sets in. Genuine threats get lost in the noise, and critical incidents go unnoticed until significant damage occurs.

Disconnected Tool Sets: Organizations typically deploy multiple security solutions over time, each addressing specific needs. The problem: these tools often don’t communicate effectively with each other. A firewall sees one thing, an endpoint protection system sees another, and the SIEM sees a third. Without correlation between these data sources, security teams miss the patterns that indicate coordinated attacks.

Compliance Gaps: Pressure to meet specific regulatory frameworks sometimes leads to checkbox implementations where tools get deployed without proper configuration or integration. The result is a false sense of security where compliance boxes are checked but actual protection remains inadequate.

The fundamental issue: organizations can’t defend what they can’t see. Without comprehensive visibility across the entire environment, threats slip through undetected.

The Three Layers of Complete Protection

Effective cybersecurity requires three distinct but interconnected layers. Think of it like protecting your home. You need prevention measures to keep intruders out, detection capabilities to know when someone bypasses those measures, and response protocols to handle incidents when they occur.

Prevention Layer: This includes foundational security measures like software patching, firewalls, security awareness training, and antivirus protection. These tools create barriers that keep common threats at bay. However, sophisticated attackers know how to bypass traditional prevention protocols, which is why the next layers are critical.

Detection Layer: When prevention measures fail, real-time threat detection becomes essential. Modern detection leverages AI and machine learning to identify suspicious activities that indicate someone has breached your defenses. This includes Security Information and Event Management (SIEM) systems, Intrusion Detection Systems (IDS), log monitoring, and threat intelligence. The key is centralizing all this information so patterns become visible.

Response Layer: Detection without response is pointless. This layer includes 24/7 security operations center (SOC) monitoring, forensic investigation capabilities, incident management protocols, and coordinated response procedures. When threats are detected, trained cybersecurity professionals need to know exactly what to do and how to do it quickly.

Without all three layers working together cohesively, organizations leave gaps that attackers exploit. The most sophisticated attackers specifically probe for these gaps, looking for the weakest links in security chains.

ArmorPoint’s Security Operations Platform

ArmorPoint positions itself as more than just a managed SOC provider. The company describes its offering as “cybersecurity as a service,” and that distinction matters. While managed SOC services form the centerpiece of what ArmorPoint does, the broader approach addresses multiple aspects of cybersecurity program management.

At the core sits a custom-built SIEM platform designed from the ground up to function as a security operations platform rather than just a log aggregator. This platform centralizes data and events from across entire environments, including endpoints, servers, cloud infrastructure, and network devices. The system provides real-time threat detection and correlation capabilities that help security teams understand not just what’s happening, but why it matters.

What sets ArmorPoint’s platform apart is its role as more than just a detection tool. It functions as a collaboration platform where security teams can work together on investigations, share insights, and coordinate responses. Some managed security service providers (MSSPs) leverage ArmorPoint’s platform in the background specifically for these collaboration capabilities.

The platform operates from geographically redundant data centers across the United States, with ArmorPoint owning infrastructure down to backup generators at their primary Phoenix facility. For organizations with European operations requiring GDPR compliance, the company maintains dedicated infrastructure in Ireland.

The Human Element: 24/7 US-Based SOC

Technology alone doesn’t stop sophisticated attacks. ArmorPoint emphasizes what they call the “human verification process” to distinguish their approach from purely automated security solutions. While automation plays a critical role in detecting and initially responding to threats, human analysts provide the verification and context that automation can’t deliver.

Here’s a practical example: An endpoint detection and response (EDR) tool might detect malware when someone clicks a malicious link. The automated system blocks the malware and generates an alert. That’s valuable, but it’s only part of the picture. The human verification process takes it further by asking critical questions: Where did this threat originate? How did it reach this user? Are there other compromised systems? What data might have been accessed?

ArmorPoint’s SOC operates 24/7, 365 days per year, staffed entirely by US-based security analysts and threat intelligence specialists. These teams don’t just monitor alerts. They investigate, correlate events across systems, and coordinate incident response activities. When serious incidents occur, they stand up emergency call bridges and work directly with clients through every stage of containment and remediation.

The company has also developed a mobile application specifically for SOC team members, enabling real-time alerts, streamlined incident management, and enhanced collaboration even when analysts are away from their desks. This mobility ensures continuous monitoring and rapid response regardless of circumstances.

Beyond Monitoring: Comprehensive Cyber Services

The “cybersecurity as a service” model means ArmorPoint can engage organizations at whatever stage of their security journey they’re currently in. Some organizations need comprehensive managed SOC services. Others might need help with specific challenges like compliance, vulnerability management, or security awareness training.

Risk Assessment and Compliance: ArmorPoint assists with penetration testing, vulnerability assessments, business impact analysis, business continuity planning, and incident response planning. These services help organizations understand their current security posture and identify areas requiring attention.

Security Awareness Training: The company offers what it calls “human risk management” programs, recognizing that end users remain the weakest link in many security chains. Training programs teach employees to recognize threats and understand proper incident reporting procedures. For organizations with compliance requirements, ArmorPoint also provides policy development and training on those policies.

Incident Response: When breaches occur, ArmorPoint provides full remediation services, including forensic investigations, system quarantining, and complete incident management. The company’s approach includes standing up emergency response infrastructure and guiding clients through every phase of incident containment and recovery.

The company supports organizations across all verticals and sizes, from small and medium businesses to enterprise-level deployments. Client diversity spans healthcare, higher education, retail, and numerous other sectors, each with unique compliance and security requirements.

The Automation Question

One of the most frequently asked questions about modern cybersecurity revolves around automation. How much should organizations rely on automated responses versus human decision-making?

ArmorPoint’s position: automation should play a specific and critical role, but organizations should never rely on it exclusively. Automated systems excel at rapid detection and immediate response to known threat patterns. When someone clicks malware, automated EDR systems can block execution instantaneously, far faster than any human could react.

However, automation has limits. An automated system can report that it blocked a threat, but it can’t answer the deeper questions that matter for comprehensive security. Understanding where the risk originated, identifying how the attack vector reached the user, and determining whether other systems are compromised all require human analysis and investigation.

This philosophy underpins what ArmorPoint calls the human verification process. Automation handles initial detection and response, while trained analysts verify those actions, investigate root causes, and implement additional measures to prevent recurrence. This balanced approach provides both the speed of automation and the depth of human expertise.

Measuring Security Investment ROI

Calculating return on investment for cybersecurity presents unique challenges. The most significant ROI comes from breaches that never happen, making the value difficult to quantify. Organizations must instead think about two key cost categories.

First, what would a breach cost your organization? This calculation includes direct remediation expenses like forensic investigation, legal fees, notification costs, and potential ransom payments. But indirect costs often dwarf direct expenses. Reputation damage, customer trust erosion, regulatory fines, and potential business closure all factor into the true cost of security incidents.

Second, what would building equivalent security capabilities internally cost? Organizations need to calculate the expense of deploying and maintaining comprehensive security tool sets, plus the cost of hiring and retaining qualified security analysts to monitor systems 24/7. Security analyst positions often see high turnover, creating additional recruitment and training costs.

For most organizations, the math strongly favors partnering with specialized security providers like ArmorPoint. The alternative requires significant capital investment in technology, ongoing operational costs for maintenance and licensing, and the considerable expense of building and maintaining a skilled security team.

What Network Visibility Really Means

When organizations think about security visibility gaps, network visibility consistently ranks as the biggest concern. This makes sense when you consider that network-level visibility requires understanding not just what’s happening on individual endpoints or servers, but how all these components communicate with each other and external systems.

Comprehensive network visibility means understanding traffic patterns, identifying unusual communication paths, detecting lateral movement within your environment, and recognizing when legitimate credentials are being used in unauthorized ways. It requires correlating network-level data with endpoint activity, user behavior, and application logs to create a complete picture of what’s normal versus what’s suspicious.

ArmorPoint’s platform addresses network visibility by centralizing telemetry from across the entire environment. Rather than having network devices, endpoints, servers, and cloud infrastructure generating separate logs that live in different places, everything flows into a single system where correlation and analysis can happen effectively.

Getting Started: The First Conversation

For organizations interested in exploring ArmorPoint’s services, the engagement process begins with a straightforward conversation. The first discussion typically runs about 30 minutes and focuses on understanding current challenges: Where does your security program feel weakest? What keeps you up at night? What compliance requirements are you trying to meet?

From that initial conversation, ArmorPoint can identify areas where they can provide the most value and recommend appropriate next steps. For some organizations, that might mean comprehensive managed SOC services. For others, it could start with specific services like penetration testing, security awareness training, or compliance assistance.

The key philosophy: ArmorPoint positions itself as a cybersecurity partner rather than just a vendor. The goal is meeting organizations wherever they are in their security journey and providing the specific help they need, whether that’s comprehensive protection or targeted assistance with specific challenges.

The Evolution of Threats Demands Evolved Defenses

The cybersecurity landscape continues evolving at a rapid pace. ArmorPoint releases new features and functionality weekly to address emerging threats and improve operational efficiency. Much of the current development focuses on leveraging AI and automation to enhance analyst efficiency and improve threat detection capabilities.

The company maintains SOC 2 Type 2 certification and HIPAA HITECH certification, demonstrating its commitment to rigorous security standards for customer data. With infrastructure spanning multiple geographic regions and compliance with various international data protection regulations, ArmorPoint can support organizations with global operations and complex regulatory requirements.

For organizations currently managing security internally, the question becomes whether that approach remains sustainable as threats grow more sophisticated and compliance requirements become more stringent. The average breach detection time of 277 days suggests that many current security programs have blind spots that allow threats to persist undetected.

Smarter attacks require smarter defenses. That means moving beyond disconnected tools and overwhelming alert volumes toward integrated security operations platforms backed by skilled analysts who can separate signal from noise and respond effectively when threats emerge.

The post Smarter Cybersecurity with InterWorks and ArmorPoint appeared first on InterWorks.

How Do You Know How Much Ask Sigma Costs? https://interworks.com/blog/2025/10/14/how-do-you-know-how-much-ask-sigma-costs/ Tue, 14 Oct 2025 15:47:37 +0000 https://interworks.com/?p=70852 In a previous blog post following this series about AI, I talked about Ask Sigma, a new AI powered tool that aims to turning dashboards into smart assistants that behave like real data analysts. However, these features come with real costs. Beyond just dollars, companies...

The post How Do You Know How Much Ask Sigma Costs? appeared first on InterWorks.


In a previous blog post in this series about AI, I talked about Ask Sigma, a new AI-powered tool that aims to turn dashboards into smart assistants that behave like real data analysts. However, these features come with real costs. Beyond just dollars, companies need to consider token consumption, the sources being queried, the users generating requests and the databases most frequently accessed. Understanding these factors is crucial for managing budgets, optimizing usage and ensuring transparency across your organization. In this post, we'll explain how to set up a dashboard that lets us track Ask Sigma's usage and expenditure metrics.

Keep Accountability in Mind

A few words must be said here about security, logs and audits. By default, log entries about Ask Sigma usage are not collected in any kind of record unless the company decides to do so. This is something to remember when you decide to enable Ask Sigma in your organization. So, as soon as you enable Ask Sigma, if you care about expenditure, do not forget to install the usage dashboard as well!

Note that the data collected is very explicit and includes names, full-text questions and much more. For this reason, it's recommended to maintain a dedicated schema solely for this purpose and grant access only to system administrators or managers who require that information for their decision-making.

Install the Backbone Database 

There are a few technical requirements to check before starting. In simple terms, whoever wants to install this feature must have an administrative account in Sigma as well as (or at least) write access to its data warehouse. We assume that your Sigma environment is already functional, that it is connected to Snowflake, Databricks or BigQuery, and that it has AI enabled (check my post about how to make your Sigma instance AI-ready).

The steps are actually very simple. We will create the respective database and schema and connect Sigma to that source. Then, Sigma will run the scripts that create the views the dashboard needs.

Below is the code for Snowflake, but if your organization uses Databricks or BigQuery, the steps are equivalent. Either way, a link to the code for those platforms is available here.

  1. Create a dedicated database and schema to store Ask Sigma logs. I would suggest keeping all Sigma-related data in a single database but separate the schemas according to their purpose:
    CREATE DATABASE IF NOT EXISTS My_Database_Name; 
    
    CREATE SCHEMA IF NOT EXISTS My_Database_Name.My_Ask_Schema;
  2. Grant access to the user that Sigma uses to connect to the data warehouse:
    GRANT USAGE ON DATABASE My_Database_Name TO ROLE Sigma_Data_Reader; 
    
    GRANT USAGE ON SCHEMA My_Database_Name.My_Ask_Schema TO ROLE Sigma_Data_Reader; 
    
    GRANT CREATE TABLE, CREATE VIEW ON SCHEMA My_Database_Name.My_Ask_Schema TO ROLE Sigma_Data_Reader;

Enable the Ask Sigma Usage Dashboard 

The next steps are performed in the Sigma administration settings panel. Under AI Settings, there is a form to enable the Ask Sigma usage dashboard (see screenshot below). If you get a SQL-related error, it means your database or schema does not have write access enabled for Sigma to run the script.

Above: The form to set up the connection to the database that will store the usage logs. Once you click “Update,” Sigma will run the appropriate script to create the necessary views. 

Then, in the end, how much is Ask Sigma costing me? 

You can get an idea of what the final dashboard looks like from the recording below. You will see how Sigma keeps track of users, questions, the most-used sources and overall performance.

However, the piece of information that actually drives the cost is the total number of tokens each prompt consumes, and token expenditure depends on the LLM model your company has chosen to enable in Sigma.

 

For example, OpenAI's models count roughly one token for every four characters of text. The nice part is that they only charge per million tokens, so most prompts and replies end up costing just a tiny fraction of a cent (the official explanation on OpenAI's website is here). This means that the cost depends entirely on the LLM model your company has chosen, not on Sigma itself, because Sigma acts as a very efficient middleman that routes all your AI-related operations to your preferred AI provider (more information in my blog: How to Make Your Sigma Environment AI-Ready).
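If you want to turn that rule of thumb into numbers, here is a minimal sketch of a cost estimator in TypeScript. The four-characters-per-token heuristic comes from the explanation above, while the per-million-token prices are purely hypothetical placeholders, so swap in the tokenizer and rate card of the LLM your company has actually enabled.

  // Rough cost estimator for a single prompt/response pair.
  // Assumptions: ~4 characters per token and hypothetical per-million-token prices.
  const CHARS_PER_TOKEN = 4;

  function estimateTokens(text: string): number {
    return Math.ceil(text.length / CHARS_PER_TOKEN);
  }

  function estimateCostUSD(
    prompt: string,
    response: string,
    inputPricePerMillion: number,  // hypothetical, e.g. 2.50 USD
    outputPricePerMillion: number, // hypothetical, e.g. 10.00 USD
  ): number {
    const inputTokens = estimateTokens(prompt);
    const outputTokens = estimateTokens(response);
    return (
      (inputTokens / 1_000_000) * inputPricePerMillion +
      (outputTokens / 1_000_000) * outputPricePerMillion
    );
  }

  // Toy example: a short Ask Sigma question and a short answer.
  const cost = estimateCostUSD(
    "How are average sales and revenue trending month over month?",
    "Average sales grew slightly month over month, while revenue stayed roughly flat.",
    2.5,
    10,
  );
  console.log(`Estimated cost: $${cost.toFixed(6)}`); // a tiny fraction of a cent

The usage dashboard gives you the real token counts per prompt, so in practice you would feed those numbers in rather than estimating from character length.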

Well, that's the nutshell version of how to track both the performance and costs of your Ask Sigma wizard. If you're still curious about how tokens are computed in real time, I invite you to explore my sample dashboard, "How Much Does My AI-Powered Dashboard Cost?" which will be published in our Viz gallery very soon. There, you can see tokens and dollars being calculated after every click and watch your AI spend unfold live.

The best part is that beyond just tracking costs, Ask Sigma brings the power of AI directly into your decision-making process, turning raw data into clear insights and saving you hours of manual work. It's fast, intuitive and built to help you focus on what matters most: making smarter, data-driven choices. I look forward to hearing your ideas about it!

The post How Do You Know How Much Ask Sigma Costs? appeared first on InterWorks.

Client Relationships: Our Differentiator https://interworks.com/blog/2025/10/10/client-relationships-our-differentiator/ Fri, 10 Oct 2025 17:50:40 +0000 https://interworks.com/?p=70887 At the start of this year, I changed roles at InterWorks from technical enablement to sales. On the surface, the move seemed like it would be a big change, going from teaching how to build meaningful data visualizations and dashboarding best practices into a pure...

The post Client Relationships: Our Differentiator appeared first on InterWorks.


At the start of this year, I changed roles at InterWorks from technical enablement to sales. On the surface, the move seemed like it would be a big change, going from teaching how to build meaningful data visualizations and dashboarding best practices into a pure revenue-generating position almost overnight. But what struck me most was how similar the two roles actually are. 

Both require communication. Both require listening. Both require standing in front of clients, asking questions, learning pain points, understanding where we can provide the most value, and ultimately shaping the future paths for successful long-term data and analytics or even IT strategies. At the core, both jobs are about people. 

In a world where technological shifts are now in intervals of seconds, it’s quite refreshing to know that our mission remains constant at InterWorks. We want to have the best people, doing the best work, for the best clients. Truly, we seek to partner rather than provide. We seek to connect as a means to consult. We strive to build trust. In doing so, we will not only help clients evolve, transition, migrate, upgrade and endure, but we’ll have an equal part in the ownership of the outcomes. 

And that’s why I believe client relationships are what matter most to our success at InterWorks. As consultants, if we can’t bring value to the table and deliver competitive advantage to our clients, then we’re expendable. When we bring our collective expertise combined with a unique conviction of always wanting to exceed each client’s expectations, then you’ve got the magic formula that becomes our differentiator. 

Power in Being Present 

InterWorks celebrated 25 years in business not too long ago. Our framework has been grounded in solving challenges, building credibility and being dependable. But I've also seen firsthand the undeniable power of being truly present with clients in our time together. This is why we promote high touch points, but only when there's quality behind the interactions. We'll visit clients in person over a meal or drinks just to minimize distractions and maximize one-on-one time together.

While You’re Here 

It may surprise you, but one of our favorite things to hear is "While you're here…" Of course, we love the praise our consultants get from our clients at the end of projects. And for me in sales, I love it when a new lead comes from a referral because a client shares their satisfaction externally with someone else. But through regular client check-ins, we're not just evaluating progress toward a goal or target. We're keeping communication streams open and ongoing in hopes of hearing those favorite words of ours. And in turn, that may lead to us saying our favorite thing in return: "Yes we can."

Relationships Are Work 

In order to be good at something, you have to practice. Practice means progress. The same holds true in communications with clients. More time spent together directly equates to a better understanding of our clients. Not just of the broad issues or known problems, but personal pain points, too. I want to know each individual and what matters most to them. In doing so, we deepen our ability to deliver custom outcomes that have the biggest possible impact.

People Buy From People 

Buying decisions are subconsciously grounded in emotion. People most often make purchases based on feelings such as joy and pain. In the 1993 book Endless Referrals, Bob Burg wrote, "People buy from people they know, like, and trust." It's important to recognize consumer psychology as an emotional driver in sales. Clients need reassurance that you care and have conviction for the subject matter. This is why, when we hire someone, we make sure we can check a box that says "Hell yes" before sending an offer. We lead with passion and bring that to all of our client interactions.

Better, Together 

At the end of the day, we know that our work isn't about a deliverable. We're in the business of creating partnerships. Those partnerships are grounded in lots of individual relationships that grow stronger with every conversation. Through those relationships, we build trust. The more trust, the more willing a client is to give their time. The more time together, the more intentionality is put toward the best possible outcome. And the better the outcome, the more value received. This is Client 360. It's our proven framework.

As you hopefully can gather, the sum of our collective client relationships is the first 25 years of success here at InterWorks. It's also the foundation for the next 25 years: a future that is bright only because we care without being told to. We have smart, passionate people who deliver great work consistently. We meet to build, grow and collaborate. To me, that's the true differentiator in how we work. Want to chat more? Let's grab a cup of Joe.

The post Client Relationships: Our Differentiator appeared first on InterWorks.

Dialogue with Your Data Using Ask Sigma https://interworks.com/blog/2025/10/09/dialogue-with-your-data-using-ask-sigma/ Thu, 09 Oct 2025 17:36:45 +0000 https://interworks.com/?p=70819 Sigma provides multiple AI-driven capabilities to help users gain insights more quickly. Particularly, Ask Sigma is a natural language query (NLQ) tool that enables users to explore their data by asking questions and receiving AI-generated insights. It supports both factual queries and visual outputs like...

The post Dialogue with Your Data Using Ask Sigma appeared first on InterWorks.


Sigma provides multiple AI-driven capabilities to help users gain insights more quickly. In particular, Ask Sigma is a natural language query (NLQ) tool that enables users to explore their data by asking questions and receiving AI-generated insights. It supports both factual queries and visual outputs like charts, which can be further analyzed within a workbook. However, Sigma aims to go beyond just offering an interactive interface; the company is working to harness agentic AI to transform dashboards into intelligent assistants that function like real data analysts.

The Basics 

If you've just jumped into my AI-focused Sigma blogs for the first time, I encourage you to spend five minutes reading through How to Set Up AI in Your Sigma Environment. In that post, you will find a summary of all the AI features Sigma currently offers. Additionally, to make the most of this post, you will need to make sure your Sigma environment and your user rights are configured to support at least Ask Sigma.

Embed Ask Sigma in Your Website

One of the main advantages I see in Sigma is that it's a fully internet-native platform. This makes Ask Sigma especially easy to set up: Just embed its URL into an iframe on your company's website and you're good to go. Here, the trickiest part is constructing a proper URL that works for internal usage (and by that, I mean users who have a Sigma account).

The main tweak to keep in mind here is that Ask Sigma is only accessible via a secure URL that is signed with a JSON web token (JWT). In practice, the final URL will have a structure very similar to this: 

https://app.sigmacomputing.com/{org-slug}/ask?:jwt=<jwt>

For example: 
https://app.sigmacomputing.com/interworks/ask?:jwt=sfnfihvdjalhgiuh&:embed=true&:theme=Surface

What does it mean? Well, basically, the URL points to InterWorks' Sigma domain, and the ":jwt=" parameter carries the credentials and session configuration settings. For further information about how to generate secure URLs with JWT, check this article. For the moment, you will need the help of your webmaster to get the URL ready, since it entails running some JavaScript code every time the page is loaded.
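To give you an idea of what that script could look like, here is a minimal server-side sketch using the jsonwebtoken npm package with HS256 signing. The environment variable names and the exact claim set are assumptions for illustration only; confirm the claims and signing details Sigma requires against the article linked above.

  import { randomUUID } from "node:crypto";
  import jwt from "jsonwebtoken";

  // Hypothetical embed credentials issued by Sigma -- keep these server-side, never in the browser.
  const EMBED_CLIENT_ID = process.env.SIGMA_EMBED_CLIENT_ID!;
  const EMBED_SECRET = process.env.SIGMA_EMBED_SECRET!;

  // Build a signed Ask Sigma URL for the currently logged-in Sigma user.
  // The claim names below follow common JWT conventions; verify the exact set in Sigma's docs.
  function buildAskSigmaUrl(userEmail: string): string {
    const token = jwt.sign(
      {
        sub: userEmail,       // the Sigma user this session is for
        iss: EMBED_CLIENT_ID, // identifies your embed client
        jti: randomUUID(),    // unique token ID
      },
      EMBED_SECRET,
      { algorithm: "HS256", expiresIn: "1h", keyid: EMBED_CLIENT_ID },
    );

    return (
      "https://app.sigmacomputing.com/interworks/ask" +
      `?:jwt=${encodeURIComponent(token)}&:embed=true&:theme=Surface`
    );
  }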

Now, focus your attention on the last two parameters displayed in the sample URL above. Those are specific to Ask Sigma's customization, and their meanings are explained here:

Above: Parameters to customize Ask Sigma in your organization. Source.

Once the URL is ready, we only need to add it to an iframe on our company's website to start testing Ask Sigma's capabilities.
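On the page itself, the wiring can then be as small as fetching the signed URL from your own backend and pointing an iframe at it. The endpoint path and element ID below are hypothetical placeholders for this sketch.

  // Client-side: load the signed Ask Sigma URL into an iframe on page load.
  // "/api/ask-sigma-url" and "ask-sigma-frame" are placeholder names.
  async function mountAskSigma(): Promise<void> {
    const response = await fetch("/api/ask-sigma-url");
    const { url } = (await response.json()) as { url: string };

    const frame = document.getElementById("ask-sigma-frame") as HTMLIFrameElement;
    frame.src = url; // the signed URL expires, so request a fresh one on every page load
  }

  mountAskSigma();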

Start Asking Questions 

Once you load the Ask Sigma environment, the new Ask Sigma Discovery panel will appear in the home screen. The main purpose of this panel is to help users quickly explore and understand their organization’s data. It’s especially useful for onboarding new users by showing what data is available and what it can do, making it easier to ask meaningful questions through Ask Sigma.   

Above: The first visualization shown once Ask Sigma finishes loading all data and Ask Sigma Discovery is displayed. Note that the information in the boxes is generated according to the data sources available for the user. You can click on each one of the boxes to get more insights about the respective dataset.  

The steps that follow are very easy to grasp:

  1. Select the data source you want to query. For each user, only the sources marked as “Highlighted” are available.
  2. Start typing your questions in the text box using natural language. The fun part here is that Sigma keeps displaying the step-by-step decision logic it used to determine the answer. 

If you're not sure which source to pull your results from, Ask Sigma will still analyze your question and scan the data sources you have access to. It then selects the one it determines is best suited to answer your query, based on factors like semantic relevance, data quality (metadata), and how frequently each source is used.

The video below gives you a full overview of what the experience of creating and working with Ask Sigma looks like. Some important highlights to note from the recording:

  • I embedded Ask Sigma inside a workbook, which is not the usual application, but it still works 😉
  • Note how Ask Sigma chooses the appropriate data source for us based on our question. It even provides a description and an explanation of why it is the best option.
  • All the steps performed are fully displayed, and we can edit them according to our needs.
  • Once ready, we can export the visualization to a workbook, but you need to have rights to create workbooks granted in advance.
  • Ask Sigma cannot perform joins or combine data from multiple tables, data models or metrics.
  • Ask Sigma may not generate insights or observations if the dataset you’re working with is too large. If this happens, try refining your question by narrowing down the scope or applying filters to reduce the data volume.

That’s a wrap! I hope you have fun exploring Ask Sigma. You’ll be surprised by how powerful and intuitive it is, turning simple questions into rich, actionable insights. Dive in and see what your data has to say! 

The post Dialogue with Your Data Using Ask Sigma appeared first on InterWorks.

How to Set Up AI in Your Sigma Environment https://interworks.com/blog/2025/10/09/how-to-set-up-ai-in-your-sigma-environment/ Thu, 09 Oct 2025 17:35:23 +0000 https://interworks.com/?p=70801 Sigma provides multiple AI-driven capabilities to help users gain insights more quickly. Its strategy centers on empowering every team to work with data more effectively by integrating artificial intelligence directly into its platform. By enabling natural language queries, AI-assisted formula creation, and direct access to...

The post How to Set Up AI in Your Sigma Environment appeared first on InterWorks.


Sigma provides multiple AI-driven capabilities to help users gain insights more quickly. Its strategy centers on empowering every team to work with data more effectively by integrating artificial intelligence directly into its platform. By enabling natural language queries, AI-assisted formula creation, and direct access to generative models from cloud data warehouses like Snowflake and Databricks, Sigma removes technical barriers and accelerates insights. This approach ensures that users can explore, analyze, and act on data with speed and confidence, regardless of their technical expertise. That’s why it’s fair to say Sigma is working toward using agentic AI to turn dashboards into smart assistants that behave like real data analysts. 

Understanding the Basics  

Fundamentally, all AI features currently available in Sigma's environment are powered by external AI models, which each organization needs to choose and set up. This means that the connection Sigma uses to reach its AI provider is separate from where your data is stored.

Sigma plays the role of a very effective middleman, using the chosen AI model's capabilities within its environment to analyze and interpret the data the client company is processing.

Accordingly, when you use Sigma’s AI-powered features, you agree, on behalf of your organization, that your Customer Data and User Information may be shared with third-party services like OpenAI or Azure OpenAI, depending on the integration you choose. This data sharing is essential for the AI features to work properly and deliver meaningful insights.  

In this regard, before you panic, let's simplify things: Sigma needs to forward your AI request to the provider you've selected; otherwise, how would the AI features even work? For full transparency, you can review the complete disclaimer here.

While Sigma’s AI-powered features are designed to enhance the user experience and deliver advanced analytical insights, it’s important to remember that users should apply their own judgment. We encourage you to cross-check AI-generated information with trusted sources. Ultimately, even though AI agents may simulate human intelligence, they are not human. Their suggestions should be treated as helpful tools, not final answers. 

Overview of Sigma’s AI Features 

As of the date this blog was published, Sigma supports four applications of AI within its ecosystem, as explained below:

  • Ask Sigma + Ask Sigma Discovery: Ask Sigma is a feature that enables users to interact with their organizational data using natural language. It allows you to pose questions like “How do average sales and revenues compare on a monthly basis?” and instantly receive AI-generated insights and visualizations. This tool helps users explore data through interactive charts and tables, making it easier to refine questions and uncover deeper insights without needing technical expertise.
    Ask Sigma Discovery, on the other hand, is a feature that was recently released to work jointly with Ask Sigma. It is designed to help users quickly explore and understand the data available in their organization. It automatically generates curated data collections based on the data sources a user has access to, offering summaries, relevant tables, and associated workbooks. This makes it easier for users, especially new ones, to grasp what data exists, how it’s structured, and how it can be used to formulate meaningful questions and insights. 
  • Explain this Chart: The “Explain this chart” feature in Sigma uses AI to automatically interpret and describe any chart within a workbook. This explanation may include key insights, trends, data distributions, and other observations to help users better understand the visualized data and make informed decisions. Users can interact with the explanation by copying the text, providing feedback, or using it to enhance their reports. 
  • Formula Assistant: This beta feature helps users write, correct, and explain formulas within workbooks and data models. It interprets natural language descriptions to generate accurate formulas, identifies and resolves formula errors, and provides clear explanations of existing formulas by detailing referenced columns, transformations, and expected outputs.
  • Leveraging cloud data warehouse AI functions: If your cloud data warehouse supports SQL functions that interact with generative AI models, you can use Sigma to run those functions directly. This means you can execute AI-powered queries on specific data columns, leverage the AI model hosted in your data warehouse, and view the results within Sigma for further analysis.

First, Get Your Organization AI-Ready

In simple terms, this means setting up the AI provider your company has picked in the Sigma administration section. After that, you choose which users will have permission to use the AI features and apply a few configurations to each individual AI feature, such as Ask Discovery and Ask Sigma.

It is important to remember that users need to be assigned the Admin account type to configure any AI feature in the Sigma environment. In addition, you must be able to provide any authentication credentials necessary to connect to the external AI providers.

  • Step 1: Configure an AI Provider  
    • The first option at hand is to use the AI features your cloud data warehouse has integrated. Right now, we can enable the use of Snowflake, Databricks and BigQuery warehouse AI models. These warehouse-hosted AI models can support Ask Sigma and the Formula Assistant, but not the Explain this Chart feature. One important factor is to make sure the LLM of your choice is supported in your account's region. A complete list of the steps and configuration factors to keep in mind is available here at the Sigma documentation website.
    • The other alternative is to integrate an external LLM. To date, companies can choose from OpenAI directly, Azure OpenAI Foundry, or Gemini through Google. Overall, the configuration settings are quite straightforward: they ask for the AI model name and the API key used to access its servers. However, because there are some variations from model to model, I recommend reading the specifics at this link.
  • Step 2: Configure permissions for AI features 
    • The allocation of permissions for AI features is done under the Account Types menu in the Administration portal. We must remember that Sigma automatically assigns a license to each account type based on the highest license tier of the enabled permissions. Still, regardless of the type of license, the permissions remain the same, as displayed in the screenshot below. For further information about which license tiers have access to these permissions, check here.
      Above: Section to enable the permissions to use AI features on an account type. 
    • To automatically transfer analyses from Ask Sigma into a new workbook, you’ll need an account type that includes permission to create, edit, and publish workbooks, in addition to access to AI features. 
  • Step 3: Configure Specific AI Features
    • Configure Ask Sigma data sources: This is the last step to make Ask Sigma usable companywide. In the Ask Sigma data sources section, you can use the search bar to locate specific tables, datasets, or data model elements that you want to prioritize for answering user queries. The list of data sources is organized based on how frequently each one is used. Interestingly, regardless of the number and type of data sources published (data models, tables or datasets), each user can only see and query those they have access to. If you want to use a Sigma Data Model, only the published versions will appear in the data source list.
      Above: Section of the administration portal for selecting the data sources that will be available to Ask Sigma users. The interface allows filtering by element name and status. Source: Configure AI features for your organization.
    • Configure Ask Discovery features for your organization: Ask Sigma includes discovery settings that help surface relevant data assets when users interact with the platform. By default, Ask Discovery is turned on, automatically generating data collections when users access the Ask Sigma page. Ask Discovery assets are also enabled, allowing users to see linked sources and workbooks. If this option is disabled, the discovery feature will display only plain text, without clickable links to data sources. These settings help control how much context and interactivity users get when exploring data through Ask Sigma.
      Above: Ask Discovery features as they appear in the administration portal.

And that’s it, you’re ready to rock and roll! If you wish to start exploring Sigma’s AI features, check out my next post where I’ll show you how to interact with your data using natural language through the Ask Sigma Agentic AI. See you there! 

The post How to Set Up AI in Your Sigma Environment appeared first on InterWorks.
