Sometimes It’s an Agent. Sometimes It’s a Flow. Always—It’s the Data.
September 16, 2025
By Ryan Jockers, Deputy Director of Data & AI
There’s a lot of excitement around AI agents—systems that can plan, call tools, and take action on our behalf. They’re powerful, and we’re using them where they make sense. But here’s the truth we’ve learned across the North Dakota University System (NDUS): the best solution is often the simplest one that reliably meets the need. Sometimes that’s a rules-based workflow or a standard dashboard. And regardless of the approach, solid data foundations—governed, documented, and ready for analytics—decide whether anything “AI-powered” delivers value.
At NDUS, we’re building that foundation on a lakehouse architecture so our reporting, analytics, and AI all work from a single, governed copy of data. This choice allows us to connect familiar tools like Power BI directly to curated Delta tables today and support AI use cases tomorrow—without duplicating data or reinventing pipelines for every project. Internally, we’ve demonstrated this pattern by connecting Power BI to pilot datasets and sharing a “Data + AI curve” vision for the system—showing that AI readiness rides on disciplined data engineering.
Agents vs. Automation: “Pick the simplest tool that meets the SLA”
When we reach for an AI agent
We use agents when tasks require language understanding, multi-step reasoning, or dynamic decision-making. Examples include knowledge experiences that synthesize policy language across many documents or workflows that must branch intelligently based on unstructured inputs. Public-sector platforms now make this practical, offering managed ways to host agents, wire up tools like search and file retrieval, trace runs, and integrate with event-driven triggers—without building a custom orchestration stack. On the data side, frameworks now let teams author, evaluate, and deploy agents close to the lakehouse, with built-in tracing and governance.
When we stay with straightforward automation
Much of our day-to-day value still comes from deterministic automations—Power Automate flows, scheduled data jobs, or standard templates—because they’re faster to ship, easier to govern, and cheaper to run when the process and inputs are stable. Industry guidance echoes this: use workflows for predictable, rules-based processes; use agents when variability and unstructured context would break fixed rules. Independent perspectives reinforce the point: automation excels at precision and repeatability, while agents trade determinism for adaptability—great when you need it, overhead when you don’t.
NDUS Examples at a Glance
- Policy Q&A Knowledge Agent (Pilot): We’ve tested an LLM-powered agent that answers questions against NDUS policy content, built to share patterns and code with campuses. This is precisely where agents shine: multi-document retrieval, synthesis, and traceable responses.
- Template Dashboards & Flows (Keep It Simple): Across campuses, we’ve standardized financial dashboard templates and delivery processes—getting consistent value quickly without introducing agent complexity.
The Foundation That Makes Both Work: An AI-Ready Lakehouse with Governance
Lakehouse + Medallion Architecture
We organize data using the Medallion (bronze/silver/gold) pattern: raw ingestion, validated/cleaned, then business-ready aggregates. This approach raises data quality step-by-step, supports both streaming and batch, and provides one governed source of truth for BI and AI. Our internal platform plan formalizes this direction—“Data → Decisions → Delivered.”
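To make the bronze → silver → gold progression concrete, here is a minimal sketch in plain Python. It is illustrative only: in production these stages run as governed Delta tables in the lakehouse, and the dataset, column names, and validation rules below are assumptions, not NDUS schemas.

```python
# Illustrative Medallion pattern over in-memory records.
# In production, each stage would be a Delta table with lineage and permissions.

def bronze(raw_rows):
    """Bronze: land raw records as-is, tagging each with its source system."""
    return [dict(row, _source="sis_extract") for row in raw_rows]

def silver(bronze_rows):
    """Silver: validate and clean—keep only rows with a campus and numeric amount."""
    cleaned = []
    for row in bronze_rows:
        if row.get("campus") and isinstance(row.get("amount"), (int, float)):
            cleaned.append({"campus": row["campus"].strip().upper(),
                            "amount": float(row["amount"])})
    return cleaned

def gold(silver_rows):
    """Gold: business-ready aggregate—total amount per campus."""
    totals = {}
    for row in silver_rows:
        totals[row["campus"]] = totals.get(row["campus"], 0.0) + row["amount"]
    return totals

raw = [{"campus": "ndsu ", "amount": 100},
       {"campus": "und", "amount": "bad"},  # fails validation at silver
       {"campus": "ndsu", "amount": 50}]
print(gold(silver(bronze(raw))))  # → {'NDSU': 150.0}
```

The point of the pattern is visible even at this scale: quality rises at each hop, and BI and AI both read from the same curated gold output rather than re-cleaning raw data per project.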
Governance with Unity Catalog & Microsoft Purview
Agents and automations only behave as well as the permissions, lineage, and quality rules around their data. That's why we've invested in Unity Catalog for uniform governance within Azure Databricks, and in Microsoft Purview and Microsoft Fabric for cataloging, lineage, business glossary, and policy. Our direction is clear: unify governance for data and AI artifacts—tables, models, prompts, evaluations—so we can audit, explain, and trust outcomes at scale. NDUS has already chartered Purview for system use and is advancing proof-of-concept work to move from pilot to production.
Responsible AI by Design
Whether deploying agents or automations, we align with the NIST AI Risk Management Framework to govern, map, measure, and manage risk—including data quality, access, monitoring, and transparency.
How We Decide: A Quick Decision Frame
- Is the process stable and inputs structured?
  If yes, start with a low-code automation (flow, scheduled job, template report). It's faster, cheaper, and easier to govern.
- Do we need language understanding, retrieval across many sources, or multi-step reasoning?
  When rules break down, an AI agent makes sense—especially if it can use tools within a governed sandbox.
- Is the data AI-ready?
  Check that the dataset flows through bronze → silver → gold, is cataloged, and has clear lineage and permissions. If not, fix the foundation first.
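The decision frame above can be sketched as a tiny rule function. The inputs and return strings are illustrative assumptions for this post, not an NDUS standard—the real decision involves SLAs, cost, and governance review.

```python
# Hedged sketch of the decision frame: data readiness gates everything,
# then process stability and reasoning needs pick the tool.

def choose_approach(stable_process: bool,
                    structured_inputs: bool,
                    needs_reasoning: bool,
                    data_ai_ready: bool) -> str:
    """Return an illustrative starting point for a new request."""
    if not data_ai_ready:
        return "fix the data foundation first"
    if needs_reasoning:
        return "AI agent (governed sandbox)"
    if stable_process and structured_inputs:
        return "low-code automation"
    return "low-code automation"  # default to the simplest tool that meets the SLA

# A routine monthly report on clean, cataloged data:
print(choose_approach(True, True, False, True))    # → low-code automation
# Policy Q&A over many unstructured documents:
print(choose_approach(False, False, True, True))   # → AI agent (governed sandbox)
# Anything on ungoverned data:
print(choose_approach(True, True, False, False))   # → fix the data foundation first
```

Note the ordering: the data-readiness check comes first, mirroring the principle that foundations decide whether anything downstream delivers value.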
Use agents when you need reasoning over messy, changing inputs. Use automation when rules suffice. And in all cases, invest in clean, governed, lakehouse-ready data—that’s the multiplier for everything we build next.

