CoE Power BI Dashboard: What Every Page Tells You

Practical guide to the CoE Starter Kit Power BI dashboard — what each page shows, which metrics matter, and how to use the data for governance decisions.

By Dmitri Rozenberg | 31 March 2026 | 15 min read | Verified 31 March 2026

Why the Dashboard Matters

The CoE Starter Kit Power BI dashboard is where raw inventory data becomes actionable insight. The Admin View app in Power Apps gives you a record-by-record look at your tenant, but the dashboard shows you patterns, trends, and anomalies that are invisible when you are looking at individual records.

More importantly, it is the artefact you share with leadership. When someone asks “how much adoption do we have?” or “are we at risk of environment sprawl?”, you point them to the dashboard. It turns abstract governance concerns into concrete numbers.

The dashboard connects directly to the Dataverse tables in your CoE environment. It pulls the same inventory data that the sync flows populate, plus audit log usage data if you have that configured. Every page filters and visualises a different slice of your tenant.

Setting Up the Dashboard

Before diving into the pages, a quick note on setup:

  1. Download the .pbit template file from the same GitHub release as your CoE Kit version. Version matching matters — a dashboard template from version 2.x will not work properly with a version 3.x data model.
  2. Open it in Power BI Desktop. When prompted for the environment URL, enter your Dataverse environment URL (format: https://yourenv.crm.dynamics.com).
  3. Authenticate with an account that has read access to the CoE Dataverse tables.
  4. The first refresh takes 10 to 30 minutes depending on your data volume.
  5. Publish to the Power BI service and set up a scheduled refresh (daily is sufficient since the CoE sync also runs daily).

If the refresh fails with “data source error” messages, the most common cause is that your Power BI account does not have a security role in the CoE Dataverse environment. Add the account with at least the “Basic User” role plus read access to the CoE tables.

Monitor Section: What Is Happening in Your Tenant

The Monitor section gives you the operational picture. Think of it as the “what exists” view.

Overview Page

This is your landing page. It shows tenant-wide totals: total environments, total apps, total flows, total makers, and total custom connectors. The numbers themselves are useful, but the real value is the trend lines underneath them.

What healthy numbers look like:

  • Steady, gradual growth in apps and flows — this means adoption is happening.
  • Maker count growing proportionally to app/flow count — this means adoption is spreading across the organisation, not concentrated in a few power users.
  • Environment count growing slowly — rapid environment creation often indicates environment sprawl or a misconfigured DLP policy forcing people to create new environments.

Warning signs:

  • A sudden spike in app or flow creation may indicate a team building rapidly without governance oversight.
  • A plateau in maker count while app count grows means existing makers are building more, but new makers are not joining. This might be fine, or it might mean barriers to entry are too high.
  • The total count includes everything — production apps with thousands of users and test apps that someone built once and forgot. The Overview page does not distinguish between them. You need the deeper pages for that.

Environments Page

This page lists all environments with creation dates, types (production, sandbox, developer, default), and the maker who created each one.

What to look for:

  • Environment sprawl — If you see dozens of environments created in a short period, someone is either testing extensively (fine) or working around a DLP policy (not fine). Filter by creator to identify who is responsible.
  • Sandbox vs production ratio — A healthy ratio depends on your governance model, but if you have significantly more sandbox than production environments, makers may not understand when to promote their work.
  • Default environment usage — The default environment is the one everyone lands in. If it has hundreds of apps and flows, that is a governance problem. Resources in the default environment are harder to manage, back up, and secure.
  • Orphaned environments — Look for environments created by people who have since left the organisation. These need to be reassigned or decommissioned.

Apps Page

The Apps page is one of the most important in the entire dashboard. It shows:

  • Total app count and the split between canvas apps and model-driven apps.
  • Production apps — defined as apps with more than 50 sessions or more than 5 unique users. This is the number that actually matters. Most tenants have hundreds of “apps” but only a fraction are genuinely in use.
  • Top connectors used across all apps.
  • App creation trend over time.
  • Apps by environment.

The production app metric is key. When leadership asks “how many apps do we have?”, do not give them the total count. Give them the production count. Total count includes every half-finished experiment, every copied template, every app someone built during a training session and never touched again. The production count reflects real business value.
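The production-app cutoff described above is simple enough to express directly. The sketch below applies it to a hypothetical record shape (the `AppUsage` class and sample data are illustrative, not the CoE Kit's actual schema):

```python
from dataclasses import dataclass

@dataclass
class AppUsage:
    name: str
    sessions: int        # session count from audit log data
    unique_users: int    # distinct users over the reporting period

def is_production(app: AppUsage) -> bool:
    # The definition used above: more than 50 sessions OR more than 5 unique users.
    return app.sessions > 50 or app.unique_users > 5

apps = [
    AppUsage("Expense Tracker", sessions=320, unique_users=45),
    AppUsage("Workshop Demo", sessions=3, unique_users=1),
    AppUsage("Team Rota", sessions=12, unique_users=9),
]

production = [a.name for a in apps if is_production(a)]
print(f"{len(production)} of {len(apps)} apps are production")  # 2 of 3
```

Note that "Team Rota" qualifies on unique users alone; either threshold is sufficient, which is why the definition catches both heavily used single-user tools and lightly used shared ones.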

Connector usage matters for security. If you see apps using connectors to sensitive systems (SQL Server, SharePoint admin, HTTP, custom connectors), those apps need a closer look. The Apps page shows top connectors, but for deep analysis you will use the Connector Deep Dive page in the Govern section.

Cloud Flows Page

Similar to the Apps page, but for Power Automate cloud flows. Key metrics:

  • Total flows and the split by state: active, suspended, and stopped.
  • Suspended flows deserve attention. A flow gets suspended when it fails repeatedly, or when the owner’s licence expires, or when a DLP policy blocks a connector it uses. A high number of suspended flows suggests systemic issues.
  • Stopped flows are less concerning — they were intentionally turned off. But a large number might indicate flows that were built for one-off tasks and never cleaned up.
  • Creation trends and connector usage follow the same logic as apps.

What action to take: If your suspended flow count is above 10% of your total flow count, investigate. Common causes are licence expiry (someone left or changed roles), DLP policy changes that retroactively block connectors, and API connection credential expiration.
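The 10% rule of thumb is easy to wire into a recurring check. A minimal sketch, with the threshold as a tunable parameter rather than a hard rule:

```python
def suspended_ratio(total_flows: int, suspended_flows: int) -> float:
    # Fraction of all flows currently in the suspended state.
    if total_flows == 0:
        return 0.0
    return suspended_flows / total_flows

def needs_investigation(total_flows: int, suspended_flows: int,
                        threshold: float = 0.10) -> bool:
    # 0.10 matches the guidance above; adjust for your tenant's baseline.
    return suspended_ratio(total_flows, suspended_flows) > threshold

# 48 suspended out of 350 flows is roughly 13.7%, above the 10% bar.
print(needs_investigation(350, 48))  # True
```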

Custom Connectors Page

This page is easy to overlook, but it is one of the most important from a security perspective. Every custom connector represents an API endpoint that someone in your organisation has configured Power Platform to connect to. That could be an internal system, a third-party SaaS API, or in the worst case, a personal server.

Review every custom connector. For each one, you should know:

  • What system does it connect to?
  • Who created it and why?
  • Is the API endpoint an approved corporate system?
  • What authentication method does it use?
  • Is the API key or credential stored securely?

In many organisations, custom connectors are the biggest unmonitored attack surface in Power Platform. This dashboard page is your tool for cataloguing and reviewing them.

Bots Page

If your organisation uses Copilot Studio (formerly Power Virtual Agents), this page shows:

  • Total bot count, split by published and draft.
  • Which environments contain bots.
  • Who created them.

Published vs draft ratio matters. If you have many draft bots, makers may be experimenting (fine) or struggling to publish (investigate). Published bots that no one uses are also worth flagging for cleanup.

Govern Section: What Needs Attention

The Govern section shifts from “what exists” to “what is risky or wasteful.” These pages help you make decisions about cleanup, compliance, and capacity.

Environment Capacity Page

Dataverse storage is not free, and every environment consumes some. This page shows:

  • Storage consumption by environment (database, file, and log storage).
  • Environments approaching their capacity limit.
  • Growth trends over time.

How to identify bloated environments:

  • Sort by database storage descending. The top consumers are not always the ones you expect.
  • Look for environments with high log storage — this often indicates flows that create excessive audit records or apps that log extensively.
  • Compare storage consumption against the number of apps and flows in each environment. An environment with 2 apps consuming 5 GB of storage suggests data accumulation that needs investigation.

Action to take: For environments approaching capacity, work with the owners to archive old data, optimise table storage, or move workloads to a dedicated environment with more capacity. Running out of storage causes all apps and flows in that environment to fail, so this is not something to defer.
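The same "approaching capacity" check can be scripted against exported capacity data. This is a sketch assuming a simple `(name, consumed_gb, limit_gb)` tuple shape; the 80% warning threshold mirrors the red-flag guidance later in this article:

```python
def environments_near_capacity(envs, warn_at=0.80):
    """Return (name, fraction_used) for environments at or above warn_at,
    worst first. envs is an iterable of (name, consumed_gb, limit_gb)."""
    flagged = []
    for name, consumed, limit in envs:
        used = consumed / limit if limit else 0.0
        if used >= warn_at:
            flagged.append((name, used))
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

envs = [
    ("Finance-Prod", 9.2, 10.0),   # 92%: needs action now
    ("HR-Sandbox", 3.1, 5.0),      # 62%: fine
    ("Default", 8.4, 10.0),        # 84%: approaching the limit
]
for name, used in environments_near_capacity(envs):
    print(f"{name}: {used:.0%} of capacity")
```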

App and Flow Deep Dive

These pages use a decomposition tree visual that lets you drill into the data from multiple angles. You can start with an environment, then drill into apps within it, then into connectors those apps use, then into the maker who built them.

This is the page you use when the Overview tells you something is off but you need to find the specific cause. For example:

  • “We have 50 apps using the HTTP connector” — drill in to find which specific apps and who owns them.
  • “Environment X has unusually high activity” — drill in to see which apps and flows are driving it.
  • “Three apps use a deprecated connector” — drill in to identify them for migration.

Archive Score Page

The CoE Kit calculates an archive score for both apps and flows. For apps, the score ranges from 0 to 6. For flows, it ranges from 0 to 10. Higher scores indicate resources that are more likely safe to archive or delete.

What goes into the app archive score (0-6):

  • +1: No sessions in the last 60 days
  • +1: Not modified in the last 60 days
  • +1: Has no co-owners (single point of failure)
  • +1: Not shared with anyone
  • +1: Uses only standard connectors (lower business impact if removed)
  • +1: In the default environment (not formally managed)

What goes into the flow archive score (0-10):

The flow score follows a similar pattern but includes additional factors like whether the flow has failed recently, whether it is suspended, and whether it uses premium connectors.

How to use archive scores in practice:

  • Score 5-6 (apps) or 8-10 (flows): Strong candidates for archiving. These resources are unused, unshared, unmaintained, and low-impact. Send the owner a notification asking if they are still needed. If no response within 30 days, archive.
  • Score 3-4 (apps) or 5-7 (flows): Review individually. Some of these are genuinely unused, others are seasonal (a flow that only runs during year-end close, for example).
  • Score 0-2 (apps) or 0-4 (flows): Leave alone. These are actively used, shared, and maintained.

Do not automatically delete anything based solely on archive score. The score is a prioritisation tool, not a deletion trigger. Always notify the owner first.
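The scoring and triage logic above can be sketched as a pair of pure functions. The `AppRecord` field names are hypothetical stand-ins for the CoE Kit's actual column names; the point conditions and buckets follow the description above:

```python
from dataclasses import dataclass

@dataclass
class AppRecord:
    sessions_last_60d: int
    days_since_modified: int
    co_owner_count: int
    shared_with_count: int
    uses_only_standard_connectors: bool
    in_default_environment: bool

def app_archive_score(app: AppRecord) -> int:
    # One point per condition, mirroring the 0-6 app score described above.
    score = 0
    score += app.sessions_last_60d == 0
    score += app.days_since_modified > 60
    score += app.co_owner_count == 0
    score += app.shared_with_count == 0
    score += app.uses_only_standard_connectors
    score += app.in_default_environment
    return score

def triage(score: int) -> str:
    # Buckets from the guidance above (app scores).
    if score >= 5:
        return "archive candidate"
    if score >= 3:
        return "review individually"
    return "leave alone"

stale = AppRecord(0, 120, 0, 0, True, True)
print(app_archive_score(stale), triage(app_archive_score(stale)))  # 6 archive candidate
```

Used this way, the score stays what it should be: a sort key for a review queue, with the human notification step still in front of any deletion.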

Connector Deep Dive

This page shows connector usage across your entire tenant, split by standard and premium connectors.

Why this page matters:

  • Premium connector identification — Every premium connector requires a premium licence. If makers are using premium connectors, ensure they are properly licensed. This page helps you identify unlicensed usage before Microsoft’s licence enforcement catches it.
  • Risky connector patterns — The HTTP connector, the custom connector category, and connectors to file-sharing services (Dropbox, Google Drive) outside your corporate ecosystem are all worth scrutinising.
  • DLP policy validation — Compare what connectors are actually in use against your DLP policies. If your DLP policy blocks a connector in a specific environment but this page shows it is still being used, your policy may not be applied correctly.

Filter by “blocked” or “non-business” categories to find connector usage that conflicts with your governance intent.
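The DLP validation step is a set intersection: connectors actually in use versus connectors your policy claims to block. A minimal sketch (connector names are illustrative):

```python
def dlp_violations(connectors_in_use: set[str], blocked: set[str]) -> set[str]:
    # Connectors appearing in inventory despite being blocked by policy.
    # Any hit means the policy is not applied where you think it is.
    return connectors_in_use & blocked

in_use = {"shared_sharepointonline", "shared_sql", "shared_dropbox"}
blocked = {"shared_dropbox", "shared_twitter"}
print(sorted(dlp_violations(in_use, blocked)))  # ['shared_dropbox']
```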

Nurture Section: Community Health

Pulse Survey Results

If you have deployed the Nurture module’s pulse survey, this page shows maker sentiment over time. Key metrics include:

  • Overall satisfaction score (1-5 scale)
  • Specific satisfaction dimensions: ease of use, support quality, learning resources, governance experience
  • Free-text feedback themes
  • Response rate — a declining response rate is itself a signal that makers are disengaging

How to use this data:

  • Track satisfaction trends quarter over quarter. A downward trend after a governance policy change tells you the policy is too restrictive or was poorly communicated.
  • Low scores on “support quality” mean your help channels (Teams channel, support tickets, office hours) are not meeting demand.
  • Low scores on “governance experience” mean your governance processes are creating friction. This does not mean you should remove governance — it means you should make it easier to comply.

Key Metrics to Check Weekly

You do not need to review every dashboard page every week. Here is a focused weekly review checklist:

  • New apps and flows created this week (Monitor > Overview trends): spot unexpected growth or decline.
  • Suspended flow count (Monitor > Cloud Flows): indicates systemic issues such as licence expiry or DLP changes.
  • Environments approaching capacity (Govern > Environment Capacity): prevents outages from storage limits.
  • High archive score resources (Govern > Archive Score): keeps your environment clean.
  • Custom connector changes (Monitor > Custom Connectors): security monitoring.
  • Orphaned resources with a disabled owner (Govern > App/Flow Deep Dive): prevents unmanaged resources.

This weekly check takes 15 to 20 minutes once you are familiar with the dashboard. Set a recurring calendar appointment for it.

Red Flags That Need Immediate Attention

Some things should not wait for a weekly review. Configure alerts or check for these proactively:

  1. New custom connector created — Every new custom connector is a potential security issue. Review it within 24 hours to verify it connects to an approved system with appropriate authentication.

  2. Environment capacity above 80% — At 80%, you have limited runway before apps and flows start failing. Work with the environment owner immediately to reduce storage or increase capacity.

  3. Sudden spike in suspended flows — If 20 flows suspend simultaneously, something systemic happened. Common causes: a DLP policy was changed, a shared connection credential expired, or a connector had an outage.

  4. New environments in production type — Production environments consume premium capacity and are harder to decommission than sandbox. Verify each new production environment has a legitimate business justification.

  5. Apps using blocked connectors — If your Connector Deep Dive shows apps using connectors that your DLP policy should block, your DLP policy is either not applied to that environment or has a misconfiguration.

  6. Maker creating resources at abnormal rate — If one person creates 15 apps in a week, they might be running a workshop (fine), or they might be copying production apps to personal environments to circumvent governance (not fine).

Making the Dashboard Work for Leadership

The dashboard is a governance tool for admins, but it is also a communication tool for leadership. Here are practical tips for making it effective in both contexts:

Create a leadership-specific page. The full dashboard has too much detail for most executives. Create a summary page with:

  • Total production apps and flows (not total count — production count)
  • Month-over-month growth in adoption
  • Number of active makers
  • Estimated cost avoidance or ROI if you have that data
  • Top business processes automated
  • Any risks requiring executive attention

Use the data to justify CoE investment. When you need budget for additional licences, training, or staff, the dashboard provides the evidence. “We have 400 production apps used by 2,000 people, managed by a team of 2” is a compelling data point for resource requests.

Establish a monthly reporting cadence. Send a brief summary to stakeholders monthly. Keep it to five key metrics and one callout item. Do not attach the full dashboard — link to the published Power BI report so they can drill in if they want to.

The CoE Power BI dashboard is the most underutilised component of the Starter Kit. Most organisations install it, look at it once, and forget about it. The organisations that get the most value from their CoE are the ones that look at it weekly and act on what they find.
