POWER BI NEWS NOVEMBER 2025

 

Edited by Riccardo Dominici

 


 

1. Introduction and overview of the update

2. Events and Announcements: Fabric Data Days and FabCon 2026

Fabric Data Days: Live training and community action

FabCon 2026: The Power BI and Fabric Community Conference

3. General News: Deprecation of R and Python Visuals

What is changing and in what context

Timing of the deprecation

What users and organizations need to do

Practical implications and considerations

4. Copilot and AI: Artificial Intelligence at the Service of Data Analysis

Standalone Copilot in the Mobile App: Ask Anything, Anywhere

Standalone Copilot Web Updates

Improvements to Report Copilot

Improvements to Verified Answers

Remote Power BI Model Context Protocol (MCP) Server

Automatic column expansion in matrices (Grow to fit)

New Card View

Enhanced image visualization

OneLake Catalog User Data Functions in Translytical Task Flows

6. Modeling News

Versioning of the semantic model

TMDL in Visual Studio Code (GA): Advanced Tabular Model Editor

7. Data Connectivity: Next-generation Spark and Impala connectors

8. What's New in Visuals (Part 1)

KPI monitoring range coverage

Decomposition Tree: Expand All mode

Dynamic Legends in Zebra BI Charts: Legends that change with the filter

Drill Down Bubble PRO by ZoomCharts: Interactive multi-level bubble charts

9. What's New in Visuals (Part 2)

Power BI Theme Generator: Create custom themes with AI and best practices

Power Gantt Chart by Nova Silva: Managing task dependencies

10. Conclusions

 

 

What's New in Power BI November 2025

 

Welcome to this eBook dedicated to the new features of Microsoft Power BI introduced in the November 2025 update. This update is packed with features and improvements, ranging from AI integration into analysis, to new reporting and visualization options, to updates to data connectivity and modeling. Each chapter of this eBook corresponds to a slide from the original presentation and delves into the topics covered in detail, ensuring a complete technical and functional overview. The language is professional yet accessible, designed to be useful to both Business Intelligence analysts and developers as well as less technical business users who want to understand the impact of these new features.

Before we get into the details, let's briefly outline the highlights of the November 2025 update.

In the following chapters, we'll explore each new feature in detail, providing in-depth explanations, technical/functional context, practical implications for different types of users (analysts, developers, business users), concrete use cases, connections to other Power BI features or related Microsoft tools, as well as any advantages and limitations to consider. Additionally, where appropriate, we'll include references to official Microsoft sources for those who want to delve deeper.

1. Introduction and overview of the update

The first chapter introduces the Power BI November 2025 Update, providing a high-level overview of the major innovations and changes. This monthly update is particularly significant as it touches on nearly every aspect of the Power BI platform, from AI capabilities to visualizations, from core modeling to data connectors.

Why is this update important? Microsoft updates Power BI on a monthly basis, introducing new features and deprecating those less aligned with future strategies, in order to keep the platform cutting-edge. November 2025 is no exception: among the most significant new features are substantial improvements to Copilot (the integrated AI assistant), strategic changes such as the deprecation of R and Python-based visuals in certain embedding scenarios, and numerous reporting and visualization features requested by the community.

From a technical perspective, these new developments indicate a strengthening of Power BI's integration into the Microsoft Fabric ecosystem and a growing focus on generative AI applied to BI. For example, the presence of a standalone Copilot in the mobile app and the improvements to Verified Answers signal a desire to make conversational data analysis more intuitive and pervasive. At the same time, the deprecation of R/Python visuals in some contexts underscores the commitment to greater security and performance in embedded solutions.

From a functional standpoint, business users will notice benefits such as more interactive and engaging reports (thanks to new visuals, hero images in cards, etc.) and greater immediacy in data querying (thanks to improved Copilot). Analysts will have enhanced tools to create reports faster and with less manual effort (Report Copilot, auto-resize in matrices), while developers will have access to advanced features for managing models and integrations (remote MCP server, TMDL extension for Visual Studio Code, updated connectors).

The November 2025 update improves the Power BI experience on multiple fronts: events and training, stability and security, AI and automation, reporting and modeling features, data integration, and custom visualizations.

References: For an official summary, the Microsoft Power BI blog from November 2025 lists the key points: deprecation of R/Python visuals in "Embed for your customers", enhancement of Copilot/AI (Copilot mobile, Verified Answers), new reporting, modeling, and connectivity options, and visualization enhancements. This overview serves as a guide to exploring each aspect in more detail in the following chapters.

2. Events and Announcements: Fabric Data Days and FabCon 2026

In the second chapter, we delve into the events and announcements highlighted in the November 2025 update. These relate to community initiatives and conferences that enrich the Power BI and Microsoft Fabric ecosystem. In particular, the corresponding slide highlights two key events: Fabric Data Days and FabCon 2026.

Fabric Data Days: Live training and community action

Fabric Data Days is billed as a two-month learning and community event around Microsoft Fabric and Power BI. Kicking off on November 4, 2025, it features a series of live sessions, contests, and networking opportunities focused on data culture. Users can participate in interactive lessons to improve their skills on Power BI and Fabric services, try their hand at QuickViz challenges (likely rapid data visualization contests), and earn discount vouchers for certification exams (PL-300 for Power BI and DP-600 for Fabric, as mentioned).

Practical implications: For analysts and developers, Fabric Data Days is an excellent opportunity to update themselves on the latest Power BI features and improve their analytical skills through hands-on sessions. IT professionals and managers can benefit from networking with the community, discovering use cases and experiences from other companies. Even less technical users can participate in the introductory webinars, gaining greater familiarity with the platform in a guided context.

Tip: During these events, it's a good idea to leverage social channels and dedicated forums (such as the Microsoft Fabric Community and local groups) to ask questions and share learnings. Active engagement offers added value: speakers (including members of the Microsoft team) often provide additional materials and demos that enrich the training.

FabCon 2026: The Power BI and Fabric Community Conference

FabCon 2026 is another big announcement: it's the third edition of the FabCon Americas conference, scheduled for March 16-20, 2026, in Atlanta, GA (USA). FabCon is described as the ultimate community-led event for Power BI, Microsoft Fabric, SQL, Real-Time Intelligence, AI, and Databases. In essence, it's a community-organized but Microsoft-supported event, packed with technical sessions, keynotes, workshops, and entertainment:

      Sessions and Keynotes: Presentations by Microsoft experts and community speakers on advanced topics in BI, AI applied to data, real-time analytics, etc. (e.g., there may be a keynote on the future directions of Power BI).

      Expo Hall and Partner Day: An exhibition area featuring Microsoft partners (solution providers, integrators) and a dedicated partner day where you can connect with vendors and discover complementary products.

      Community lounge & networking: Informal spaces to meet other attendees, exchange ideas, and perhaps ask experts questions 1:1 (the "Ask the Experts" sessions mentioned).

      Power Hour & Data Viz Championship: Special activities such as Power Hour (a live session showcasing creative solutions in one hour) and a global Data Visualization competition: great opportunities to see visualization best practices in action.

      Final party: last but not least, a moment of fun and networking, in this case at the famous Georgia Aquarium, to seal the end of the event in a convivial atmosphere.

Practical implications: FabCon is a must-attend for Power BI superusers and developers who want to go deeper: it offers advanced training and visibility into preview features and innovative ways to use the platform. Business analysts and BI team leaders can also gain valuable insights into adoption strategies and industry trends. Finally, attending in person (or virtually, if online sessions are available) demonstrates a company's commitment to data culture, allowing attendees to bring the acquired knowledge back in-house.

Tip: Registration for FabCon 2026 is now open (with the discount code FABCOMM, which offers a $200 discount on the registration fee, as advertised). If you can't make it to Atlanta in person, check whether recorded or livestreamed sessions will be available; the community often shares key content after the event.

In summary, this chapter highlights how Microsoft offers not only software tools, but also training events and community conferences to support the growth of Power BI users and practitioners. Investing time in these initiatives can accelerate the effective adoption of new features introduced later.

3. General News: Deprecation of R and Python Visuals

Let's now delve into a major change announced in the November 2025 Update: the deprecation of support for visuals created with R or Python in reports embedded for external users through the "Embed for your customers" solution. This chapter explains what this means, why Microsoft made this decision, and what the practical implications are for those who use these types of visualizations.

What is changing and in what context

Embed for your customers (also known as the "app owns data" scenario) is a Power BI embedding mode in which a custom application displays Power BI reports to its end users without them having a Power BI license of their own. In other words, the app handles authentication and "owns" the data, providing users with an integrated experience. A typical example: a company creates a portal for its customers that embeds interactive reports created with Power BI; customers navigate the report without directly logging in to Power BI.
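To make the "app owns data" flow concrete, here is a minimal sketch of how the embedding application would build the embed token request against the Power BI REST API. The workspace and report IDs are hypothetical placeholders, no HTTP request is actually sent here, and the app would first authenticate against Microsoft Entra ID to call the endpoint.

```python
# Minimal sketch of the "Embed for your customers" (app owns data) token flow.
# The endpoint and payload shape follow the Power BI REST API GenerateToken
# operation; the IDs below are hypothetical placeholders.

API_ROOT = "https://api.powerbi.com/v1.0/myorg"

def build_embed_token_request(group_id: str, report_id: str) -> tuple[str, dict]:
    """Build the URL and JSON body for a View-level embed token request.

    The application (not the end user) authenticates and calls this endpoint,
    then hands the resulting embed token to the browser-side embedding SDK,
    so end users never need a Power BI license of their own.
    """
    url = f"{API_ROOT}/groups/{group_id}/reports/{report_id}/GenerateToken"
    body = {"accessLevel": "View"}  # read-only access for external viewers
    return url, body

url, body = build_embed_token_request(
    "11111111-2222-3333-4444-555555555555",  # workspace (group) id - placeholder
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # report id - placeholder
)
print(url)
print(body)
```

After May 2026, a report embedded this way still loads, but any R/Python visuals it contains render blank, which is why the inventory step described below matters.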

Microsoft has announced that starting in May 2026, it will end support for embedding reports and dashboards that contain visuals built in R or Python in two specific scenarios:

      Embed for your customers (app owns data), described above;

      Publish to web, the other public embedding method (when generating a non-authenticated public link to the report).

After the deprecation date (May 1, 2026), reports embedded in these ways will still load, but any visuals based on R or Python code will appear blank. Important: this change does not impact internal embedding scenarios:

      Embed for your organization (user owns data), i.e., embedding reports where the end user logs in with their own Power BI credentials (for example, embedding a report in internal SharePoint), will continue to support R/Python visuals.

      Secure embeds in SharePoint, portals, or similar internal sites are not affected by this deprecation.

In short, the restriction only affects unauthenticated or external app-centric scenarios. The following table summarizes the change:

Embedding Scenario | Description | R/Python visual support after May 2026
Embed for your customers (app owns data) | Custom app reporting for external users (no license) | Not supported (R/Python visuals render blank)
Publish to web (public link) | Report published publicly | Not supported (R/Python visuals render blank)
Embed for your organization (user owns data) | Report embedded for authenticated internal users | No impact (R/Python visuals still work)
Secure embed (SharePoint, portal) | Embed in a secure, authenticated context | No impact (R/Python visuals still work)

 

Why the deprecation? According to Microsoft, the decision is part of its ongoing commitment to providing a secure, scalable, and robust analytics platform. R and Python-based visuals, while powerful, involve running external code within reports. In open contexts such as public embeds or embeds for external clients, this can pose security risks and requires complex supporting infrastructure. Furthermore, the company wants to refine the feature set based on market needs and performance: usage of such visuals in external scenarios is likely limited, while their maintenance cost is high. By focusing on alternative solutions (for example, custom DAX-based visuals or other integrations such as Fabric Notebooks), Microsoft can ensure better performance and a smaller surface area for potential vulnerabilities.

Timing of the deprecation

Microsoft has provided a clear timeline for this transition: the change was announced in November 2025 and takes effect on May 1, 2026. This gives organizations approximately six months from the announcement to take action on their reports.
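As a quick arithmetic check on that window (the exact announcement day is approximated here as November 1, 2025):

```python
# Quick check of the stated migration window: from the November 2025
# announcement (day approximated as Nov 1) to the May 1, 2026 cutoff.
from datetime import date

announced = date(2025, 11, 1)   # approximate announcement date
deprecated = date(2026, 5, 1)   # R/Python visuals stop rendering in these scenarios

days = (deprecated - announced).days
print(f"{days} days (~{days / 30:.0f} months) to migrate")
```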

What users and organizations need to do

The announcement also includes practical recommendations on how to address deprecation:

      Mapping and Assessment: Administrators or BI teams should identify all reports and embedded dashboards targeting external or public customers that use R or Python visualizations. A comprehensive inventory helps understand the extent of the impact.

      Plan the migration: For each identified report, you need to find alternative solutions for R/Python visualizations. This may mean:

o  Replace the visual with a standard Power BI visual (e.g., a native chart or a certified custom visual available on AppSource) that offers similar functionality without running R/Python scripts.

o  Port the R/Python logic to DAX or Power Query where possible, moving the calculation into the Power BI data model.

o  Use Notebooks in Microsoft Fabric: If the R/Python visual was used for advanced analysis that can't easily be replicated in DAX (for example, machine learning or specific statistical graphs), Microsoft suggests considering Fabric Notebooks. In Fabric (the platform that integrates various services, including Power BI), Notebooks allow you to run Python/R code in a controlled context and then optionally integrate the results into a report. This strategy maintains advanced analytical capabilities but moves execution outside of the report embed, into a dedicated environment.

      Update documentation and inform stakeholders: The application development teams performing the embedding, as well as customers/end users if necessary, should be notified. For example, if a certain public dashboard will no longer display a specific analysis, this should be communicated and perhaps replaced with a link to an internal report or alternative document.

      Testing and Quality Assurance: After updating your reports, it is essential to test them in the relevant embed scenarios to ensure that all visuals function correctly and that the user experience is not degraded.
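The "mapping and assessment" step above can be partially automated. The sketch below scans a report's Layout JSON (the file found inside a .pbix archive) for script visuals; the type names "scriptVisual" (R) and "pythonVisual" (Python) are how these visuals commonly appear in exported layouts, so verify them against your own files before relying on this.

```python
# Hedged sketch: inventory R/Python visuals in a Power BI report layout.
# Assumes the common layout shape: sections -> visualContainers, where each
# container's "config" field is itself a JSON-encoded string.
import json

FLAGGED_TYPES = {"scriptVisual": "R", "pythonVisual": "Python"}

def find_script_visuals(layout: dict) -> list[dict]:
    """Return one record per R/Python visual found in the layout."""
    hits = []
    for section in layout.get("sections", []):
        for container in section.get("visualContainers", []):
            config = json.loads(container.get("config", "{}"))
            vtype = config.get("singleVisual", {}).get("visualType")
            if vtype in FLAGGED_TYPES:
                hits.append({"page": section.get("displayName"),
                             "language": FLAGGED_TYPES[vtype]})
    return hits

# Tiny inline example standing in for a real extracted Layout file.
sample_layout = {
    "sections": [
        {"displayName": "Overview", "visualContainers": [
            {"config": json.dumps({"singleVisual": {"visualType": "barChart"}})},
            {"config": json.dumps({"singleVisual": {"visualType": "pythonVisual"}})},
        ]},
        {"displayName": "Stats", "visualContainers": [
            {"config": json.dumps({"singleVisual": {"visualType": "scriptVisual"}})},
        ]},
    ]
}
print(find_script_visuals(sample_layout))
# → [{'page': 'Overview', 'language': 'Python'}, {'page': 'Stats', 'language': 'R'}]
```

Running a scan like this across the reports used in "Embed for your customers" and Publish to web gives the comprehensive inventory the recommendation calls for.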

Practical implications and considerations

For Power BI developers and BI managers with embedded solutions in production, this deprecation is a reminder of the need to actively monitor cloud product roadmaps. Power BI Embedded Analytics is an important subset, and support changes like this are likely to occur as the service evolves. Fortunately, Microsoft has provided adequate advance notice.

For end-user business users using embedded reports, the direct impact should be minimal if report owners take timely action. In some cases, they may notice a change in visual type (e.g., a different chart replacing the previous R-based one), but ideally, the information value will remain the same.

A long-term benefit of this change is potentially improved stability and security for embedded reports: by removing the need to run R/Python code on Power BI servers for public scenarios, potential security vulnerabilities and computational resources are reduced. Additionally, this change could incentivize Microsoft to further enhance Power BI's native visuals to address any gaps, or improve integration with solutions like Python Scripts in Power Query or Notebooks.

Limitations to consider: Obviously, the downside is the loss of flexibility. Some very specific visualizations obtained via R/Python libraries (for example, advanced statistical graphs like genetic heatmaps or custom maps) may not have an immediate counterpart in Power BI. In such cases, one consideration is whether Embed for your customers is still the right scenario: for example, an app could load static images generated elsewhere, or the raw data could be exposed via API and processed by the client. These are more complex solutions, so it's best to evaluate the importance of the visual in question on a case-by-case basis.

In conclusion, the deprecation of R/Python visuals in external embed contexts is a strategic change aimed at strengthening the platform in terms of security and performance. Organizations should take this opportunity to renegotiate their advanced reporting practices, making the most of standard visuals or the new integrated analytics tools offered by Microsoft Fabric. As additional resources, Microsoft encourages consulting the official documentation on Embed for your customers and secure embeddings, as well as reaching out to the community forums or your Microsoft representative for support during the transition.

4. Copilot and AI: Artificial Intelligence at the Service of Data Analysis

One of the cornerstones of the November 2025 Update is the extensive set of improvements and new features in the Copilot and AI area. Microsoft is investing in making artificial intelligence a key element of the Power BI experience, both to facilitate content creation (reports, dashboards) and to enhance data consumption with natural interactions. In this section, we'll explore:

      Introducing standalone Copilot to the mobile app (new preview).

      Updates to the Copilot chat with your data experience in the web/desktop version.

      Improvements to Report Copilot (AI that helps you create and edit reports).

      Feature extensions for Verified Answers.

      Announcement of a remote MCP (Model Context Protocol) server for chat with your data.

Each of these elements contributes to a common goal: making interacting with data in Power BI more natural, immediate, and intelligent.

Standalone Copilot in the Mobile App: Ask Anything, Anywhere

The first new feature is the arrival of a dedicated Copilot in the Power BI mobile app, on both smartphones and tablets. Essentially, the app (on both iOS and Android) will have a button on the home page that opens a chat interface with Copilot, allowing users to ask questions in natural language about their data directly from their phone.

Key Features:

      Integrated in-app chat: Users can type their questions or even dictate them by voice (voice dictation is supported on iOS, i.e., iPhone/iPad, as specified). For example, a traveling executive could ask, "Show me total sales this quarter by product," simply by speaking into the phone.

      Responses with graphs and text: Copilot doesn't just respond with text, but when possible, it directly generates a relevant visualization. For example, if you ask for a KPI, Copilot might automatically create a small bar graph or a card with the number. The user can tap the generated graph to expand it, analyze it further, or interact with it (for example, to see details).

      Citations and further information: An important aspect is that Copilot provides citations or references to the sources of the data used for its response. This means that, for example, if Copilot returns the number of sales, it will display a reference to the report or dataset from which it drew the data. By tapping the citation, the user can open the original report for further information or to check the context. This feature increases confidence in the AI's responses, as it allows the user to trace the source data (which is crucial for making decisions based on those responses).

      Instant insight sharing: If a user discovers something interesting through Copilot (say, a graph showing a decline in sales in a region), they can share it right away: the app lets you share the generated visual (presumably via screenshot or as a copy of the graph in the message) or copy the text of the response, making it easy to send via email, chat, or insert into presentations on the fly.

      Availability: This feature is listed as Preview and will be available in the weeks following November 2025 via an app update on the stores. Therefore, it requires no special configuration other than updating the app when the update is released.

Context of use: Imagine a business user say, a sales manager who's out of the office. Traditionally, they'd have to open their laptop, connect to Power BI Service, and search for the right report to find a certain number. Now, they can simply open the app on their phone and ask Copilot directly. Within seconds, they'll get the answer, perhaps with a small chart. This makes Power BI much more accessible on the go. For an analyst or data scientist, the feature is convenient for quickly cross-checking data or gathering insights without having to manually build queries on the fly.

Technical considerations: It is mentioned that using Copilot requires the data to be hosted on Fabric capacities (i.e., in an AI-enabled environment). Additionally, semantic models must be made AI-ready through techniques such as enabling Verified Answers and providing context (e.g., table descriptions, synonyms, and well-curated Q&A). If the model is not AI-ready, the app should alert the user with clear messages. This means that to take full advantage of Copilot Mobile, the tenant or organization must have enabled the AI previews and prepared the datasets according to the guidelines (an IntendedUse field in the dataset, Q&A enabled, etc.).

Current limitations: Since it's a preview, some features may still be missing. For example, dictation is iOS-only at launch. Furthermore, it's likely that Copilot Mobile will primarily support reading already modeled and published data, not editing or creating content (you can't create a report from your phone with Copilot; that's more a function of Report Copilot on the desktop/service side). It's a consumption and query tool, not an authoring tool.

Standalone Copilot Web Updates

Alongside the mobile launch, Microsoft announced improvements to the Copilot chat with your data experience in the web version (the Copilot launched in Power BI Service and Desktop a few months earlier). These improvements aim to make using Copilot simpler and more intuitive. Some noteworthy new features include:

      Automatic data source selection: Previously, when asking Copilot a question online, if there were multiple possible relevant datasets or reports, the system would ask the user to choose one from a list before proceeding. This step has now been streamlined: Copilot automatically tries to identify the best data source and selects it in the background. Only if the question is too general or ambiguous will Copilot ask for clarification (for example, if you ask "Show me sales," and you have separate datasets for different businesses, it might ask "Do you want Sales Europe or Sales USA data?"). This automation makes the experience more seamless: in many cases, you get the answer immediately without additional clicks, at the cost of waiting an extra second or two for processing (the extra time allows Copilot to reason about the source choice, but it's an acceptable trade-off to avoid interruptions).

      Streamlined search results: When you ask a query like "Find 2023 sales reports," Copilot could return a list of relevant items (reports, dashboards, workspaces). This list is now presented in a more streamlined, text-based format, making it easier to read, rather than cluttered graphical tabs. This helps users quickly locate what they're looking for.

      Direct access from the Home page: By the end of November, Copilot will be accessible directly from the Power BI Service home page with immediately available question input. This means that as soon as you log in to the Power BI portal, you'll find a prominent "Ask your data a question..." box. If you prefer the classic home interface, you can always switch back (there appears to be a toggle for the home page style).

      Smart suggested prompts: Copilot will start showing question suggestions based on items you've recently opened. For example, if you recently opened a sales report, Copilot might display a prompt like "Ask: What was the best product in the Q3 sales report?" This feature guides the user through the data without even having to fully formulate the question, reducing the blank page problem.

      Improved item attachment to questions: If you want to specify a specific report or dashboard as the context for your question (rather than relying on automatic selection), the item attachment interface is now more convenient. Items (reports, datasets, etc.) are sorted by most recent and filterable by type, with a design consistent with other parts of Copilot and Microsoft 365. This makes it easier to manually set the context when needed.
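As an illustration only (this is not Microsoft's actual algorithm), the automatic-source-selection behavior described in the first bullet, where Copilot picks a clear best match on its own and asks for clarification only when the question is ambiguous, can be sketched as a toy keyword-overlap scorer:

```python
# Toy illustration of "auto-select the best source, ask only on ambiguity".
# The scoring here (word overlap between question and dataset name) is a
# deliberate simplification; the real service uses far richer signals.

def pick_source(question: str, datasets: list[str]) -> str:
    words = set(question.lower().split())
    scores = {ds: len(words & set(ds.lower().split())) for ds in datasets}
    best = max(scores.values())
    winners = [ds for ds, s in scores.items() if s == best]
    if best == 0 or len(winners) > 1:
        # Ambiguous question: fall back to asking the user, as Copilot does.
        return f"Which dataset did you mean: {', '.join(winners)}?"
    return f"Using '{winners[0]}'"

print(pick_source("show me europe sales", ["Sales Europe", "Sales USA"]))
# → Using 'Sales Europe'
print(pick_source("show me sales", ["Sales Europe", "Sales USA"]))
# → Which dataset did you mean: Sales Europe, Sales USA?
```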

Practical implications: For an analyst using Copilot in Power BI Service, these improvements mean less friction. They can query data more quickly, even for simple tasks like finding where a piece of information lives, thanks to the built-in search function. For a new user or a business user, the question suggestions are an invitation to use Copilot as a starting point for exploration: they make the experience more guided. A concrete example: Mario, a manager, opens Power BI Service to see sales trends. Instead of navigating through dozens of reports, he notices the Copilot box with a suggestion: "Want to know the trend by region over the last month?" He clicks and receives the answer with a graph and a link to the main report, which he can open if he needs to dig deeper. He has saved time and gained immediate insight.

From a technical/administrative standpoint, enabling these features requires the Power BI administrator to activate Copilot (which is still in preview in 2025 and must be explicitly enabled at the tenant level). Furthermore, search and suggestions require the service to properly index content. Security-conscious companies can rest easy because Copilot respects permissions: a user can ask "find sales reports," but will only see those to which they have access.

Improvements to Report Copilot

Report Copilot is the feature that helps automatically create or modify reports using AI, based on user requests. Introduced a few months earlier in preview, it will be further enhanced with the November 2025 update, making it more capable of understanding user intent and producing visuals that meet expectations.

Three specific improvements are mentioned:

      Smarter visual recommendations: Copilot now automatically chooses the most appropriate visual type for the data the user wants to display. For example, if I ask "Create a chart of monthly sales by product category," Copilot will decide whether to use a histogram, a timeline, a clustered column chart, etc., based on best practices. Previously, it could use more generic choices; now it's smarter in presenting the information in the most effective way.

      Expanded visual library: Copilot supports more visual types than before. This means it can also use additional visualizations (perhaps it now includes advanced maps, gauges, etc., where previously it was limited to bars/lines/pies). This gives the AI more options to compose reports that best meet needs; for example, if the data is geographical, it could directly propose a map, which it couldn't do before.

      Better context understanding: Copilot can interpret more complex commands and nuanced requests. You can describe in more detail what you'd like to see, and it can grasp the details. For example: "Create a report with total sales, average margin, and a chart showing monthly trends, filtered to the last year." Copilot should now understand all the components (metrics, visual type, time frame) and generate a report page with these characteristics. In the past, it might have been confused by or ignored some of the instructions.

The claimed result is that Copilot can generate entire pages of reports in seconds with more accurate and useful results than the initial version. And it's available on both the Power BI service (in the browser) and Power BI Desktop, so both traditional report authors and those working on the web service can use it.

Practical example: A data analyst needs to quickly prepare an initial report for Q4 sales. Instead of starting from scratch, he opens Power BI Desktop, activates Copilot, and writes: "Create a page with: a card with Q4 sales totals, a bar chart of sales by region, and a table with the top 10 products by revenue." In a few seconds, Copilot inserts these elements, already configured in the open dataset. The analyst can then refine the details (format, additional filters), but has saved substantial time in the basic creation. This allows for faster iteration with business colleagues: you can generate a draft report, show it, and then adjust or deepen it based on feedback.

From a functional standpoint, Report Copilot further lowers the barrier to creating complex reports. A business user with no DAX knowledge might be able to generate meaningful analyses using text descriptions alone. However, it's still important to understand what to ask for: AI doesn't replace business logic. Furthermore, for Copilot to be effective, datasets must be well-modeled (good relationship coverage, pre-built measures, etc.), otherwise the AI will have little to work with.

Considerations: Report Copilot is currently evolving; it's always advisable to carefully review the generated reports. AI is powerful, but it can lack the critical sense to understand whether a graph is relevant or showing misleading correlations. This leaves the analyst in the loop as a supervisor. Over time, however, these improvements could lead to a future where drafting a report is a matter of description, and the analyst spends most of their time interpreting and narrating, rather than manually constructing graphs.

Improvements to Verified Answers

Verified Answers is a feature related to Copilot/Q&A in Power BI that allows you to pre-set validated answers to specific data questions. Essentially, a report creator can "certify" that certain questions correspond to specific visuals and numbers, so that when a user asks one of those questions, they get a reliable, validated result. The November update introduces several improvements that make this feature more flexible and powerful:

      Full visual state inheritance: Previously, saving a verified answer would capture any filters currently applied to that visual, but not necessarily other state such as slicers or broader contexts. Now, a verified answer retains the entire visual state, including filters applied via slicers, cross-highlighting, drill-throughs, etc. For example, if I have a page with year and region slicers and generate a verified answer to "What is the turnover in Europe in 2024?" with Year=2024 and Region=Europe selected, that answer will retain those selections. Later, when a user asks "Turnover in Europe 2024," Copilot will return the appropriate visual already contextualized to Europe and 2024, without the need to reset filters.

      Support for additional visual states: The announcement explicitly lists the additional states now supported in verified answers:

a)    Slicer selections (already mentioned),

b)    Field parameter choices (e.g., if the user has selected a [Month/Quarter] view via a field parameter, that choice is now taken into account),

c)    Cross-filters and cross-highlighting (when clicking an element in one chart filters or highlights another),

d)    Drill-through filters (when you drill through to a detail page).

This means that almost any condition that influences a visual can be part of a verified response. The result is that AI better understands what to show with the right context when recalling that response.
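Conceptually, a verified answer can now be thought of as the visual plus a snapshot of every state listed above. The following Python sketch is illustrative only, not an official Power BI API; all names and structures are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical model: a verified answer = a saved visual plus the full
# interaction state that was active when it was captured.
@dataclass
class VisualState:
    slicer_selections: dict = field(default_factory=dict)    # e.g. {"Year": 2024}
    field_parameters: dict = field(default_factory=dict)     # e.g. {"TimeGrain": "Month"}
    cross_filters: dict = field(default_factory=dict)        # filters from clicked visuals
    drillthrough_filters: dict = field(default_factory=dict)

@dataclass
class VerifiedAnswer:
    question: str
    visual_id: str
    state: VisualState

    def effective_filters(self) -> dict:
        """All conditions the answer will be replayed with."""
        merged = {}
        for part in (self.state.slicer_selections,
                     self.state.cross_filters,
                     self.state.drillthrough_filters):
            merged.update(part)
        return merged

answer = VerifiedAnswer(
    question="What is the turnover in Europe in 2024?",
    visual_id="turnover_card",
    state=VisualState(slicer_selections={"Year": 2024, "Region": "Europe"}),
)
print(answer.effective_filters())  # {'Year': 2024, 'Region': 'Europe'}
```

The point of the sketch is that the stored answer carries its own context, so replaying it never depends on the filters the user happens to have set at question time.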

      More configurable filter combinations (3-10): When configuring a verified response, the report creator can define a few filter variations for which that response is valid. For example, they might say: This visual is the answer if the user asks for sales filtered for any combination of (Year=2022 or 2023) and (Region=Europe or USA). Previously, there was a limit of 3 predefined filter permutations; now, this goes up to 10 permutations. This allows you to cover more complex use cases, such as multiple years x multiple regions x multiple products in various combinations, if you know those questions are common. The observation was that most real-world questions have 1-3 filters, but some have more, so this update aims to cover those needs as well.
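To see why the permutation limit matters, the sketch below (plain Python; the helper name is hypothetical) enumerates the filter combinations a verified answer would have to cover. Two years × two regions already yields four permutations, which the old limit of 3 could not accommodate but the new limit of 10 can:

```python
from itertools import product

def filter_permutations(filter_options: dict, limit: int = 10) -> list:
    """Enumerate every combination of the given filter values,
    rejecting configurations that exceed the permutation limit."""
    keys = list(filter_options)
    combos = [dict(zip(keys, values))
              for values in product(*filter_options.values())]
    if len(combos) > limit:
        raise ValueError(f"{len(combos)} permutations exceed the limit of {limit}")
    return combos

# (Year=2022 or 2023) x (Region=Europe or USA) -> 4 permutations
combos = filter_permutations({"Year": [2022, 2023], "Region": ["Europe", "USA"]})
print(len(combos))  # 4
```

Adding a third dimension (e.g., two products) would push this to 8 permutations: still within the new cap of 10, but impossible under the old cap of 3.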

      Improved filter reliability: Behind-the-scenes improvements have been made to the mechanism by which Copilot matches the user's question to the correct verified answer and applies filters. This has essentially reduced the number of cases where the AI misinterprets intent or forgets a filter. This ensures that if an appropriate verified answer exists, it is used correctly and with the correct filters applied, reducing discrepancies between the question asked and the displayed answer.

      Support for new visual types (Card and Azure Maps): With the introduction of the new Card visual (which we will discuss in the next chapter), it is now possible to use it in the Verified Answers mechanism as well. The same goes for Azure Maps. Previously, some visualizations weren't supported in verified answers; now the list is expanded, allowing for richer answers (for example, you might want the question "show sales by city" to return a map rather than a table, and this is now possible if you configure it as a verified answer).

Why is this important? These improvements ensure that verified answers are truly integrated into the user experience, faithfully reflecting the state the user expects. In the past, there were complaints like: "When I ask a question via Q&A, the number I get doesn't take into account the filter I had in the report." Now, thanks to state inheritance, the answer appears consistent with what the user is seeing and with their interactions. This increases trust in using Q&A/AI: the user sees that the AI doesn't give random numbers, but respects the filters and intent.

Use case: A report created by an analyst might have a built-in set of frequently asked questions. Example: In the Customer Satisfaction 2025 report, the analyst sets up verified answers for questions like "What is the average CSAT score for [Product] in [Region]?" with corresponding visuals. A manager opens the report, selects product X from the slicer, and then asks Copilot (or the Q&A box) "What is the average CSAT?" Copilot recognizes that there is a verified answer for this question (covering the context of the selected product, region, etc.) and directly displays the predefined visual. The manager gets the answer immediately and knows it has been verified by the analyst, so they consider it trustworthy and contextualized.

For report authors, managing Verified Answers is now more laborious (you have to consider more states), but also more powerful. It almost becomes a job of conversational experience design: you have to predict what questions users might ask and ensure you offer prompt and appropriate answers. It's a new field of BI design, straddling data analysis and conversational UX.

Remote Power BI Model Context Protocol (MCP) Server

The latest news in the AI space is a feature that is very much geared towards developers and integrators: the Remote Power BI Model Context Protocol (MCP) Server. This is a server component (coming soon, in preview) that allows you to enable the chat with your data functionality outside of Power BI, via agents or custom applications, all in a secure and controlled way.

Simply put, Microsoft is enabling developers to build their own custom Copilots that interact with Power BI models. To do this, the MCP server offers three key tools to external AI agents:

      Get Semantic Model Schema: An API that allows the agent to retrieve the structure of the Power BI semantic model (dataset). This allows the agent to understand the tables, columns, measures, and relationships: in short, the vocabulary and structure of the data.

      Generate Query: A service that uses Power BI Copilot logic to generate a DAX query from a natural language question, applying best practices. So, if the user asks "What is the total sales per product this year?", the agent can call this tool, which will return an appropriate DAX query (e.g., CALCULATE(SUM(Sales[Amount]), YEAR('Date'[Date]) = 2025) in pseudocode).

      Execute Query: A mechanism to actually run the DAX query against the Power BI model and get the results.

By combining these three capabilities, an external agent can converse with the data: it understands the question, translates it into DAX, runs DAX on the dataset, and returns an answer (which can then be further processed and presented as desired).
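As a rough illustration of that loop, the following Python sketch wires the three tools together. The tool bodies are stand-in stubs returning canned data; the real MCP endpoints, signatures, and payloads are not documented in the announcement, so everything below is hypothetical except the three-step flow itself:

```python
# Stubbed sketch of the three MCP tools, wired into a minimal agent loop.

def get_semantic_model_schema() -> dict:
    """Tool 1: return the model's vocabulary (stub with canned data)."""
    return {"tables": {"Sales": ["Amount", "Product", "DateKey"]},
            "measures": ["Total Sales"]}

def generate_query(question: str, schema: dict) -> str:
    """Tool 2: turn a natural-language question into DAX (stub).
    A real implementation delegates to Copilot; here one case is hard-coded."""
    if "total sales" in question.lower():
        return 'EVALUATE ROW("Total", [Total Sales])'
    raise ValueError("question not understood")

def execute_query(dax: str) -> list:
    """Tool 3: run the DAX against the model (stubbed result set)."""
    return [{"Total": 1_250_000}]

def chat_with_data(question: str) -> list:
    schema = get_semantic_model_schema()    # 1. understand the model
    dax = generate_query(question, schema)  # 2. translate the question to DAX
    return execute_query(dax)               # 3. run it and return the rows

print(chat_with_data("What is the total sales per product this year?"))
```

An external agent (a Teams bot, a custom portal) would implement exactly this orchestration, replacing the stubs with authenticated calls to the remote MCP server.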

How and where can it be used? Microsoft suggests that any application hosting an MCP client can implement these tools. For example, Visual Studio Code could integrate it (perhaps for developers testing conversational queries directly in VS Code). Furthermore, non-Microsoft clients can also be supported, using Service Principal authentication over Entra ID (formerly Azure AD). This means that third-party companies could create custom applications, bots, and chat services that connect to this MCP server to provide answers about company data. A concrete example: a company could develop an internal bot (on Teams, or on its own portal) where an employee asks "show me the performance of KPI X this month" and the bot uses the Power BI MCP server to return the value and perhaps a chart or image.

Security: Everything is authenticated with Microsoft Entra ID (formerly Azure AD), so only legitimate agents with the appropriate permissions can access data through the MCP server. This keeps the data protected according to the roles and permissions already in place in Power BI.

Implications for developers: With this, Power BI developers and architects have a new way to integrate Q&A and Copilot capabilities into custom flows, extending the analytics experience beyond Power BI itself. Until now, asking the data a natural language question required using the Power BI interface (the Q&A visual or Copilot). With the MCP server, you could potentially have Q&A integrated into a corporate smart speaker, or within Microsoft Teams through a bot. This opens the door to ubiquitous conversational analytics scenarios.

Status and considerations: At the moment (end of 2025) it is in preview and available "soon," so it's probably in private testing or will be released shortly for Power BI Premium users. It will almost certainly require Premium or Fabric capacity, since we're talking about queries on models (i.e., datasets that support multiple concurrent queries, etc.). Furthermore, using these tools presupposes that the models are well described: the agent will generate sensible DAX only if the model is organized, with intuitive names and perhaps predefined measures. Otherwise, it risks returning incorrect or empty results.

To conclude this substantial chapter, we can say that Copilot and Power BI's AI capabilities in November 2025 represent a significant leap forward. The aims are to:

      Bring AI everywhere (on your phone, inside and outside Power BI),

      Make the experience simpler and more integrated (fewer clicks, more context retained),

      Give more control to power users (customizable verified answers, developer integration via MCP).

To take advantage of these innovations, companies must prepare their data and models (so that Copilot works well) and train users on these new capabilities. An informed user who knows they can ask Copilot questions on their phone, or that they can trust verified answers, will benefit enormously in efficiency. On the other hand, expectations must also be managed: AI does not replace data governance; on the contrary, it strongly requires it (clean datasets, certified measures, correct permissions, etc. are prerequisites for AI to truly shine).

Helpful references: For further information, Microsoft provides documentation on how to create reports with Copilot and prepare data for AI and Verified Answers. Developers can read more technical details about the MCP server in the dedicated documentation.

 

5. Improved reporting views and functionality

Chapter 5 focuses on what's new in Reporting, namely those features that improve the experience of creating and viewing traditional reports in Power BI. The November 2025 update includes several notable improvements:

      The option to automatically expand the columns of a matrix to fill the available space.

      The new Card view is becoming generally available with advanced features like hero images and collage layouts.

      Image view enhancements with new styles and interactive states.

      An integrated enhancement with Fabric: the OneLake Catalog now supports selecting user data functions for translytical workflows (in preview).

Let's analyze each of these entries, explaining the context and practical usefulness.

Automatic column expansion in matrices (Grow to fit)

Power BI tables and matrices are essential tools for displaying detailed or aggregated data. Often, a matrix (a pivot table with rows and columns) doesn't completely fill the available horizontal space in the report: perhaps there are only a few columns with fairly narrow data, leaving a lot of empty space on the right. In the past, the report designer had to manually resize the width of the columns or the visualization to achieve an aesthetically pleasing result.

With the October 2025 update, Microsoft introduced a feature called Grow to fit for tables, and now in November 2025 it is extending it to matrices as well.

What does Grow to Fit do? In short:

      When enabled, the view automatically expands columns to fill all available space. The algorithm takes the extra space in the view and distributes it evenly across the existing columns.

      This only happens if the total column width of the content is smaller than the width of the visual. If, on the other hand, the content is too wide to fit, the default behavior remains (a horizontal scrollbar appears if necessary).

      It helps to avoid large empty areas and improve readability, without having to manually intervene on each column.
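Microsoft doesn't document the exact distribution algorithm, so the described behavior can only be sketched; the Python below (hypothetical function name, even split of the leftover width) captures the two rules above:

```python
def grow_to_fit(visual_width: int, column_widths: list) -> list:
    """Sketch of the Grow-to-fit behavior described above: if the columns
    are narrower than the visual, distribute the extra space evenly;
    otherwise leave widths alone (a scrollbar handles the overflow)."""
    total = sum(column_widths)
    if not column_widths or total >= visual_width:
        return column_widths  # content already fills (or overflows) the visual
    extra = (visual_width - total) // len(column_widths)
    return [w + extra for w in column_widths]

# 800px visual, columns covering only 500px -> each of 3 columns gains 100px
print(grow_to_fit(800, [100, 200, 200]))  # [200, 300, 300]
# Content wider than the visual -> widths unchanged, scrollbar takes over
print(grow_to_fit(400, [300, 200]))       # [300, 200]
```

The real feature may weight columns differently (e.g., proportionally to content), but the on/off condition is the same: expansion only happens when there is slack to distribute.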

How do I enable it? In the matrix formatting options, under "Column Headers" > "Options," there's an auto-size toggle and a resizing behavior setting, which now includes Grow to fit. Enable this option and the matrix immediately adjusts. The announcement gives a concrete example: if you have a matrix that's, say, 800px wide but the columns together only cover 500px, enabling Grow to fit will allow them to expand proportionally to 800px, eliminating any empty spaces and that unsightly feeling of a "small" table inside a large box.

Previous bug fix: In the initial October implementation, there was a bug noted: an unwanted horizontal scrollbar would appear even when it wasn't needed. This was fixed in November, so if Grow to fit is enabled and there's enough space, no scrollbar will obstruct the view (the scrollbar will only appear if the content actually exceeds the available space).

Practical implications: This improvement, while small, is very useful for report designers. Analysts and report developers can now ensure, with a simple toggle, that their pivot tables appear clean and utilize space optimally, regardless of the filtered or inserted data. This reduces the number of cases where a business user sees a report and thinks something is missing because there's a large gap on the right; that gap will now be filled by the wider columns, improving aesthetics and readability (the content is spread across more space).

One detail: Grow to fit is particularly useful in scenarios where the columns of a matrix are dynamically generated (for example, columns per year, if I have few columns in certain filters and many in others). Previously, the analyst had to choose: either size for the maximum possible case (with the risk that in minor cases everything will be compressed to the left), or size for the most common case and accept that when there are many, scrolling will appear. Now, with Grow to fit, you can leave management to the system, which adapts scenario by scenario. More automation, less manual maintenance.

Note: This option doesn't change the data, just the presentation; its effect is immediately visible and doesn't affect exports (e.g., if you export to Excel, the columns will have their base size, not the expanded width). This option applies to interactive visualization in Power BI only.

For more details, see the updated documentation on tables and matrices in Power BI, which includes the Auto-size width and Grow to fit options.

New Card View

A Card is a widely used Power BI visualization type for displaying a single key value (a KPI, a summary number). Until now, there was the classic Card visual that displayed a value with, perhaps, a label. However, Microsoft has had a new, more flexible and modern version of the Card in preview for some time. With the November 2025 update, this new Card reaches General Availability (stable and production-ready), bringing with it all the advanced features developed.

The features of the new Card visual are quite rich:

      Hero Images: The card now supports the addition of a hero image. In design, a hero image is a visually striking image, typically used as a focal point. Within a card, a hero image can be a logo, a product photo, or an icon representing the KPI. It serves to provide immediate visual context to the number. For example, if the card displays 95% Customer Satisfaction, you could add a customer satisfaction icon or the company logo to make the box more eye-catching. It complements the classic callout image (which was a smaller image associated with the value). There are now two types: the hero image (prominent) and the callout image (smaller, alongside the value).

      Flexible image sources: The image (both hero and callout) can be uploaded manually, provided via URL, or even taken from a column in the dataset. This last option is very powerful: it allows for dynamic cards with images that vary depending on the data. For example, if each product has an image URL in the dataset, a Card showing sales for a certain product could automatically display that product's photo as the hero image.

      Control image positioning and adaptation: You can define how the image fits on the card (filling, alignment, cropping, etc.) and you have styles for borders, effects, filters, therefore highly customizable rendering.

      Dynamic collage layout: The new Card introduces a layout called Collage, where the three elements (main value, reference label, and hero image) can be arranged with different priorities. Essentially, you can decide to highlight one of the three in a larger space and the other two in smaller spaces within the card. For example, you could give ample space to the number and image and make the label (e.g., "Q4 Sales") smaller. Or, if the focus is on the image (say, you're showing a photo of a top seller), you can make the image take up the majority while the value (e.g., revenue) and label remain secondary. The collage layout offers flexibility beyond the classic uniform block layout.

      Customizable component order: You can choose the display order of the three components (Value, Reference Label, Image). For example, you can decide that the image appears on top, the value in the middle, and the label at the bottom, or the value on top, with the image in the middle semi-transparent, etc. This means that the report designer has complete control over how the card appears, always putting what matters most to the audience at the center.

      Unified and modern formatting experience: The new Card has been designed to have a formatting panel that is consistent with other modern visuals, with clear categories and consistent naming (for example, similar to the new button slicer or other visuals introduced in 2024). This makes options easier to find and also ensures consistency across visuals: paragraph styles (font, colors, background) work the same way on cards as on other elements, making life easier for formatters. They've also updated the default card styling, so cards look nicer out-of-the-box (modern colors and fonts).

      Compatibility with the old card: Please note that the old Card visual remains available if you prefer it or need it for backwards compatibility (perhaps there are reports that used non-migratable customizations). However, the push is clearly toward using the new one, which offers much more.

Practical implications: For report designers, this new card allows you to create much more attractive and informative KPI dashboards. For example, a management dashboard can have different cards: YTD revenue with the company logo as the hero image, customer count with perhaps an icon of a group of people, customer satisfaction with a smiley emoticon, and so on. Instead of just big numbers and text, each key indicator can now be visually contextualized. This captures attention and also helps business users immediately connect the number to the entity (because a picture is often worth a thousand words in terms of recognition).

Furthermore, the ability to take images from the dataset opens up dynamic scenarios: imagine a Card showing Top Performer of the Month with the photo of the number 1 seller and their turnover. There is no need to manually update the image: if the dataset provides it, it will change on its own based on the data.

Accessibility and consistency: The effort to standardize controls and layout also means that new cards meet accessibility standards (contrast, image alt text, etc.) and are consistent with the overall themes. This helps organizations that must follow brand guidelines: with theme generation (which we'll discuss in the visualizations chapter), it's easier to ensure a card adheres to the corporate colors and integrates well with other objects.

Practical use case: A financial analyst is preparing a quarterly report. He decides to include a summary page at the beginning with the key KPIs: revenue, operating margin, costs, etc. Using the new Cards, he includes not only the numbers but also a department logo on each card for a pleasing visual effect (a mini trend chart could in theory be uploaded as a hero image, though that is less typical). The margin, perhaps, is illustrated with a growth-chart icon, and costs with a piggy bank icon. These touches ensure that when the manager looks at that page, he immediately associates the symbol with the concept and then reads the number: the information is absorbed more quickly than if it were just text.

On the technical side, the visual reaching GA means it's considered stable, so it can be used in production with the assurance that any major bugs have been fixed and that it will remain forward-compatible. To take advantage of it, simply update Power BI Desktop to the November 2025 release (or later) and you'll find the new Card in the visuals pane (perhaps replacing the old one as the default option).

Those interested in learning more can consult the detailed announcement on the blog and the documentation. Microsoft obviously encourages you to use the new Card and share feedback. The documentation provides step-by-step examples on how to add hero images and customize the layout.

Enhanced image visualization

Another improvement in the reporting section concerns the Image visual. Until now, the image object in Power BI was pretty basic: you inserted an image (physically or via URL) and displayed it. There weren't many styling options, nor any interactivity beyond the hyperlink, if set.

With the update:

      Advanced formatting options are added for the image, similar to other elements: for example, you can set a background for the image, a border with color and thickness, shapes (rounded corners or special shapes), and effects like drop shadows. This allows you to better integrate decorative images or logos into the report design without having to pre-process them externally (previously, you might have manually added a background to the image file itself; now you can do it in Power BI).

      Even more interestingly, the image now supports different states: Default, Hover, Pressed (and a general All state). This means the image can behave like a button: you can, for example, have it change its outline color or appear slightly brighter when hovered (Hover), and have another effect when pressed (clicked). This effectively turns the Image visual into a sort of custom button. In fact, many people were already using images as buttons (to navigate between pages, to activate bookmarks, etc.), but had to settle for a single static state; now they can make that experience richer by giving the user visual feedback when interacting with the image.

      Multiple image sources: As with the Card, the Image visual now appears to allow URLs or data as the image source (not just manual uploads). This means the Image visual can be linked to a model field: e.g., if I have a table with image URLs, I can bind it and have the visual dynamically display the image corresponding to the context (filters). It's not entirely clear how extensive this is, but the text "Expanded image sources (URL, data column, upload)" suggests so.

      Fit options: The options for how the image fits its container have been expanded: Fit, Fill, Center, Stretch. This lets you decide whether the image should fill the entire space by cropping any excess (Fill), maintain the aspect ratio (Fit, optionally centered), or stretch to fill the entire space (Stretch). Again, this was very rigid before (previously, I believe, there was only fill vs. fit).
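The geometry behind Fit, Fill, and Stretch is standard image-scaling math; a minimal sketch (illustrative Python; the actual Power BI rendering rules may differ in detail) makes the difference concrete:

```python
def scale_image(img_w: int, img_h: int, box_w: int, box_h: int, mode: str):
    """Sketch of the Fit / Fill / Stretch sizing modes described above."""
    if mode == "stretch":  # ignore aspect ratio, occupy the whole box
        return box_w, box_h
    scale_fit = min(box_w / img_w, box_h / img_h)   # whole image visible
    scale_fill = max(box_w / img_w, box_h / img_h)  # box fully covered (cropping)
    s = scale_fit if mode == "fit" else scale_fill
    return round(img_w * s), round(img_h * s)

# A 400x200 image in a 200x200 container:
print(scale_image(400, 200, 200, 200, "fit"))   # (200, 100): letterboxed
print(scale_image(400, 200, 200, 200, "fill"))  # (400, 200): cropped to the box
```

In Fit mode the smaller scale factor wins so nothing is cut off; in Fill mode the larger one wins so no background shows through; Stretch simply distorts the image to the container.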

Implications and Uses: With these new features, a report designer can achieve many things:

      Build eye-catching navigation menus using images as buttons that highlight feedback when hovered over (imagine an icon menu for navigating report pages: now on hover you can make a shadow appear or slightly change the icon to indicate it's clickable).

      Show dynamic images, for example in a product report, selecting a product from the filter displays the product photo in the Image visual (which gets the URL from the filtered data).

      Enhance your brand identity in your reports by adding your company logo in a corner with a well-defined style (perhaps a rounded edge or shadow), or by adding decorative background images with transparency. All directly in PBI without external editing.

For end users, this means more interactive and polished reports. For example, if they click an image of an arrow to move to the next page, the arrow now animates on click, giving the feeling of an application rather than a static PDF. These are small details, but they matter in the user experience, making Power BI feel more like a professional app and not just a "sheet of charts."

Limitations: Despite the improvements, the Image visual isn't a "data" visual in the strict sense (it doesn't aggregate data, it only displays images). Furthermore, using images from the model can inflate the model's size if not managed properly (in general, it's better to use URLs to images hosted on a CDN or SharePoint rather than importing them all into the .pbix). Make sure images referenced via URL are accessible: when the report is viewed, the URL must be public or protected by authentication that Power BI can use.

OneLake Catalog User Data Functions in Translytical Flows

This part is a little more technical and tied to the Fabric ecosystem. OneLake is the unified data lake within Microsoft Fabric, and the OneLake Catalog is the centralized data catalog where one can register various assets, including user-defined data functions.

The update states that the OneLake Catalog now supports selecting user-defined data functions in translytical flows. Let's try to understand this in simple terms:

      Translytical flows is a Fabric term for processes that combine transactional and analytical operations (typically in the data engineering space on Fabric). For example, in a Fabric data pipeline, you might want to apply custom functions.

      User data functions, in this context, are presumably definitions of transformations or calculations (like functions in SQL, Spark, or other languages) that a user has created and saved for reuse.

Before this update, finding and using these functions in the OneLake catalog was likely limited. Now:

      You can browse and search for user functions in the OneLake catalog when building a flow that requires one.

      You can see the details of each function and filter the list (for example, by my functions vs. company-approved functions).

      The context is when you are selecting a function for a Data function action in a data flow (for example, in Data Factory inside Fabric, you integrate the function into the pipeline).

This feature is in preview.

Practical implications: This is primarily aimed at data engineers or developers working on Fabric and integrating the results into Power BI. The connection with Power BI is that the platform is tightly integrating with OneLake (the data lake behind the datasets). Having user functions at hand means reusing logic defined elsewhere within your own processes. For example, if someone on the team wrote a function to calculate a customer's tax age and published it to the catalog, I can now search for and apply it in an ETL flow on Fabric without having to rewrite it.

For a pure Power BI user, this update might go unnoticed, as it affects data preparation more than the report experience. However, at a strategic level, it signals Microsoft's intention to unify the experience between Power BI and the rest of Fabric: using catalog functions in a report isn't yet a concept, but perhaps it will become possible in the future (for now, it remains within pipelines and dataflows).

Example: A company has a data science team that created a function in Spark to classify customer loyalty based on history. This function has been saved in the OneLake Catalog. Now a data engineer preparing a fact table for Power BI can drag an Apply function operation into their Fabric pipeline, choose the customer_loyalty_score function from the catalog (finding it by searching the integrated catalog), and apply it to the customer data in the pipeline, generating a field that then goes directly into the ready-to-use Power BI dataset. Without that integration, they would have had to know where the code resides, replicate it, and so on.
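To make the example concrete, here is the kind of reusable logic the customer_loyalty_score function might encapsulate (plain Python for illustration; the Fabric registration syntax is omitted, and the thresholds and labels are assumptions, not taken from the announcement):

```python
from datetime import date

def customer_loyalty_score(order_count: int, last_order: date,
                           today: date) -> str:
    """Classify customer loyalty from purchase history.
    Thresholds below are hypothetical, chosen only for illustration."""
    days_since = (today - last_order).days
    if order_count >= 10 and days_since <= 90:
        return "loyal"
    if order_count >= 3 and days_since <= 180:
        return "regular"
    return "at risk"

# Applied row-by-row in the pipeline, the same certified logic feeds the
# ready-to-use field in the Power BI dataset.
print(customer_loyalty_score(12, date(2025, 10, 1), date(2025, 11, 15)))  # loyal
```

The value of publishing this to the catalog is that every pipeline applies the same thresholds: nobody re-implements "loyal" with slightly different cutoffs.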

Benefits: reuse, consistency (everyone uses the same certified logic), and governance (only approved functions appear if marked, thanks to filters like "Endorsed in your org").

From a functional perspective, for a BI manager this potentially means fewer discrepancies between metrics computed by different teams: if there's common logic, it's encapsulated in a catalog function and reused. This improves the reliability of the data that ultimately appears in reports.

Note: This is a very Fabric-centric feature, which further suggests Microsoft is pushing Power BI users toward Fabric adoption (which includes data engineering, data factories, etc.). For companies still using only standalone Power BI, this might be a sign that Fabric is worth exploring to get the most out of the integrated tools.

In this chapter on new features in Reporting, we've seen how even seemingly minor elements (like sizing columns or adding an image) can make a difference in Power BI's day-to-day usability. Adaptive matrices reduce manual work, the new Card gives KPIs more visual impact, the advanced Image visual allows for richer and more dynamic interfaces, and the OneLake integration highlights the path to an increasingly unified and reusable data analysis environment.

Microsoft Sources: The information presented here is taken from the Power BI November 2025 Feature Summary and official documentation (e.g., the Learn table and matrix page for the Grow to fit option ). For more information, we recommend reading the original Power BI blog post for additional examples and screenshots of the new options, as well as the updated documentation for the Card and Image visuals for full details.

6. Modeling News

In this chapter, we address innovations related to data modeling in Power BI. These features are primarily intended for power users, developers, and BI engineers who design and manage datasets and data models on which reports are built. Specifically, the November 2025 update brings:

      Power BI Modeling MCP Server (Preview): a local server to enable AI agents to interact with the model (related to the Copilot discussion, but here from a modeling perspective).

      Semantic Model Version History: Now generally available, allows you to preserve and restore previous versions of a dataset.

      TMDL Extension for Visual Studio Code (General Availability): the tabular model definition editor in VS Code, with new features.

Let's take a detailed look at how each one improves the lives of those who work under the hood of Power BI.

Power BI Modeling MCP Server (Preview): Local AI for Your Model

This functionality, complementary to the one discussed in the Copilot chapter, provides a locally executable Model Context Protocol (MCP) server that enables AI agents to interact with Power BI models using natural language and commands.

In practice, if the remote MCP server part (discussed before) allows you to chat with data on a cloud server, the local Modeling MCP Server seems to be a component designed for Power BI Desktop or local development, where AI can help build/modify the model itself.

From the short summary:

      This local server allows agents (like Copilot) to interact with models such as those opened in Power BI Desktop, providing the ability to build and edit models using natural language.

      It allows you to perform bulk operations on the model, apply best practices and agent workflows (perhaps automated scripts).

      It is available as an extension for Visual Studio Code (which suggests that it works in synergy with VS Code to let the AI talk to the model).

In other words, you could have an AI agent in VS Code that, via the local MCP server, operates on the Power BI model. For example, you could tell it, "Create a date hierarchy in the Calendar table and add a Year-to-Date measure on sales," and the agent would perform these actions in the open model.

Since this is in preview, they are likely testing these capabilities with a limited audience or on specific features.

Practical implications: For a BI developer, this could be a game-changer in model design: imagine being able to ask AI to create 50 standard measures (Margin, Growth %, etc.), or rename fields using consistent notation, or implement formatting rules. It could accelerate data modeling tasks that currently require manual DAX writing or repetitive clicks.

While still immature, it suggests that Microsoft envisions a future where AI-assisted modeling allows more people to successfully build a model without knowing every detail of DAX or Power Query.

Example: A BI developer is working on a complex model with many tables. In VS Code (with the appropriate extension), they launch the MCP server and open an AI chat for the model. They ask: "Add a date table to the model with Year, Month, and Quarter columns, and set up relationships with the Sales and Orders tables on the DateKey columns." If the AI is advanced enough, it could do exactly that: generate the DAX for the date table (CALENDAR), create the columns, and establish the relationships. The developer then checks the result, but if everything is OK, they've saved considerable time.
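To make the scenario concrete, here is the kind of DAX such an agent might generate for that request. This is a hedged sketch: the table names (Sales, Dates) follow the example above, while the date range and column list are illustrative assumptions.

```dax
-- Illustrative sketch of agent-generated DAX; the date range is an assumption.
-- Calculated date table with Year, Quarter, and Month columns:
Dates =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2026, 12, 31 ) ),
    "Year", YEAR ( [Date] ),
    "Quarter", "Q" & QUARTER ( [Date] ),
    "Month", FORMAT ( [Date], "MMMM" )
)

-- A Year-to-Date measure on sales (assumes a Sales[Amount] column exists):
Sales YTD = TOTALYTD ( SUM ( Sales[Amount] ), Dates[Date] )
```

The relationships on the DateKey columns would then be created in the model's metadata rather than in DAX, which is precisely where an agent operating on the model definition through MCP adds value.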

Considerations: Obviously, entrusting AI with model management requires the utmost care: an error could break existing measures or introduce ambiguity. It's likely that initially it will be used for tedious but simple tasks, while the critical parts will always be verified by humans.

Versioning of the semantic model

This is a long-awaited new feature in Power BI dataset management: the ability to keep a history of model versions and revert to a previous version when needed. It becomes generally available with the November 2025 update, meaning all eligible datasets (likely in Premium workspaces or on Fabric) will benefit from this feature without having to opt into a preview.

How does Version History work? From the notes:

      Up to 5 recent versions of the model are automatically captured in certain events, such as:

o  When you open a model in edit mode in the service.

o  When you publish or upload a .pbix file to the service.

o  When you do a version restore (I believe this also generates a checkpoint).

      These versions are viewable in a history panel similar to the one in Office (those who have used version history in SharePoint/OneDrive will recognize the concept).

      You can restore a previous version directly from this list.

Essentially, it provides a safety net: if an update to the dataset causes problems (e.g., you accidentally delete an important measure, or make a mistake in a calculation), you can revert to a previous version with one click instead of having to manually re-publish the old version (which sometimes isn't even easy to preserve).

Practical implications: BI development teams often mimicked versioning by maintaining different .pbix files (v1, v2, etc.) or using tools like ALM Toolkit to save models. Now, some of this is integrated, at least for the latest releases. This simplifies lifecycle management: if a bug is discovered after a release, you can quickly roll back to the previous working version, minimizing downtime for users.

For IT professionals and administrators, this represents an improvement in governance and compliance: being able to retrieve previous versions helps with audits, or in situations where it is necessary to understand "what has changed." Even though a diff view is not explicitly mentioned, having the historical version allows for manual comparison.

Example: An analyst changes the "Net Profit" calculation in a dataset and publishes the update. A day later, it's discovered that the new calculation was incorrect, causing confusion in the reports. With Version History, the analyst opens the dataset in the service, goes to the Version History section, sees the version from two days ago (before the change), and clicks Revert. Within seconds, the dataset reverts to its previous state (with the correct metric), and the reports display the correct value again. They can then correct the metric offline and republish. Without this feature, they would have had to find the old .pbix file and republish it, or rewrite the formula from memory.

Limitations: It maintains up to 5 versions, so it's not infinite versioning. A backup strategy is still required if you want a longer history or if you want it replicated in code. It probably only works in the service (so for cloud datasets); for those who only work locally on .pbix files, versioning remains manual, although you can always publish to get the functionality.

TMDL in Visual Studio Code (GA). Advanced Tabular Model Editor

TMDL stands for Tabular Model Definition Language, a scripting language (YAML-like in syntax) for defining Power BI tabular models (which share the same Analysis Services engine). Microsoft released an extension for Visual Studio Code to work with .tmdl files in the context of Power BI Projects.

With the November 2025 update, this TMDL extension becomes generally available, and brings several new features:

      DAX syntax highlighting in TMDL: When writing DAX expressions in the .tmdl file, DAX syntax is now highlighted (colors for functions, fields, etc.). This makes it easier to read and write measures and calculations directly in the text code.

      Power Query (M) support: The extension likely now also handles the Power Query part of the model (the M queries), perhaps offering intellisense or at least structuring those sections.

      Breadcrumbs and code navigation: Improvements in navigability, to jump between sections of the model easily; with many tables, you can move around via an index or similar.

      Code actions and formatting: shortcuts (e.g., generate a measure stub, fix formatting) and automatic document formatting, to keep the file consistently readable.

      Localization: Multi-language support, useful when working with a non-English IDE or locale.

All of this fits into the Power BI Projects workflow, where an entire Power BI solution is treated as a project with source files (a concept introduced to bring Power BI closer to source control and DevOps). In this scenario, being able to modify the model via code allows for Git version management, merges, etc.
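For readers who haven't seen TMDL, here is a minimal, illustrative fragment of the kind of file the extension edits and highlights. The table, column, and measure names are hypothetical, and the M source uses placeholder connection values:

```tmdl
/// Hypothetical Sales table, shown only to illustrate TMDL's YAML-like shape
table Sales

	column Amount
		dataType: decimal
		summarizeBy: sum
		sourceColumn: Amount

	/// The DAX expression below is what the extension now syntax-highlights
	measure 'Total Sales' = SUM ( Sales[Amount] )
		formatString: #,0.00

	/// Partition source in Power Query (M); placeholder server and database
	partition Sales = m
		mode: import
		source =
				let
					Source = Sql.Database("myserver", "mydb"),
					dbo_Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data]
				in
					dbo_Sales
```

Because the whole model lives in plain-text files like this, standard Git diff, review, and merge workflows apply to it directly.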

Practical implications: For BI developers who prefer a code-first approach or have advanced ALM needs, this GA extension means the tool is ready for everyday use. You can develop a model in the comfort of VS Code with all its tooling (including the benefit of writing measures in a robust text editor) and then deploy it to the service. This also facilitates continuous integration: you can create build scripts that validate the TMDL and publish it.

Not all BI teams will adopt TMDL, because it requires coding skills and is outside the user-friendly environment of PBI Desktop, but for complex enterprise projects with dozens of tables and hundreds of measures, managing the model as code can reduce errors and foster collaboration (multiple people can work on different parts and then merge the changes).

Example: A team of three developers is working on a large enterprise model. Instead of passing the .pbix file around, they use Power BI Projects: each can edit different parts (one adds a dimension table, another writes new financial calculation measures) directly in the TMDL on separate branches in Git. Thanks to the extension's IntelliSense, the measure author sees tooltips for existing columns or measures while writing DAX, and syntax errors are highlighted. They can code review the changes as they would any other code (for example, checking that the names follow the standard). Finally, they integrate the changes, and the latest commit is published to the service as a new dataset. This flow provides greater control over the changes and enables scenarios like rollbacks and parallel branching (development vs. production) without too much difficulty.

Considerations: TMDL is conceptually similar to the JSON schema that defines a dataset, but with a simpler syntax. With the GA extension, we can expect Microsoft to push Power BI Projects more in the future as a natural progression for development teams (especially when combined with Fabric, where a project can include pipelines, report pages, etc.).

For those who prefer the traditional approach (PBI Desktop), nothing directly changes: it's a plus for those who want to use it.

In summary, this chapter on what's new in Modeling highlights how Microsoft is simultaneously:

      Paving the way for AI even in the modeling phase (MCP agent server, preview).

      Strengthening dataset management and governance capabilities (ready-to-use historical versions).

      Providing professional development tools for those who want to treat Power BI as code (TMDL extension in VS Code).

These are innovations that empower advanced users without directly impacting the end user, but the benefits also trickle down to the latter: more robust models and faster development translate into reliable reports available in less time.

References: The Power BI blog announcement for Modeling and documentation on TMDL and Version History are sources for this information. For example, the section on Version History explains how history is captured and managed (similar to Office). For TMDL, Microsoft Docs and GitHub have demo resources, while the extension can be downloaded from the VS Code marketplace.

7. Data Connectivity. Next-generation Spark and Impala connectors

Moving on, we come to the news in the field of data connectivity. In November 2025, the main focus is on two very important connectors in the Big Data ecosystem: Spark and Impala. The update brings version 2.0 of these connectors into general availability (GA), with a renewed architecture based on Arrow Database Connectivity (ADBC).

Spark and Impala 2.0 Connectors. Faster and More Secure with Arrow (ADBC)

Apache Spark and Cloudera Impala are query engines for big data clusters (Spark is a general-purpose distributed computing engine; Impala is a SQL engine on Hadoop). Many companies have large datasets in data lakes or Hadoop clusters, and connecting them to Power BI is essential for analyzing that up-to-date data.

The news is that Microsoft has released a new implementation for these connectors:

      Based on ADBC (Arrow Database Connectivity): ADBC is an open-source database connectivity standard (part of the Apache Arrow project) that aims to provide an efficient connection layer between analytical applications and databases, leveraging Arrow's in-memory columnar format. Essentially, it's a driver layer that optimizes the exchange of tabular data between systems.

      The Spark/Impala 2.0 connector built on ADBC promises better performance and increased security.

      Specific benefits mentioned:

o  Reduced overhead: Arrow is known for minimizing data copying and enabling zero-copy transfers between processes. This can make data extraction between the database and Power BI more efficient.

o  Memory safety: Arrow drivers are written with memory safety in mind, reducing the risk of crashes due to buffer overflows etc. (issues that could affect native drivers).

o  Seamless integration with Fabric and PBI Desktop: Being a common driver, it makes it easy to align across environments (for example, using the same driver on Fabric Data Factory and Power BI Desktop).

      Spark and Impala are part of this shared implementation, so it's possible they now have similar connection parameters and better reliability.

Simply put, whereas the Spark/Impala connectors were previously based on classic ODBC/JDBC drivers, they are now moving to a more modern system (Arrow) that should speed up import and DirectQuery times and reduce compatibility issues by leveraging open standards.

What changes for the user? Ideally, nothing drastic in terms of how they connect: the user will see the connector updated to the new version when they choose "Spark" or "Impala" in the data sources in Power BI. They might notice:

      Improved performance: Queries that used to take 2 minutes now maybe take 1 minute, or faster dataset loads.

      Stability: Fewer connection errors or dropouts.

      Maybe some new connection options: for example, Arrow allows you to pass complex data types; they could have added support for types that previously didn't map well.

This GA update means it's considered ready for production use; those who were already using the preview of these connectors (if available) can now count on official support.

Implications for different roles:

      For data analysts: if you're working in DirectQuery on a Spark cluster for near-real-time reporting, you can expect lower latency. At the same time, Arrow, being columnar, could compress data better in transit, so even importing millions of rows from Impala could be more efficient.

      For IT/database admins: Arrow drivers are open and optimized, giving you more confidence in resource usage. Additionally, Arrow is designed for efficiency on modern architectures (multicore, vectorized operations), so perhaps less load on clusters and gateways for the same queries.

      On the security front, a modern driver reduces the risk of exploits via outdated libraries, and having an open-source project behind it can facilitate bug fixes and transparency.

Use case: A company has a data lake on Azure that runs Spark (Azure Databricks, for example) to prepare data. With PBI November 2025, they upgraded the gateways and Desktop to the new version and started using the Spark 2.0 connector. They noticed that a dashboard that previously struggled with DirectQuery (visuals perhaps taking several seconds to load) now runs smoother, perhaps because the Arrow driver exchanges the data in better columnar blocks. Furthermore, where there were previously issues with certain types (e.g., high DECIMALs from Impala), they are now imported correctly.

Additional considerations: Arrow is a rapidly evolving project, and ADBC is relatively new. By embracing it, Microsoft is signaling its commitment to adhering to open standards. This could also pave the way for other future connectors on Arrow (e.g., connectors for cloud databases like BigQuery and Snowflake, if not already present, could leverage Arrow for consistency).

From the perspective of the common Power BI user, the difference is under the hood, but it will contribute to a smoother experience when working with big data.

This chapter, while focused on a single topic, emphasizes the importance of efficient data connectivity. A great model and beautiful visuals are of little use if you can't connect well to your data sources. The Spark and Impala updates reflect the continued focus on this aspect: keeping connectors up to date and leveraging new technologies to speed up access to heterogeneous data.

Anyone interested in learning more can check out the detailed announcements and perhaps any benchmarks Microsoft or the community may publish on pre- and post-Arrow performance. The official Power BI Spark/Impala connector documentation will highlight any requirements (for example, it may require installing new drivers on the on-prem gateway if used).

8. What's New in Views. Part One

The latest innovations to be covered concern custom visualizations and visual enhancements in Power BI. The November 2025 update introduces several new visualizations available on AppSource or built-in, and enhances some existing ones. For convenience, we'll split the discussion into two parts (Chapters 8 and 9) given the number of items.

The items covered in this first part:

      Activity Gauge by Powerviz

      Decomposition Tree All Expanding (new feature of the breakdown tree view)

      Dynamic legends in Zebra BI Charts

      Drill Down Bubble PRO by ZoomCharts

These cover a range of needs, from multi-KPI tracking and multidimensional analysis to adaptive financial visuals and interactive bubble charts.

Activity Gauge by Powerviz. Measure progress on multiple fronts at a glance

Activity Gauge is a new custom visual (developed by Powerviz) that allows you to view progress against targets for multiple categories simultaneously. Think of it as a cross between a gauge (a semicircular indicator, like a speedometer) and a multi-category chart.

Characteristics:

      It can display multiple gauges in a single visual, one for each category of a dimension, allowing you to see progress towards a goal for each.

      It supports multiple targets and custom data labels, with the ability to define thresholds (e.g. different colors at 50%, 80%, 100% of the target).

      It offers detailed color customization, with 7 predefined schemes and over 30 available palettes. This is useful for making it match corporate colors or emphasize certain states (green/yellow/red).

      Smart labels: Labels that avoid overlapping and position themselves intelligently even if there are many gauges (perhaps they rotate or jump if space is tight).

      Customizable Center: The center of the gauge can be customized; for example, in a classic gauge there is a central value; here you could have a common text or icon that qualifies the whole.

      It presumably supports interaction and tooltips as expected of Power BI visuals; whether it is certified isn't stated in the summary, but it is likely.

      Other functions: fill patterns, conditional formatting, ranking (perhaps showing only the top N gauges when there are many), annotations, and a grid view (perhaps switching from circular gauges to a progress-bar-style layout; exact details aren't given, but it sounds rich in configuration options).

Practical use: A typical scenario for the Activity Gauge might be a performance dashboard for different departments or regions: for example, display a series of gauges, one for each region, indicating the percentage of monthly sales target achievement. This way, in a single view, you can immediately see who is above 100% (perhaps in blue), who is at risk (in yellow), and who is below target (in red).

For a manager, this is very intuitive: a quick glance and they understand the relative position of various elements. Unlike a normal gauge, here you see them all together instead of just one; unlike a bar chart, here you retain the "target" indication more immediately (the middle of the gauge, the needle). And it also creates a strong visual impact if designed well.

Implementation in Power BI: Since it's a custom visual, it must be imported from the marketplace (AppSource) into Power BI. Once imported, it appears as a visual in the report. The report author must map:

      A category field (e.g. Region).

      A progress value (e.g. Current Sales).

      A target value (e.g., sales goal). And then configure the details (colors for different ranges, etc.).

Whether this visual offers drill-down capabilities is unclear: hierarchies and timelines aren't mentioned, and the description refers to multiple static categories.

Advantages and limitations:

      Advantage: allows comparison of attainment across multiple elements, better than many separate, disordered gauges.

      Limitation: If there are too many categories (say, 50 products), the multiple gauge isn't ideal; a bar chart is better. So, it's situational: useful for comparisons of a small number of entities (say, up to 10?).

      Another limitation could be comprehensibility: make sure to label clearly which gauge corresponds to what (smart labels help, but if there are too many, or they are close together, it can be confusing). The grid view can perhaps transform the layout to mitigate this.

Implications for users: An analyst can use it to present performance results in a more engaging way to the business. A business user might find it motivating to view their department as a gauge, like a car dashboard, encouraging target completion.

Decomposition Tree. All Expanding mode

The Decomposition Tree is a native Power BI visual that's very useful for drill-down analysis, where the user explores the contributions of different factors to a total value. The update introduces a new feature called "All Expanding".

What it does: Allows you to add a field as a legend or series that creates additional bar charts next to each tree node showing a parallel breakdown on another dimension.

In practice:

      Previously, the tree worked like this: you have a total value (e.g., total sales), you choose a dimension (e.g., region), the tree splits into branches (Europe, Americas, Asia, etc.) with proportional lengths. Then you choose a node and another dimension (e.g., product), and that expands into sub-branches, and so on. It's an iterative drilldown where at each step you choose the next dimension to break down.

      With All Expanding, it seems that you can add an additional dimension (legend), and the tree then displays a small chart next to each main node representing the breakdown according to that other dimension. For example, if the main tree is by Region, you could add Market (Consumer vs. Business) as a legend, and the tree would look something like this: a node for Europe with a mini column chart next to it (two columns, one each for Consumer and Business, showing those segments' share of European sales), and simultaneously a node for Americas with its own mini-chart for Consumer/Business.

      Indeed, the note mentions "more column charts beside each node showing breakdown from another dimension." So exactly that: for each node, a small column chart appears that expands "all" of that element with respect to another dimension.

      There is also image support in nodes (perhaps to add icons or photos related to the dimension values, e.g., country flags).

Why it's useful: It removes a limitation: previously, you could only decompose one branch at a time in depth; if you wanted to see the composition in two dimensions simultaneously for all of them, that wasn't possible with a single view. Now you can at least do that for one additional level. In practice, the decomposition tree can now do a parallel split on two dimensions: the first (that of the nodes) and the second (that of the legend) displayed alongside.

A use case: sales analysis where you want to see the first level by geographic area and, within each area, the division between product lines. Now you don't have to expand one area at a time; you see them all side by side: Europe with the corresponding Product A/B/C columns, Americas with A/B/C, etc., immediately comparable. If you then want a third level, you return to the usual drilling mechanism.

This turns the Decomposition Tree into something more like a visual mix between a tree and a comparative cluster bar chart.

Usage considerations: The user can still interact: perhaps they can choose the legend dimension and toggle it while the tree remains static at two levels, or they can still expand a certain branch further.

Adding images to the nodes is an aesthetic touch: now that there's room for a defined node, being able to add an image (country logo, product icon) makes the tree prettier and more identifiable.

Practical implications:

      For analysts and business users, the decomposition tree was already a highly appreciated tool for root cause analysis (including its root cause suggestion feature). This addition also makes it a multidimensional presentation tool. For example, in a meeting, instead of showing two separate charts (a chart by regions and a percentage breakdown of segments by region), you can show a single integrated visual.

      The possibility of interaction is maintained: the user could say, "I see that in Asia the red column (e.g., B2B) is dominant; now I'll expand Asia to see the countries," and the tree updates.

Limitations: Spacing is important: if you have too many nodes plus mini-charts, readability on a single screen can suffer. It probably works best if the first level doesn't exceed maybe 5-7 nodes, otherwise the infographic becomes tiny.

Dynamic Legends in Zebra BI Charts. Legends that change with the filter.

Zebra BI is a well-known provider of custom visuals for financial and other analysis. They have visuals optimized for FP&A, standard IBCS reports, and more. The feature listed for their charts this month is dynamic chart legends.

What it is: Dynamic legends mean that the legend labels (typically the series names in a chart) automatically adjust based on the selected filter or scenario. More specifically, it seems to be used for scenario comparisons:

      For example, in financial planning, you have a Budget vs. Actual vs. Forecast scenario. Often, in a chart, if you select to see Budget and Actual for a certain year, the legend might display those static names (and the user can use filters to choose which scenario).

      With this feature, it seems that if the user filters by, say, Scenario = "Budget vs Actual", the legend understands this and changes the names accordingly or displays those entries. Or, in year-over-year comparisons, if I'm looking at 2025 vs. 2024, the legend will say "2025" and "2024" instead of "Year 1" and "Year 2".

The context provided: this eliminates confusion from static labels, and is useful for YoY comparisons, planning analysis, regions, etc. So:

      Before: a static legend could say, e.g., "Current vs Previous", and the user had to work out that Current = 2025 and Previous = 2024 for the selected period.

      Now: the legend can expand to "Current (2025) vs Previous (2024)" automatically, or even change to "Budget vs Actual" if those are the selected categories.

Zebra BI Charts already had a lot of flexibility; this appears as a refinement to make the charts more self-explanatory.

For report designers, this is great in contexts where a single chart template analyzes multiple scenarios/years: the dynamic legend prevents interpretation errors. The end user doesn't have to remember what the blue and red series represent depending on the filter; they read it directly.

Implications: Less need to add titles or explanatory notes, more immediacy. Especially in self-service contexts, if a user plays with scenario filters on a financial report, the visual remains clear without having to be customized for each user.

Drill Down Bubble PRO by ZoomCharts. Interactive multi-level bubble charts.

ZoomCharts is another vendor known for its interactive visuals with great animations and smooth drill-downs. Drill Down Bubble PRO is a new visual that allows you to create bubble charts with drill-down capabilities into hierarchies.

Characteristics:

      Multi-level drill-down on bubbles: Categories can be represented as bubbles (circles of size relative to a value), and if a bubble represents a group (e.g. Electronics aggregates products), the user can click it to go down to the next level (see the sub-bubbles of Electronics, e.g. TV, Computer, Phone).

      Styling options for bubbles: colors, sizes, and even custom shapes (markers).

      Images as markers: Perhaps a bubble can have an image or icon inside, or the bubble can be represented by an image, which is useful in many contexts (e.g. brand logos).

      Trendlines, thresholds, area shading: Very interesting, this means that in addition to bubble points, they could support trendlines if the data has a time dimension, and thresholds with colored areas (perhaps to highlight quadrants for X/Y values). This enriches the analysis, almost becoming an advanced scatter plot with extra features.

      Intuitive UI: ZoomCharts visuals typically offer wheel zoom, panning, and built-in filters. They likely offer a customized experience here as well (perhaps, if there are a lot of bubbles, a mini-map or size slider).

Typical uses: Bubble charts are used to represent two-dimensional distributions (different X and Y variables) with a third value as the bubble size (e.g., sales as X, margin as Y, bubble size = number of customers). Integrating drill-down:

      You could start with bubbles by market sector, then click a sector to get bubbles for the individual companies in that sector, and so on.

      Or geographically: continent bubble, country drill, city drill.

For analysts, such a tool allows business users to explore data in a discovery way: they see large circles, click on an interesting one, and navigate within it without having to change pages or views.

Compared to the native scatter plot: Power BI's built-in scatter chart supports bubbles and play-axis animation, but no hierarchy drill (you get one point per aggregate, with no click to reveal subpoints). So this fills a gap. ZoomCharts generally focuses on performance even with many points, thanks to its efficient canvas rendering.

License: ZoomCharts PRO visuals usually require licensing for production use (not free beyond development or small-scale limits). The summary mentions "Includes free Developer License for Desktop", which suggests you can try it on Desktop without restrictions, but to publish you may need to purchase a license. This is worth considering for real use; however, many enterprises invest in such visuals when it's worth it.

End-user impression: It's definitely a visual wow factor. Seeing bubbles expand with a smooth animation when you click them, perhaps with images inside, is captivating and brings the presentation to life. However, you need a suitable dataset and story (don't overuse it if you don't need it).

In conclusion of this Part 1 dedicated to visuals:

      Activity Gauge provides a new way to visualize progress across multiple objects simultaneously, useful for directly comparing multiple KPIs.

      Decomposition Tree enables multi-dimensional analysis in one go, enhancing an already powerful visual.

      Zebra BI continues to refine itself with presentation refinements (dynamic legends) that make the difference in financial reporting.

      ZoomCharts Bubble enriches the visual library with an exploratory tool for complex data, combining aesthetics and interactivity.

All these additions aim to give report creators more options for communicating information clearly, engagingly, and effectively. Each of these improved visuals/apps has its own ideal scenarios: the key is knowing how to choose them and configure them appropriately for your target audience.

In the next chapter (Part 2) we will continue with further visualization innovations, completing the picture.

9. What's New in Views (Part 2)

We continue our review of the new visualization features with the second part, which includes:

      Power BI Theme Generator (a revamped tool for generating consistent and accessible themes).

      Power Gantt Chart by Nova Silva (updated with task dependency management).

      Synoptic Panel by OKVIZ (visual for interactive custom maps, evolved with new features).

Power BI Theme Generator: Create custom themes with AI and best practices.

Customizing themes (colors, fonts, visual style) is important for creating reports with a corporate identity and for complying with design and accessibility guidelines. Microsoft and the community have long had tools for generating theme JSON files, but they've often been a bit technical.

The Power BI Theme Generator mentioned appears to refer to a tool called BIBB Report Theme Generator, which has been rebuilt and expanded. It's likely a web app or add-in that helps you create themes visually, and it's now integrated with AI capabilities and new options.

Features mentioned:

      Smarter color engine with contrast control: Helps generate palettes that meet contrast criteria (essential for accessibility, WCAG). It could flag if two colors have insufficient contrast and suggest alternatives.

      Creating Gradients: Allows you to easily create graduated ranges of colors, useful for heat maps or diverging series.

      AI-generated themes: You can likely give input (like a logo, image, or keywords) and the AI will automatically suggest a color theme, perhaps inspired by your brand.

      Brand Color Presets: Perhaps it contains palette collections from famous brands or industry types.

      Applying the theme via Fabric: Perhaps integrated with the Fabric interface to apply it organization-wide? Or publishing it to Fabric OneLake? Not very clear, but it sounds like there's now some awareness of the Fabric environment around the theme generator.

      Font Picker: An interface to easily select theme fonts (previously you had to write in JSON).

      Live preview: As you edit, see a preview of a sample report or test visuals.

      Advanced BI.ST Mode: This one is curious. BIBB is the tool's author, and "BI.ST Mode" may be a proper name; could it be a "Business Intelligence Style Template" design mode with advanced parameters? Without the full context, it presumably offers a set of advanced options for those who want fine-grained control.

In short, an improved theme generator app, made more user-friendly and powerful.
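The contrast control mentioned above can be made concrete: the WCAG 2.x formulas for relative luminance and contrast ratio are public, so the kind of check such a tool might run can be sketched in a few lines of Python (the sample colors are invented placeholders, not part of the tool):

```python
# Sketch of a WCAG 2.x contrast check like the one a theme generator might run.
# The formulas follow the W3C definitions of relative luminance and contrast ratio.

def srgb_channel(c8):
    """Linearize one 8-bit sRGB channel (0-255) per the WCAG formula."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """Relative luminance L of a '#RRGGBB' color."""
    h = hex_color.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return (0.2126 * srgb_channel(r)
            + 0.7152 * srgb_channel(g)
            + 0.0722 * srgb_channel(b))

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), always >= 1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White text on a sample corporate blue: WCAG AA for normal text requires >= 4.5
ratio = contrast_ratio('#FFFFFF', '#004B87')
print(f"{ratio:.2f}", "OK" if ratio >= 4.5 else "NO")
```

A generator with this check built in can flag a failing pair immediately and suggest a darker or lighter alternative until the ratio clears the 4.5 threshold.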

Practical utility:

      For report creators who aren't design experts, having a tool like this allows you to easily generate a professional-looking theme while maintaining consistency. For example, you can enter the codes for 2-3 corporate colors, and the generator produces extensive palettes (tints and shades) and ensures readability.

      For BI Center of Excellence managers, it's a way to define a unified style for all company reports and share it as a theme .json file. And with built-in accessibility checks, it ensures that all themed reports are readable even for colorblind users or on projectors in bright environments.

Example of use: A bank's BI team wants all dashboards to consistently use the corporate colors of blue and orange. Using Theme Generator, they upload the logo (does AI extract color palettes from the logo?) or specify key colors. The tool generates possible combinations and previews of how a pie chart, bar graph, etc., will look. The team adjusts a couple of tones, checks that the white text on blue has sufficient contrast (the tool indicates OK/NO). They choose a corporate font (which appears in the list). Finally, they export a JSON of the theme. This file is distributed to report developers and set as the default theme in every PBIT file. The result: uniformity and visual compliance.
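The artifact this workflow produces is a report theme file. A Power BI theme is JSON with documented keys such as "name" and "dataColors"; the snippet below writes a minimal one (the palette values are invented placeholders, not the bank's real scheme):

```python
# Minimal sketch of a Power BI report theme file written in code.
# "name", "dataColors", "background", "foreground", and "tableAccent" are
# standard report-theme keys; the hex values are invented for illustration.
import json

theme = {
    "name": "Corporate Blue-Orange",
    "dataColors": ["#004B87", "#F2A900", "#6BA4D9", "#FFD37A"],
    "background": "#FFFFFF",
    "foreground": "#1A1A1A",
    "tableAccent": "#004B87",
}

with open("corporate-theme.json", "w", encoding="utf-8") as f:
    json.dump(theme, f, indent=2)
```

A file like this, whether hand-written or exported from the generator, is what gets distributed to report developers and set as the default theme.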

AI Implications: If AI generation is really included, it could perhaps produce a theme from a description such as "modern, elegant theme with cool colors." This could speed up the creative process for those unsure where to start.

Ultimately, this update pushes for reports to be not only data-accurate, but also beautiful and usable. Beautiful design increases adoption and understanding.

Power Gantt Chart by Nova Silva. Managing task dependencies

Nova Silva's Power Gantt Chart is a specialized visual for project timelines (Gantt charts). With the November 2025 update, it introduces Finish-to-Start (FS) dependencies between tasks, visualized as arrows connecting the bars.

What it means: In projects, one task can depend on the completion of another (FS = A must finish before B can start). Now the Gantt chart can graphically represent these dependencies:

      An arrow is drawn from the end of the predecessor task's bar to the beginning of the dependent task's bar.

      This allows you to easily see the sequence and understand if there is a critical path or which activities are connected.

Implications:

      Without dependencies, a Gantt chart only shows parallel and temporally sequential tasks, but you can't tell whether two consecutive tasks are unrelated or the second is waiting for the first. Now you can.

      It therefore allows you to understand bottlenecks: for example, if you see many arrows converging on a task, you know that that is a crucial node (if it delays that one, all the subsequent ones will delay).

      Helps with project monitoring: if a predecessor slips (task A finished late) and B is linked to it, you immediately see the impact.

Practical use: A project manager can use this visual in a Power BI report to monitor the status of the project portfolio. For example, have a slicer for each project and view the Gantt chart complete with dependencies. In the progress meeting, sharing the Power BI report, they can filter on project X and discuss: "We see that the 'Implementation' task depends on 'Design Completed' (arrow). If the design is late, move the implementation forward." This is a representation that is traditionally available only in tools like MS Project, but having it in Power BI allows for cross-referencing with other data (e.g., costs, resources) in a single platform.

Technical considerations: To use dependencies, the dataset will need a way to specify them (perhaps a "PredecessorID" field for each task). The visual will read these columns and draw the lines accordingly. Nova Silva, as a certified partner, will provide instructions on how to format the data.
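Assuming the hypothetical "PredecessorID" field guessed at above, a dataset with FS links might look like the sketch below, together with a quick consistency check that flags tasks starting before their predecessor ends (all task names, IDs, and dates are invented):

```python
# Hypothetical shape for a Gantt dataset with Finish-to-Start links.
# "PredecessorID" is the field name guessed in the text, not a confirmed schema;
# the check flags any task that starts before its predecessor finishes.
from datetime import date

tasks = {
    1: {"name": "Design",         "start": date(2025, 11, 3),  "end": date(2025, 11, 14), "PredecessorID": None},
    2: {"name": "Implementation", "start": date(2025, 11, 17), "end": date(2025, 12, 5),  "PredecessorID": 1},
    3: {"name": "Testing",        "start": date(2025, 12, 1),  "end": date(2025, 12, 12), "PredecessorID": 2},
}

def fs_violations(tasks):
    """Return (task, predecessor) name pairs where the FS rule is violated."""
    bad = []
    for t in tasks.values():
        pred_id = t["PredecessorID"]
        if pred_id is not None and t["start"] < tasks[pred_id]["end"]:
            bad.append((t["name"], tasks[pred_id]["name"]))
    return bad

# Testing starts Dec 1, but Implementation does not end until Dec 5.
print(fs_violations(tasks))
```

The same predecessor column is what would let the visual draw the FS arrows; a check like this, run upstream in the data model, catches inconsistent schedules before they ever reach the chart.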

Benefits vs. limitations:

      The main advantage is that it turns the Gantt chart into a realistic scheduling tool, not just a static timeline.

      One limitation is that it currently only supports FS (Finish-to-Start) and not other types of relationships (Start-to-Start, Finish-to-Finish, etc.). FS is the most common, but complex projects will require the others as well. Perhaps future updates will include them.

      However, FS covers most cases, and understanding that order is already a step forward.

Conclusion on Gantt: With this feature, Power BI can replace the need to export project data to Gantt PDFs, offering an interactive view. This is very useful in PMO (Project Management Office) contexts that require portfolio dashboards.

Synoptic Panel by OKVIZ. SVG Image Data: Interactive Maps and More

Synoptic Panel is a visual from OKVIZ (a well-known custom visuals provider) that allows you to take a vector image (SVG) and turn it into an interactive map linked to data. In practice, it lets you color regions of a drawing based on data values, click on them, and so on.

The November update announces an evolution of this visual:

      Transform any SVG image into an interactive visual: reiterates the basic capability of using any drawing as a base. Examples: floor plans, technical diagrams, custom geographic maps, organizational charts, etc.

      Multi-level navigation: You can define different levels and navigate between them (perhaps drilling down on areas: level 1 is a world map, and clicking Europe zooms to a European map with more detail).

      Dynamic switching: Perhaps switching images dynamically based on selections? It's not certain; it may instead vary the visible layers based on the data.

      Map Editor and Label Designer: Integrated tools for easily drawing maps and adding labels. Previously, creating the right SVG required external tools (Illustrator, Inkscape). Perhaps now there's an in-app editor or a companion app for defining interactive areas and labels.

      Advanced coloring: Ability to color in advanced ways, such as gradients, patterns, or based on threshold values.

      Map search: Presumably you can search for an element by name and have it highlighted. This is useful for large maps (e.g., type "Meeting Room 1" and the visual finds it).

      Integrated security: The details aren't clear; it may mean that shared maps are secured, that mapping data is kept safely within the Power BI environment, or that the visual prevents content injection via SVG.
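Conceptually, the data binding in a Synoptic-style visual works by matching shape identifiers in the SVG to category values in the dataset. A minimal sketch of that idea (with invented SVG markup, area names, and colors; not OKVIZ's actual implementation) could look like this:

```python
# Conceptual sketch of Synoptic-style data binding: shape ids in an SVG
# drawing are matched to category values, and matching shapes get a fill
# color driven by the data. All ids, states, and colors are invented.
import xml.etree.ElementTree as ET

svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="Office-101" width="40" height="30"/>
  <rect id="Office-102" width="40" height="30"/>
  <rect id="MeetingRoom-1" width="60" height="30"/>
</svg>"""

occupancy = {"Office-101": "occupied", "Office-102": "vacant"}
palette = {"occupied": "#D64550", "vacant": "#6BBF59"}

root = ET.fromstring(svg)
ns = "{http://www.w3.org/2000/svg}"
for shape in root.iter(f"{ns}rect"):
    state = occupancy.get(shape.get("id"))
    # Areas with no matching data row keep their default fill.
    if state is not None:
        shape.set("fill", palette[state])

print(ET.tostring(root, encoding="unicode"))
```

This is also why the SVG's element naming matters: an area whose id doesn't match any category value simply stays unbound, which is the classic preparation pitfall the new integrated editors presumably aim to remove.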

Use cases:

      Building Floor Plans: An office can have a floor plan and use Synoptic Panel to color occupied vs. vacant offices, and click on them to see info.

      Assembly line or production process: load a diagram of the line and color its parts by intensity (e.g., the machines with the most output highlighted).

      Interactive organizational chart: silhouette figures or blocks for units, colored by metrics (e.g. performance).

      Custom geographic map: E.g., manually drawn sales area boundaries (not necessarily corresponding to known administrative areas) can be implemented via SVG, then linked to sales data.

Implications for business users: It opens up highly personalized data storytelling. Instead of being limited to generic geographic maps, a company can present results precisely according to its own structure. For example, a plant manager could see the outline of the plant in a report and tell by color which departments are running high or low production. It's more intuitive than reading a table by department.

Complexity: Configuring Synoptic Panel traditionally required preparing an SVG file with appropriately named elements. With the new integrated editors, it could become simpler: perhaps upload a base image (even a non-SVG one), then draw clickable areas over it and name those areas in the visual. This would make it accessible even to non-designers.

Limitations: It still requires an initial design if one doesn't exist (you can't automatically generate the map, you need a reference image). It works best with qualitative/categorical data (discrete colors by state, intensity of a measurement over regions). Not for precise numerical analysis, but for monitoring and navigation.

Security mention: Perhaps it indicates that the visual honors RLS (Row-Level Security), so that elements a user shouldn't see are hidden; or that the rendered map is safe from injection, which was perhaps a concern with custom visuals and external inputs.

Summarizing the visuals of this Part 2:

      Theme Generator: A tool for easily improving the aesthetics and accessibility of reports, even using AI.

      Gantt with Dependencies: enriches a project management view, bringing PBI closer to tools like MS Project for timeline and constraint overviews.

      Synoptic Panel update: gives you the flexibility to display data on any image, opening the door to highly customizable dashboards (e.g., plant maps, technical diagrams).

They all aim for one goal: better communication. Whether it's through a uniform and pleasant style (themes), through clear planning (Gantt FS), or through customized interfaces (Synoptic maps), these innovations help convey the right information in the most understandable way for the end user.

10. Conclusions

We've come to the end of this eBook on what's new in Power BI in the November 2025 Update. Throughout the chapters, we've taken a deep dive into each section of the original presentation, dedicating a chapter to each slide and fleshing out each topic with explanations, context, practical implications, and use cases.

To briefly recap the key points covered:

      We initially highlighted the surrounding events and announcements: Fabric Data Days, which offered training and community opportunities, and the announcement of FabCon 2026, a major conference for Power BI and Microsoft Fabric enthusiasts.

      We then discussed a strategic shift with the deprecation of R/Python visuals in public embed scenarios (app owns data), emphasizing the importance of planning alternatives by May 2026.

      We then dove into the many new features of Copilot and AI, such as the arrival of Copilot on mobile to query data anywhere, improvements to Copilot chat and reporting to make analysis more intelligent and contextual, the evolution of Verified Answers for increasingly reliable and contextualized AI responses, and the openness to external agents with the MCP server.

      In the reporting space, we've seen how small touches (e.g., auto-resizing matrices) and big additions (the new Card view with hero images, images with interactive states, OneLake Catalog integration) make it more efficient to create beautiful reports that are integrated with the data ecosystem.

      On the modeling front, the focus was on tools for professionals: dataset versioning to avoid losing work, the TMDL editor in Visual Studio Code to manage the model as code, and even a taste of AI applied to modeling with the local MCP server.

      Data connectivity has been enhanced with faster and more robust Arrow-based Spark/Impala connectors, ensuring Power BI continues to play well with modern big data platforms.

      Finally, a large group of visualizations and visual design tools: new custom visuals (Activity Gauge, Drill Down Bubble), improvements to existing visuals (Decomposition Tree with parallel breakdown, Zebra BI with dynamic legends, Gantt with dependencies), and design tools such as the Theme Generator and the revamped Synoptic Panel, which help present data in the most intuitive and business-friendly way.

Together, these new developments reveal a clear trend: Power BI is becoming smarter, more integrated with AI and the Fabric cloud, more visually rich, and more user-friendly for both developers and report users. The push toward AI (Copilot everywhere, Verified Answers) promises to further democratize data analysis, while usability improvements (from visuals to connectors to small fixes like the scrollbar correction) show constant attention to user feedback.

Implications for different users:

      For BI professionals and developers: these updates mean new tools to learn (VS Code extensions, MCP APIs), but they can increase productivity and solution quality. It's worth experimenting with Copilot to accelerate prototyping phases and adopting version control practices with Version History and TMDL. Particular attention should be paid to the R/Python deprecation: the removal of those components, if present, should be included in the roadmap.

      For analysts and data scientists: with Verified Answers and Q&A improvements, they will likely see a reduction in ad-hoc requests, because end users will be able to get reliable answers on their own. Furthermore, new visuals offer more ways to communicate insights: you can be creative with Synoptic Panel or effective with Activity Gauge. Storytelling will need to be prioritized: AI can create graphs and visuals are abundant, so the analyst's added value will be piecing together the right story and choosing the right visualizations.

      For business users and decision makers: many of these innovations aim to improve their direct experience. They will be able to request data on the fly from their phones instead of waiting for formal analysis. They will find more interactive and tailored reports (images and formatting that speak their company's language). They must be encouraged to adopt these new paradigms (for example, training managers to use Copilot and to trust it, while understanding its limitations).

Next steps and suggestions:

      Tools Update: Make sure you are using the latest version of Power BI Desktop (November 2025 or later) to immediately take advantage of GA card visuals, self-expanding matrices, and more; if your company uses Power BI Report Server, evaluate a cloud adoption timeline to gain access to these features.

      Training: Consider internal enablement sessions on Copilot (it's new to many users) and workshops on new visuals for reporting teams. For example, demonstrate how a visual like the All-Expanding Decomposition Tree can solve an analysis problem that previously required multiple steps.

      AI Governance: With the massive arrival of AI, define guidelines: which datasets to enable for Copilot (AI prep), how to validate verified responses, and how to monitor usage (to avoid wrong conclusions from a Copilot used with outdated data, for example).

      R/Python Roadmap: Inventory the use of R and Python visuals in the enterprise BI ecosystem: if limited to external embeds, address it now; if used within Power BI for internal analytics, there is no immediate impact, but consider more integrated alternatives (e.g., Python in Power Query or in notebooks).

      Controlled Experimentation: Design a small pilot using the Remote MCP server to understand how to integrate Power BI Q&A into an internal decision support app; or try TMDL VS Code on a new model to understand its pros and cons compared to Desktop.

 
