markmsc
Helper I

table refresh doesn't find changes but partition refresh does

Hello colleagues.  I have a Power BI Premium model.  Some of the tables are quite large, but they have the advantage that once a new day starts, much of the prior day's data can be removed.  We have a daily script that runs early each morning to mark the rows that can be removed on the first model refresh of the day.

 

The IR filter criteria include the "delete me" flag in addition to the usual RangeStart and RangeEnd parameters, resulting in the proper dataset for that date.  The mark-for-delete script also tickles the datetime column we use for Detect Data Changes, so the IR process will see that the row has changed.
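For anyone reproducing this setup: the incremental refresh policy, including the Detect Data Changes polling expression, can be inspected over the XMLA endpoint with the Tabular Object Model (TOM). A minimal sketch, assuming hypothetical workspace, model, and table names (MyWorkspace, MyModel, Sales):

```csharp
using System;
using Microsoft.AnalysisServices.Tabular;

// Connect over the XMLA endpoint (workspace/model/table names are placeholders).
var server = new Server();
server.Connect("Data Source=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace");
var table = server.Databases.GetByName("MyModel").Model.Tables["Sales"];

// Tables configured for incremental refresh carry a BasicRefreshPolicy.
if (table.RefreshPolicy is BasicRefreshPolicy policy)
{
    Console.WriteLine($"Rolling window : {policy.RollingWindowPeriods} {policy.RollingWindowGranularity}");
    Console.WriteLine($"Incremental    : {policy.IncrementalPeriods} {policy.IncrementalGranularity}");
    Console.WriteLine($"Source query   : {policy.SourceExpression}");  // M query filtered by RangeStart/RangeEnd
    Console.WriteLine($"Polling (DDC)  : {policy.PollingExpression}"); // Detect Data Changes expression
}
server.Disconnect();
```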

 

We partition based on business day, and since the data to be removed each day is from "yesterday", this process should update the prior day's partition.  And it has been working fine for a couple of months now.  Each day, our update script runs, and a short while later the first IR of the day kicks off and drops the marked data from the model.

 

But this has recently stopped working.  The IR process is ignoring the marked rows, and they are remaining in the model, making these tables too large.  Manually refreshing the tables in question does not help.  I can even see the message "no data changes detected" in the partition with the rows marked to be deleted.  However, if I manually refresh just the partition itself, and not the whole table, then it works fine, and the marked rows drop out, just as expected.

 

Any thoughts on what is going on here?  Why did it stop working with no changes on our part, and why does it still work as expected if I refresh the partition but not the table (all partitions)?
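For context on the two operations being compared, here is roughly what they look like when issued through TOM over the XMLA endpoint; all names are placeholders, and the comments reflect the behavior described in this thread rather than documented guarantees:

```csharp
using Microsoft.AnalysisServices.Tabular;

var server = new Server();
server.Connect("Data Source=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace");
var model = server.Databases.GetByName("MyModel").Model;
var table = model.Tables["Sales"];

// Whole-table refresh: with Detect Data Changes enabled, the engine polls each
// partition and skips the ones it considers unchanged -- the path that fails here.
table.RequestRefresh(RefreshType.Full);

// Single-partition refresh: reprocesses that one partition unconditionally --
// the path that works. The partition name shown is hypothetical.
//table.Partitions["Sales-2025-06-11"].RequestRefresh(RefreshType.Full);

model.SaveChanges(); // executes whichever refresh was requested
server.Disconnect();
```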

 

 

1 ACCEPTED SOLUTION
v-nmadadi-msft
Community Support

Hi @markmsc ,
Can you please share an update on your issue and let us know whether you were able to resolve it by using an Azure pipeline-orchestrated XMLA refresh of the model, as this seems to be the best workaround currently available.

Sharing an update and accepting the reply that helped point you in the right direction will surely help other community members who may experience the same error in the future.

Thank you
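For readers who land on this thread: a pipeline-orchestrated XMLA refresh along these lines can be quite small. A sketch, assuming a service principal with XMLA write access; the workspace, model, credential values, and the yyyy-MM-dd partition-naming convention are all placeholders:

```csharp
using System;
using Microsoft.AnalysisServices.Tabular;

// Service-principal connection string for the XMLA endpoint (all values are placeholders).
const string connStr =
    "Data Source=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace;" +
    "User ID=app:<clientId>@<tenantId>;Password=<clientSecret>;" +
    "Initial Catalog=MyModel";

var server = new Server();
server.Connect(connStr);
var model = server.Databases.GetByName("MyModel").Model;

// Refresh yesterday's partition explicitly, sidestepping the Detect Data Changes
// skip described in this thread.
string partitionName = DateTime.Today.AddDays(-1).ToString("yyyy-MM-dd");
model.Tables["Sales"].Partitions[partitionName].RequestRefresh(RefreshType.Full);

model.SaveChanges();
server.Disconnect();
```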



19 REPLIES


Hello -- No, this issue is not resolved.  I expect that switching to a pipeline-orchestrated XMLA refresh will take care of it.  However, this is an enterprise application, and with IT budgets being what they are, I cannot simply implement the change on a whim, unfortunately.  It is scheduled for later this year.  Until then, I cannot mark that suggestion as the solution, although I fully expect it to work.

Hi @markmsc ,
Thank you for providing an update on the issue.

For now, we are closing this thread.  Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread or reopen this one.  We'll be happy to help.

 

Thank you for your understanding and participation.

v-nmadadi-msft
Community Support

Hi @markmsc,

I wanted to check whether you had the opportunity to review the information provided.  Please feel free to contact us if you have any further questions.  If our response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.


Thank you.

Hello.  No, the problem is not solved.  I will mark helpful posts with a thumbs-up, but I can't mark any as the solution.

Hi @markmsc,

We really appreciate your efforts and your update on the issue.
I'll continue exploring alternative solutions in the meantime.  Feel free to keep using the Fabric Community Forum for additional support and guidance.
Thank you

v-nmadadi-msft
Community Support

Hi @markmsc,

May I ask if you have resolved this issue?  If so, please mark the helpful reply and accept it as the solution.  This will help other community members with similar problems find the answer faster.

Thank you.

SaiTejaTalasila
Super User

Hi @markmsc ,

 

You can try this:

Schedule a partition refresh (instead of the full table) via the XMLA endpoint or Power Automate; a sketch follows after this reply.

 

If feasible, consider switching to a delete + append strategy instead of flagging rows for deletion.

 

I hope it helps.

 

Thanks,

Sai Teja 
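If Power Automate (or any HTTP-capable scheduler) is the orchestrator, the same partition-level refresh can go through the Power BI enhanced refresh REST API instead of TOM. A rough sketch, assuming an Azure AD access token has already been acquired; the IDs and object names are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholders: supply a real token, workspace ID, and dataset ID.
string accessToken = "<AAD token with Dataset.ReadWrite.All>";
string workspaceId = "<workspace-guid>";
string datasetId   = "<dataset-guid>";

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

// Enhanced refresh: scope the operation to a single table/partition.
string body = @"{
  ""type"": ""full"",
  ""objects"": [ { ""table"": ""Sales"", ""partition"": ""Sales-2025-06-11"" } ]
}";

string url = $"https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/datasets/{datasetId}/refreshes";
var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
response.EnsureSuccessStatusCode(); // 202 Accepted: the refresh runs asynchronously
```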

Hello @SaiTejaTalasila and thank you for replying.

In the next couple of months, we will be moving to an Azure pipeline-orchestrated XMLA refresh of the model, so we will have the opportunity to do more granular things, in the spirit of your Power Automate suggestion, pretty soon.

 

As for delete + append, I assume you are referring to deleting the rows from the physical table?  If so, we do not want to delete the rows from the database, only from the model after the business day to which they apply has passed.  Hence the mark-for-delete strategy we have used.  And again, that was working fine for the past couple of months after we put it in; it only stopped working this week for some reason.  🤔

Deku
Community Champion

Are you doing a "full" refresh on the table?  If not, then only active partitions will be touched.  Make sure the partition in question is definitely in the active scope; also check the bookmark on the partition and make sure your more recent data has a different value, to ensure it is not being skipped.


Did I answer your question?
Please help by clicking the thumbs up button and mark my post as a solution!
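One way to run the checks described above is to list the partitions over the XMLA endpoint; Mode, State, and last-processed time are directly visible in TOM, while the Detect Data Changes bookmark itself is easiest to read by scripting the partition out as TMSL (as the screenshots further down in this thread do). A sketch with placeholder names:

```csharp
using System;
using Microsoft.AnalysisServices.Tabular;

var server = new Server();
server.Connect("Data Source=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace");
var table = server.Databases.GetByName("MyModel").Model.Tables["Sales"];

// Confirm the "yesterday" partition exists and is in the active scope, and see
// when each partition was last processed.
foreach (Partition p in table.Partitions)
{
    Console.WriteLine($"{p.Name,-24} Mode={p.Mode} State={p.State} RefreshedTime={p.RefreshedTime}");
}
server.Disconnect();
```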

Hello @Deku and thanks for replying.

 

The scheduled IRs are run by the service, and I assume that is doing a Full, but I don't know, since I don't think that is configurable.  But if I refresh a table myself (using TE3), then yes, I am running a Full.  The partition in question is definitely in the active scope.  Our active period is five days and the archive period is five days before that.  The partition in question is "yesterday", so well within the active scope.

 

I'll double check the bookmarks on the partitions in question.

Deku
Community Champion

You can see the types of refresh here

 

In the service it likely uses Automatic on the model.  Refreshes on tables and partitions behave slightly differently.

 

Sounds like it might be bookmark related. See the blog on polling expressions.


Did I answer your question?
Please help by clicking the thumbs up button and mark my post as a solution!
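To make the distinction concrete: TOM exposes the refresh flavors as the RefreshType enum, and the type chosen is what decides whether an already-Ready partition may be skipped. A short sketch (names are placeholders, and the Automatic description follows the behavior discussed in this thread):

```csharp
using Microsoft.AnalysisServices.Tabular;

// RefreshType values most relevant to this thread:
//   Automatic - bring objects to a Ready state; partitions already Ready
//               may not be reprocessed (what scheduled refresh resembles)
//   Full      - reprocess data and recalculate, regardless of current state
//   DataOnly  - reload data without recalculating dependent structures

var server = new Server();
server.Connect("Data Source=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace");
var model = server.Databases.GetByName("MyModel").Model;

// Force reprocessing instead of relying on Automatic.
model.Tables["Sales"].RequestRefresh(RefreshType.Full);
model.SaveChanges();
server.Disconnect();
```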

Hello again @Deku .

I took a look at the partition properties this morning before the first scheduled refresh.  The bookmark date is before the updated date on the rows I would have expected to be dropped from the model by this first refresh.  So at least from that angle, things seem to be in order.

 

However, you mentioned that the scheduled refresh runs an Automatic type.  I saw in this article that an Automatic refresh only processes a partition when it is in a state other than Ready; if it is in Ready, presumably the Automatic refresh does nothing.  For the partition in question in my model, I saw today, prior to the scheduled refresh, that it is in the Ready state.  So I assume this is the issue: the refresh is skipping it because of its state.

 

I am trying to find information on what Ready means, and what might put a partition into some other state such that an automatic refresh would pay attention to it.  I wondered if you have any pointers here.

 

Mark

Deku
Community Champion

If you force a refresh of the partition, does the bookmark change?


Did I answer your question?
Please help by clicking the thumbs up button and mark my post as a solution!

I tested that this morning and interestingly, no.  Here's the metadata for the partition before doing anything:

 

[Screenshot: image (20).png — partition metadata before any refresh]

 

Here is the metadata after a scheduled refresh of the model run by the service.  You've mentioned that a scheduled refresh runs an Automatic.  You can see that the bookmark and the last processed datetime for the partition are unchanged from the above after the scheduled refresh:

[Screenshot: image (21).png — partition metadata after the scheduled refresh]

 

Next, here is the metadata after running a manual Full on the table in question.  This still did not drop the rows from the partition.  Both the processed datetime and the bookmark are again unchanged.


[Screenshot: image (22).png — partition metadata after a manual Full table refresh]

Last, here is a manual Full refresh of the partition.  This did drop out the rows as expected.  Notice that the last processed datetime has now advanced, but the bookmark still has not changed:

 

[Screenshot: image (23).png — partition metadata after a manual Full partition refresh]

Deku
Community Champion

Sounds like you need to dig deeper on the bookmark and make sure the source does have a different date from the bookmark. If it does, then I'm not quite sure what is going on.


Did I answer your question?
Please help by clicking the thumbs up button and mark my post as a solution!

Source definitely does have a different, later datetime each morning than the saved bookmark, and yet the detect data changes polling expression on that column finds no changes and does not process the partition.  Or at least now it doesn't; before this week, it worked fine.  Now the only way to get those changes picked up is to force a full refresh on the partition.

 

I appreciate you trying to help.  I did learn a thing or two from your comments, so I am better off than when I started here, even if my problem is not solved.  Thank you.

Deku
Community Champion

Just one final check: the changed date is applied to your older data as well, right, not just today's?


Did I answer your question?
Please help by clicking the thumbs up button and mark my post as a solution!

The specific sequence is:

  1. last scheduled refresh of the day runs at 7:30 PM.  Presumably the bookmark should reflect this datetime.
  2. at 5:00 AM the next day, a process runs that identifies the prior day's data that should be dropped from the model, and sets a delete bit on each applicable row, along with the datetime of that update in the row's lastupdated column.  This should be after the bookmark from step 1.
  3. at 6:00 AM that next day, one hour later, the first scheduled refresh of the day runs.  Detect Data Changes is set to look at the lastupdated column, and the query filter is set to exclude rows marked as deleted.  Despite step 2, it finds no changes from its last run the prior evening, in step 1.
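A direct way to test the premise of step 3 is to query the source for marked rows stamped after the stored bookmark. This sketch assumes a SQL Server source and hypothetical object names (dbo.FactSales, lastupdated, is_deleted); substitute your own:

```csharp
using System;
using Microsoft.Data.SqlClient;

// Placeholders: real connection string, and the bookmark value read from the
// partition's metadata (roughly the 7:30 PM refresh in step 1).
string connectionString = "<source connection string>";
DateTime bookmark = new DateTime(2025, 6, 11, 19, 30, 0);

const string sql = @"
    SELECT COUNT(*)
    FROM dbo.FactSales
    WHERE lastupdated > @bookmark
      AND is_deleted = 1;";

using var conn = new SqlConnection(connectionString);
conn.Open();
using var cmd = new SqlCommand(sql, conn);
cmd.Parameters.AddWithValue("@bookmark", bookmark);

// A nonzero count means the polling expression should see changes; if the
// refresh still reports "no data changes detected", the skip is happening on
// the polling/bookmark side rather than the source side.
int changed = (int)cmd.ExecuteScalar();
Console.WriteLine($"{changed} marked rows newer than the bookmark.");
```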
