Hi everyone,
We are currently facing issues with our data ingestion process from SQL Server to our Lakehouse in Microsoft Fabric.
We have set up 4 pipelines that together ingest around 300 tables. Each pipeline ingests up to 4 tables simultaneously using the COPY command inside a loop (that's the batch count set on each loop), which means we are processing up to 16 tables at the same time (4 tables per pipeline x 4 pipelines).
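For reference, this is roughly how each pipeline's loop is wired up. It's a minimal sketch assuming the standard pipeline JSON shape; the names (ForEachTable, tableList, CopyTableToLakehouse) and the source/sink details are illustrative placeholders, not our exact configuration:

```json
{
  "name": "ForEachTable",
  "type": "ForEach",
  "typeProperties": {
    "isSequential": false,
    "batchCount": 4,
    "items": {
      "value": "@pipeline().parameters.tableList",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "CopyTableToLakehouse",
        "type": "Copy",
        "typeProperties": {
          "source": {
            "type": "SqlServerSource",
            "sqlReaderQuery": {
              "value": "@concat('SELECT * FROM ', item().name)",
              "type": "Expression"
            }
          },
          "sink": {
            "type": "LakehouseTableSink",
            "tableActionOption": "Overwrite"
          }
        }
      }
    ]
  }
}
```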
However, there are times when the system runs out of memory and we get this error:
Failure happened on 'destination' side. ErrorCode=SystemErrorOutOfMemory,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A task failed with out of memory.,Source=Microsoft.DataTransfer.TransferTask,''Type=System.OutOfMemoryException,Message=Exception of type 'System.OutOfMemoryException' was thrown.,Source=Microsoft.DataTransfer.ClientLibrary,'
Decreasing the batch count is not a viable solution in this case: the ~300 tables are processed in concurrent waves, so cutting the concurrency in half would roughly double the overall run time, and these pipelines execute on a daily basis.
These are the settings we have configured in the copy activity that is encountering the errors:
Has anyone faced similar issues, or does anyone have suggestions on how to optimize our ingestion process so it can handle multiple tables simultaneously without running into these errors?
Hi, I am also facing the same issue. My source is a Lakehouse and the destination is an on-premises server. The pipeline loads 62 tables. Until this morning it was running successfully, but all subsequent manual runs have failed. The Fabric capacity is already F8.
Can I get any help, please?
In my case I have reduced the number of tables processed simultaneously, and I haven't had the issue in the last few days. However, I don't think this is the best approach, because I have a lot of tables. I'm using an F64 capacity, which should be enough.
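In case it's useful, the only change I made was the batch count on the ForEach loop. A minimal sketch (the value 2 is illustrative, since the right number depends on your capacity; the Copy activity inside the loop is unchanged and omitted here):

```json
{
  "name": "ForEachTable",
  "type": "ForEach",
  "typeProperties": {
    "isSequential": false,
    "batchCount": 2,
    "items": {
      "value": "@pipeline().parameters.tableList",
      "type": "Expression"
    },
    "activities": []
  }
}
```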
Hi @amaaiia,
Thanks for the update. It’s good to hear that reducing the number of simultaneous table queries helped.
If the response helped resolve your issue, it would be greatly appreciated if you could mark it as the Accepted Answer; doing so helps others in the community who may be facing a similar challenge.
If you still have questions or need further assistance, feel free to share more details — we’re always here to support you.
Thanks again for being an active part of the Microsoft Fabric Community!
Were you able to check the capacity consumption metrics to see whether the capacity is being throttled during the job execution window? And is the source an on-premises SQL Server that you are accessing through a gateway?
Hi,
Yes, the source is on-premises, accessed through the data gateway.
I've checked the Capacity Metrics report, but I see nothing interesting. The error happened at 8:34 AM:
And this is throttling at the same time:
And the Utilization: