I have developed a metadata-driven pipeline to load data from an on-premises SQL Server to Snowflake. I am using a data flow for the load process, and the table is created automatically by the data flow. I observed that the pipeline fails when my source table has a column with a binary data type. The error I get is:
Operation on target Data flow2 failed: {"StatusCode":"DF-Snowflake-InvalidDataType","Message":"Job failed due to reason: at Sink 'sink1': The spark type is not supported in snowflake","Details":""}
Hi @DivyaniPardeshi ,
Could you let us know if your issue has been resolved or if you are still experiencing difficulties? Your feedback is valuable to the community and can help others facing similar problems.
Thank You.
Hi @V-yubandi-msft, I found your response helpful, but I can't find the button to mark the response as 'Accepted answer'.
Hi @DivyaniPardeshi ,
Thanks for your reply. You can accept a solution in two ways:
1. Below the response - on the right-hand side at the bottom of the post, you'll see three options: Like, Reply, and Accept as Solution. Click Accept as Solution there.
2. From the top right of the response - click the ellipsis (...) in the top-right corner of the reply. A dropdown menu will appear, and you'll find the Accept as Solution option there.
Regards,
Yugandhar.
Hi @DivyaniPardeshi ,
Thank you for your patience and response. If possible, please try logging in from a different system to see if the options appear as expected.
Best regards,
Yugandhar
Hi @DivyaniPardeshi ,
Has your issue been resolved, or do you require any further information? Your feedback is valuable to us. If the solution was effective, please mark it as 'Accepted Solution' to assist other community members experiencing the same issue.
Thank You.
Hi @DivyaniPardeshi ,
Thanks for posting your query in the Microsoft Fabric Community. In Microsoft Fabric, binary data types are not directly supported when writing to Snowflake using Dataflows, which causes the pipeline to fail.
To resolve this:
1. You can transform binary columns into STRING or VARIANT formats to ensure compatibility with Snowflake.
2. If it is necessary to maintain the raw binary data, encoding it as Base64 or converting it to hex can be helpful (see the example query after this list). Additionally, ensure that the schema mapping in Dataflows aligns with Snowflake’s supported data types.
3. Using a VARIANT column is an effective alternative if binary storage is required. These adjustments will help the pipeline run smoothly without encountering unsupported data type errors.
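For example, one way to do the encoding is directly in the source query on SQL Server, so the data flow only ever sees a string column. This is just a rough sketch; the table dbo.SourceTable and the columns Id and BinaryCol are placeholders for your own objects:

    SELECT
        Id,
        -- Hex string without the '0x' prefix (style 2); style 1 would keep the prefix
        CONVERT(VARCHAR(MAX), BinaryCol, 2) AS BinaryColHex,
        -- Base64 string built with the XQuery xs:base64Binary constructor
        CAST('' AS XML).value('xs:base64Binary(sql:column("BinaryCol"))', 'VARCHAR(MAX)') AS BinaryColBase64
    FROM dbo.SourceTable;

Either string then lands cleanly in a VARCHAR (or VARIANT) column on the Snowflake side.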
If my response solved your query, please mark it as the Accepted solution to help others find it easily!
And if my answer was helpful, I'd really appreciate a 'Kudos'.
Hi @V-yubandi-msft ,
Thanks a lot for the response.
I tried converting the column to VARCHAR at the source, and it gives bad data.
I created that column as VARIANT in Snowflake, and the pipeline fails again saying:
I am not sure how I can convert it to hex. If you have any reference material, please share.
Thanks again.
Hi @DivyaniPardeshi ,
Binary data types aren’t directly supported when writing to Snowflake, which causes the DF-Snowflake-InvalidDataType error. This happens because Spark, which powers Fabric Dataflows, doesn’t have a direct mapping for binary types in Snowflake.
To fix this, before the Sink step, add a Derived Column transformation that converts the binary field into a Base64-encoded string. This ensures the data is written to Snowflake without errors. Make sure the corresponding column in Snowflake is set to accept VARCHAR or VARIANT so it can store the converted data properly.
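If you later need the real binary values back inside Snowflake, the encoded string can be converted after it lands. A minimal sketch, assuming hypothetical staging column names BINARY_COL_B64 and BINARY_COL_HEX in a table STAGED_TABLE; substitute whatever your data flow actually writes:

    SELECT
        TO_BINARY(BINARY_COL_B64, 'BASE64') AS BIN_FROM_BASE64,  -- Base64 string back to BINARY
        TO_BINARY(BINARY_COL_HEX, 'HEX')    AS BIN_FROM_HEX      -- hex string back to BINARY
    FROM STAGED_TABLE;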
I hope this helps.
Thanks for the update. If my response resolved your query, please mark it as the Accepted Solution so others can easily find the answer.