Learn Azure Data Services with Dinesh Priyankara
Boost Performance with Fabric’s Native Execution Engine
Microsoft Fabric just got faster! In this video, I explore the Native Execution Engine (NEE), a new feature designed to accelerate data processing and query performance. It’s still in preview, but the results are promising!
💡 What you’ll learn in this video:
✅ What is the Native Execution Engine?
✅ How does it improve performance?
✅ A test run in my environment with billions of records
✅ A sample code demo (similar to my actual test)
Right now, NEE isn’t enabled by default, so I’ll show you how to turn it on and test it yourself.
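According to the Microsoft docs linked below, NEE can be switched on per session with the `%%configure` magic at the top of a Fabric notebook (or for a whole environment under the Spark compute settings). A minimal session-level sketch; the property name is taken from the docs and may change while the feature is in preview:

```json
{
    "conf": {
        "spark.native.enabled": "true"
    }
}
```

Once enabled, `EXPLAIN` output (or the Spark UI) should show native/Velox operators for the queries NEE can accelerate; anything unsupported falls back to the regular Spark engine.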
📌 Find the sample code used in this video on my GitHub:
🔗 github.com/dinesql/youtube-demo-codes/tree/main/boost-performance-with-fabric-native-execution-engine
📌 Microsoft Docs page
🔗 learn.microsoft.com/en-us/fabric/data-engineering/native-execution-engine-overview?tabs=sparksql
📌 Try it out and let me know your thoughts in the comments!
🔔 Subscribe for more Microsoft Fabric insights.
👍 Like & Share if you found this useful!
#microsoftfabric #dataengineering #performanceoptimization #NativeExecutionEngine #bigdata #sql #dataanalytics #FabricPerformance #deltatable #spark #parquet #velox #apachegluten
Views: 3

Videos

Wrong Stats on Delta MERGE in Fabric? Ways to Get Accurate Metrics - Part 2
35 views · 4 hours ago
🛑 Still struggling with incorrect stats in Delta MERGE operations? In Part 2, we dive into more reliable and scalable techniques! In this video, I cover: ✅ Using Python-Tracked SQL - Capturing metrics with a structured approach ✅ Using Transaction Annotations - Adding metadata to track each operation ✅ Using a Marker Column - A method that ensures precise row-level tracking 📌 Missed Part 1? Wat...
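The marker-column idea listed above can be sketched in Spark SQL. Table and column names here are hypothetical; the point is that each MERGE stamps the rows it touches, so the counts come from the data itself rather than from engine-reported metrics:

```sql
-- Hypothetical tables: 'target' (Delta) and 'updates' (staging).
-- _batch_id marks rows touched by this run, enabling precise
-- row-level counting after the operation completes.
MERGE INTO target AS t
USING (SELECT *, 'run-042' AS _batch_id FROM updates) AS s
  ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.value = s.value, t._batch_id = s._batch_id
WHEN NOT MATCHED THEN INSERT (id, value, _batch_id) VALUES (s.id, s.value, s._batch_id);

-- Accurate per-run counts, independent of Delta's operation metrics:
SELECT COUNT(*) AS rows_touched
FROM target
WHERE _batch_id = 'run-042';
```

Splitting the count into inserts vs. updates needs one more signal (for example, a creation marker set only on the INSERT branch), which is the kind of refinement the video walks through.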
Wrong Stats on Delta MERGE in Fabric? Ways to Get Accurate Metrics - Part 1
1.7K views · 4 hours ago
🔍 Are you getting incorrect metrics when using MERGE with Delta Tables in Microsoft Fabric? You're not alone! Many developers face challenges when retrieving the correct count of inserted and updated records. In Part 1, I explain: ✅ Reading Last Versions - Why it may return outdated stats ✅ Using Timestamp - A better but still risky approach ✅ Using Version - Can it guarantee accurate results? ...
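For context, the version-based stats examined in Part 1 come from the Delta transaction log. A hedged Spark SQL sketch against a hypothetical `target` table:

```sql
-- Inspect the latest entry in the Delta log for a (hypothetical) table.
DESCRIBE HISTORY target LIMIT 1;
-- Its operationMetrics map includes numTargetRowsInserted and
-- numTargetRowsUpdated for a MERGE. Reading "the last version" this way
-- can race with concurrent writers, which is the risk the video explains.
```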
Text Analytics Made Easy: Prebuilt AI Models in Microsoft Fabric
11K views · 14 days ago
Discover how Microsoft Fabric makes AI accessible for everyone with its prebuilt AI models! Whether you’re a data engineer, analyst, or someone exploring AI, you no longer need to configure Azure AI Services or build custom models. In this video, we’ll explore Azure AI Services and how you can use them directly in Fabric for: ✅ Sentiment Analysis ✅ Language Detection ✅ Translation With SynapseM...
Power BI Pro License: Essential for Microsoft Fabric or Optional?
6K views · 21 days ago
Do You Need Power BI Pro for Microsoft Fabric? Find Out Here! In this video, we address one of the most common questions we hear from customers during Microsoft Fabric implementations: do you really need Power BI Pro licenses for building modern data solutions? Many assume that once they invest in a capacity, specifically an F64, anyone can work with Power BI reports seamlessly, but is that re...
Misconfigured My Microsoft Fabric Workspace? My Costs Went Up! 😱
10K views · a month ago
Are you struggling to manage the costs of your Microsoft Fabric workspace? In this video, I dive into two common misconfigurations that can significantly impact your Fabric performance, pipeline efficiency, and capacity unit (CU) consumption. We'll cover: 1️⃣ Enabling high concurrency for pipelines running multiple notebooks 2️⃣ Adding an exit function to notebooks to avoid unnecessary resource...
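The "exit function" point above boils down to ending the notebook session as soon as there is nothing to do, instead of letting it idle and consume CUs. A minimal sketch of that pattern; the watermark check is a hypothetical example, and the Fabric-only exit call is shown commented because `mssparkutils` exists only inside a Fabric notebook:

```python
def should_skip_run(last_watermark: str, new_watermark: str) -> bool:
    """Return True when the source has nothing newer than what was processed."""
    return new_watermark <= last_watermark

result = "skipped" if should_skip_run("2024-01-01", "2024-01-01") else "processed"

# Inside a Fabric notebook, end the session here rather than letting the
# capacity burn CUs on an idle notebook (Fabric-only utility):
# mssparkutils.notebook.exit(result)
print(result)
```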
Automating Pause and Resume of Fabric Capacity via API with ADF
164 views · a month ago
Learn How to Automate Pause and Resume of Fabric Capacity Using APIs and ADF. In this video, discover how to automate the process of pausing and resuming Microsoft Fabric capacity using APIs and Azure Data Factory (ADF). 🔍 What You'll Learn: - A step-by-step guide to the available APIs for managing Fabric capacity - Essential security measures for safely accessing and controlling Fabric capacity ...
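The calls behind this automation go to the Azure Resource Manager endpoints of the Microsoft.Fabric provider. A minimal stdlib sketch; the subscription, resource group and capacity names are placeholders, and the `api-version` value is an assumption to check against the current docs:

```python
import urllib.request

# Assumed api-version for the Microsoft.Fabric capacities provider;
# verify against the current REST API reference.
API_VERSION = "2022-07-01-preview"

def capacity_action_url(subscription: str, resource_group: str,
                        capacity: str, action: str) -> str:
    """Build the ARM URL for the 'suspend' or 'resume' action."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Fabric/capacities/{capacity}"
        f"/{action}?api-version={API_VERSION}"
    )

def pause_capacity(subscription, resource_group, capacity, bearer_token):
    """POST the suspend action with an Azure AD bearer token."""
    req = urllib.request.Request(
        capacity_action_url(subscription, resource_group, capacity, "suspend"),
        method="POST",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    return urllib.request.urlopen(req)
```

In ADF, the same POST becomes a Web activity, typically authenticated with the factory's managed identity instead of a hand-made bearer token.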
Key Concepts of Microsoft Fabric OneLake
258 views · a year ago
Understand key concepts of OneLake, the Data Lake for Microsoft Fabric, with a simple illustration. See what a shortcut is, how Open Access works, how One Copy helps us, and how security works. For more details, watch the full video at ua-cam.com/video/D2L13Zs5V1g/v-deo.html #MicrosoftFabric #Fabric #OneLake #DataLake #analytics #powerbi
Microsoft Fabric creates a Warehouse automatically - Microsoft Fabric - III
3.8K views · a year ago
Discover the power of Microsoft Fabric in automating data warehousing with our latest video, “Microsoft Fabric creates a Warehouse Automatically”. This informative video demonstrates how you can effortlessly create a data warehouse without the need for complex and time-consuming traditional warehousing processes. Join us as we explore how Microsoft Fabric automatically generates a default data ...
Microsoft Fabric - II - OneLake
4.6K views · a year ago
This video explains OneLake: The Data Lake for Microsoft Fabric. The video discusses the following items with illustrations and demonstrations. 1. OneLake 2. Workspaces 3. OneLake artifacts (Lakehouse, Warehouse) 4. OneLake Explorer 5. Shortcuts 6. Accessing OneLake using Power BI, Synapse and Databricks
Microsoft Fabric - Understanding and enabling Microsoft Fabric
2.6K views · a year ago
Let's explore Microsoft Fabric. This video discusses "What is Microsoft Fabric?", "What workloads does it support?", "Why should we use it?" and "How to enable the Microsoft Fabric trial". Watch and see, and let me know if you have any questions.
Unleashing Power BI in Jupyter Notebook
1.7K views · a year ago
This video discusses two things: embedding a Power BI report in a Jupyter Notebook and creating a Power BI report directly in a Jupyter Notebook. Embedding Power BI is not new; it was announced in May 2021. Creating a Power BI report in a Jupyter Notebook was announced in April 2023. Both topics are discussed in detail with demonstrations. Here are related links: powerbi.microsoft.com...
Real-time streaming in Power BI - Push Dataset
11K views · a year ago
This video discusses how to show streaming data with Power BI visuals, specifically with Push Datasets. It compares the types of streaming datasets supported, and walks through creating a Push dataset, sending values from a .NET application, and creating reports and dashboards.
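The push mechanics shown with the .NET application reduce to one REST call. A stdlib Python sketch of the same idea; the endpoint shape follows the Power BI push-dataset API, while the workspace id, dataset id and push key are placeholders obtained from the service:

```python
import json
import urllib.request

# Push (streaming) dataset row endpoint; ids and key are placeholders.
PUSH_URL = ("https://api.powerbi.com/beta/{workspace_id}"
            "/datasets/{dataset_id}/rows?key={push_key}")

def build_payload(rows):
    """A push dataset accepts a JSON object with a 'rows' array."""
    return json.dumps({"rows": rows}).encode("utf-8")

def push_rows(workspace_id, dataset_id, push_key, rows):
    """POST a batch of rows to the push dataset."""
    req = urllib.request.Request(
        PUSH_URL.format(workspace_id=workspace_id,
                        dataset_id=dataset_id, push_key=push_key),
        data=build_payload(rows),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```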
Azure Synapse Analytics - Part III - Serverless SQL Pool
1.3K views · 2 years ago
Azure Synapse Serverless SQL Pool facilitates data exploration, transformations and data warehousing with multiple functionalities, allowing us to work with it using SQL. This video discusses how it works and what we can do, with demos on use cases. This is the content: - Introduction to Serverless SQL Pool - How useful it is - Is it the same as Dedicated SQL Pool? - Demo - Data exploration - Data trans...
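As a flavour of the data-exploration part, serverless SQL queries files in place with OPENROWSET, with no cluster to manage; the storage path below is a placeholder:

```sql
-- Explore Parquet files directly in the lake; schema is inferred.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://contosolake.dfs.core.windows.net/data/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales;
```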
Azure Synapse Analytics - Part II - Hyperspace
717 views · 2 years ago
Azure Synapse Analytics - Part I
15K views · 4 years ago
Configuring Incremental Refresh in Power BI
4.2K views · 4 years ago
Power BI Desktop Model - Connect from SSMS and Export more than 30,000 rows
4.5K views · 4 years ago
Azure Databricks - Enabling Databricks CLI for Azure Portal and Windows
4K views · 4 years ago
Azure Data Factory - Accessing a Databricks Notebook with Input and Output Parameters
14K views · 4 years ago
Azure Databricks - Accessing Data Lake using Azure Active Directory Credentials
7K views · 4 years ago
Azure Databricks - Accessing Data Lake - Using a Service Principal
19K views · 4 years ago
Sending Emails from Azure SQL Database
10K views · 4 years ago
Data Science with SQL Server Machine Learning Services
5K views · 4 years ago
Automating Resume and Pause of Azure Analysis Services
3.3K views · 4 years ago
Azure Databricks - What, Why & How
41K views · 5 years ago
Creating Paginated Reports with Power BI
12K views · 5 years ago
Azure Data Lake Analytics - Accessing External Sources with USQL
3.5K views · 6 years ago
Azure Data Factory - Iterate over a data collection using Lookup and ForEach Activities
61K views · 6 years ago
Azure Data Factory - V2 - Creating a Pipeline using Author-Tab
25K views · 6 years ago

COMMENTS

  • @Learn2Share786 · 11 days ago

    Thanks! It would be really helpful if you uploaded the notebook to GitHub; that would help with practicing.

  • @AsangaRamanayake · 23 days ago

    Great work Dinesh...!!!

  • @Learn2Share786 · 29 days ago

    Thanks for the explanation. Could you please share the ADF code/template?

    • @DineshPriyankara · 23 days ago

      Sorry, I generally discard the code once it's used in a video :(. Anyway, it is a simple one; just follow the steps for adding items to the pipeline.

  • @Nalaka-Wanniarachchi · a month ago

    Nice video ! From my experience, the starter pool works well without HC mode, while custom Spark pools can benefit from it. I've noticed HC mode enhances performance for heavy-load concurrent tasks, but if we are cost-conscious, it may be prudent to avoid HC mode for smaller tasks, as underutilized resources could increase CU costs.

    • @DineshPriyankara · a month ago

      Good point, Nalaka! Since the Custom Pool takes time to start, enabling HC for pipelines might indeed highlight its value, definitely worth testing and seeing the results. Thanks for sharing this. The one we saw with 1.7 million CUs only includes code for updating an Azure SQL DB, but it’s part of a pipeline with many activities that typically take around 20-30 minutes to complete.

  • @MorsiMasmoudi · a month ago

    Hello, and thanks for the video. Can you share the code for the sample database and the stored procedure for sending email?

  • @saharshjain3203 · 3 months ago

    Hey Dinesh, I want to create a data flow that extracts all the attachments in CSV and/or Excel format and saves them in the Lakehouse in table format in Microsoft Fabric. Can you guide me on how to execute this task?

  • @KishoreLingala · 5 months ago

    Nice one. I tried this and got an error like this: "[Open Browser Console for more detailed log - Double click to close this message] Failed to load model class 'ReportModel' from module 'powerbi-jupyter-client' Error: No version of module powerbi-jupyter-client is registered"

  • @KishoreLingala · 5 months ago

    Nicely explained. Do we have an option of authenticating using a Service Principal (Bearer token) in PowerBiClient?

  • @syednayyar · 6 months ago

    Can I connect and ingest data using Java (a JSON file), uploading it to a Lakehouse? I am not able to find any Java snippet for it. The use case is that I want to ingest data directly from my Android app (using Java); it may be HTTP or REST, I don't mind, but I want to ingest files (say GPS data or sensor data) saved in JSON format into my Lakehouse in Fabric. Is it even possible? Can you please make a video on it? (I don't want to use an Azure subscription.)

  • @hasnaamanaa1729 · 8 months ago

    good job and clarification

  • @pratikpophali107 · 8 months ago

    Hi! Thank you for this wonderful video. I am doing a similar thing in which I consume data from a Kafka topic and send it to Power BI. I want to replicate what you have done to understand it. Can you please share the code or a GitHub repository link for it?

  • @superfreiheit1 · 11 months ago

    Can you show us how to configure Machine Learning Services on SQL Server 2022?

  • @manjunathkampali8942 · 11 months ago

    Nicely explained. Do we have an option of authenticating using a Service Principal (Bearer token) in PowerBiClient?

  • @wendzbrand · a year ago

    Where can I get the Table Name?

  • @cheatvideos3015 · a year ago

    Wow, this is awesome! Did you share the code so I can get it and play with it? Thanks in advance for everything; I'm looking forward to watching all your videos. Also, can you do permissions in a dashboard? For example, if you have cities and want each user to see only his city, like what we do in the Power BI Desktop version?

  • @FilippoBesana · a year ago

    Hello, thanks for this video! I'm trying to implement Push and Delete using a capability of the latest SQL Server: it can call a REST web service by creating an MSXML2.ServerXMLHttp object via the sp_OACreate/sp_OAMethod/sp_OADestroy system stored procedures. The main advantage is working directly on the database where the data is stored, bypassing Visual Studio and a lot of useless complexity from writing, compiling and releasing source code. Calling web services from the database shows very strange behaviour: the Push method works fine (it returns 200 as the status code and I see values in real time in my dashboards on the Power BI service), but the Delete method also returns 200 as the status code yet does not delete anything! If I delete rows on the Microsoft web page you showed in the video, all rows are deleted. Do you have any suggestions?

  • @saadhafeez9171 · a year ago

    Hi, I have a question. You used the static access token to remove the rows, which I think expires in two hours. How can we make it dynamic? I tried registering an app in Azure but I think it does not have permission. Kindly let me know. Thanks.

  • @weronikakrol5812 · a year ago

    Hello! It's a very interesting video, thanks a lot! Can you tell me if it is possible to create a push dataset and connect it to usual data sources like an Oracle database?

    • @DineshPriyankara · a year ago

      Yes, when it comes to a Push Dataset, you are responsible for connecting to the source, getting the required data, and pushing it to Power BI. If your code gets the right dataset from Oracle, it works just as shown in the video.

    • @susanoyekanmi861 · a year ago

      It's a very nice video, but I have a question: can you upload the dataset from a Google Sheet, create a streaming dataset with it, and then analyse it? Will it work the same way as seen in the video?

  • @BISimplifier · a year ago

    Note: you can connect to a Direct Lake dataset from Desktop in Live mode. Once you create the semantic model using web editing in the service, you can connect to the dataset in Live mode in Desktop to create reports.

    • @DineshPriyankara · a year ago

      Yes, unfortunately I had no way of showing that part; the workspace configured with a capacity is not allowed to be exposed :). The only limitation we see with the current version is creating datasets for Data Lake folders using the PBI service.

  • @tharindhuanuradha · a year ago

    Can I bring data from Infor Data Lake to a Microsoft Fabric Lakehouse and perform some transformations?

    • @DineshPriyankara · a year ago

      I have not worked with Infor Data Lake, however if it can be accessed using generic protocol (HTTP, OData, REST API), then data can be brought in using pipelines. Once in, any type of transformation is possible, either using Data Flow or Notebooks.

  • @sangeetadevnath7409 · a year ago

    Nicely explained. Thank you!

  • @Gayashan4lk · a year ago

    great content.

  • @SKGA · a year ago

    Good stuff..! 👏

  • @matrixlnmi169 · a year ago

    These are tightly coupled approaches; if the solution needs to move to another cloud provider, a fresh investment would be required.

  • @anoopv5790 · a year ago

    Very Clear Explanation. Excellent!

  • @HEMANTHKUMAR-gu7fi · 2 years ago

    Thank you so much

  • @motionblender27 · 2 years ago

    Awesomely done, thank you. If someone wants to record in a table column that the mail was sent, just add one more SQL action step to execute a query.

  • @dpatro7245 · 2 years ago

    If we want to return multiple output values from a notebook, how do we get each individual value into the pipeline? Can you please help me?

  • @giladsefti301 · 2 years ago

    Excellent explanation. Thanks!

  • @SAMSARAN2108 · 2 years ago

    Can I pass the SSO credentials I use to log in to Power BI to make the Databricks connection? Please confirm. Regards, Sam

  • @vamsi.reddy1100 · 2 years ago

    Your voice and pronunciation are like Kudvenkat's.

  • @sudarshant2340 · 2 years ago

    Data in an Excel file with 2 columns (Country-name, Flag): India, 1; Netherlands, 1; Romania, 0. The Excel file is stored in Azure Data Lake Gen2. Requirement: I want only the flag = 1 data from the Excel file using Azure Data Factory, without using SQL Server or a data flow. I'm trying to implement this using the Lookup, Set Variable and ForEach activities in ADF but I'm unable to find a solution. Can you please give me your suggestions or ideas on how to implement the pipeline in ADF?

  • @felixkongho6969 · 2 years ago

    Hi Dinesh, when creating the connection in Logic Apps, after entering the server name, database name and table name, I got the error "Could not retrieve values. login failed for user <token-identified principal>". Any suggestion on how to resolve the connection issue?

  • @priyadarshinis8008 · 2 years ago

    Thanks for the informative video 👍

  • @mohammedsiraj8523 · 2 years ago

    Thanks, very useful.

  • @mohammedshabaaz9625 · 2 years ago

    Are you looking for a thumbnail designer for your channel which would increase the click through rate of your videos then plz let me know. Happy to help Dinesh and btw your work is amazing

  • @reshmakonde597 · 2 years ago

    What if new files are added daily? How do we control access to them?

  • @ash3rr · 2 years ago

    Why do I get a network timeout when I try to list my clusters?

  • @kamlakarpawar6671 · 2 years ago

    How do I set up a recurring migration through queries/scripts from SQL Server (on-premises) to an Azure SQL database? I need help syncing data to Azure SQL from the on-premises SQL Server. Available resources: two on-premises SQL Server databases on different servers; an Azure SQL database in the cloud; migration scripts/queries ready to fetch data from the on-premises SQL Server. Requirement: set up a scheduler that runs every 12 hours (twice a day) on Azure SQL; using the migration scripts, data will be fetched from the on-premises SQL Server and inserted into the Azure SQL database.

  • @pragnyapuranik5355 · 2 years ago

    Thank you … this helped me a lot :)

  • @chinmaygokhale4260 · 2 years ago

    Does this encryption type support the "like" operator and wildcards for searching?

  • @krishnachaitanyareddy2781 · 2 years ago

    Can you explain how storage is mapped when creating the Apache Spark pool? I didn't see anywhere that you mentioned using a particular storage account.

  • @manniedigicast · 2 years ago

    Hi Dinesh. How can I copy the output of a lookup activity to blob storage or ADLS?

  • @fumini99 · 2 years ago

    Really awesome presentation, thanks a lot!!

  • @swathibandarupalli803 · 2 years ago

    Hi Dinesh... Could you please create a video on different file storage formats like Parquet, Avro, ORC, etc., with some practical use cases?

  • @sravankumar6180 · 2 years ago

    Could you also share a video on how data is stored when we use a Dedicated SQL pool, and a cost comparison between Synapse Serverless, Dedicated and a Databricks Lakehouse?

    • @You77alesi · 2 years ago

      Hi Sravan, did you find something useful in the meantime ?

  • @sravankumar6180 · 2 years ago

    Thanks for sharing this, Dinesh; this is very useful. I would like to understand more about using a Serverless SQL pool as a data warehouse and for Power BI consumption. If we use Parquet, it doesn't support schema enforcement and runs into read-write conflicts. Could you provide a video on your analysis of using a Serverless SQL pool for BI consumption?