DP-203 Exam Question 16

You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that copies the data to a staging table in the data warehouse, and then uses a stored procedure to execute the R script.
Does this meet the goal?
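
For context, the sketch below is a non-authoritative illustration (not the official answer) of what the stored-procedure step of this proposed solution would look like when driven from Python with pyodbc. The connection string, schema, and object names are hypothetical placeholders, and the procedure relies on sp_execute_external_script, the SQL Server Machine Learning Services mechanism for running R; whether the target data warehouse supports that call is what the scenario hinges on.

```python
# Hedged sketch of the proposed "stored procedure runs the R script" step.
# Connection string, schema, and object names are hypothetical placeholders.
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myworkspace.sql.azuresynapse.net,1433;"  # hypothetical endpoint
    "Database=mydw;Uid=loader;Pwd=<password>;Encrypt=yes;"
)

# T-SQL for the stored procedure the solution describes. sp_execute_external_script
# is SQL Server Machine Learning Services syntax for running R; support for it in
# the target environment is exactly what the question tests.
CREATE_PROC = """
CREATE PROCEDURE dbo.TransformStagedData
AS
BEGIN
    EXEC sp_execute_external_script
        @language           = N'R',
        @script             = N'output_data <- input_data',   -- placeholder R transform
        @input_data_1       = N'SELECT * FROM stg.IncrementalData',
        @input_data_1_name  = N'input_data',
        @output_data_1_name = N'output_data';
END
"""

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    conn.cursor().execute(CREATE_PROC)
    conn.cursor().execute("EXEC dbo.TransformStagedData;")
```
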
DP-203 Exam Question 17

You have a Microsoft SQL Server database that uses a third normal form schema.
You plan to migrate the data in the database to a star schema in an Azure Synapse Analytics dedicated SQL pool.
You need to design the dimension tables. The solution must optimize read operations.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
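
As a rough illustration of the design choices involved, the sketch below creates one read-optimized dimension table in a dedicated SQL pool via pyodbc. It assumes a relatively small dimension denormalized from the 3NF source: REPLICATE distribution avoids data movement on joins, a surrogate-key column sits alongside the business key, and a clustered columnstore index suits scan-heavy reads. All object names and the connection string are hypothetical.

```python
# Minimal sketch of a read-optimized dimension table in a dedicated SQL pool.
# Object names and the connection string are hypothetical placeholders.
import pyodbc

DIM_CUSTOMER_DDL = """
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT           NOT NULL,  -- surrogate key populated during the load
    CustomerId   INT           NOT NULL,  -- business key from the normalized source
    CustomerName NVARCHAR(200) NOT NULL,
    City         NVARCHAR(100) NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,            -- full copy per compute node: no join data movement
    CLUSTERED COLUMNSTORE INDEX          -- favors large scans typical of star-schema reads
);
"""

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myworkspace.sql.azuresynapse.net,1433;"  # hypothetical endpoint
    "Database=sqlpool1;Uid=admin_user;Pwd=<password>;Encrypt=yes;"
)

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    conn.cursor().execute(DIM_CUSTOMER_DDL)
```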

DP-203 Exam Question 18

You have an Azure subscription that contains an Azure Synapse Analytics dedicated SQL pool named SQLPool1.
SQLPool1 is currently paused.
You need to restore the current state of SQLPool1 to a new SQL pool.
What should you do first?
DP-203 Exam Question 19

You have an Azure Synapse Analytics pipeline named Pipeline1 that contains a data flow activity named Dataflow1.
Pipeline1 retrieves files from an Azure Data Lake Storage Gen2 account named storage1.
Dataflow1 uses the AutoResolveIntegrationRuntime integration runtime configured with a core count of 128.
You need to optimize the number of cores used by Dataflow1 to accommodate the size of the files in storage1.
What should you configure? To answer, select the appropriate options in the answer area.

DP-203 Exam Question 20

You have an Azure data solution that contains an enterprise data warehouse in Azure Synapse Analytics named DW1.
Several users execute ad hoc queries against DW1 concurrently.
You regularly perform automated data loads to DW1.
You need to ensure that the automated data loads have enough memory available to complete quickly and successfully when the ad hoc queries run.
What should you do?
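
As a hedged sketch of one way to reserve memory for the loads, the example below applies workload isolation on the dedicated SQL pool via pyodbc: a workload group guarantees a share of resources and a classifier routes the load login into it. The group and classifier names, the load_user login, and the percentages are illustrative assumptions; assigning the load login to a larger resource class (for example with sp_addrolemember) is a commonly cited alternative on dedicated SQL pools.

```python
# Hedged sketch: reserve memory for the load workload with workload isolation.
# Names, login, and percentages are hypothetical and would be tuned to real loads.
import pyodbc

CREATE_GROUP = """
CREATE WORKLOAD GROUP wgDataLoads
WITH
(
    MIN_PERCENTAGE_RESOURCE = 30,            -- share reserved for the load workload
    CAP_PERCENTAGE_RESOURCE = 60,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 30  -- memory granted to each load request
);
"""

CREATE_CLASSIFIER = """
CREATE WORKLOAD CLASSIFIER wcDataLoads
WITH
(
    WORKLOAD_GROUP = 'wgDataLoads',
    MEMBERNAME     = 'load_user',            -- hypothetical login used by the loads
    IMPORTANCE     = HIGH
);
"""

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myworkspace.sql.azuresynapse.net,1433;"  # hypothetical endpoint
    "Database=DW1;Uid=admin_user;Pwd=<password>;Encrypt=yes;"
)

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cursor = conn.cursor()
    cursor.execute(CREATE_GROUP)
    cursor.execute(CREATE_CLASSIFIER)
```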