Mohammad Gufran Jahangir | August 5, 2025

Here’s a comprehensive list of 50 Azure Data Factory (ADF) interview questions, with answers, spanning essential, intermediate, advanced, and scenario-based levels. This will help you prepare effectively:


✅ Essential Level (Basic Concepts)

  1. What is Azure Data Factory?
    ADF is a cloud-based ETL and data integration service for orchestrating and automating data movement and transformation.
  2. What are the core components of ADF?
    Pipelines, Activities, Datasets, Linked Services, Triggers, Integration Runtime.
  3. What is a pipeline in ADF?
    A logical group of activities for data movement and transformation.
  4. What is a dataset in ADF?
    Metadata that defines the schema and location of the data to use in activities.
  5. What is a linked service?
    Similar to a connection string; defines the connection to data sources.
  6. What are the types of triggers?
    Schedule, Tumbling Window, Event-based.
  7. What is Integration Runtime (IR)?
    The compute infrastructure for data movement and transformation in ADF.
  8. Difference between Azure IR and Self-hosted IR?
    Azure IR runs fully managed in the cloud; a self-hosted IR (SHIR) is installed on-premises or on a VM to access data behind a private network.
  9. What is the use of parameters in ADF?
    Used to make pipelines dynamic and reusable.
  10. What is a control activity?
    Controls pipeline flow, e.g., If Condition, ForEach, Until, Wait.

🔁 Intermediate to Advanced Level

  1. What is the difference between mapping data flows and wrangling data flows?
    Mapping data flows provide a visual, code-free transformation canvas executed on Spark; wrangling data flows use Power Query for interactive data preparation.
  2. How do you handle failures in pipelines?
    Retry policies, On Failure path, activity dependency conditions.
  3. What is a tumbling window trigger?
    Executes pipeline at periodic intervals with a fixed size window and no overlap.
  4. How to pass parameters between pipelines?
    Using Execute Pipeline activity with parameter key-value pairs.
  5. How does ADF support CI/CD?
    Through Git integration (collaboration & publish branches) and Azure DevOps pipelines.
  6. What is staging in copy activity?
    Stages data in interim Blob storage when source and sink are not directly compatible, or to enable bulk-loading paths such as PolyBase.
  7. What is the purpose of global parameters?
    Shared constants accessible across all pipelines.
  8. What is debug mode in ADF?
    Lets you test-run a pipeline from the authoring canvas without publishing; the activities do execute for real against the debug Integration Runtime.
  9. How do you monitor pipeline executions?
    Using Monitor tab, Activity Run Output, or Log Analytics.
  10. How to encrypt sensitive data in ADF?
    Use Key Vault for secrets, and secure input/output settings.
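A tumbling window trigger definition also illustrates passing parameters into a pipeline (question 4 above): the window boundaries come from the trigger’s system output. A sketch, assuming a hypothetical pipeline named `LoadHourlyData`:

```json
{
  "name": "HourlyTumblingTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2025-01-01T00:00:00Z",
      "maxConcurrency": 1
    },
    "pipeline": {
      "pipelineReference": { "referenceName": "LoadHourlyData", "type": "PipelineReference" },
      "parameters": {
        "windowStart": "@trigger().outputs.windowStartTime",
        "windowEnd": "@trigger().outputs.windowEndTime"
      }
    }
  }
}
```

Because each window is fixed-size and non-overlapping, the `windowStart`/`windowEnd` pair gives the pipeline an exact, replayable slice of time to process.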

💡 Advanced/Architectural Level

  1. ADF vs. SSIS – Key differences?
    ADF is cloud-native and serverless with connectors for modern cloud stores; SSIS is a server-based, on-premises ETL tool (though SSIS packages can run inside ADF via the Azure-SSIS IR).
  2. How to use Git integration in ADF?
    Connect Git repo, work in collaboration branch, publish to adf_publish.
  3. Can we use stored procedures in ADF?
    Yes, using Stored Procedure activity with parameters.
  4. What is the role of Data Flows in ADF?
    For scalable transformation of data using Spark clusters.
  5. How do you optimize ADF performance?
    Use parallel copies, partitioning, efficient IR location, reduce data volume.
  6. What is the maximum pipeline concurrency in ADF?
    Concurrency is configured per pipeline via its concurrency property; runs beyond the limit are queued.
  7. Difference between pipeline parameters, variables, and expressions?
  • Parameters: values passed in
  • Variables: store runtime values
  • Expressions: compute values dynamically
  8. ADF pricing model?
    Based on pipeline activity executions, data movement, and IR usage (vCore-hours for Data Flows).
  9. How to handle schema drift?
    Use Data Flows’ “allow schema drift” and “auto-mapping” features.
  10. Can we invoke REST APIs in ADF?
    Yes, using the Web activity or the REST connector in Copy/Data Flow.
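The parameters/variables/expressions distinction in question 7 shows up directly in the expression language: parameters are read via `pipeline().parameters.x`, variables via `variables('x')`, and both are combined with expression functions. A sketch of a Set Variable activity (activity and variable names are illustrative):

```json
{
  "name": "SetRunLabel",
  "type": "SetVariable",
  "typeProperties": {
    "variableName": "runLabel",
    "value": "@concat(pipeline().Pipeline, '_', formatDateTime(utcNow(), 'yyyyMMdd'))"
  }
}
```

The leading `@` marks the value as a dynamic expression evaluated at run time; parameters stay read-only after the run starts, while variables can be reassigned by later activities.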

📘 Scenario-Based Questions

  1. How would you orchestrate ETL jobs with dependency?
    Use control flow activities (If, Wait, Until) and activity dependencies.
  2. You need to load 1,000 files in parallel. How would you do it?
    Use a ForEach activity with Sequential disabled and Batch Count set to control the degree of parallelism.
  3. How do you copy data from on-prem SQL Server to Azure Data Lake?
    Use Self-hosted IR with Copy Activity.
  4. How do you secure ADF pipeline secrets?
    Use Azure Key Vault linked service for credentials and secrets.
  5. How to restart only failed activities in a pipeline?
    Use the “Rerun from failed activity” option in the Monitor tab, or build checkpointing with control activities and variable flags.
  6. You need to implement different logic for dev, test, and prod. How?
    Use global parameters or configuration files with environment variables.
  7. Can you deploy ADF pipelines using Azure DevOps? How?
    Yes, export ARM templates or use Git repo, build and release pipelines.
  8. Data flow is running slow, what steps will you take?
    Enable staging, optimize source queries, tune partitioning, increase core count.
  9. You need to process files only if they land between 2AM and 3AM daily. How?
    Use a storage event trigger and validate the trigger time before processing (e.g., an If Condition comparing utcNow() against the window).
  10. How would you handle incremental loads in ADF?
    Use watermark columns and Lookup/Filter activities to track delta.
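The incremental-load pattern in the last answer is typically a Lookup that reads the stored watermark, followed by a Copy activity whose source query filters on it. A sketch of the Copy activity’s source, assuming a hypothetical `dbo.Orders` table and a preceding Lookup activity named `LookupOldWatermark`:

```json
{
  "source": {
    "type": "AzureSqlSource",
    "sqlReaderQuery": "@concat('SELECT * FROM dbo.Orders WHERE ModifiedDate > ''', activity('LookupOldWatermark').output.firstRow.WatermarkValue, '''')"
  }
}
```

After the copy succeeds, a Stored Procedure activity usually writes the new high-water mark back to the control table so the next run picks up only newer rows.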

🧠 Bonus Conceptual Questions

  1. What’s the use of REST connector in Copy Activity?
    To extract data from RESTful web services.
  2. Difference between Lookup and Get Metadata activities?
  • Lookup fetches data (e.g., SQL query output)
  • Get Metadata retrieves structural information (e.g., file name, size, child items).
  3. How do you validate a pipeline before publishing?
    Use the Validate All option in the UI.
  4. What happens during pipeline publishing in Git mode?
    ADF generates ARM templates from the collaboration branch and commits them to the adf_publish branch.
  5. What are sink and source in copy activity?
  • Source: data origin
  • Sink: destination
  6. Can ADF trigger Azure Functions?
    Yes, using the Azure Function activity.
  7. Can we schedule pipelines with dependencies?
    Yes, using Tumbling Window trigger dependencies or chained Execute Pipeline activities.
  8. How do you deal with changing file names dynamically?
    Use wildcards and dynamic content with expressions.
  9. How to ensure data is not processed multiple times?
    Implement watermarking, status flags, or control tables.
  10. How would you troubleshoot pipeline failure?
    Use Monitor → Activity Run output, logs, retry settings, and debug mode.
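Wildcards and dynamic content (question 8 above) combine naturally in a Copy activity’s source store settings. A sketch, assuming an illustrative `incoming` folder and a `sales_YYYYMMDD*.csv` naming convention:

```json
{
  "source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
      "type": "AzureBlobStorageReadSettings",
      "wildcardFolderPath": "incoming",
      "wildcardFileName": "@concat('sales_', formatDateTime(utcNow(), 'yyyyMMdd'), '*.csv')"
    }
  }
}
```

The expression resolves the date portion at run time while the `*` wildcard absorbs whatever suffix the producer appends, so the pipeline needs no change when file names vary.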
