Azure Data Engineer
- Insight Global (Houston, TX)
Job Description
A client of Insight Global is looking for an Azure Data Engineer to join their team. This person will be working on a project that is moving digitalized business data to modern cloud structure, deliver values, BI insights, move data to the correct places, and adding analytics and AI. This environment is all on Azure. This person will need strong experience ingesting data from multiple sources and building scalable pipelines using ADF and Databricks. This person will also need experience building curated datasets in Databricks using PySpark, and SQL. The ideal person will have experience with time series data and real-time use cases.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to [email protected]. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Skills and Requirements
5+ years of experience in Data Engineering
Databricks experience - pivot logic, aggregation, extracting data from multiple sources, enhancing data management, building curated datasets, and building scalable pipelines
Azure Data Factory - experience ingesting data into ADLS, building complex, scalable pipelines, and developing end-to-end data pipelines to ensure seamless data ingestion and curation
Experience designing, developing, and implementing scalable ETL processes and data engineering pipelines using Python and PySpark
Writing SQL queries from scratch
Experience with Fabric