"Alerted.org

Job Title, Industry, Employer
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Advanced Search

Advanced Search

Cancel
Remove
+ Add search criteria
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Related to

    Data Architect

    Intermountain Health (Denver, CO)

    Job Description:

    The purpose of the Data Architect - Staff role is to design, implement, and maintain both traditional and modern data infrastructures at the staff level. This involves combining expertise in relational databases, cloud technologies, and big data tools to create scalable, efficient, and secure data solutions. The role also includes developing strategies for integrating enterprise data, ensuring data quality, and optimizing data access across both cloud and on-premises systems.

    Essential Functions

    + Responsible for designing, implementing, and maintaining both traditional and modern data infrastructures at the staff level.

    + Combines expertise in relational databases, cloud technologies, and big data tools to create scalable, efficient, and secure data solutions.

    + Develops strategies for integrating enterprise data, ensuring data quality, and optimizing data access across both cloud and on-premises systems.

    + Works on moderate to complex projects and is mentored by senior-level data architects/engineers to deliver robust data models, pipelines, and warehouses that meet organizational needs using technologies like SQL, Python, PySpark, and traditional RDBMS platforms.

    Skills:

    Azure Databricks and Azure Data Factory:

    + Deep hands-on experience designing and managing Databricks pipelines, both batch and streaming (a sketch follows this list).

    + Strong working knowledge of Delta Lake concepts: schema evolution, ACID transactions, Change Data Feed, etc.

    + Experience with Databricks Lakeflow pipelines (or similar orchestration tools) for scheduling, chaining, and monitoring ETL workloads.

    + Skilled at troubleshooting job failures, managing cluster configs (job clusters/serverless/interactive), and tuning Spark performance.
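
    As an illustration only (not part of the posting), the sketch below shows the kind of Delta Lake pipeline step these bullets describe: reading a table's Change Data Feed and upserting it into a curated table with an ACID MERGE. The table names, key column, and starting version are hypothetical.

        # Minimal sketch; assumes a Databricks runtime with Delta Lake and
        # Change Data Feed enabled on the (hypothetical) source table.
        from delta.tables import DeltaTable
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.getOrCreate()

        # Read only rows changed since the last processed version.
        changes = (
            spark.read.format("delta")
            .option("readChangeFeed", "true")
            .option("startingVersion", 42)      # placeholder checkpoint version
            .table("bronze.encounters")         # hypothetical source table
            .filter("_change_type IN ('insert', 'update_postimage')")
        )

        # Upsert the changes into the curated table (ACID MERGE).
        target = DeltaTable.forName(spark, "silver.encounters")
        (
            target.alias("t")
            .merge(changes.alias("s"), "t.encounter_id = s.encounter_id")
            .whenMatchedUpdateAll()
            .whenNotMatchedInsertAll()
            .execute()
        )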

    PySpark Development:

    + Advanced proficiency writing modular PySpark code for scalable data ingestion and transformation.

    + Performance tuning for large datasets.

    + Building reusable functions, UDFs, and unit-tested PySpark modules (see the sketch below).
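
    As a hypothetical illustration of "modular, unit-tested PySpark", the function below is a pure DataFrame-in, DataFrame-out transformation, so pytest can exercise it against a tiny local DataFrame. Column and function names are invented for the example.

        from pyspark.sql import DataFrame
        from pyspark.sql import functions as F

        def standardize_encounters(df: DataFrame) -> DataFrame:
            """Pure transformation: reusable in pipelines and easy to unit test."""
            return (
                df.withColumn("admit_date", F.to_date("admit_ts"))
                  .withColumn("payer", F.upper(F.trim("payer")))
                  .dropDuplicates(["encounter_id"])
            )

        # In a test module, the same function runs on an in-memory DataFrame:
        #     result = standardize_encounters(spark.createDataFrame(rows, schema))
        #     assert result.filter("admit_date IS NULL").count() == 0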

    CI/CD:

    + Strong experience with Databricks Asset Bundles.

    + Proficiency with GitHub Actions to automate deployments, testing, and environment promotion (dev/test/prod).

    + Skilled in Git/GitHub workflows such as branching strategies, pull requests, peer reviews, and governance with CODEOWNERS.

    SQL Server Integration:

    + Proven experience landing data from Databricks into Microsoft SQL Server (on-prem and Azure-hosted).

    + Expertise with batch writes, JDBC, MERGE/UPSERT logic, error handling, and high-throughput patterns (sketched after this list).

    + Proficiency tuning queries, stored procedures, and ETL logic for large-scale workloads.
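
    One common way to satisfy the batch-write/MERGE bullet above is to land the DataFrame in a staging table over JDBC and let T-SQL perform the upsert. The sketch below uses invented server, table, and column names, and assumes df is the DataFrame produced upstream; credentials would come from a secret scope in practice.

        jdbc_url = "jdbc:sqlserver://sqlhost:1433;databaseName=warehouse"

        # 1. Land the batch in a staging table with batched JDBC writes.
        (
            df.write.format("jdbc")
            .option("url", jdbc_url)
            .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
            .option("dbtable", "stage.encounters")
            .option("batchsize", 10000)          # tune for throughput
            .mode("overwrite")
            .save()
        )

        # 2. A stored procedure (or raw T-SQL) then upserts staging into the
        #    target, keeping the MERGE logic and error handling in SQL Server:
        #        MERGE dbo.encounters AS t
        #        USING stage.encounters AS s ON t.encounter_id = s.encounter_id
        #        WHEN MATCHED THEN UPDATE SET t.payer = s.payer
        #        WHEN NOT MATCHED THEN INSERT (encounter_id, payer)
        #            VALUES (s.encounter_id, s.payer);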

    Terraform:

    + Practical experience using Terraform.

    Data Integration:

    + Experience with ingestion from external cloud sources (GCS, APIs, S3, etc.) into ADLS Gen2.

    + Familiarity with parallelized, fault-tolerant ingestion approaches (threaded boto3 utilities, checkpoints, etc.); a sketch follows this list.
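
    The ingestion bullet above is sketched below as a thread pool pulling objects from S3 with a simple checkpoint file, so a restart skips keys that already landed. The bucket name and checkpoint scheme are hypothetical; a real utility would hand off to ADLS Gen2 and use a more robust progress store.

        import concurrent.futures
        import os
        import threading

        import boto3

        s3 = boto3.client("s3")
        CHECKPOINT = "checkpoint.txt"
        lock = threading.Lock()

        done = set()
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                done = set(f.read().split())    # keys already ingested

        def ingest_key(key: str) -> str:
            local = "/tmp/" + key.replace("/", "_")
            s3.download_file("source-bucket", key, local)
            # ... hand off to the ADLS Gen2 landing zone here ...
            with lock, open(CHECKPOINT, "a") as f:
                f.write(key + "\n")             # record progress for restarts
            return key

        pages = s3.get_paginator("list_objects_v2").paginate(Bucket="source-bucket")
        keys = [obj["Key"] for page in pages
                for obj in page.get("Contents", []) if obj["Key"] not in done]

        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            for key in pool.map(ingest_key, keys):
                print("ingested", key)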

    Minimum Qualifications:

    + Proficiency with SQL and experience with traditional RDBMS (e.g., Oracle, SQL Server, PostgreSQL).

    + Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform for data architecture and storage solutions.

    + Strong familiarity with programming languages such as Python and PySpark for data engineering tasks.

    + Working experience with ETL/ELT processes and tools, including both traditional (e.g., SSIS, Informatica) and/or cloud-native solutions (e.g., Azure Data Factory, Databricks).

    + Proficiency with Python, PySpark, or R programming.

    + Excellent communication skills for collaborating with stakeholders and teams.

    + Proficiency in Product Management, Project Management, or Program Management philosophies and methodologies, with the ability to apply them to data architecture projects to ensure alignment with business goals and efficient execution.

    Preferred Qualifications:

    + Managing and monitoring Databricks Lakeflow pipelines at scale with alerting, retries, and dependencies.

    + Designing cost-efficient Databricks workloads with autoscaling, cluster policies, and job orchestration best practices.

    + Implementing logging/lineage tracking across Databricks and SQL Server.

    + Developing internal Python packages or shared libraries for common ingestion/transformation tasks.

    + Working with sensitive or regulated data domains.

    Physical Requirements:

    Remain sitting or standing for long periods of time to perform work on a computer, telephone, or other equipment.

    Location:

    SelectHealth - Murray

    Work City:

    Murray

    Work State:

    Utah

    Scheduled Weekly Hours:

    40

    The hourly range for this position is listed below. The actual hourly rate depends on experience.

    $43.92 - $69.16

    We care about your well-being – mind, body, and spirit – which is why we provide our caregivers a generous benefits package that covers a wide range of programs to foster a sustainable culture of wellness that encompasses living healthy, happy, secure, connected, and engaged.

    Learn more about our comprehensive benefits package here (https://intermountainhealthcare.org/careers/benefits).

    Intermountain Health is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.

    At Intermountain Health, we use the artificial intelligence ("AI") platform HiredScore to improve your job application experience. HiredScore helps match your skills and experiences to the best jobs for you. While HiredScore assists in reviewing applications, all final decisions are made by Intermountain personnel to ensure fairness. We protect your privacy and follow strict data protection rules. Your information is safe and used only for recruitment. Thank you for considering a career with us and experiencing our AI-enhanced recruitment process.

    All positions subject to close without notice.

