Associate Director, Data Platform and Solution Engineering
- BeOne Medicines (San Mateo, CA)
General Description:
Join BeOne's Global Data Strategy and Solutions team to build and scale a cutting-edge, fully integrated Enterprise Data and Analytics Platform that accelerates our journey from data to insights and the deployment of AI applications. The **AD, Platform and Solution Engineering** must be an expert in Databricks solution technologies, designing scalable, high-performance data solutions that empower our organization to ingest and curate data and build data products at scale. The ideal candidate will bring strong technical knowledge and experience in cloud data architectures, big data processing, and real-time analytics, coupled with the ability to collaborate cross-functionally to drive data-driven decision-making across the organization.
Essential Functions of the job:
The individual in this position should expect significant day-to-day variability in tasks and challenges.
Primary duties include but are not limited to the following:
+ Design and implement robust data architectures using Databricks, ensuring integration with existing systems and scalability for future growth.
+ Establish data management frameworks, optimizing ETL/ELT processes and data models for performance and accuracy.
+ Evaluate and recommend modern architectural patterns, including Lakehouse, Delta Live Tables, Data Mesh, and real-time streaming.
+ Drive rapid Proof-of-Concepts (POCs) to validate new architectural approaches, tools, and design patterns before enterprise rollout.
+ Partner with data engineers, scientists, and business stakeholders to develop seamless data pipelines prioritizing data integrity and usability.
+ Implement and uphold data governance practices that enhance data accessibility while ensuring compliance with regulations.
+ Integrate external systems, APIs, and cloud-native services to support new data products and analytics use cases.
+ Prototype and test new connectors, ingestion frameworks, and integration patterns to accelerate innovation.
+ Monitor data pipelines and infrastructure performance, troubleshooting issues as they arise and ensuring high availability.
+ Optimize and enhance existing data systems for performance, reliability, and cost-efficiency.
+ Collaborate with data analysts and data scientists to understand data requirements and implement solutions that support data-driven insights and models.
+ Monitor and enhance system performance, employing tools and methodologies to optimize data processing and storage solutions.
+ Optimize compute costs, job orchestration, workflow efficiency, and data storage strategies.
+ Troubleshoot and resolve data-related issues to maintain optimal system functionality.
+ Experiment with new Databricks features (Unity Catalog updates, AI/ML runtimes, Photon, DBRX, Delta Sharing, serverless SQL/compute, etc.) through quick hands-on evaluations.
+ Develop and enforce data governance standards, including data quality, security, and compliance through automation.
+ **Innovation & Rapid Prototyping**
+ Conduct fast-turnaround POCs to explore new technical capabilities, libraries, and features across Databricks, Azure, Informatica, Reltio, and other ecosystem tools.
+ Build lightweight demo pipelines, dashboards, and micro-solutions to demonstrate feasibility, guide architectural choices, and influence roadmap decisions.
+ Stay current with emerging technologies, industry trends, and platform advancements; translate insights into actionable recommendations.
+ Collaborate with vendors and internal teams to evaluate beta features, pilot new capabilities, and provide technical feedback for adoption decisions.
**Education Required:** Bachelor’s Degree in Information Technology or a related field, or equivalent experience
Qualifications:
+ Proven experience (8+ years) in data architecture or in a similar role, with extensive experience in Databricks and cloud-based data solutions.
+ 8+ years of experience in solution engineering, platform architecture, or a related role, working in a cross-functional environment.
+ Strong proficiency in Apache Spark, Unity Catalog, Python, SQL, and data processing frameworks.
+ Experience with APIs and with integrating diverse technology systems.
+ Familiarity with modern development frameworks, DevOps methodologies, and CI/CD processes.
+ Experience with data warehousing solutions, delta lakes, and ETL/ELT processes.
+ Familiarity with cloud environments (AWS, Azure) and their respective data services.
+ Solid understanding of data governance, security, and compliance best practices.
+ Excellent communication and interpersonal skills, with an ability to articulate complex technical concepts to diverse audiences.
+ Databricks certifications or hands-on experience with Delta Lake and its cloud architecture are strongly preferred.
+ Familiarity with machine learning, AI frameworks, and data visualization tools (e.g., Tableau, Power BI, Spotfire).
+ A proactive approach to learning and implementing new technologies and frameworks.
+ Experience working with Life Sciences data, including exposure to R&D, Clinical Operations, TechOps, or Manufacturing domains. Understanding of key systems (CTMS, EDC, eTMF, LIMS, MES, PV systems), data models (CDISC, SDTM, ADaM), and typical data challenges (quality, lineage, integration, governance) is highly desirable.
**Supervisory Responsibilities:** No
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.