- Robert Half Technology (San Antonio, TX)
- …including deep learning techniques. * Experience with big data tools such as Apache Spark, Hadoop, and Kafka. * Knowledge of ETL processes and data clustering methodologies. * Familiarity with natural language processing tools, including the Natural Language Toolkit (NLTK). * Excellent analytical and problem-solving skills with the ability to work independently and collaboratively. Technology Doesn't Change the World, …
- Amazon (Austin, TX)
- …Hands-on experience designing and implementing ETL data pipelines (Kafka, Airflow, Spark, Hadoop, Glue), MLOps pipelines (SageMaker, MLflow, Kubeflow), or LLMOps pipelines. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Los Angeles County applicants: Job duties for this position include: work safely and …
- Robert Half Technology (Houston, TX)
- …ETL tools like Matillion, Talend, or Informatica Cloud + Familiarity with GitHub, Spark, Hadoop, NoSQL, APIs, and streaming platforms + Proficiency in Python, Java, or Scala + Experience with Agile/Scrum methodologies. Technology Doesn't Change the World, People Do.(R) Robert Half is the world's first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great …
- Huntington Ingalls Industries (San Antonio, TX)
- …certification is highly desired. + Experience with big data technologies like Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. + Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. + The role may require some on-call work. We have many additional great benefits/perks that you can find on our website at …
- Citigroup (Irving, TX)
- …and developing applications using Java/J2EE, Spring, Spring Boot, Hibernate, SQL/NoSQL databases, Hadoop, HDFS, and Spark; testing with the Mockito library; and performing release management using CI/CD DevOps tools including Jenkins, TeamCity, SONAR, Maven, and uDeploy. 40 hrs./wk. Applicants submit resumes at https://jobs.citi.com/ . Please reference Job ID # 25881232. EO Employer. Wage Range: $168,649.49 to $182,000.00. Job …
- Amazon (Austin, TX)
- …lead technical teams in customer projects - Strong experience with distributed frameworks (e.g., Spark, Hadoop, Kafka, Presto, Flink, S3, HDFS, DBs) and a solid understanding of the JVM or a similar runtime. - Experience designing and building data governance and data protection mechanisms - Experience building applications using Generative AI tools and technologies (LLMs, vector stores, orchestrators such as LangChain, …
- Evolent (Austin, TX)
- …and .NET full-stack development. Experience with data engineering tools and platforms like Apache Spark, Hadoop, and cloud services (e.g., AWS, GCP) is a plus. + Experience with developing cloud-native services and applications, either on the public cloud (AWS/Google/Azure) or an equivalent private cloud. + Experience with healthcare regulations and standards, such as HIPAA, as well as healthcare systems and domains …
- General Motors (Austin, TX)
- …Experience with Big Data technologies and developing in the Hadoop ecosystem, i.e., Hadoop, HBase, Hive, Scala, Spark, Sqoop, Flume, Kafka, Python. + Experience with Oracle and/or Postgres, NoSQL with Yugabyte, and/or Cassandra and/or Cosmos, a plus with competency in writing basic SQL and experience with JPA. + Experience with the ELK stack and dashboarding within Kibana and Grafana to support production applications …
- Highmark Health (Austin, TX)
- …to utilize technologies such as, but not limited to: Google Cloud Platform, Hadoop, Hive, NoSQL, Kafka, Spark, Python, Linux shell scripting, SAS, Teradata, SQL + Data Warehousing + Problem-Solving + Communication Skills + Analytical Skills + Spark or Python or related tool + Cloud Technologies **Language (Other than …
- Capital One (Plano, TX)
- …Azure, Google Cloud) + 3+ years' experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) + 2+ year … role supports our cyber IAM team, working with modernized tools like Python, Spark, AWS, and Databricks. You will create and manage actionable metrics, data models, …