Job Description
Optum is a global organization focused on improving health outcomes through the power of technology. As an Associate Software Engineer at Optum, you will join a team that builds cutting-edge data infrastructure and solutions helping millions live healthier lives. You will work with talented peers in a collaborative environment that values innovation, inclusivity, and continuous development.
This role will provide hands-on experience in building scalable, cloud-native data pipelines, implementing DataOps best practices, and ensuring the quality of massive data sets for healthcare solutions. You’ll have the opportunity to grow your skills in data engineering, automation, and modern data architecture while working in a dynamic, mission-driven environment.
Key Responsibilities
- Design and develop scalable data pipelines for processing terabytes of data
- Build and orchestrate data workflows in Apache Airflow running on Kubernetes or Hadoop
- Develop best-in-class solutions for cleaning and standardizing datasets
- Tune and optimize data processing performance
- Champion the adoption of DataOps practices including CI/CD, testing, orchestration, and monitoring
- Implement data quality monitoring systems to ensure high reliability of data products
- Collaborate with team members to deliver impactful technology-driven health solutions
- Comply with company policies, including flexibility in assignments, shift changes, and workplace arrangements
Required Qualifications
- Bachelor’s degree in Computer Science or related discipline
- Hands-on experience in real-time, near-real-time, and batch data ingestion
- Expertise in building cloud-native data pipelines using AWS, Azure, or GCP
- Strong understanding and experience with DataOps (CI/CD, orchestration, testing, monitoring)
- Proficiency in:
  - Apache Spark
  - Complex SQL query writing
  - ETL/data pipeline development
  - Open-source programming languages and platforms (Scala, Python, Java, Linux)
- Knowledge of data modeling techniques such as Data Vault and the Kimball star schema
- Solid understanding of column-store analytical databases (e.g., Databricks, Snowflake, Redshift, Vertica, ClickHouse)
- Demonstrated ability to design and implement modern data strategies with measurable business value
- Strong stakeholder management and interpersonal communication skills
Technical Skills (Comma Separated)
Apache Spark, SQL, ETL, Data Pipeline Development, Scala, Python, Java, Linux, AWS, Azure, GCP, Airflow, Kubernetes, Hadoop, CI/CD, DataOps, Data Vault, Kimball, Databricks, Snowflake, Redshift, Vertica, ClickHouse