Tulsa, Oklahoma, United States

Data Streaming Developer

Job Description:

We are looking for a talented and motivated Data Streaming Developer with hands-on experience in Apache Kafka and Apache Flink to join our team. In this role, you will be responsible for designing, developing, and maintaining real-time data-streaming applications and pipelines that power our data-driven solutions.

The ideal candidate is passionate about data technologies and stream processing, has a strong technical background, and thrives in a collaborative environment. You will work closely with data engineers, architects, and other technical stakeholders to deliver scalable and efficient data-streaming solutions.

Key Responsibilities

  • Development and Implementation:
    • Design, develop, and deploy real-time data streaming applications using Apache Kafka, Apache Flink, and Apache Spark

    • Build and maintain data pipelines for ingesting, processing, and delivering streaming data

    • Integrate Kafka, Flink, and Spark with other data systems, such as databases, data lakes, and analytics platforms

  • Optimization and Troubleshooting:
    • Optimize Kafka topics, partitions, and consumer groups for high throughput and low latency

    • Tune Flink jobs for performance, scalability, and fault tolerance

    • Build resilient architectures using multi-region clusters and high-availability patterns

    • Monitor and troubleshoot data streaming applications to ensure reliability and performance

  • Collaboration and Support:
    • Work closely with data architects, engineers, and analysts to understand requirements and deliver solutions

    • Collaborate with DevOps teams to deploy and manage streaming applications in production environments

    • Provide technical support and guidance to team members on Kafka and Flink best practices

  • Innovation and Improvement:
    • Stay up-to-date with the latest developments in low-latency data and stream-processing technologies

    • Propose and implement improvements to existing data streaming processes and infrastructure
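To give candidates a concrete sense of the partition-level thinking the optimization work above involves, here is a minimal, dependency-free sketch of Kafka-style key-based partitioning. It is a hypothetical simplification (Kafka's default partitioner hashes keys with murmur2; we use CRC32 here purely for illustration), showing why records with the same key always land on the same partition and thus keep their order:

```python
# Hypothetical sketch of key-based partitioning, the mechanism that
# gives Kafka per-key ordering. Not Kafka's actual implementation.
from collections import defaultdict
import zlib

NUM_PARTITIONS = 6

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition. The hash is stable, so the
    same key always maps to the same partition."""
    return zlib.crc32(key) % num_partitions

def assign(records):
    """Group (key, value) records by the partition their key hashes to,
    preserving arrival order within each partition."""
    partitions = defaultdict(list)
    for key, value in records:
        partitions[partition_for(key)].append((key, value))
    return partitions

records = [(b"user-1", "click"), (b"user-2", "view"), (b"user-1", "purchase")]
parts = assign(records)
# All of user-1's events share one partition, in the order they arrived.
```

Because ordering is only guaranteed within a partition, choosing partition counts and keys is exactly the throughput-versus-ordering trade-off the responsibilities above describe.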


Required Qualifications

  • Education:
    • Bachelor's degree in Computer Science or Engineering, or equivalent work experience

  • Experience:
    • 2+ years of hands-on experience with Apache Kafka and Apache Flink

    • Proven experience in developing and deploying real-time data streaming applications

    • Familiarity with distributed systems, event-driven architectures, and stream processing concepts

    • Experience with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)

  • Technical Skills:
    • Proficiency in programming languages such as Java, Scala, or Python

    • Strong understanding of Kafka components (e.g., brokers, producers, consumers, Kafka Connect, Schema Registry)

    • Hands-on experience with Flink APIs, stateful stream processing, and event time processing

    • Knowledge of data serialization formats (Avro, Protobuf, JSON) and messaging protocols

    • Familiarity with SQL and NoSQL databases

  • Soft Skills:
    • Strong problem-solving and analytical skills

    • Excellent communication and teamwork abilities

    • Self-motivated and able to work independently in a fast-paced environment
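The stateful stream processing and event-time skills listed above can be illustrated without any framework. The sketch below is a hypothetical, dependency-free simplification of what Flink's windowing APIs do for you: it buckets events into tumbling windows by their event timestamp and fires a window's state only once a watermark guarantees no earlier events can still arrive.

```python
# Hypothetical sketch of event-time tumbling windows with watermarks,
# the concept behind Flink's keyed windowed aggregations. Not Flink's API.
from collections import defaultdict

WINDOW_MS = 10_000  # 10-second tumbling windows

def window_start(event_ts: int) -> int:
    """Align an event timestamp to the start of its tumbling window."""
    return event_ts - (event_ts % WINDOW_MS)

class TumblingWindowCounter:
    """Counts events per key per window; a window is emitted and its
    state cleared only when the watermark passes the window's end."""

    def __init__(self):
        # window start -> key -> count (the operator's keyed state)
        self.state = defaultdict(lambda: defaultdict(int))

    def on_event(self, key: str, event_ts: int) -> None:
        self.state[window_start(event_ts)][key] += 1

    def on_watermark(self, watermark_ts: int):
        """Fire and clear every window that ends at or before the watermark."""
        fired = {}
        for start in sorted(self.state):
            if start + WINDOW_MS <= watermark_ts:
                fired[start] = dict(self.state.pop(start))
        return fired

counter = TumblingWindowCounter()
counter.on_event("user-1", 1_000)
counter.on_event("user-1", 2_000)
counter.on_event("user-2", 12_000)
closed = counter.on_watermark(10_000)  # closes only the [0, 10_000) window
```

Holding counts until the watermark arrives is what makes this "stateful," and keying windows by event timestamp rather than arrival time is what makes it "event-time" processing.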


Preferred Qualifications

  • Experience with other stream processing frameworks (e.g., Kafka Streams, Spark Streaming)

  • Knowledge of data governance, security, and compliance best practices

  • Familiarity with modern data stack tools (e.g., Snowflake, Databricks)

  • Certifications in Kafka, Flink, or cloud platforms


What We Offer

  • Competitive salary and benefits package

  • Opportunities for professional growth and development

  • A collaborative and innovative work environment

  • The opportunity to work with a variety of clients in various industries, solving hard problems in data engineering and high-performance computing

  • Flexible working hours and remote work options

Required Skills:

Data Engineering, Apache Kafka, Pipelines, AWS, DevOps, Databases, Engineering