Senior Web Data Engineer

Job Title: Web Data Engineer (Python / Web Scraping / API Integration)

Location: Remote
Experience Level: Mid to Senior (4+ years)
Type: Full-Time

About the Role

We're looking for a skilled Web Data Engineer with at least 4 years of experience in web scraping, API integration, or related data engineering roles. In this role, you will build and maintain advanced data pipelines that collect and transform massive datasets from publicly available internet sources into actionable intelligence.

You will work with modern scraping frameworks, browser automation tools, and public APIs, designing robust systems that power our strategic data operations.

What You Will Do

  • Design, develop, and maintain scalable data collection pipelines from public APIs and websites

  • Implement scraping solutions using Python libraries (e.g., requests, httpx) and frameworks like Scrapy, BeautifulSoup, or Selenium

  • Leverage browser automation tools such as Playwright or Puppeteer for dynamic content extraction

  • Handle proxy rotation, session management, and TLS challenges for resilient data scraping (see the illustrative sketch after this list)

  • Parse and process structured and unstructured data using HTML, CSS, JavaScript, REST APIs, and GraphQL

  • Collaborate with the team to transform raw web data into clean, structured, and insightful datasets

  • Contribute to system architecture with an eye for performance, reliability, and scalability
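
To give candidates a concrete feel for the day-to-day work, here is a minimal sketch of the kind of fetch-and-parse loop with simple proxy rotation described above. It is illustrative only: the URL, proxy pool, CSS selector, and helper name (fetch_titles) are placeholders invented for this example, not part of our actual stack.

```python
# Illustrative sketch only: placeholder URL, proxy pool, and selector.
import random

import httpx
from bs4 import BeautifulSoup

# Hypothetical rotating proxy pool; in practice this would come from a managed provider.
PROXIES = ["http://proxy-a.example:8080", "http://proxy-b.example:8080"]


def fetch_titles(url: str, max_attempts: int = 3) -> list[str]:
    """Fetch a page through a rotating proxy and extract headline text."""
    for _ in range(max_attempts):
        proxy = random.choice(PROXIES)
        try:
            # httpx >= 0.26 accepts `proxy=`; older releases use `proxies=`.
            with httpx.Client(proxy=proxy, timeout=10.0, follow_redirects=True) as client:
                response = client.get(url, headers={"User-Agent": "data-pipeline/1.0"})
                response.raise_for_status()
            soup = BeautifulSoup(response.text, "html.parser")
            return [h.get_text(strip=True) for h in soup.select("h2.article-title")]
        except httpx.HTTPError:
            continue  # rotate to another proxy and retry
    return []


if __name__ == "__main__":
    print(fetch_titles("https://example.com/news"))
```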

What We Are Looking For

  • 4+ years of experience in web data engineering, API integration, or similar roles

  • Strong Python programming skills and deep knowledge of HTTP libraries

  • Experience with web scraping frameworks (Scrapy, BeautifulSoup, Selenium)

  • Hands-on experience with modern browser automation (Playwright, Puppeteer); see the sketch after this list

  • Solid understanding of web protocols, JavaScript rendering, and data extraction techniques

  • Working knowledge of proxy management, session handling, and TLS mechanics

  • Detail-oriented mindset with the ability to transform data into valuable business insights
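
For the browser-automation requirement, here is a minimal illustrative sketch using Playwright's synchronous API to extract content that only exists after JavaScript rendering. The URL, selectors, and function name are hypothetical placeholders; real targets need their own wait conditions.

```python
# Illustrative sketch only: placeholder URL and selectors.
from playwright.sync_api import sync_playwright


def scrape_dynamic_listings(url: str) -> list[str]:
    """Render a JavaScript-heavy page and pull text that only appears after rendering."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        # Wait for the client-side rendered container (hypothetical selector).
        page.wait_for_selector("div.listing-card", timeout=15_000)
        titles = page.locator("div.listing-card h3").all_inner_texts()
        browser.close()
        return titles


if __name__ == "__main__":
    print(scrape_dynamic_listings("https://example.com/listings"))
```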

Bonus Points For

  • Experience with Rust or Go web scraping frameworks (for performance-focused scraping)

  • Familiarity with distributed systems, job queues, and high-scale data collection infrastructure

  • Knowledge of asynchronous programming and parallel processing, illustrated in the sketch below
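
As a small illustration of that asynchronous style, the sketch below fans out HTTP requests with asyncio and httpx while a semaphore bounds concurrency. The URLs and the concurrency limit are arbitrary example values, not project settings.

```python
# Illustrative sketch only: placeholder URLs and an arbitrary concurrency limit.
import asyncio

import httpx


async def fetch_status(client: httpx.AsyncClient, sem: asyncio.Semaphore, url: str) -> tuple[str, int]:
    """Fetch one URL under a shared concurrency limit and report its status code."""
    async with sem:
        response = await client.get(url, timeout=10.0)
        return url, response.status_code


async def crawl(urls: list[str], max_concurrency: int = 10) -> list[tuple[str, int]]:
    """Fetch all URLs concurrently, never exceeding max_concurrency in flight."""
    sem = asyncio.Semaphore(max_concurrency)
    async with httpx.AsyncClient(follow_redirects=True) as client:
        return await asyncio.gather(*(fetch_status(client, sem, u) for u in urls))


if __name__ == "__main__":
    seed_urls = [f"https://example.com/page/{i}" for i in range(1, 6)]
    for url, status in asyncio.run(crawl(seed_urls)):
        print(status, url)
```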

Why Join Us

You'll be part of a team tackling challenging data problems with impact at scale. We operate at the intersection of cutting-edge technology and massive open-source intelligence (OSINT) collection, offering you the chance to contribute to meaningful, complex technical work from day one.