Job Description:

About Skywalk

At Skywalk, we're developing a new generation of subtle, powerful computing devices to help people access information and voice AI with near-telepathic speed, while staying deeply present in the real world. We just closed $5M in funding from top VCs and founders of Twitter, Pinterest, and Perplexity, and we are building a voice-first computing experience at the intersection of hardware, software, and ML (shipping soon). Our work spans machine-level voice understanding, embedded firmware, and seamless user experiences across different modalities. Our custom hardware and proprietary machine learning technology allow for perfect voice capture and recognition at any volume, in any environment.

Our mission is to fundamentally change how people interact with technology, moving beyond traditional interfaces to create more intuitive and natural ways to access and control our digital world.

What we're looking for

We're looking for a Senior Full-Stack Mobile Engineer to scale our voice-powered application from beta to production. You'll be instrumental in building the features and backend infrastructure that power real-time speech processing, AI interaction, and cross-device sync for our iOS and wearable apps. You'll own the integration between our iOS experience and our proprietary wearable product, which enables natural speech interactions in any environment, working directly with our technical founding team to bring cutting-edge speech AI technology to market.

We have developed our own wearable device, scheduled to launch at the end of 2025. In this role you will collaborate on our iOS application, which integrates Bluetooth peripherals with voice-first computing experiences.

What you'll do

  • Architect and implement the iOS application that pairs with our proprietary wearable device, focusing on background processing capabilities
  • Build an efficient and responsive mobile architecture (Swift, SwiftUI)
  • Design and implement responsive and scalable APIs to support speech enhancement, transcription, and AI processing in real time
  • Integrate with on-device ML models, remote inference APIs, and AI-driven task automation
  • Manage cloud infrastructure for syncing audio, transcripts, and user data across devices
  • Enable low-latency interactions through efficient data transfer between hardware, mobile clients, cloud infrastructure, and AI providers
  • Build secure protocols for sensitive audio and private data
  • Collaborate closely with iOS engineers, ML researchers, and embedded firmware developers

Key must-haves

  1. Experience building and launching applications from zero to one
  2. Demonstrated ability to build and scale iOS architecture
  3. 3+ years of mobile full-stack experience, ideally building apps with real-time interactions
  4. Deep understanding of speech processing, dictation, and embedded systems
  5. Experience with backend systems (e.g., Firebase, MongoDB, Google Cloud)

Technical skills

  • Languages: TypeScript/Node.js, Python, Go (or equivalent backend language)
  • APIs: WebSockets, REST, GraphQL
  • Cloud: Firebase, GCP, AWS, or equivalent serverless/cloud platform
  • Databases: Firestore, Postgres, MongoDB
  • Bonus: Experience with ML model serving (e.g., TensorFlow Serving, TorchServe), audio pipeline design, or WebRTC

You might be a fit if

  • You're excited about voice interfaces and building subtle, ambient computing experiences
  • You care deeply about low-latency systems, privacy, and user experience
  • You have startup energy and want to own core infrastructure from day one

Location:

San Mateo, California, United States