About KAWA AI
KAWA AI is building the agentic operating system for enterprise operations. We help organizations design, deploy, and govern AI systems for high-stakes workflows, where humans and AI agents work together with speed, control, and reliability. Our mission is to turn advances in generative AI into production systems that solve real operational problems.
Why Join Us
At KAWA AI, you’ll work on applied AI systems that move beyond demos into real enterprise execution. You’ll help build the models, workflows, and product capabilities that power agentic applications in production environments. This is an opportunity to work on challenging technical problems, shape core platform capabilities, and collaborate with a team focused on practical impact, fast iteration, and high standards.
Responsibilities:
- Design, build, and improve generative AI systems that power KAWA AI’s agentic platform.
- Develop production-ready workflows using large language models, retrieval systems, tool use, and orchestration frameworks.
- Work closely with product, platform, and forward-deployed teams to translate enterprise use cases into scalable AI capabilities.
- Build and optimize prompts, evaluation pipelines, and system behaviors for reliability, usefulness, and safety.
- Implement retrieval-augmented generation, agent workflows, memory patterns, and human-in-the-loop controls where appropriate.
- Experiment with model architectures, inference strategies, and workflow designs to improve performance across customer use cases.
- Create robust backend services and integrations that connect AI components with enterprise systems, data sources, and internal tools.
- Monitor and debug production AI systems, identifying failure modes and improving performance over time.
- Contribute to internal tooling, reusable frameworks, and best practices for building and deploying generative AI applications.
- Stay current with advances in LLMs, agents, evaluation methods, and applied AI infrastructure.
Requirements:
- Strong software engineering background with experience building and shipping production systems.
- Hands-on experience working with large language models and generative AI application development.
- Proficiency in Python and familiarity with modern backend development practices.
- Experience with APIs, model providers, vector databases, retrieval systems, and orchestration frameworks.
- Strong understanding of prompt design, evaluation, observability, and iteration loops for LLM-based systems.
- Ability to move from experimentation to production-grade implementation with speed and rigor.
- Strong problem-solving skills and comfort working in fast-moving, ambiguous environments.
- Excellent communication skills and the ability to collaborate across engineering, product, and customer-facing teams.
- High ownership mindset and strong attention to quality, reliability, and practical impact.
Nice to Have:
- Experience with agentic systems, multi-step reasoning workflows, and tool-using AI applications.
- Familiarity with model evaluation frameworks, offline benchmarking, and production monitoring for AI systems.
- Experience deploying AI products in enterprise, regulated, or mission-critical environments.
- Background in workflow automation, orchestration, or human-AI collaboration systems.
- Experience working in an early-stage startup or high-growth product environment.
We’re looking for engineers who want to build real-world generative AI systems that operate reliably in complex enterprise environments. If you’re excited about turning foundation models into governed, production-ready systems for high-stakes operations, we’d love to hear from you.