AI Engineer (Developer Productivity)
We are looking for an AI Engineer who can design and build production-grade AI systems to accelerate software development, testing, and deployment across our platform.
This role focuses on LLM-powered agents, orchestration frameworks, and intelligent automation — not just experimentation. You will build systems that are stateful, scalable, secure, and integrated into real engineering workflows.
You will work closely with the Software Architect, Rust Backend Engineers, Senior Frontend Engineer, and Cloud & Deployment Engineer to embed AI into both our product (the NHI security platform) and our internal engineering stack.
What you’ll own
AI agents & orchestration
- Design and implement AI agents capable of code generation, review, and refactoring.
- Build agents for test generation and validation.
- Build agents for deployment automation and troubleshooting.
- Build agents for documentation generation and knowledge retrieval.
- Build multi-step, stateful workflows using tool calling, task planning, and execution graphs.
LLM systems, SDKs & memory management
- Build systems using OpenAI SDK, Anthropic SDK, and AWS AgentCore.
- Design and implement memory strategies — short-term (context window), long-term (vector DB / retrieval), and session-based memory for agents.
- Use frameworks such as LangGraph, LangChain / LlamaIndex (or similar).
- Implement Retrieval-Augmented Generation (RAG), tool/function calling, and multi-agent coordination.
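To give a flavor of the kind of system this role builds, here is a minimal, heavily simplified sketch of a stateful agent loop with tool calling and short-term (context-window) memory. The model call is stubbed out so the example is self-contained; in production it would be an OpenAI or Anthropic SDK call, and all names here (`stub_model`, `TOOLS`, `Agent`) are illustrative, not part of any real framework.

```python
from dataclasses import dataclass, field

# Illustrative stub standing in for a real LLM SDK call.
# A production system would send `messages` to a model and parse its reply.
def stub_model(messages):
    last = messages[-1]["content"]
    if "weather" in last.lower():
        # The "model" decides to call a tool.
        return {"tool": "get_weather", "args": {"city": "Berlin"}}
    return {"final": f"Done. Context size: {len(messages)} messages."}

# Tool registry: plain functions the agent is allowed to call.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

@dataclass
class Agent:
    # Short-term memory: the running message list (the "context window").
    memory: list = field(default_factory=list)

    def run(self, user_input: str, max_steps: int = 5) -> str:
        self.memory.append({"role": "user", "content": user_input})
        for _ in range(max_steps):
            decision = stub_model(self.memory)
            if "final" in decision:
                answer = decision["final"]
                self.memory.append({"role": "assistant", "content": answer})
                return answer
            # Tool call: execute it and feed the result back into memory,
            # so the next model step can reason over the tool output.
            result = TOOLS[decision["tool"]](**decision["args"])
            self.memory.append({"role": "tool", "content": result})
        return "Step limit reached."

agent = Agent()
print(agent.run("What's the weather?"))
```

Long-term memory would extend this loop by retrieving relevant documents from a vector database and injecting them into `memory` before each model call; that retrieval step is what RAG adds on top of this basic structure.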
Developer productivity & automation
- Build tools that assist engineers working in Rust, Next.js, and cloud-native systems.
- Generate boilerplate code, tests, and API integrations.
- Improve debugging and observability workflows.
- Integrate AI into Git workflows (PRs, reviews, commits), CI/CD pipelines, and internal developer tools.
Cloud & deployment integration
- Deploy AI systems in containerized (Docker) and Kubernetes environments.
- Build AI-driven systems for deployment validation, incident analysis, and cloud cost optimization.
- Ensure reliability, scalability, and observability of AI pipelines.
What you’ll bring day one
Core
- Strong experience designing and shipping production-grade AI systems (not just experimentation).
- Hands-on experience with LLM SDKs (OpenAI, Anthropic) and orchestration frameworks (LangGraph, LangChain, LlamaIndex, or equivalent).
- Experience implementing RAG, tool/function calling, and multi-agent coordination.
- Familiarity with vector databases and short/long-term memory strategies.
- Comfortable working across engineering, product, and infrastructure.
- Experience deploying AI services in containerized / Kubernetes environments.
- You think in systems, workflows, and automation — not just models.
- You focus on real-world impact and production readiness.
- You enjoy building tools that other engineers rely on daily.
- You thrive in high-ownership, fast-moving environments.
Your impact
- Introduce AI-first workflows across engineering and deployment.
- Improve developer velocity and product quality.
- Reduce manual effort across teams.
- Help build a modern, AI-driven engineering platform.
Ready to apply?
Email your application directly to faisal.razzak@arkion.ai. Include the following so we can move quickly:
- Resume
- GitHub or portfolio (AI or automation projects)
- Examples of AI agents, workflow orchestration systems, or developer productivity tools