Koud is looking for a Senior AI Security Engineer to work with one of our clients.
We are seeking a Senior AI Security Engineer to lead the security of AI-powered products, platforms, and infrastructure. You will operate at the intersection of cybersecurity and AI, addressing emerging threats while enabling secure and scalable AI delivery.
This role covers the full AI security lifecycle, including threat modeling, prompt injection defenses, securing model supply chains, hardening RAG pipelines, and building scalable security tooling. You will act as the subject matter expert on AI security, responsible AI, and compliance (e.g., EU AI Act). Fluent English and international project experience are required.
Key Responsibilities
- Design and implement security for LLM apps, agents, and copilots;
- Build defenses against AI threats (prompt injection, jailbreaking, data poisoning, etc.);
- Secure RAG pipelines (data isolation, access control, context integrity);
- Implement content safety (filtering, toxicity detection);
- Enforce authentication, authorization, and rate limiting for AI APIs;
- Secure model serving (logging, audit trails, anomaly detection);
- Conduct threat modeling (STRIDE, MITRE ATLAS, OWASP LLM Top 10);
- Lead red teaming (adversarial prompts, robustness testing, data exfiltration);
- Track AI threat intelligence (attacks, CVEs, research);
- Build automated adversarial testing;
- Assess security of third‑party AI tools and models;
- Ensure compliance (EU AI Act, NIST AI RMF, ISO 42001);
- Define AI security policies (access, data, prompts, monitoring);
- Partner with legal/compliance on governance, consent, and bias;
- Maintain model documentation, risk assessments, and standards;
- Enforce responsible AI (fairness, transparency, oversight);
- Build AI security tools (prompt injection scanners, vulnerability scanners);
- Implement monitoring and alerting (SIEM/SOAR);
- Develop reusable security guardrails and middleware;
- Apply security‑as‑code (policy‑as‑code, infra scanning, secrets);
- Enable real‑time detection and forensic analysis;
- Embed with engineering teams to ensure secure‑by‑design AI;
- Provide security guidance across product and engineering;
- Lead AI security training and awareness;
- Support incident response (model compromise, data leaks, attacks);
- Act as internal AI security expert and documentation owner.
Requirements
- Extensive experience in cybersecurity, application security, or security engineering, with focus on AI/ML security;
- Deep understanding of LLM security risks (prompt injection, jailbreaking, data leakage, OWASP LLM Top 10);
- Hands‑on experience securing AI/ML systems in production (model serving, RAG, agents, APIs);
- Strong software engineering skills (Python + one of Go, TypeScript, Rust, or Java);
- Experience with cloud security (AWS, Azure, or GCP — IAM, network, encryption, secrets);
- Proficiency with security tools (SAST, DAST, SCA, SIEM, vulnerability management);
- Expertise in authentication/authorization (OAuth2, OIDC, SAML, RBAC/ABAC, zero trust);
- Strong knowledge of Secure SDLC and DevSecOps practices;
- Ability to communicate AI security risks to technical and non‑technical stakeholders;
- Fluent English and experience with international, multicultural teams;
- Strong communication, stakeholder management, and problem‑solving skills.
Preferred Qualifications
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field;
- Previous experience mentoring engineers or acting as a technical lead;
- Experience in insurance, financial services, or healthcare — industries with high regulatory and data privacy requirements;
- Hands‑on experience with AI/ML frameworks: LangChain, LangGraph, Hugging Face Transformers, vLLM, Ollama, and AI agent frameworks (CrewAI, AutoGen);
- Familiarity with AI security tools: Garak, Rebuff, NeMo Guardrails (NVIDIA), Prompt Guard, LLM Guard, Lakera Guard;
- Experience with vector database security (Pinecone, Weaviate, ChromaDB, pgvector): access control and data isolation;
- Knowledge of emerging AI standards: MCP (Model Context Protocol), Agent‑to‑Agent (A2A) Protocol, and AI gateway patterns;
- Security certifications: CISSP, CISM, OSCP, GIAC (GPEN/GWAPT), or cloud‑specific security certs (AWS Security Specialty, AZ‑500);
- Experience with AI governance platforms and model risk management frameworks;
- Published research, blog posts, or conference talks on AI security topics;
- Experience building AI‑powered security tools (using AI to enhance security operations, not just securing AI).
Working Model & Collaboration
- Brazil‑based role with a 100% remote working model;
- Close collaboration with international stakeholders and teams across regions;
- Schedule flexibility may occasionally be required for critical milestones or major incidents.
