About The Role:
The CrowdStrike Data Science Team is seeking a motivated professional with strong programming skills to conduct research in the AI Security space. The team is focused on identifying and analyzing potential threats to artificial intelligence systems. The Threat Researcher will drive our continuous efforts to improve our security posture in the AI ecosystem by researching gaps and vulnerabilities and helping prototype solutions that can lead to high-quality products for our customers.
What You’ll Do:
Stay abreast of emerging technologies in the AI security threat space
Research blind spots in our ML-based detections (file formats, new types of attacks) and write blog posts about the findings
Research vulnerabilities in AI systems and LLMs
Write code for Proof of Concept projects based on in-depth research in the AI Security domain
Work closely with senior leaders to spearhead impactful projects and educate others on the topic, bridging knowledge gaps and gaining widespread support
What You’ll Need:
3+ years of proven experience as a Threat Researcher or Engineer in a Data Science organisation (preferred)
Fundamental understanding of Machine Learning file formats and frameworks
Solid proficiency in programming and scripting languages, in particular Python and Rust
Sound understanding of current and emerging threats and ability to demonstrate practical knowledge of security research
Bonus Points:
Good understanding of ML frameworks and file formats such as Pickle, PyTorch, and GGUF
Advanced programming experience in Python or Rust
Experience with cloud computing platforms (e.g. AWS) and familiarity with containerization technologies (e.g. Docker, Kubernetes)
Experience in vulnerability research
BA/BS or MA/MS degree in Computer Science, Information Security, or a related field, or equivalent experience