About the Role:
The CrowdStrike Data Science Team is seeking a motivated professional with strong programming skills to conduct research in the AI Security space. The team is focused on identifying and analyzing potential threats to artificial intelligence systems. The Threat Researcher will drive our continuous efforts to improve our security posture across the AI ecosystem by researching gaps and vulnerabilities and helping prototype solutions that lead to high-quality products for our customers.
What You’ll Do:
Stay abreast of emerging technologies in the AI security threat space
Research blind spots in our ML-based detections (e.g., file formats and new attack types) and write blog posts on the findings
Research vulnerabilities in AI systems and LLMs
Write code for Proof of Concept projects based on in-depth research in the AI Security domain
Work closely with senior leaders to spearhead impactful projects and educate others on the topic, bridging knowledge gaps and gaining widespread support
What You’ll Need:
Understanding of ML algorithms
AI frameworks: proficiency with frameworks such as PyTorch, TensorFlow, or Keras
Effective communication skills, including presentation, writing, and discussion
Fundamental understanding of the characteristics of machine learning file formats and frameworks
Above-average proficiency in programming and scripting languages, particularly Python and Rust
Sound understanding of current and emerging threats and ability to demonstrate practical knowledge of security research
Eagerness to apply your skills in a rapidly evolving field at the intersection of AI and IT security
Bonus Points:
Good understanding of ML frameworks and model file formats such as Pickle, PyTorch, and GGUF
Programming experience in Rust
Experience with cloud computing platforms (e.g. AWS) and familiarity with containerization technologies (e.g. Docker, Kubernetes)
Experience in vulnerability research
Student in Computer Science, Information Security, or a related field