About the Team:
The Data Engineering team is on a mission to create a hyper-scale data lake that helps find bad actors and stop breaches.
The team builds and operates systems to centralize all of the data the Falcon platform collects, making it easy for internal and external customers to transform and access the data for analytics, machine learning, and threat hunting.
As a Software Engineer III on the team, you will contribute to the full spectrum of our systems: from foundational processing and data storage, through scalable pipelines, to the frameworks, tools, and applications that make that data available to other teams and systems.
You will:
Design, develop, and maintain a data platform that processes petabytes of data
Participate in technical reviews of our products and help us develop new features and enhance stability
Continually help us improve the efficiency of our services so that we can delight our customers
Help us research and implement new ways for both internal stakeholders as well as customers to query their data efficiently and extract results in the format they desire
Key Qualifications:
We are looking for a candidate with a BSc and 4+ years of experience building products and processing data at scale
A solid understanding of algorithms, distributed systems design and the software development lifecycle
Solid background in Java/Scala and a scripting language like Python
Experience building large-scale data pipelines
Strong familiarity with the Apache Hadoop ecosystem, including Spark, Kafka, Hive, Presto, etc.
Experience with relational (SQL) and NoSQL databases such as Postgres/MySQL, Cassandra, and DynamoDB
Good test-driven development discipline
Reasonable proficiency with Linux administration tools
Proven ability to work effectively with remote teams
Nice-to-have skills:
Go
Kubernetes
Jenkins
Parquet
Protocol Buffers/GRPC