Falcon.io provides a unified SaaS platform for social media listening, engaging, publishing and managing customer data. We enable our clients to explore the full potential of digital marketing by managing multiple customer touch points from one platform. We’re a highly diverse team, and we’re always looking for passionate and curious people who want to be part of a fast-paced, high-energy environment.
As a DataOps Engineer, you will own the engineering side of the role, including data pipelines, DataOps-related infrastructure and tooling, data lifecycles, data schemas, and data transformations.
The DataOps team breaks down the barriers between data and operations and spearheads the democratization of data across departments at Falcon. Working in an Agile way and applying DevOps techniques and Statistical Process Control from Lean, the team aims to consolidate existing data streams into value for consumers across the organisation.
What you will do:
Work across multiple data platforms to integrate client data, take responsibility for data quality control, investigate data issues, formulate data integrity solutions, and more
Be hands-on and champion the implementation of proactive monitoring, alerting, trend analysis, and robust systems
Develop and advance data reporting and data QC applications/systems
Serve as a technical contributor in enhancing ETL processes
Triage data research requests and issues, prioritizing and systematizing them for effective resolution
Communicate and collaborate confidently with multiple departments, customers, and development teams on anything data-related
Work effectively on an Agile team and collaborate well with your fellow team members
Who you are:
An experienced engineer with 4-6 years of industry experience
A degree in Computer Science, Information Systems, or a related field
An exceptional communicator, both within and outside of the engineering organization
What you have:
Solid know-how of data architecture and data infrastructure
Sound knowledge of and experience with continuous development/delivery while maintaining a data pipeline
Well versed in ETL/ELT processes and tech stacks; we use Fivetran, DBT, and Looker
Proficiency in data querying and manipulation with SQL and NoSQL
Intermediate proficiency in one or more programming languages, such as Python, Java, or shell scripting
Experience with AWS Lambda, AWS Redshift, and Google Bigtable
Experience setting up automated CI/CD pipelines
Used to working with testing as part of your development process
It would be a plus if you have:
Experience provisioning AWS cloud infrastructure using Ansible and Terraform as configuration management tools
Familiarity with or a strong understanding of Kubernetes and containers (Docker)
Experience with Apache Kafka for real-time processing
A good understanding of system observability (proactive monitoring, alerting, trend analysis, and robust systems)
A good understanding of the SDLC