Intro
Quality education for hundreds of millions of underprivileged children
Responsibilities
Build and maintain data transformations for downstream consumption (ETL, ELT, streaming).
Negotiate interfaces between software producers and data consumers.
Work with social impact data: novel data that portrays how education systems evolve over time in low- and middle-income countries.
Use streaming pipelines as well as batch processing technologies to work with huge amounts of data as we scale.
Pick the right tool for each job: we don’t expect you to already know all the relevant technologies; instead, you will research and learn new ones and help us adopt best practices.
Collaborate cross-functionally and interdisciplinarily with stakeholders, data scientists, BI analysts, engineers, product owners, members of the operations team, and our users.
Drive the design and implementation of new features.
Own data engineering projects from start to end.
Own different portions of critical data infrastructure.
Requirements
Experience in data modeling, schema design, and optimization.
Experience building pipelines for data visualization tools such as Tableau or Power BI.
Proven experience in SQL, which you’ll be using on a daily basis.
Experience with the rest of our tech stack is not required; we care about your data engineering skills rather than what technologies you have used!
We speak and operate in English – knowledge of German is not required.
Benefits
An international, passionate team located in Kenya and Germany, whose members support each other and help each other grow toward the goal we have set out to achieve.
Welcoming Data and Engineering teams with whom you will collaborate in creating the data pipelines that enable our insights.
Current technologies: SQL, dbt, Metabase, Tableau, Python. We use event sourcing to model our operational data.
Our stack is evolving, as we are always looking for the best tool for the job while at the same time keeping the stack as streamlined and unified as possible. Anyone is invited to propose new technologies.
We are looking to refine our data architecture, introducing Kafka, Flink, Spark, or similar technologies – you will have a key role in deciding the specifics.
Some other relevant technologies that we use: Kotlin, Redshift, AWS, Terraform, GitHub Actions for CI/CD.
Technologies other people in the company use: Android, serverless architectures, AWS Lambda, TypeScript, React.js.
Expand your skillset beyond data engineering: if you are interested in developing your DevOps or MLOps skills, you can gain hands-on experience as part of this role.
Unique technical opportunities given the environment that we operate in: low-cost devices, limited connectivity, non-tech-savvy users, massive scale.
A modern company culture that emphasizes transparency and freedom. We care about results and believe people work best when given the freedom to structure their work themselves, so there is a high degree of flexibility in working hours and vacations, plus the option to work from home up to four days per week.
An agile approach informed by data at all levels of the organization.
Quarterly one-week hackathons where we freely explore new ideas.
Exciting trips to Kenya to see your impact firsthand and meet our teachers and learners.
Defined and transparent salary ranges – you don’t have to negotiate to be paid what you’re worth.
A (seriously) unparalleled equity scheme.
Social impact like no one else: we are working to fix one of the world’s biggest problems.