Everything we do comes down to the following:
- We collect data from third-party APIs
- We run QA on that data
- We let clients visualize and explore the data
- We upload and deliver the data to various destinations (BigQuery, Redshift, Amazon S3)
But all of this must be done reliably: it has to run every day, process hundreds of gigabytes, and never fall apart.
For example, one of our services uploads huge volumes of data from Clickhouse to external destinations, handling changes in the client's data schema, newly arriving data, and so on.
Requirements:
- 3+ years of commercial development experience in Python (Django, asyncio), including OOP and multithreading; ideally, experience building systems that work under high load / with big data
- Experience in developing REST services
- Knowledge of SQL, including query optimization and database configuration
- Experience with NoSQL databases
- Experience with microservices and RabbitMQ
- Ability to write unit tests and easy-to-maintain code
- Ability to work confidently in Linux
Nice to have:
- Strong system design skills
- Experience with Redis, Amazon Web Services, Clickhouse, Docker, Kubernetes
- Experience with cloud services
- Ability to work in a Continuous Integration environment
- Understanding of Domain-Driven Design (DDD)
Why Improvado
- Remote OK
- Strong product/market fit: marketing data product for US-based enterprises
- Ideal time & stage to benefit from the company's growth: we just raised our Series A :)
- Opportunity to get the company's stock options in the future
- Free English courses
- Well-established workflow and engineering processes
- Strong engineering culture (test coverage > 90%, Domain-Driven Design, clean architecture)
- Modern stack (asynchronous, Clickhouse, high-load, custom pub/sub microservices, event-driven architecture, CI/CD, Kubernetes, AWS)
- Regular salary indexation and a clear development roadmap
- Annual bonus
Ready to apply for this role?
Apply Now


