Your challenge 🚀
- Build a truly self-service application for business users so they can work with data even without learning SQL or needing assistance from more technically-skilled colleagues. We are democratizing access to data and making it available for anyone within an organization who needs it.
- Find ways to automate the activities users perform most, reducing the time they spend preparing data so they can actually use it in their work and make data-driven decisions.
- Create a product that contains all the features an enterprise solution must have, while making it super easy to use. We want to refute the myth that enterprise software must be ugly and painful to use: it's a lie!
- Take end-to-end ownership of the whole product and not just blindly focus on coding.
What you’ll do 💻
- Extend our Java-based application and work on big data integrations with our clients’ Hadoop and Spark ecosystems. You’ll also get to work with new and exciting tech, like Snowflake and Redshift.
- Analyze Hadoop and Spark ecosystems to find ways they can benefit from integration with our products and solutions.
- Work with our customers to find new opportunities where big data can help their businesses.
- Leverage your big data experience to help customers design the best version of their data lake using our products.
- Work with our product owners on big data architecture for our customers.
- Help our consultants with their projects on the customer side.
Is this you? 💪
- You have hands-on Java programming experience.
- You are familiar with some of the following technologies: Apache Hadoop, Spark, Apache Hive, SQL and relational DB principles, cloud platforms such as AWS and MS Azure, and the Unix shell.
- You can effectively and comfortably communicate with English-speaking teams located around the world.
- You see challenges as opportunities. Why wait for a task list when you can start innovating right away?
- You enjoy constantly learning new things and sharing your knowledge with others.
- Experience with data-management-related issues is a bonus. 😉
- You are physically located in a time zone between UTC-1 and UTC+3.
Our technical stack 👨‍💻
- Frontend: TypeScript, React/Vue, Apollo, Nx, MobX, Styled Components
- Backend: Java, Spring Boot, Kotlin, GraphQL, Python, jOOQ
- Big data: Spark, Redshift, Snowflake
- Storage: Postgres, Elastic, Minio
- Infrastructure: GitLab CI/CD, Kubernetes, AWS, Azure
Your team 😍
- You will become a part of Data Integration Spaceport.
- If you would like to know more about the structure of our whole Product & Engineering team, how it works, and why the teams are called Spaceports, you can take a look at a series of articles from Martin where he explains it in detail.
What happens next? 🔜
- We’ll quickly review your application and let you know whether we’re a good fit to move forward. This won’t take longer than a day.
- You’ll have your first chat with Tereza or Petra, just so we can get to know each other better, understand your motivation for applying, and make sure you know all the important things about us.
- Depending on your seniority, you may have another interview or two with teammates from the Back-End Circle and your hiring manager.
- We’ll finish the interviews with lunch, so you can meet your future team.
- The whole process shouldn’t take longer than a week or two.
We offer equal opportunities
Ataccama is proud to be an Equal Opportunity Employer. We know diversity fuels knowledge exchange, fosters innovation, and empowers us to grow and be better as a company and as humans. We seek to recruit, develop, and retain the most talented people from a diverse candidate pool.
We are committed to fair and accessible employment practices. If you are contacted for a job opportunity, please let us know how we can best meet your needs and advise us of any accommodations required to ensure fair and equitable access throughout the recruitment and selection process.