DevOps Engineer

What do we do?

We leverage machine learning, neural networks, and advanced data technologies to help customers:

  • Enhance products, processes, or services with AI capabilities that align with their business goals
  • Build scalable, secure AI-driven data infrastructures that meet regulatory requirements
  • Discover what they can do with these technologies and transform their business

Our team of 18 is passionate about continuous learning, working with the latest technologies, and tackling complex challenges that others often deem unsolvable. We value teamwork, adaptability, and enjoy a well-rounded work-life balance that includes staying active and supporting each other.

🕵️‍♂️ We are currently looking for a DevOps Engineer for our AI Solutions!

As a DevOps Engineer, you will design, build, and maintain the infrastructure and deployment pipelines for our own and customer-developed AI-driven applications. You’ll collaborate closely with data engineers, data scientists, and clients to create scalable, reliable systems that support AI and big data workflows. Your work will help our clients maximize the value of their data and facilitate efficient, impactful AI integrations in their operations.

🎯 What will your main responsibilities be?

  • Designing and implementing large-scale data infrastructures, mainly for enterprise customers, using technologies such as Hadoop, Spark, and Kafka.
  • Building and maintaining deployment pipelines tailored to AI applications.
  • Automating the provisioning, scaling, and monitoring of cloud-based and on-premises clusters.
  • Troubleshooting and resolving issues in production to ensure high availability and performance.
  • Developing best practices for performance, scalability, and security across our own and our customers’ infrastructure.

🧠 Who should you be?

  • A tech enthusiast with a strong foundation in Linux and Python, passionate about DevOps and AI infrastructure.
  • Familiar with containerization technologies, especially Docker and Kubernetes.
  • Familiar with cloud environments such as AWS, Azure, or GCP.
  • Authentic and charismatic, able to work effectively in a team setting and collaborate across disciplines.
  • Confident in English and Czech, especially when communicating complex technical details.
  • A problem-solver, quick learner, and adaptable to evolving project requirements.
  • Structured with a strong attention to detail and quality.

🚀 What would really help you?

  • Strong experience with Linux systems administration and proficiency in Python.
  • Hands-on experience with containerization (Docker) and orchestrating clusters (Kubernetes).
  • Knowledge of big data technologies like Hadoop, Spark, and Kafka.
  • Familiarity with CI/CD processes and best practices in MLOps, including model versioning, monitoring, and lifecycle management – this is a huge advantage!
  • Background in AI, machine learning, or data-centric projects.
  • Experience with enterprise-grade customers.

🤩 What can you look forward to?

  • Meaningful work in a fast-paced, startup-like environment.
  • Direct collaboration with customers, building impactful AI and data solutions.
  • Creative office spaces in Brno that encourage teamwork and innovation.
  • Flexible working hours, because we know life happens outside of work.
  • A Multisport card—stay active, stay healthy!

If you’re ready to make an impact in the AI world and enjoy working on cutting-edge infrastructure projects, send us your CV or LinkedIn profile. We can’t wait to meet you!


Location: Brno
Job Type: Full time
Team Leader: Jiří Polcar, CTO, Co-founder