Machine Learning Engineer - Ireland - KAPIA - RGI

    KAPIA - RGI Ireland

    1 week ago

    Full time
    Description

    The candidate will join the R&D Innovation Department's young and growing Data Science & ML team, working alongside other experts in the field, and will report to the BI & Data Science Lead.

    As a Machine Learning Engineer, you will play a central role in the team responsible for implementing data and machine learning pipelines for our insurance platform. This is an exciting opportunity to shape our ML architecture and contribute to RGI's growth and success.

    Responsibilities

    • Work closely with data scientists, software developers and various stakeholders to grasp the project's goals and develop a clear understanding of machine learning objectives;
    • Develop and implement data science prototypes and transform them into production-grade models on cloud platforms such as AWS or Microsoft Azure;
    • Apply modern machine learning, artificial intelligence, and data analysis methods to make data-driven decisions and develop innovative solutions;
    • Assess new and promising technologies with regard to their business potential and technical maturity, particularly in the areas of MLOps, NLP, and Generative AI;
    • Collaborate with cross-functional teams, document findings, and ensure compliance with industry regulations;
    • Carry out the automation, tuning, observability, and reproducibility of our ML pipelines;
    • Carry out activities related to data extraction, cleaning, manipulation, validation, storage, and processing.

    Experiences

    At least 2–3 years' experience as a Machine Learning Ops Engineer.

    Technical Skills

    • Strong knowledge of the Python programming language and extensive use of its ecosystem, including Pandas, Polars, NumPy, scikit-learn, and TensorFlow/PyTorch/Keras
    • Professional experience with versioning tools: Git, GitLab, MLflow, DVC, etc.
    • Experience with big data architecture and data manipulation tools: Apache Spark, Dask
    • Professional experience with cloud-based platforms, such as AWS, Azure, or GCP, and with model deployment using FastAPI:
      • Experience with Docker and Kubernetes
      • Knowledge of REST API deployment.
      • Leverage Pydantic's data validation capabilities to ensure incoming requests adhere to expected formats.
      • Implement logging and error handling to track important events and gracefully handle exceptions.
    • Good knowledge of software engineering principles (e.g. IT security, REST APIs, OOP, entity-relationship models, data structures, algorithms etc.)
    • Fluency in English and Italian.
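
    The Pydantic item above can be sketched as follows. This is a minimal illustration only; the model name and fields (`QuoteRequest`, `policy_id`, `premium`) are invented for the example and are not part of the role description.

    ```python
    # Sketch: validating an incoming request payload with Pydantic.
    # An invalid payload raises ValidationError instead of letting bad
    # data propagate into the ML pipeline.
    from pydantic import BaseModel, PositiveFloat, ValidationError

    class QuoteRequest(BaseModel):
        policy_id: str
        premium: PositiveFloat  # must be > 0

    # A well-formed payload parses cleanly into a typed object.
    req = QuoteRequest(policy_id="POL-001", premium=120.5)

    # A malformed payload is rejected with structured error details.
    try:
        QuoteRequest(policy_id="POL-002", premium=-1)
        errors = []
    except ValidationError as exc:
        errors = exc.errors()  # list of dicts locating each failure
    ```

    In a FastAPI service, using such a model as the request body type gives this validation automatically, with errors returned as HTTP 422 responses.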

    Nice to Have / Plus:

    • Experience in Java
    • Experience working with relational and NoSQL databases.
    • Knowledge of ETL tools (e.g. Airflow, Talend, IBM DataStage, etc.) and database management is a plus
    • Use SQLAlchemy's ORM for simplified database interactions: define models and use sessions for CRUD operations.
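
    The SQLAlchemy item above can be sketched like this. The model and column names (`Policy`, `holder`) are illustrative assumptions, and an in-memory SQLite database stands in for a real one.

    ```python
    # Sketch: a minimal SQLAlchemy ORM model plus session-based CRUD.
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Policy(Base):
        __tablename__ = "policies"
        id = Column(Integer, primary_key=True)
        holder = Column(String, nullable=False)

    engine = create_engine("sqlite:///:memory:")  # throwaway in-memory DB
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Policy(holder="Alice"))                          # Create
        session.commit()
        row = session.query(Policy).filter_by(holder="Alice").one()  # Read
        row.holder = "Alicia"                                        # Update
        session.commit()
        count = session.query(Policy).count()
        session.delete(row)                                          # Delete
        session.commit()
        remaining = session.query(Policy).count()
    ```

    The session acts as a unit of work: changes are staged on the Python objects and written to the database only on `commit()`.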

    Personal Characteristics:

    • Analytical Thinker
    • Strong team spirit and an open, cooperative culture
    • Attention to detail, passion for processes, systems and data mining
    • Willingness to learn and to deliver innovation-based products to a wider audience
    • Effective Communicator
    • Results-Driven
    • Ethical and Compliant

    Education

    Bachelor's or Master's Degree in Software Engineering, Data Science, Computer Science, or a similar quantitative field.

    This announcement is addressed to candidates of both genders in accordance with the law (L.903/77 and Legislative Decree no. 98/2006, article 27). Interested parties are invited to submit their application with specific consent for the processing of personal data, in accordance with articles 13 and 14 of the European General Data Protection Regulation (Regulation (EU) 2016/679 of 27 April 2016).
