Job Description

As part of our Artificial Intelligence Team, you will help shape the future of our software. You will develop, test and maintain data architectures that keep data accessible and ready for analysis. Your tasks include data modelling, ETL (Extraction, Transformation and Load), data architecture construction and development, and testing of the database architecture.

Daily responsibilities

- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud 'big data' technologies
- Build and use analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs

Real impact, one step at a time

Your impact will be shaped by the project's context and will also go beyond it, through the Competence Area community you will be part of, with a strong focus on your technical skills.

Professional opportunities

You will have access to AI Community trainings and programs emphasizing technical and tactical skills, and you will be engaged in new projects and opportunities landing in our business line.

Community insights

The community consists of Data Scientists and Machine Learning Engineers, along with Data Engineers, sharing knowledge and project insights on a regular basis.
We engage in projects pertaining to Computer Vision, NLP, Advanced Analytics, Prevention and Trend Analysis.

Qualifications

Must have:

- 3+ years of professional experience
- Experience working in Agile teams
- Experience building and optimizing 'big data' data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Ability to build processes supporting data transformation, data structures, metadata, dependency and workload management
- Technical experience with:
  - Big data tools: Spark
  - Object-oriented languages: Python
  - Visualization tools: PowerBI, etc.
  - Relational databases: Postgres

Nice to have:

- Experience working directly with customer stakeholders
- Knowledge of manipulating, processing and extracting value from large, disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores
- Technical experience with:
  - Big data tools: Databricks
  - Data pipeline and workflow management tools: Airflow
  - Stream-processing systems: Storm, Spark Streaming, etc.

Additional Information

At Accesa & RARo you can enjoy our holistic benefits program, which covers the four pillars that we believe come together to support our wellbeing: social, physical and emotional wellbeing, as well as work-life fusion.

- Physical: premium medical package for both our colleagues and their children, dental coverage up to a yearly amount, eyeglasses reimbursement every two years, voucher for sport equipment expenses, in-house personal trainer
- Emotional: individual therapy sessions with a certified psychotherapist, webinars on self-development topics
- Social: virtual activities, sports challenges, special-occasion get-togethers
- Work-life fusion: yearly increase in days off, flexible working schedule; birthday, holiday and loyalty gifts for major milestones; work-from-home bonuses
Data Engineer (with Spark, Python) in Bucuresti
Expired listing