Become TDC's new “Code Hero” in real-time data streaming
We want to continue to be among the best in Denmark at Data & AI
Data & Advanced Analytics is the epicenter of data warehousing, management reporting, advanced analytics and omnichannel customer communication in TDC Group. Together with our AI & Robotics (AI&R) team we play a central and innovative role in converting data into insights for the business and for customer solutions. As a core part of our strategic journey we want to raise the bar even higher for applying Data & AI. You are going to help us meet our high ambitions.
We are looking for a Data Streaming Engineer to join our dedicated quest to build a unique and powerful real-time data streaming platform by designing and developing data processing pipelines, applications and tools. You will work in a horizontal team together with data scientists, digital engineers and others to bring a commercially rooted, use-case-based Data & AI roadmap to life.
In the role you will spearhead new feature development and DevOps and build data pipelines that ingest data from numerous systems. The role is deeply rooted in designing, developing and delivering TDC's future streaming data platform, and you will get hands-on experience with some of the latest technologies, including Apache Kafka and Spark. You will be at the heart of the design, development, testing and deployment of a Kafka-based data brokering solution built on an open-source stream-processing platform. We are used to working with data at large scale – now you will be part of scaling it exponentially.
Your primary tasks are:
- Be part of building and deploying the “Next Gen Data & AI Platform” by constructing data staging layers and building fast real-time data pipelines that automate high-volume data delivery to feed BI applications and AI / Machine Learning algorithms
- Develop scalable and reliable data solutions to move data across systems from multiple sources in real-time streaming as well as micro-batch modes (Kafka/Spark)
- Build, integrate and test prototypes and final end-to-end implementations of real-time data feeds via Kafka, from data source to the APIs of front-end customer systems
- Provide analysis and support for the architecture and design of an end-to-end Kafka-based data brokering system
- Build processes supporting data transformation, data structures, metadata & documentation, data privacy, dependency and workload management
- Encourage data innovation, implement cutting-edge technologies, include new data sources, push outside-of-the-box thinking and be an advocate for horizontal teamwork in mindset and behavior
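To illustrate the difference between the streaming and micro-batch modes mentioned above, here is a minimal, self-contained Python sketch of grouping an unbounded event stream into fixed-size micro-batches. This is purely illustrative: in a real pipeline the events would come from a Kafka consumer and each batch would be handed to a Spark job; the function and field names here are hypothetical.

```python
from itertools import islice
from typing import Iterable, Iterator, List

def micro_batches(events: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Group a (potentially unbounded) event stream into fixed-size micro-batches.

    The source can be any iterable; a Kafka consumer would slot in here.
    """
    it = iter(events)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Example: ten events processed in micro-batches of four.
stream = ({"event_id": i, "payload": f"msg-{i}"} for i in range(10))
batches = list(micro_batches(stream, batch_size=4))
print([len(b) for b in batches])  # → [4, 4, 2]
```

The trade-off the sketch highlights: per-event streaming minimizes latency, while micro-batching amortizes per-record overhead at the cost of waiting for a batch to fill.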
To be successful in the job, you:
- Have strong experience writing programs using the Kafka APIs and the Kafka Streams API
- Have advanced working SQL knowledge, including relational databases and query authoring, plus experience with Kafka, Hadoop, Spark, etc.
- Have solid hands-on experience with, and a clear understanding of, the related BI technologies
- Can confidently handle commercial requirements from non-data business profiles, with a strong enough understanding of analytics to translate business requirements into technical solutions
- Have a strong work ethic, good time management and the ability to work with diverse teams
- Are a self-starter by nature who executes and delivers project work and milestones
- Work well in agile and horizontal team environments – and deliver progress on a daily basis
- Possess deep curiosity to learn new tools and technologies and apply them at production scale
As this is a key role in TDC Group's future data strategy program, the ideal candidate will have several of the following characteristics:
- Experience building data pipelines with Kafka, Kafka brokers and Kafka Connect
- Good skills with big data processing technologies such as Apache Storm, Apache Spark and Kafka, ideally on Microsoft Azure
- Strong experience building data services on Azure Cloud is highly desired, but not required
- Experience with DBMSs such as SQL Server, MySQL and Oracle, including SQL scripting, tuning and scheduling
- Good coding skills in Python, C++, SQL or Java
- Experience with Docker containerization
- Knowledge of data serialization and formats such as Avro, JSON and Parquet
We understand that this is a long list of different experiences and technologies, and we are not looking for expertise in every area. A good foundation in Data Engineering, an understanding of API development and some of the above skills would make you an extremely good fit.
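As a small illustration of the serialization formats mentioned above, the sketch below round-trips an event through JSON, roughly as one might encode a Kafka message value. It is a stdlib-only example with hypothetical field names, not a description of TDC's actual pipeline; in practice Avro adds enforced schemas and Parquet adds columnar storage on top of what plain JSON offers.

```python
import json
from datetime import datetime, timezone

def serialize_event(event: dict) -> bytes:
    """Encode an event as compact UTF-8 JSON bytes (e.g. a Kafka message value)."""
    return json.dumps(event, separators=(",", ":"), sort_keys=True).encode("utf-8")

def deserialize_event(raw: bytes) -> dict:
    """Decode a message value back into a Python dict."""
    return json.loads(raw.decode("utf-8"))

event = {
    "event_type": "page_view",
    "user_id": 42,
    "ts": datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat(),
}
raw = serialize_event(event)
assert deserialize_event(raw) == event  # lossless round trip
```

Sorting keys and using compact separators keeps the encoding deterministic, which helps when message bytes are compared, hashed or deduplicated downstream.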
You will be part of a dedicated horizontal team that has a broad set of skills and a wide variety of tasks, projects and people. We have high ambitions about being the best at what we do and are passionate about our work environment. You will get the opportunity to be a part of one of the most proficient Data & Advanced Analytics teams in Denmark.
Application and contact
We are already interviewing candidates, so send your application as soon as possible via the "apply now" button.
Place of work is Aarhus or Copenhagen, Denmark. If you have any questions, please address them to Head of Data Engineering Niels Mejer, +45 2331 7037.