Employer: Isilumko Staffing (JHB)
AWS Cloud Engineer: Data Analytics
Banking Industry: Sandton
12-month Fixed Term Contract
This position will function within the Business Support Data Monetisation Business Unit
What will you be responsible for?
Overseeing junior data engineering activities and aiding in building the organisation's data collection systems and processing pipelines. Overseeing the infrastructure, tools, and frameworks used to deliver end-to-end solutions to business problems through high-performing data infrastructure.
Responsible for expanding and optimising the organisation's data and data pipeline architecture, whilst optimising data flow and collection to ultimately support data initiatives.
What do you need to qualify for this position?
· Post Graduate Degree: IT Field of Study
· Preferred Qualification: Master’s Degree: IT Field of Study
· 8+ Years: Big Data Tools: Hadoop, Spark, Kafka
· Relational SQL and NoSQL Databases, including Postgres and Cassandra.
· Data Pipeline and Workflow Management Tools: Azkaban, Luigi, Airflow
· AWS Cloud Services: EC2, EMR, RDS, Redshift
· Stream Processing Systems: Storm, Spark Streaming
· Object Oriented/Object Function Scripting Languages: Python, Java, C++, Scala
· 8+ Years strong Analytic Skills related to working with unstructured datasets.
· Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
· A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
· Working knowledge of message queuing, stream processing, and highly scalable “big data” data stores
What will you be doing?
· Provide data engineering guidance and information services, ensure an effective data engineering capability, and work closely with data analysts and data scientists to ensure an effective data team.
· Collaborate with technology and project teams.
· Manage SLAs and vendors' technical service delivery across development, implementation, and customer service for all data engineering requirements.
Outputs for this position:
Own and extend the business's data pipeline through the collection, storage, processing, and transformation of large datasets. Oversee the process for creating and maintaining optimal data pipeline architecture, creating databases optimised for performance, implementing schema changes, and maintaining data architecture standards across the required Client databases.
Oversee the assembly of large, complex data sets that meet functional and non-functional business requirements, and align data architecture with business requirements.
Responsible for overseeing the process for enabling and running data migrations across different databases and servers; define and implement data stores based on system and consumer requirements.
Oversee, design, and develop algorithms for real-time data processing within the business, and create the frameworks that enable quick and efficient data acquisition. Deploy sophisticated analytics programs, machine learning, and statistical methods.
Manage the analysis of complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
Build analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Create data tools for analytics and data science team members that assist them in building and optimising the Client into an innovative industry leader.
Monitor the existing metrics, analyse data, and lead partnership with other Data and Analytics teams to identify and implement system and process improvements. Utilise data to discover tasks that can be automated, and identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, and re-designing infrastructure for greater scalability.
Develop, through a team of data analysts, ETL processes that convert data into usable formats and dashboard charts. Oversee large-scale Hadoop data platforms to support the fast-growing data within the business.
Risk, Regulatory, Prudential and Compliance:
Responsible for performing thorough testing and validation to ensure proper data governance and quality across EDO and the business.
Act as a subject matter expert from a data perspective and provide input into all decisions relating to data engineering and the use thereof. Provide guidance in terms of setting governance standards.
Liaise with and collaborate with data analysts, data warehousing engineers, and data scientists in finding and applying best practices within the Data and Analytics department as well as defining the business’s data requirements, which will ensure that the collected data is of a high quality and optimal for use across the department and the business at large.
Responsible for contributing to the continual improvement of the business's data platforms through thorough observation and well-researched knowledge. Keep track of industry best practices and trends and, through acquired knowledge, take advantage of process and system improvement opportunities.
Oversee the activities of the junior data engineering teams, ensuring proper execution of their duties and alignment with the Client's vision and objectives. Provide oversight and expertise to the Data Engineering team responsible for the design, deployment, and maintenance of the business's data platforms. Required to draw up performance reports and strategic proposals from gathered knowledge and analysis results for senior EDO leadership.
· Communication will only be with shortlisted candidates.
· For this client we need the following documentation: ID/Skills Visa/Permanent Residence, High School Certificate and Highest Qualification
· Please include two contactable referees, including their email addresses.
· Your information will be treated with the utmost respect and confidentiality.