Data Engineer – Inside IR35 – Hybrid 

The Data Engineer will operate under the Principal Engineer in our Data Engineering practice and is accountable for:

  • Designing, implementing, and supporting key data flows and data-related technologies in support of our analytics efforts.

Tasks associated with this role include:

  • Designing and implementing data pipelines and data buses that collect data from disparate sources across the enterprise and from external sources, transport it, and deliver it to our data warehouses and data lakes (a minimal sketch of such a pipeline stage follows this list).
  • Manipulating data throughout our data flows, both with advanced data manipulation tools and programmatically, ensuring data is available at each stage of the flow and in the form needed by each system, service, and customer along it.
  • Working within regulatory and system requirements to ensure data quality, data security, and data compliance needs are met.
  • Providing ongoing support to customers within, and external to, the science and analytics environment.
  • Creating innovative data-driven capabilities and prototypes that push the boundaries of computer security analytics.
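
By way of illustration, the following is a minimal sketch of a single pipeline stage of the kind described above: it extracts records from a hypothetical CSV feed, applies a basic cleaning transform, and loads the result as JSON lines, standing in for delivery to a data lake zone. All file names and field names here are assumptions made for the example, not details of the role.

    import csv
    import json
    from pathlib import Path

    # Hypothetical source feed and lake-zone sink for the example.
    SOURCE = Path("events.csv")
    SINK = Path("events.jsonl")

    def extract(path: Path):
        """Yield raw records from a delimited source file."""
        with path.open(newline="") as f:
            yield from csv.DictReader(f)

    def transform(record: dict):
        """Normalize fields and drop records failing a basic quality gate."""
        host = record.get("host", "").strip().lower()
        if not host:  # minimal data-quality check
            return None
        return {"host": host, "event_type": record.get("event_type", "unknown")}

    def load(records, path: Path) -> int:
        """Write cleaned records as JSON lines, a common lake-friendly format."""
        count = 0
        with path.open("w") as f:
            for rec in records:
                f.write(json.dumps(rec) + "\n")
                count += 1
        return count

    if __name__ == "__main__":
        cleaned = (t for r in extract(SOURCE) if (t := transform(r)) is not None)
        print(f"loaded {load(cleaned, SINK)} records")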

Skills

  • A strong background in data engineering, with experience in deploying complex cloud data and big data systems
  • An understanding of how to apply data engineering methods to the cyber security domain
  • Experience with PMP/PRINCE2, Scrum, and Kanban program management methods
  • Excellent understanding of cyber security principles, techniques used by advanced criminal and nation-state adversaries, global financial services business models, regional compliance regulations, and applicable laws
  • Excellent understanding and knowledge of common industry cyber security frameworks, standards, and methodologies, including OWASP, the ISO 2700x series, PCI DSS, GLBA, global data security and privacy acts, FFIEC guidelines, CIS, and NIST standards
  • An ability to communicate complex and technical issues to diverse audiences, orally and in writing, in an easily understood, authoritative, and actionable manner

Technical Skills

  • Experience designing, building, and maintaining data pipelines and ETL workflows across disparate data sets
  • Experience with common data engineering tooling such as Azure Data Factory, Azure Databricks, Azure Functions, Azure Logic Apps, Azure Monitor, Azure Log Analytics, Azure Data Lake Storage, Amazon S3, Azure Synapse Analytics, and/or Power BI
  • Experience with streaming platforms such as Azure Event Hubs or Apache Kafka, and with stream processing services such as Spark Structured Streaming (see the streaming sketch after this list)
  • Experience with Azure DevOps; the ability to script (Bash/PowerShell, Azure CLI), code (Python, C#, Java), and query (SQL, Kusto Query Language); and experience with software version control systems (e.g. Git) and CI/CD systems
  • Advanced understanding of data transport, data pipelining, data cleaning, and data quality methods
  • Demonstrated ability to analyze incoming data sets and rapidly understand their schemas, coupled with the ability to investigate and derive schemas for data sets that have none (a schema-inference sketch follows this list)
  • Experience interfacing with technology teams to bring lab concepts to market within an organization, and building effective operational models to ensure capabilities can be fully utilized and grow to meet the needs of the team
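
As a concrete illustration of the streaming point above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka-compatible endpoint (Azure Event Hubs exposes one) and lands them in a lake path as Parquet. The broker address, topic name, event schema, and paths are all assumptions made for the example, and the Spark Kafka connector package must be available on the cluster.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("event-stream-sketch").getOrCreate()

    # Assumed shape of the incoming JSON events.
    schema = StructType([
        StructField("host", StringType()),
        StructField("event_type", StringType()),
        StructField("ts", TimestampType()),
    ])

    # Read from a Kafka topic; Event Hubs exposes a Kafka-compatible endpoint.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
           .option("subscribe", "security-events")            # hypothetical topic
           .load())

    # Kafka delivers the payload as bytes; parse it against the assumed schema.
    events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Land the parsed stream in the lake as Parquet, with a checkpoint for recovery.
    query = (events.writeStream
             .format("parquet")
             .option("path", "/mnt/lake/security-events")            # placeholder path
             .option("checkpointLocation", "/mnt/lake/_chk/security-events")
             .start())
    query.awaitTermination()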
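
And for the schema-derivation point, a toy sketch of one approach: sample records from an undocumented feed, record the set of value types observed per field, and report a candidate schema. The sample records are invented for the example; a production version would also handle nesting, nulls, and type promotion.

    from collections import defaultdict

    def infer_schema(records):
        """Map each field name to the set of value types observed in a sample."""
        fields = defaultdict(set)
        for rec in records:
            for key, value in rec.items():
                fields[key].add(type(value).__name__)
        return dict(fields)

    # Invented sample records standing in for an undocumented feed.
    sample = [
        {"host": "web01", "bytes": 512, "blocked": False},
        {"host": "web02", "bytes": 2048.5, "blocked": True},
    ]

    for field, types in infer_schema(sample).items():
        print(f"{field}: {sorted(types)}")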