Data Architect at Citi Bank Resume Sample

Hired by: Citi Bank

Enhance your resume with our Data Architect at Citi Bank resume sample. It aided a fellow job seeker in securing their position. Download it as is or use our HR-approved resume builder for customization. Your next career move could be one edit away!

This resume sample was contributed by a real person who got hired with Kickresume’s help.

Data Architect at Citi Bank Resume Sample (Full Text Version)

Christian Markmell

Nationality: American
Email address: hello@kickresume.com
Phone: 999-999-999

Profile

Data Engineering and Architecture || Databricks Data Engineer Professional || Microsoft Data Engineer || Big Data, ETL, Data Warehouse and Lakehouse Professional || Data Analytics

Profile Summary

  • With a professional background spanning 17 years, I currently serve as a Technical Architect and Data Engineering Lead at Tata Consultancy Services (TCS), where I have been contributing for the past 4 years and 2 months.
  • My expertise lies in the design and development of data warehouses, data lakes, and lakehouses, as well as ETL/ELT data pipelines, in enterprise environments.
  • With advanced proficiency in Databricks, PySpark, SQL, and data warehouse and lakehouse design, I have applied my skills to both conventional and distributed data warehouses on platforms including Microsoft Azure, Google Cloud Platform (GCP), and Databricks.
  • In my present capacity, I work as an Azure and Databricks Engineer and Developer, serving a large Swiss bank.

Skills

Programming Language/Framework: Cloudera Hadoop, Apache Spark (PySpark), Python, SQL/T-SQL, Hive, Impala
Public Cloud/Compute: Microsoft Azure, Google Cloud Platform, Databricks and Lakehouse
Distributed Databases: Amazon Redshift, Google BigQuery
Extract Transform Load (ETL): Talend, SSIS, PySpark
RDBMS: SQL Server 2008/2012, Oracle 11g, PostgreSQL
Self-Service BI and Analytics: Datameer, Tableau/R Programming, Trifacta

Strengths

Effective Communication Skills
Detail-Oriented and Clarity-Driven Approach
Strong Work Ethic and Professionalism
Collaborative and Team Spirited

Technology Exposure

Azure Databricks - 80%
Microsoft Azure and GCP - 45%
ETL and Data Modeling, Dimensional Modeling - 70%
Big Data, Cloudera Hadoop, Spark, Pig - 60%
SQL, RDBMS and Data Warehouse - 80%
C#.NET and Object-Oriented Programming - 65%
Linux, Terraform, CI/CD, Git - 40%

Employment Summary

09/2021 - present, Technical lead and Data Architect, TATA Consultancy Services, Wrocław, Poland
06/2019 - 09/2021, Technical lead, TATA Consultancy Services, Pune, India
09/2016 - 05/2019, Senior Lead Analyst, Infosys BPM, Pune, India
09/2013 - 09/2016, Senior Programmer Analyst, CitiCorp Services India Private Limited, Pune, India
09/2007 - 09/2013, Consultant, Automatic Data Processing India Pvt Ltd, Pune, India
04/2006 - 09/2007, Software Developer, Optimal Info Tech Private Limited, Mumbai, India

Certifications

01/2023, Databricks Certified Data Engineer Professional, Databricks, https://credentials.databricks.com/c01ce455-372c-40fc-9aea-d864d2cfa204
02/2023, Microsoft Certified: Azure Data Fundamentals, Microsoft, https://www.credly.com/badges/3f5653fb-0b75-4fb0-babb-8115a24bc6dc/public_url
12/2022, Microsoft Certified: Azure Fundamentals, Microsoft, https://www.credly.com/badges/ea924909-3c1e-485d-8dad-ea4c3b5372d3/public_url

Project Experience

09/2021 - present, Data Architect and Technical Lead, TATA Consultancy Services, Wroclaw, Poland

Project: BFSI Regulatory Data Platform for Global Wind Down (Recovery and Resolution Planning) and Asset Management

Description: Establishing a Data and Reporting Platform as part of Resolution and Recovery Planning for the Investment Bank. This monthly regulatory ‘Crisis’ and ‘Business As Usual’ playbook involves end-to-end data pipeline development and execution to process complex financial computations on data from various upstream systems (an illustrative pipeline sketch follows the tools listed below).

Roles: Data Architecture, Data Strategy, System Integration and Analysis

Tools and Env: Apache Spark (PySpark), Spark DataFrames API, Spark Pipeline Optimization, Apache Airflow, AzCopy and Azure Synapse

Python, GitHub, Jenkins, SQL, Tableau, Jira (Atlassian)

Databricks Platform on Microsoft Azure

Linux, Cloudera Hadoop
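
To give a concrete picture of the kind of work this project entry describes, here is a minimal, hypothetical PySpark sketch of a single pipeline step on Databricks. It is not taken from the actual project; every table name, column, and metric in it is an illustrative placeholder.

    # Illustrative sketch only - not from the actual project. All table names,
    # columns, and metrics below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("monthly-regulatory-playbook").getOrCreate()

    # Read raw upstream financial data (hypothetical Delta table).
    trades = spark.read.table("upstream.trades_raw")

    # Example computation: month-end net notional per legal entity and currency.
    positions = (
        trades
        .filter(F.col("as_of_date") == F.last_day(F.current_date()))
        .groupBy("legal_entity", "currency")
        .agg(F.sum("notional").alias("net_notional"))
    )

    # Publish to a curated reporting layer consumed by dashboards and reports.
    positions.write.mode("overwrite").saveAsTable("reporting.monthly_positions")

In practice, steps like this would be orchestrated by Apache Airflow and parameterized per monthly run, in line with the tools listed above.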

06/2019 - 08/2021, Technical lead, TATA Consultancy Services, Pune, India

Project: BFSI Regulatory Data Platform for OCIR (Operational Continuity in Resolution)

Description: Funding and charging for Investment Bank assets and services demand regular assessment and re-estimation under OCIR. The project required ingesting all asset and service finance and reference data to be projected for re-estimation and evaluation. This was achieved by ingesting and computing large volumes of data from all departments and services to match the projections within SLA.

Role: Senior Big Data Engineer

Tools and Env: Apache Hive, Apache Spark (Python), Impala, GitHub, Jira (Atlassian)

Databricks Platform on Microsoft Azure and Google Cloud

Linux, Cloudera Hadoop

09/2016 - 05/2019, Senior Lead Analyst, Infosys BPM, Pune, India

Project: Data warehouse for Supply Chain

Description: Set up an analytics platform involving the design of a data warehouse, single-point-of-truth data stores for various sub-domains, and a reporting interface.

The objective was to capture important metrics on a daily basis, for example:

  • Procurement to Pay
  • Overdue Analysis

Role: Data Architect and Engineering

Tools and Env: AWS, Amazon Redshift, PostgreSQL, SQL, PL/pgSQL

09/2013 - 09/2016, Senior Programmer Analyst, Citicorp Services India Private Limited, Pune, India

Project: Citi Research and Analytics

Description: Enabling the business with a platform for Big Data Analytics on the following use cases:

  • Web access logs of the Citi Velocity portal, for insights into users’ access patterns and reporting on peak usage by region/country/city with direct mapping to the respective AKAMAI servers.
  • Web click-through data ingestion and building data marts for click-through analysis, sourced from Citi Research Subscriptions.

Role: Big Data Developer

Tools and Env: Apache Spark (Python), Impala, Oracle 10g, PL/SQL, Talend 5.6 (ETL), Datameer, MySQL, Piwik

Cloudera Hadoop, Oracle, Linux, Java

09/2007 - 09/2013, Consultant, Automatic Data Processing India (ADP), Pune, India

Core Technical Work Area:

C#.NET (4.5) development on Application Framework using Entity Framework

Database Development and ETL using SQL Server 2008 and SSIS

Role: Software Developer

04/2006 - 09/2007, Software Developer, Optimal Info Tech Private Limited, Mumbai, India

Core Technical Work Area:

Software development for a Garment Export ERP Solution (N-tier application) using the technologies below:

VB.NET and SQL Server 2005/2008

Achievements

  • Built an end-to-end solution for an Exception Dashboard and Reporting using Spark, with a live dashboard enabling the business to act in time and saving valuable time (2-3 days) on sign-off. This included helper features such as a configurable Cross Pipeline Comparison to identify deltas (see the sketch after this list).
  • Migrated project data pipelines from Spark 2.0 to Spark 3.0 within a 2-week period, under significant pressure from critical project and business downtime.
  • Optimized Spark data pipelines by identifying bottlenecks at the pipeline code, framework, and platform levels. As a result, the identified models were optimized with an almost 40% improvement, and based on this analysis some models were moved to compute-optimized DB clusters, yielding an outstanding 200% performance boost.
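
As a rough illustration of the configurable cross-pipeline comparison mentioned in the first achievement, a simplified PySpark sketch is shown below. It is hypothetical, not the actual implementation; the join keys, metric, tolerance, and table names are placeholder configuration values.

    # Simplified, hypothetical sketch of a configurable cross-pipeline delta check.
    # Join keys, metric, tolerance, and table names are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("cross-pipeline-comparison").getOrCreate()

    KEYS = ["legal_entity", "account_id"]   # configurable join keys
    METRIC = "balance"                      # configurable metric to compare
    TOLERANCE = 0.01                        # configurable absolute tolerance

    run_a = spark.read.table("pipeline_a.output").select(*KEYS, F.col(METRIC).alias("metric_a"))
    run_b = spark.read.table("pipeline_b.output").select(*KEYS, F.col(METRIC).alias("metric_b"))

    # Full outer join so rows missing from either pipeline also surface as exceptions.
    deltas = (
        run_a.join(run_b, on=KEYS, how="full_outer")
        .withColumn("abs_delta", F.abs(F.col("metric_a") - F.col("metric_b")))
        .filter(
            F.col("metric_a").isNull()
            | F.col("metric_b").isNull()
            | (F.col("abs_delta") > TOLERANCE)
        )
    )

    # Persist the exceptions for a live dashboard to pick up.
    deltas.write.mode("overwrite").saveAsTable("exceptions.cross_pipeline_deltas")

Making the keys, metric, and tolerance configurable means the same job can compare any pair of pipeline runs, which is what turns a one-off reconciliation into a reusable exception-reporting feature.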

Accreditations and Trainings

  • Databricks Partner Solutions Architect Essentials
  • Accreditation - Databricks Lakehouse Fundamentals
  • Microsoft Azure for Data Engineering
  • Google Cloud Platform Big Data and Machine Learning Fundamentals
  • Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform
  • Serverless Data Analysis with Google BigQuery and Cloud Dataflow

Education

07/2005 - 02/2006, Post Graduate Diploma in Advanced Computing, Advanced Computing, Vidyanidhi Info Tech Academy - National Resource Centre of CDAC, Mumbai, India

Advanced Computing

(C, C++, Java, .Net, RDBMS-Oracle, Linux)

07/2005, Bachelor of Mechanical Engineering, Mechanical Engineering, All India Shri Shivaji Memorial Society’s College of Engineering [Pune University], Pune, India

References

Kedar Kulkarni, Citi Technology Canada, kedarkulkarni78@gmail.com, +16478322499
Suraj Pedulwar, Automatic Data Processing India Private Limited, +91 9049156085
Vijay Bhaskar Datla Reddy, Clairvoyant LLC, datla.vijay@gmail.com, +91 9960501199
Karthick Vadivel, Schlumberger, vadivelkarthick1989@gmail.com, +91 9975638603

Social Media

@charudattadeshmukh - https://www.linkedin.com/in/charudattadeshmukh/
@charudatta007 - https://twitter.com/Charudatta007
@charudattadeshmukh - https://medium.com/@charudattadeshmukh/
Position Overview:

As a Data Architect, you will be the mastermind behind an organization's data strategy and infrastructure. Your role involves designing and managing data systems, ensuring data accuracy, security, and accessibility. You'll collaborate with stakeholders to understand data needs, create data models, and oversee database development. Your expertise in data management and architecture will enable efficient data utilization, driving informed decision-making and business growth. As a Data Architect, you'll play a crucial role in transforming raw data into actionable insights for your organization.

Company Overview:
Citi Bank

Citi Bank, a subsidiary of Citigroup Inc., is a global banking and financial services institution. Renowned for its extensive network and diverse financial products, Citi Bank serves millions of individuals, businesses, and institutions worldwide. Offering a wide range of services including banking, lending, wealth management, and more, Citi Bank is committed to enabling financial progress and economic growth. With a history spanning over two centuries, it remains a trusted partner for customers seeking comprehensive financial solutions and expertise.


Related database administrator resume samples

Systems Administrator Resume Sample
Linux Administrator Resume Sample
Data Engineer Resume Sample

Related database administrator cover letter samples

Data Scientist Cover Letter Example
Information Officer Cover Letter Example
Data Security Analyst Cover Letter Sample
