Description:
Technology experience required:
Relational database, SQL, transaction log, replication, applying deltas
Amazon Web Services (AWS) - loading data into S3 (SSH, SFTP, AWS CLI)
Deep expertise with Python scripting – preferably including Anaconda on AWS (processing CSV / pipe-delimited data files)
Experience in building reliable multi-step distributed data pipelines
Provide technical leadership for data engineering, enabling us to advance our architecture
The Architect would help guide the technical architecture for projects such as data replication from on-premises systems to AWS, data management, DBaaS, data modeling for the cloud data lake, and data security.
Shaping a cloud data architecture that services the identified use cases will be the primary technical objective of the role
Advanced knowledge of NoSQL and Graph databases, Hadoop, and ETL technologies will be needed to design the overall data platform
Experience designing systems that ingest, process, and prepare Big Data for consumption by Data Science is required
Jupyter notebooks on SageMaker for simple Python analytics
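As an illustration of the CSV / pipe-delimited file processing mentioned above, a minimal Python sketch using only the standard library (field names and sample data are hypothetical; in practice the parsed records would then be staged to S3, e.g. via the AWS CLI or boto3):

```python
import csv
import io

def parse_pipe_delimited(text):
    """Parse pipe-delimited text (header row first) into a list of dicts."""
    reader = csv.DictReader(io.StringIO(text), delimiter="|")
    return list(reader)

# Hypothetical sample record layout
sample = "id|name|amount\n1|alpha|10.5\n2|beta|20.0\n"
rows = parse_pipe_delimited(sample)
print(rows[0]["name"])  # alpha
```

The same `csv.DictReader` approach handles comma-delimited files by changing the `delimiter` argument.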
Duties and Responsibilities
1. Provides the architectural leadership in shaping strategic technology programs, which focus on both Business of IT (e.g., Unified Communication, SOI) and LOB-specific strategic technology programs.
2. Continuously pursues advanced level technical acumen:
· Attends conferences and engages in associated activities (e.g., conducting presentations, leading workshops, etc.).
· Consumes content from, and contributes content to, Open Source communities.
3. Provides architecture thought leadership and expertise, including cost optimization.
4. Defines reference and implementation architectures.
5. Produces technology roadmaps in support of IT’s vision and strategy.
6. Develops proof-of-concept prototypes and initial implementation models.
7. Monitors implementation activity to ensure architecture and design principles are upheld.
8. Ensures implementation solutions support architecture objectives (availability, scalability, performance, security, etc.), as appropriate.
9. Rolls up sleeves and does deep dives on Financial Services data sets to understand their intricacies and develop optimal data models
10. Is adept at handling multiple data types from multiple data sources and strategizes on the optimal way to analyze and store such data sets
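Handling multiple data types from multiple sources, as duty 10 describes, typically means mapping heterogeneous records onto a common schema before storage or analysis. A minimal sketch, assuming hypothetical JSON-lines and pipe-delimited sources and an invented two-field target schema:

```python
import csv
import io
import json

def normalize(record):
    """Map a raw source record onto a common schema (hypothetical fields)."""
    return {
        # Sources disagree on key casing, so check both variants
        "id": str(record.get("id") or record.get("ID", "")),
        "amount": float(record.get("amount", 0) or 0),
    }

def load_json_lines(text):
    """One JSON object per line, as an API export might produce."""
    return [normalize(json.loads(line)) for line in text.splitlines() if line.strip()]

def load_pipe_delimited(text):
    """Header-first pipe-delimited text, as a mainframe extract might produce."""
    return [normalize(row) for row in csv.DictReader(io.StringIO(text), delimiter="|")]

json_src = '{"id": 1, "amount": 10.5}\n{"id": 2, "amount": 7.25}'
pipe_src = "ID|amount\n3|2.0\n"
records = load_json_lines(json_src) + load_pipe_delimited(pipe_src)
```

Once normalized, all records can flow through the same downstream pipeline regardless of origin.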
Qualifications
· Minimum 10 years of relevant work experience.
· Highly experienced with Big Data, Glue, EMR, and Data Quality tools/procedures for cloud-native data stores
· Strong database and cloud expertise, along with experience with data replication technology required
· Strong knowledge of micro-services and DDAAS required
· Expert-level AWS and cloud-native data experience is required
· Minimum five years big data and advanced analytics experience required
· Minimum 2 years of cloud-native database design, migration, and implementation experience required
· Experience with Enterprise Information Management (EIM) is required
· Experience with CDC tools (Attunity preferred) to move data from on-premises sources required
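The replication and "applying deltas" work referenced throughout (transaction logs, CDC tools) boils down to replaying ordered change events against a target table. A minimal sketch of that apply step, assuming a hypothetical event shape of (operation, key, row) such as a generic CDC tool might emit:

```python
def apply_deltas(table, deltas):
    """Apply CDC-style change events to a keyed table in order.

    `table` maps primary key -> row dict; `deltas` is an ordered list of
    (operation, key, row) tuples (a hypothetical, simplified event shape).
    """
    for op, key, row in deltas:
        if op in ("insert", "update"):
            table[key] = row            # upsert semantics
        elif op == "delete":
            table.pop(key, None)        # idempotent delete
        else:
            raise ValueError(f"unknown operation: {op}")
    return table

table = {1: {"name": "alpha"}}
deltas = [
    ("insert", 2, {"name": "beta"}),
    ("update", 1, {"name": "alpha2"}),
    ("delete", 2, None),
]
apply_deltas(table, deltas)
# table is now {1: {"name": "alpha2"}}
```

Real CDC pipelines add ordering guarantees, schema evolution, and conflict handling on top of this core upsert/delete loop, but the replay logic is the same.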