Profile

Saad Lamarti

Senior Data / DevOps Engineer - Cloud Architect

Name: Saad Lamarti
Date of birth: 21/05/1989
Address: Paris, France
Email: saadlamartisefian[a]gmail.com
I am a passionate Software Developer / Cloud Architect who has worked for more than 9 years on advanced IT topics.

My main skills are:
   - Very good understanding of the GCP, AWS, and Kubernetes cloud platforms.
   - Distributed systems: Hadoop, the Spark ecosystem, Kafka, databases …
   - Strong knowledge of and experience in software development with Scala / Java / Python.
   - Linux operating systems and shell scripting.

Certifications

Cloudera Certified Developer for Apache Hadoop

2015

Oracle Certified Java SE 8 Programmer

2015

Professional Google Cloud Architect

2020

Certified Kubernetes Administrator

2021

Certified Kubernetes Application Developer

2021

Professional experience

Jun 2020 - present

Believe – Paris

Lead Data Engineer - Cloud Architect

The DailyStats project exposes insights and deep analytics to artists and label managers through a dedicated web app, based on data ingested daily from music platforms (such as Spotify, Deezer, iTunes …).

The key responsibilities are:

  • Review the current technical solutions for data processing, identify pain points, and work on potential enhancements.
  • Optimize processing pipeline execution times and web app response times.
  • Industrialize, design, and build fault-tolerant, reliable computation jobs (Spark); a sketch follows this list.
  • Reduce / optimize resource usage for promoted Spark jobs.
  • Industrialize / evolve the AWS infrastructure and the CI/CD pipeline developments.
  • Production monitoring / metrics analysis / alerting.
  • Deploy / manage Kubernetes workloads (Airflow, Prometheus, Grafana, Sonar, External Secrets, External DNS, Ingress Controller, Istio).
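
As an example of these Spark jobs, here is a minimal Scala sketch of a daily aggregation of the kind described above. The bucket paths and columns (artist_id, platform, stream_count) are illustrative assumptions, not the actual DailyStats pipeline.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyStatsJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("daily-stats").getOrCreate()

        // Read one day of ingested platform data (hypothetical S3 layout).
        val daily = spark.read.parquet("s3://ingest-bucket/streams/dt=2021-01-01/")

        // Aggregate stream counts per artist and platform.
        val stats = daily
          .groupBy(col("artist_id"), col("platform"))
          .agg(sum(col("stream_count")).as("streams"))

        // Overwriting the day's partition keeps reruns idempotent,
        // which is what makes the job safe to retry after a failure.
        stats.write.mode("overwrite").parquet("s3://stats-bucket/daily/dt=2021-01-01/")

        spark.stop()
      }
    }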

 

Keys: Cloud providers (AWS, Azure), databases (RDS/MySQL, Redis, Synapse), monitoring / metrics / alerting (CloudWatch, Alertmanager), languages (Scala, Java, Python, Shell), DevOps (Terraform, Git, GitOps, GitLab CI, ArgoCD, Vault, Kubernetes, Istio, Helm, Docker, Prometheus, sbt, Maven, Nexus, Sonar), data processing (Databricks, EMR, Spark), AWS services (Lambda, RDS, Secrets Manager, CloudWatch, SQS, S3, EKS, Route 53, EMR, VPC, Firewall, IAM …)

Jul 2018 - Jun 2020

Renault – Boulogne-Billancourt

Senior Data / DevOps Engineer

I joined the FTT (Full Track & Trace) platform team as a Backend / Data Engineer.

The FTT platform is designed to collect and process real-time data ingested from many factories across the globe. The two main value streams are:

  • Track and trace the whole manufacturing process of vehicle components.
  • Track and trace the packaging flows between different manufacturing sites.

The key responsibilities are:

  • Industrialize, design, and build fault-tolerant stream processing pipelines (Spark / Apache Beam); a sketch follows this list.
  • DevOps – industrialize the infrastructure and the CI/CD pipeline developments.
  • Production monitoring / metrics analysis / alerting.
  • Reduce / optimize resource usage for promoted Spark jobs.
  • Design and build some micro-services (Akka gRPC).
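
A hedged Scala sketch of the fault-tolerant stream processing described above. The Kafka topic, broker address, and GCS paths are assumptions for illustration, not the real FTT configuration.

    import org.apache.spark.sql.SparkSession

    object FttStreamJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("ftt-stream").getOrCreate()

        // Consume factory events from a (hypothetical) Kafka topic.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "factory-events")
          .load()
          .selectExpr("CAST(value AS STRING) AS payload")

        // The checkpoint location is what makes the query restartable:
        // on failure, Spark resumes from the last committed Kafka offsets.
        events.writeStream
          .format("parquet")
          .option("path", "gs://ftt-bucket/events/")
          .option("checkpointLocation", "gs://ftt-bucket/checkpoints/events/")
          .start()
          .awaitTermination()
      }
    }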

 

Keys: GCP (Pub/Sub, GCS, Dataproc, Dataflow, Monitoring, Composer / Airflow, Cloud Functions, Stackdriver), streaming (Spark Structured Streaming, DStream, Apache Beam, Kafka, Solace), databases (BigQuery, Spanner, Memorystore, Bigtable, Cloud SQL), monitoring / metrics / alerting (Stackdriver, JMX, Prometheus, Grafana, Dr. Elephant, Spark History Server, Spark Bench), languages (Scala, Java 8, Python), other tools (Terraform, GitLab CI, Kubernetes, Docker, Prometheus, sbt, Maven, Nexus, Sonar)

Jun 2016 - Jun 2018

Société Générale – Paris

Lead Data Engineer

I joined a Big Data entity within an international banking company as a technical leader, to scope, design, and develop a set of projects.

 

The key responsibilities are:

  •  Defining the application architecture
  •  Developing generic and modular components, and batch & stream processing pipelines
  •  Applying best practices (TDD, code review, pair programming, Git workflow, CI/CD …)
  •  Ensuring the industrialization of application pipelines on the production environment
  •  Optimizing Spark application executions
  •  Setting up CI/CD pipelines (GitHub, Jenkins, Nexus, Ansible, Maven, Docker)

The 360° Global Consumer Vision project is the principal project I worked on, built from scratch. My main role was to design and develop all the application components, a set of Spark application pipelines, as scalable, modular, secure, and highly available solutions.

The project gives end users a consolidated (360°) view of a physical person or a third party, built from a set of data sources. All the computations are done on the back end (normalization, enrichment, fuzzy matching …).
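
The fuzzy-matching step can be pictured with a short, self-contained Scala / Spark sketch; the toy records, column names, and edit-distance threshold are assumptions for illustration, not the production matching logic.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object FuzzyMatchSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("fuzzy-match").master("local[*]").getOrCreate()
        import spark.implicits._

        // Toy records standing in for two of the data sources.
        val persons = Seq(
          ("Jean  Dupont", "crm"),
          ("jean dupond", "billing")
        ).toDF("full_name", "source")

        // Normalization: lower-case and collapse whitespace.
        val normalized = persons.withColumn(
          "name_norm", regexp_replace(lower(trim($"full_name")), "\\s+", " "))

        // Fuzzy matching: pair records from different sources whose
        // normalized names are within edit distance 2.
        val matches = normalized.as("a").join(
          normalized.as("b"),
          levenshtein($"a.name_norm", $"b.name_norm") <= 2 &&
            $"a.source" =!= $"b.source")

        matches.show()
      }
    }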

 

Furthermore, I belong to the technical / architecture committee, which discusses and exchanges views on new technologies, validates new architectures for all the entity's projects, conducts technical interviews, etc.

 

Keys: Design, TDD, batch/stream processing, Hortonworks, streaming (NiFi, Spark Streaming, Kafka, ELK), Hadoop ecosystem (HDFS, Oozie, Hive, Sqoop, …), Spark ecosystem (Spark Core, Streaming, SQL), Spark tuning, CI/CD (Jenkins, Nexus, Git, Ansible, Maven, Docker), software development (Scala, Java, Python), security layer (Kerberos, LDAP, ACLs, Ranger, Knox), NoSQL databases (MongoDB, ES)

Mar 2015 - Jun 2016

AXA Group Solutions – Paris

Big Data Engineer

I joined the Hadoop team within AXA Group Solutions to develop and scope Big Data projects. I was the technical referent for the Hadoop/Spark platform and for Java/Scala developments. The main business requirement was monitoring the group's financial assets.

My role was to support AXA entities from the design to the deployment of Big Data solutions in production.

The key responsibilities are:

  • Lead the effort of migrating to the Hadoop platform, to take advantage of distributed processing and storage.
  • Develop ETL solutions and software components that meet the business requirements (MapReduce v2 / Spark / Pig).
  • Develop a set of middleware and features to extend ecosystem tools such as Hive and Pig (Java UDFs).
  • Data wrangling and cleansing.
  • Design Oozie workflows, handle the coordinators, and maintain the developments around our data integration engine.
  • Create all required project-related documentation, e.g. functional specifications, operation manuals, technical specifications, etc.
  • Participate in and lead design and code reviews.
  • Assist project managers and data architects with project planning and estimation.

 

Keys: Java Core, Scala, Spark, MapReduce v2, Impala, HDFS, Hive, Pig, HCatalog, Kafka, Avro, Parquet, Sqoop, Flume, ZooKeeper, Mahout, Tez, Cloudera distribution, Docker, Scrum.

Sep 2012 - Mar 2015

SAFETY-LINE – Paris

Big Data / Software Engineer

I was part of the engineering development team for almost 3 years. My missions focused on software development and Big Data topics.

I worked on different projects; the last one was risk monitoring through flight data, for which I was responsible:

  • Maintain the software architecture, especially server-side development (Java Core, J2EE, multithreading, real-time programming, concurrency, RPC, Servlets, memory management, and performance tuning).
  • Set up a Big Data environment based on the Cloudera distribution.
  • Distributed processing of flight data to extract a set of needed metadata, plus real-time filtering (MapReduce); a mapper sketch follows this list.
  • Handle the machine learning models and predictive analytics on the server side (multithreading, concurrency, Java Core, Python, R).
  • Linux system administration.
  • Robust software factory and continuous integration, with automated deployment (Git, Jenkins, Artifactory, Maven, Ant).
  • Create all required project-related documentation, e.g. functional specifications, operation manuals, technical specifications, etc.
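
A minimal mapper sketch in Scala against the Hadoop MapReduce API, illustrating the flight-data filtering above. The CSV layout (flightId, timestamp, climbRate) and the threshold are invented for illustration.

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.Mapper
    import scala.util.Try

    // Keeps only flight records whose climb rate exceeds a threshold.
    class FlightFilterMapper extends Mapper[LongWritable, Text, Text, Text] {
      override def map(key: LongWritable, value: Text,
                       context: Mapper[LongWritable, Text, Text, Text]#Context): Unit = {
        val fields = value.toString.split(",")
        if (fields.length == 3)
          Try(fields(2).toDouble).toOption
            .filter(_ > 1000.0) // hypothetical climb-rate threshold
            .foreach(_ => context.write(new Text(fields(0)), value))
      }
    }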

 

Keys: HBase, MySQL, Java Core, Python, MapReduce, HDFS, Jersey, DataNucleus, Jetty, Apache Server, Linux, JSON, Apache Commons, PhoneGap, machine learning, predictive analytics.

Mar 2012 - Sep 2012

EDUPAD – Paris

Software Developer iOS / Android

I joined a team of several engineers and designers working on education applications.

The main responsibilities are:

  • Develop and maintain iOS and Android applications for smartphones/tablets (e.g. the RECESS game).
  • Develop and maintain the back office, with front-end / back-end developments (J2EE).
  • Work closely with the business analyst responsible for devices to understand and implement business requirements.
  • Integrate SaaS and native OS services into mobile applications.
  • Ensure deliverables are properly tested and meet functional requirements.
  • Deploy mobile applications to each respective app store.

Apr 2011 - Oct 2011

AS-MAROC

(Internship) Java J2EE Developer

Development of the billing and order management module.

May 2010 - Aug 2010

T. VIRTUEL

(Internship) Java GWT Developer

Development of the administrative management application.

 

Technical skills

Programming

  • Deep object-oriented programming experience.
  • Languages: Java, J2EE, Scala, C/C++, Objective-C
  • Continuous integration tools: Git, SVN, Artifactory, Ant, Maven, Hudson, Jenkins.
  • Frameworks: Spring, Struts, JSF, DataNucleus, Hibernate, Jersey.
  • Interoperability: RPC, CORBA (RMI, IIOP, MIOP with Java), SOAP web services, REST.
  • Web: HTML5, CSS3, jQuery, PHP, CMS (Joomla, WordPress).
  • Databases: HBase, MongoDB, Cassandra, Oracle, Access, MySQL, SQL Server, PL/SQL.

Design and modeling

  • Modelling languages: UML, Merise.
  • Deep design patterns experience (GoF).

Big Data – Hadoop

  • Hadoop Core (MapReduce & HDFS)
  • High Availability & Federation
  • Setting up and administration of Hadoop clusters.
  • MapReduce and Spark developments.
  • Hive & Pig.
  • Flume, Sqoop, Oozie.

Education

2007 - 2011

Advanced Computer Science and IT University

eHECT (Tangier, Morocco)

Master's degree in IT, specializing in software engineering.

2006 - 2007

Baccalaureate

Ibn El-Khatib (Tangier, Morocco)

Baccalaureate in experimental sciences.


Let's keep in touch