Job Summary
Data Engineer responsible for building and optimizing large-scale data pipelines using PySpark, SQL, and Google Cloud (BigQuery, DataProc), while migrating and managing big data workloads across modern platforms.
Works within an Agile team to develop, deploy, and enhance data-driven solutions for marketing analytics, ensuring high performance, scalability, and continuous improvement of data systems.
Job Description
- Role: Google Cloud Data Engineer.
- Skills: Big Data; GCP; ETL (Big Data / Data Warehousing); Git (GitHub, GitLab, Bitbucket, SVN); PySpark; Python; SQL.
- We are looking for energetic, high-performing, and highly skilled data engineers to help shape our technology and product roadmap.
- You will be part of the fast-paced, entrepreneurial Global Campaign Tracking (GCT) team under the Enterprise Personalization Portfolio, focused on delivering the next generation of global marketing capabilities.
- The team is responsible for marketing campaign tracking, new account acquisition, and bounty payments, and leverages large-scale data engineering technologies such as SQL, PySpark, GCP, BigQuery, Dataproc, Adobe Analytics, Google Analytics, Hive, Kafka, and Java.
- Focus: Designs, develops, solves problems, debugs, evaluates, modifies, deploys, and documents software and systems that meet the needs of customer-facing applications, business applications, and/or internal end user applications.
- Develop and maintain large-scale data processing pipelines using PySpark, Dataproc, BigQuery, and SQL.
- Use BigQuery and Dataproc to migrate existing Hadoop/Spark/Hive workloads to Google Cloud.
- Use BigQuery to carry out batch and interactive data analysis.
- Function as a member of an Agile team by contributing to software builds through consistent development practices (tools, common components, and documentation).
- Develop and test software, including ongoing refactoring of code, and drive continuous improvement in code structure and quality.
- Enable the deployment, support, and monitoring of software across test, integration, and production environments.
Minimum Profile Description
- A bachelor’s degree in computer science, computer engineering, or another technical discipline, or equivalent work experience.
- 6-9 years of software development experience.
- Hands-on expertise with application design, software development, and automated testing.
- Strong programming knowledge of SQL, PySpark, Dataproc, and BigQuery.
- Hands-on experience with Big Data technologies (Spark, Hive).
- Understanding and experience with UNIX / Shell / Perl / Python scripting.
- Database query optimization and indexing.
- Web services design and implementation using REST / SOAP and Java is a plus.
- Experience collaborating with the business to drive requirements/Agile story analysis.
- Experience with design and coding across one or more platforms and languages as appropriate.
Bonus skills:
Machine learning/data mining, Object-oriented design and coding, Adobe Marketing Campaign products
About Company
A client of ilink Talent Solutions is a global digital engineering and data analytics firm founded in 1991, headquartered in the U.S., with a workforce of 3,500+ professionals and delivery centers across North America, India, Canada, Australia, and beyond. They specialize in cloud engineering, data platforms, AI/GenAI, and enterprise analytics, helping Fortune 1000 companies accelerate digital transformation. Known for proprietary accelerators like LeapLogic™ and strong partnerships with AWS, Databricks, and Snowflake, the company is recognized for driving innovation, large-scale modernization, and high-impact business outcomes.

