Ab Initio Developer
We are seeking a skilled Ab Initio ETL Developer to play a key role in our data engineering and integration initiatives. In this role, you will design, develop, and optimize ETL pipelines to support enterprise data processing needs. You will work closely with data architects, analysts, and stakeholders to ensure the seamless movement of data across systems while maintaining high standards of data quality, performance, and security.
This position requires expertise in Ab Initio tools, SQL, Unix scripting, and data warehousing concepts. The ideal candidate will be adept at troubleshooting, performance tuning, and handling large-scale datasets in a fast-paced environment. If you thrive in a collaborative, problem-solving setting, this opportunity is for you!
***100% remote/telework role***
Key Responsibilities
- Design and implement scalable ETL solutions using Ab Initio to process large volumes of structured and unstructured data.
- Develop high-performance ETL workflows to extract, transform, and load data efficiently.
- Collaborate with cross-functional teams to gather business requirements and translate them into data integration solutions.
- Optimize data processing performance by fine-tuning Ab Initio graphs and SQL queries.
- Ensure data integrity, consistency, and security in all ETL processes.
- Automate job scheduling and monitoring to ensure reliable, uninterrupted data flow.
- Troubleshoot and resolve ETL failures, system issues, and data discrepancies.
- Maintain up-to-date knowledge of industry best practices and emerging trends in ETL, big data, and cloud computing.
Required Skills & Qualifications
- 3+ years of hands-on experience with Ab Initio (GDE, Co>Operating System, Conduct>It, Express>It).
- Strong proficiency in SQL with expertise in relational databases (Oracle, SQL Server, PostgreSQL, DB2).
- Experience working in Unix/Linux environments, including shell scripting for automation.
- Solid understanding of data modeling, ETL best practices, and data warehousing principles.
- Ability to handle large-scale data processing and performance optimization.
- Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
- Knowledge of big data technologies (Hadoop, Spark, Kafka) is a bonus.
- Strong analytical, problem-solving, and communication skills.
Preferred Qualifications
- Experience with government data projects and compliance standards.
- Exposure to data governance, metadata management, and security protocols.
- Knowledge of BI/reporting tools (Tableau, Power BI) is an advantage.
- Prior experience in a federal or public sector environment.
Join FSS Government Solutions and be part of a high-impact team that transforms data into actionable insights. Apply now and contribute to cutting-edge data engineering initiatives in the government sector!