Job Overview
Vision Tact is seeking an experienced Data Engineer to design, develop, and optimize the large-scale data pipelines and infrastructure that power our AI, automation, and analytics platforms.
You’ll work closely with data scientists, AI engineers, and software developers to ensure that data is clean, structured, and efficiently accessible for modeling, visualization, and real-time processing.
This role requires deep technical expertise in ETL processes, database architecture, API data integration, and cloud-based data management. You’ll play a critical role in enabling Vision Tact’s mission to deliver intelligent, data-driven solutions across industries.
Key Responsibilities
Design and develop data pipelines for ingestion, transformation, and integration across multiple data sources (APIs, databases, IoT, GIS, etc.).
Implement and maintain ETL/ELT frameworks for structured and unstructured datasets.
Build and manage data warehouses and lakes to support analytics and AI initiatives.
Ensure data quality, consistency, and reliability through validation and monitoring processes.
Work with AI and ML engineers to prepare datasets for training, testing, and deployment.
Collaborate with DevOps teams on cloud-based data infrastructure and CI/CD deployment.
Implement data governance, access control, and versioning standards.
Optimize performance for high-volume data storage, retrieval, and transformation.
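To illustrate the kind of pipeline work described above, here is a minimal extract-validate-transform-load sketch. All names, sample records, and the list-based "warehouse" are illustrative assumptions, not a Vision Tact implementation; in practice these stages would run under an orchestrator such as Airflow or Prefect against real sources and sinks.

```python
def extract():
    """Simulate pulling raw records from an API or database source."""
    return [
        {"sensor_id": "a1", "reading": "21.5"},
        {"sensor_id": "a2", "reading": None},   # fails validation below
        {"sensor_id": "a3", "reading": "19.0"},
    ]

def validate(records):
    """Drop records that fail a basic quality check (missing reading)."""
    return [r for r in records if r["reading"] is not None]

def transform(records):
    """Cast readings to floats so they are ready for analytics/ML use."""
    return [{**r, "reading": float(r["reading"])} for r in records]

def load(records, warehouse):
    """Append cleaned rows to the target store; return the row count."""
    warehouse.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(validate(extract())), warehouse)
print(f"loaded {loaded} rows")  # the record with a missing reading is filtered out
```

The separation into small, single-purpose stages mirrors how production ETL/ELT frameworks structure tasks, which makes each step independently testable and monitorable.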
Required Skills & Tools
Programming: Python, SQL, Scala, or Java.
Data Pipelines: Apache Airflow, Luigi, or Prefect.
Databases: PostgreSQL, MySQL, MongoDB, Cassandra, or BigQuery.
Big Data Technologies: Apache Spark, Hadoop, Kafka.
ETL Tools: Talend, Fivetran, dbt, or custom ETL frameworks.
Cloud Platforms: AWS (Glue, Redshift, S3), GCP (Dataflow, BigQuery), Azure Data Factory.
Version Control: Git, GitHub, or Bitbucket.
Soft Skills: Logical thinking, detail orientation, and cross-team communication.
Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
Minimum 3 years of experience in data engineering, preferably within AI, automation, or analytics domains.
Demonstrated experience with ETL pipeline development and data infrastructure design.
Hands-on experience with SQL and big data processing frameworks.
Certification in AWS Data Engineering, Google Cloud Professional Data Engineer, or equivalent preferred.