Looking for a FULLY ONSITE Data Analyst Engineer in Houston, TX, to bridge the gap between data engineering and data analysis. This role involves designing, building, and maintaining data infrastructure while ensuring the data is usable, reliable, and ready to support analysis and decision-making.

Key Responsibilities:

Data Pipeline Development:
- Build and maintain ETL/ELT pipelines to collect, transform, and store data.
- Ensure data flows efficiently from source systems (e.g., databases, APIs, SaaS platforms) to data warehouses, lakes, or marts.

Data Integration and Modeling:
- Integrate data from multiple sources into unified schemas for reporting and analytics.
- Design and develop data models, including star and snowflake schemas, for fact and dimension tables.
- Create staging and transformation layers to ensure clean, organized data.

Data Quality Assurance:
- Implement data validation rules and monitor data accuracy.
- Set up processes to identify and resolve data inconsistencies, duplicates, or missing information.
- Establish data governance practices to ensure proper access, security, and compliance.

Collaboration with Teams:
- Work closely with data scientists, business analysts, and stakeholders to understand their needs.
- Provide datasets, create dashboards, and support advanced analytics use cases.
- Serve as a liaison between technical and non-technical teams.

Performance Optimization:
- Optimize query performance and ensure efficient data retrieval for analysis.
- Use techniques like indexing, partitioning, or caching to manage large datasets effectively.

Reporting and Visualization:
- Create reports and dashboards for stakeholders, translating data into actionable insights.
- Use tools like Tableau, Power BI, or Looker alongside SQL to present results.

Technology Evaluation and Implementation:
- Assess and implement new tools or technologies (e.g., dbt, Apache Spark, Airflow) to improve data workflows.
- Maintain familiarity with cloud platforms like AWS, Azure, or GCP for data storage and processing.

Documentation and Training:
- Document processes, pipelines, and data models for transparency and maintainability.
- Train team members on how to access and interpret data.

Skills Required:

Technical Skills:
- SQL, Python, or R for data manipulation and analysis.
- Data engineering tools (e.g., dbt, Apache Airflow, Kafka).
- Familiarity with cloud platforms and data warehouses (e.g., Snowflake, BigQuery, Databricks).
- Proficiency with visualization tools (e.g., Power BI, Tableau, Looker).

Analytical Skills:
- Strong problem-solving abilities to address complex data issues.
- Ability to translate business needs into data requirements and solutions.