Data Analyst
Columbia, MD
Job Id: 142663
Job Category:
Job Location: Columbia, MD
Security Clearance: None
Business Unit: Piper Companies
Division: Piper Enterprise Solutions
Position Owner: Brendan McGowan
Piper Companies is looking for a Data Analyst to join a premier healthcare organization in Columbia, Maryland. This is a long-term opportunity with a great hybrid schedule!
Responsibilities for the Data Analyst:
- Identify and document key performance metrics for in-market products; work with stakeholders to develop specific, repeatable metric definitions
- Understand, clean, model, and prepare data from an enterprise data warehouse and other primary data sources for analytics use
- Develop and manage effective Power BI visualizations for key product metrics; convey quantitative data in a way that stakeholders can easily understand
- Work with data scientists and analytics staff to operationalize data workflows and modeling
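The cleaning and metric work described above, done with Pandas (one of the tools this role calls for), might look like the minimal sketch below. The table and column names are hypothetical, for illustration only:

```python
import pandas as pd

# Hypothetical extract from a warehouse table (names are illustrative).
raw = pd.DataFrame({
    "member_id": [101, 101, 102, None],
    "visit_date": ["2025-01-05", "2025-01-05", "2025-02-10", "2025-03-01"],
    "charge": ["120.50", "120.50", "89.00", "42.25"],
})

# Clean: drop rows missing the key, remove exact duplicates, coerce types.
clean = (
    raw.dropna(subset=["member_id"])
       .drop_duplicates()
       .assign(
           member_id=lambda d: d["member_id"].astype(int),
           visit_date=lambda d: pd.to_datetime(d["visit_date"]),
           charge=lambda d: d["charge"].astype(float),
       )
)

# A simple, repeatable product metric: total charges per member,
# ready to feed a Power BI dataset.
metric = clean.groupby("member_id", as_index=False)["charge"].sum()
print(metric)
```

A defined metric like this gives stakeholders one agreed-upon number per member rather than ad hoc spreadsheet sums.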
Qualifications for the Data Analyst:
- Bachelor's or Master's degree in a related field preferred
- 5+ years of experience as a data analyst or data engineer
- Data modeling and analysis experience required, including strong experience with Databricks, Python/Pandas, and SQL
- Proficiency in Power BI preferred
Compensation/Benefits for the Data Analyst:
- $90,000 - $110,000
- Comprehensive benefits package: Cigna medical, Cigna dental, vision, 401(k) with ADP, PTO, paid holidays, and sick leave as required by law
- Hybrid schedule (1x a week onsite)
This job opens for applications on 5/28/25. Applications will be accepted for at least 30 days from the posting date.
#LI-HYBRID
#LI-BM2