Bringing Alloy and Ember to Snowflake: DataForge Expands to a New Ecosystem
With DataForge 10.0, the Alloy Architecture and Ember metadata catalog now run natively on Snowflake. This release gives Snowflake users a predictable, governed refinement model, built-in incremental processing, and Snowpark-based extensibility, all while maintaining a unified development experience across platforms.
Bringing Alloy and Ember to Databricks in DataForge 10.0
With DataForge 10.0, the Alloy Architecture and Ember metadata catalog are now implemented natively on Databricks. Refinement flows through Delta tables, metadata is queryable through Unity Catalog via Lakehouse Federation, and the entire pipeline becomes more transparent, governable, and scalable.
Introducing Ember: A Structured Data Catalog Built for the Alloy Architecture
Ember reimagines the traditional data catalog as a declarative definition layer for modern pipelines. By storing explicit rules for refinement, enrichment, and merging, Ember drives Alloy’s structured five-layer architecture without relying on handwritten transformation code. The result is predictable execution, simpler governance, and no hidden intermediate logic. Ember is the metadata core of DataForge’s new architecture.
Alloy: A New Architecture for Declarative Data Engineering
The Alloy Architecture introduces a structured, five-layer refinement model that eliminates hidden pipeline complexity. By replacing ad-hoc transformation logic with a consistent, predictable flow, Alloy brings clarity, performance, and governance to modern data engineering.
Introducing Our New Plus Subscription Plan: Elevate Your Data Engineering Capabilities
We’re excited to unveil our new Plus plan, tailored for startups and small enterprises. At just $400 per month, this plan offers a comprehensive suite of features including a dedicated DataForge workspace, up to 50 data sources, automated orchestration, and a browser-based IDE. Enjoy a 30-day free trial to experience its benefits firsthand. The Plus plan provides an excellent balance of functionality and affordability to support your data engineering needs and drive growth. Start your trial today and see how Plus can elevate your data operations!
Introduction to the DataForge Framework Object Model
Part 2 of the DataForge blog series explores the implementation of the DataForge Core framework, which structures data transformation around column-pure and row-pure functions. It introduces the core components, such as Raw Attributes, Rules, Sources, and Relations, that streamline data engineering workflows and deliver code purity, extensibility, and easier maintenance compared to traditional SQL-based approaches.
Introduction to the DataForge Declarative Transformation Framework
Discover how to build better data pipelines with DataForge. Our latest article shows how to break down monolithic data engineering solutions with modular, declarative programming, and explores the power of column-pure and row-pure functions for more manageable and scalable data transformations.
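The column-pure / row-pure distinction mentioned above can be illustrated with a short sketch. This is plain Python, not DataForge syntax: a column-pure rule derives a new column from other columns of the same row, while a row-pure rule derives values from one column across many rows (an aggregate- or window-style computation).

```python
# Illustrative sketch of column-pure vs. row-pure transformation rules.
# NOT DataForge syntax; the data and rule names are hypothetical.

rows = [
    {"price": 10.0, "qty": 3},
    {"price": 4.0, "qty": 5},
]

# Column-pure: the output for a row depends only on columns of that same row.
def total(row):
    return row["price"] * row["qty"]

# Row-pure: the output depends on one column's values across rows
# (here, a descending rank over "price").
def price_rank(all_rows):
    order = sorted(range(len(all_rows)), key=lambda i: -all_rows[i]["price"])
    ranks = {i: r + 1 for r, i in enumerate(order)}
    return [ranks[i] for i in range(len(all_rows))]

for row, rank in zip(rows, price_rank(rows)):
    row["total"] = total(row)      # column-pure rule, applied per row
    row["price_rank"] = rank       # row-pure rule, applied across rows

print(rows)
# [{'price': 10.0, 'qty': 3, 'total': 30.0, 'price_rank': 1},
#  {'price': 4.0, 'qty': 5, 'total': 20.0, 'price_rank': 2}]
```

Because each rule is pure, with no hidden state or side effects, rules can be composed, reordered, and tested independently, which is what makes the declarative approach more manageable than monolithic SQL.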