Right, in a typical data mart we prefer optimising the data model/storage for read operations. In BigQuery, storage is very cheap compared to compute/slot usage and theoretically unlimited. This is why periodically materializing intermediate results into tables is a good practice in BigQuery. You can use scheduled queries, Composer, or any other ETL tool to build such a pipeline.
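
For illustration, here is a minimal sketch of materializing an intermediate result into a destination table with the `google-cloud-bigquery` Python client. The project, dataset, table, and query names are placeholders; you'd run something like this on a schedule (e.g. from Composer/Airflow or Cloud Scheduler), or achieve the same thing with a BigQuery scheduled query instead.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical destination table for the materialized intermediate result.
destination = bigquery.TableReference.from_string(
    "my-project.my_mart.daily_orders_summary"
)

# Overwrite the table on each run so readers always see the latest snapshot.
job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Placeholder aggregation query; replace with your own intermediate logic.
query = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `my-project.my_source.orders`
    GROUP BY order_date
"""

# Run the query and wait for it to finish writing the destination table.
client.query(query, job_config=job_config).result()
```

Downstream dashboards and queries then read from the materialized table instead of re-running the heavy aggregation, which trades a bit of (cheap) storage for much lower slot usage on reads.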