When comparing `cells.execute_mdx_csv(use_iterative_json=True)` to `power_bi.execute_view` on a larger dataset, `execute_mdx_csv` shows a memory improvement of roughly 85%.
The two-dimensional cube used for this test had one measure on the columns and 5 million rows. The cube contained no data, and no zero suppression was applied.
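The size of that gap is plausible given the shape of the two responses: a full JSON cellset materializes one object per cell all at once, while a CSV payload can be consumed row by row. The sketch below illustrates the principle only; it uses `tracemalloc` on synthetic data and does not reproduce TM1py's internals (`peak_memory` and the payload shapes are assumptions for illustration):

```python
import csv
import io
import tracemalloc

def peak_memory(fn):
    """Return the peak number of bytes allocated while running fn()."""
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

# Synthetic stand-in for a 50k-cell response (far smaller than the 5M-row test).
N = 50_000
csv_text = "".join(f"Elem{i},Value,0\n" for i in range(N))

def parse_all_dicts():
    # Eager approach: one dict per cell, all held in memory at once,
    # roughly the shape of a fully parsed JSON cellset.
    rows = [{"element": f"Elem{i}", "measure": "Value", "cell": 0} for i in range(N)]
    return len(rows)

def parse_csv_stream():
    # Iterative approach: stream rows and keep only an aggregate.
    total = 0
    for row in csv.reader(io.StringIO(csv_text)):
        total += int(row[2])
    return total

dict_peak = peak_memory(parse_all_dicts)
csv_peak = peak_memory(parse_csv_stream)
assert csv_peak < dict_peak  # streaming keeps far less in memory at its peak
```

The exact ratio depends on row width and cell types, but the streaming variant's peak stays close to the size of a single row plus the raw text, rather than the whole materialized result.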
power_bi.execute_view:
execute_mdx_csv(use_iterative_json=True):
#893 solves this.
When working with very large data sets, you can now use `execute_view_dataframe(..., shaped=True)` instead of `execute_view_dataframe_shaped`.
You can also pass `use_iterative_json=True` or `use_blob=True` to the existing `tm1.cells.execute_view_dataframe_shaped` and `tm1.power_bi.execute_view` functions.
Please note that in this case the resulting data frame will be similar, but with two differences:
column data types are derived slightly differently
empty rows are omitted by default (unless `skip_zeros=False` is passed)
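Putting the options above together, a call for a large view might look like the sketch below. The helper name, connection details, and port are placeholders, not TM1py defaults; the keyword arguments (`shaped`, `use_iterative_json`, `skip_zeros`) are the ones described in this issue:

```python
def read_view_low_memory(address, user, password, cube, view):
    """Hypothetical helper: fetch a large cube view with the lower-memory
    options discussed in this issue. Requires the TM1py package and a
    reachable TM1 server; all connection values are placeholders."""
    from TM1py import TM1Service  # imported lazily so the module loads without TM1py

    with TM1Service(address=address, port=8010,  # port is a placeholder
                    user=user, password=password, ssl=True) as tm1:
        # shaped=True replaces the former execute_view_dataframe_shaped call;
        # use_iterative_json=True streams the cellset instead of loading it whole.
        return tm1.cells.execute_view_dataframe(
            cube_name=cube,
            view_name=view,
            shaped=True,
            use_iterative_json=True,
            skip_zeros=False,  # keep empty rows, matching the previous behaviour
        )
```

If bandwidth matters more than parse-time memory, `use_blob=True` could be swapped in for `use_iterative_json=True` in the same call, per the note above.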