Variation180
Programmer
I'm using 4.7 to aggregate data from a table with 24 million rows into an aggregate table of approximately 11 million rows. The first mapping does the aggregation, and a second mapping writes the aggregate_id from the aggregate table back to all the source records that make up each aggregate. This write-back is extremely slow, updating at 22 rows/sec (roughly a 5-day runtime). Does anyone have a better solution to this?
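For reference, here is a minimal sketch of what the write-back amounts to if it is pushed down to the database as a single set-based update instead of row-by-row updates through the mapping. The table names source_table and agg_table, the grouping columns key1 and key2, and the aggregate_id column are assumptions for illustration, not names from the actual schema:

-- Hypothetical set-based write-back: stamp each source row with the id of the
-- aggregate row that shares its grouping keys (all names are assumptions).
-- An index on agg_table(key1, key2) is assumed so the correlated lookup is cheap.
UPDATE source_table s
SET    s.aggregate_id = (
           SELECT a.aggregate_id
           FROM   agg_table a
           WHERE  a.key1 = s.key1
           AND    a.key2 = s.key2
       );

This works because the aggregation guarantees exactly one aggregate row per combination of grouping keys, so the correlated subquery returns a single value for every source row.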