Last Updated: August 24, 2017 · sracer2017

Is there a form of lazy evaluation where a function (like mean) returns an approximate value when operating on large arrays?

For example, suppose we want to calculate the mean of a very long list of numbers, and the numbers, when sorted, are nearly linear (i.e., we can fit a linear regression model to the data). Mathematically, we can then approximate the mean by

mean(arr) ≈ (arr[0] + arr[length(arr) - 1]) / 2 ≈ slope * (length(arr) - 1) / 2 + intercept
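Just to illustrate what I mean, here is a rough sketch in Python/NumPy; the data `arr` is made up, and using numpy.polyfit for the linear model is only one possible choice:

    import numpy as np

    # Made-up example data: sorted and roughly linear in its index (an assumption).
    arr = np.sort(np.random.uniform(0, 100, size=1_000_000))

    exact = arr.mean()  # exact mean, for comparison

    # Endpoint shortcut: for data that is (nearly) linear in its index,
    # the mean is roughly the midpoint of the first and last values.
    approx_endpoints = (arr[0] + arr[-1]) / 2

    # Equivalent view through a fitted linear model arr[i] ~= slope * i + intercept.
    slope, intercept = np.polyfit(np.arange(len(arr)), arr, deg=1)
    approx_model = slope * (len(arr) - 1) / 2 + intercept

    print(exact, approx_endpoints, approx_model)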
Or, in the case where the linear model is nearly constant (the slope coefficient is close to zero), we can calculate approximately:

mean(arr[::const]) ≈ mean(arr)   (every const-th element, i.e. about n/const values in total)
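In Python/NumPy this subsampling idea would look like the sketch below; the data and the value of const are arbitrary, just for illustration:

    import numpy as np

    arr = np.random.normal(loc=50, scale=5, size=1_000_000)  # made-up example data

    const = 100                    # sampling step; an arbitrary choice for the example
    subsample = arr[::const]       # every const-th element, about n/const values
    print(arr.mean(), subsample.mean())  # the two means should be close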
The same concept is applied for the two cases. and is so basic. Is there a way: pattern, function (hopefully in python), or any studies to suggest and that can help will be gratefully welcome; of course such a pattern if exists should be general and not only for the mean case (probably any function or at least aggregate functions like: sum, mean ...). (as I don't have a strong mathematical background, and I'm new to machine learning, please tolerate my ignorance). Please let me know if anything is not clear.