I've recently revisited some of my older coding projects to improve their efficiency, starting with my array chunking code. For the full details you can check out my previous post, but here's a quick summary:
The original code I wrote three months ago was as follows:
function chunk(array input, numeric chunkSize) {
    var output = [[]];
    var currentChunk = 1;
    input.each((item, index) => {
        // append to the current chunk, then start a new chunk every time the
        // index hits a multiple of chunkSize (unless we're already at the end)
        output[currentChunk].append(item);
        if (index % chunkSize == 0 && index < input.len()) output[++currentChunk] = [];
    });
    return output;
}
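As a quick sanity check, a call like the one below should produce evenly sized chunks, with any leftover elements ending up in the final chunk:

numbers = [1, 2, 3, 4, 5];
writeDump(chunk(numbers, 2)); // [[1, 2], [3, 4], [5]]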
However, I developed a new approach using the slice method for chunking arrays:
function chunk(arr, sz) {
    var out = [];
    var numberOfChunks = ceiling(arr.len() / sz);
    // slice() takes a start offset and a length; the last chunk just takes whatever is left
    for (var i = 1; i <= numberOfChunks; i++)
        out.append(i == numberOfChunks
            ? arr.slice(1 + (i - 1) * sz, arr.len() - ((i - 1) * sz))
            : arr.slice(1 + (i - 1) * sz, sz));
    return out;
}
This method worked well for small datasets, but I noticed performance issues with larger ones: processing 100,000 elements took around 3 to 7 seconds, which is far too long in the context of a web request.
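For anyone wanting to reproduce this, here's a rough sketch of the kind of timing harness I'm talking about (the chunk size of 500 is arbitrary, and the exact numbers will vary by machine and CFML engine):

// build a 100,000-element test array
testData = [];
for (i = 1; i <= 100000; i++) testData.append(i);

// time the chunk() call in milliseconds
start = getTickCount();
chunks = chunk(testData, 500);
writeOutput("#chunks.len()# chunks in #getTickCount() - start#ms");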
To improve this, I wrote a new version that, instead of using slice, calculates the start offset of each chunk and then iterates to collect the next sz elements (or whatever remains of the array):
function chunk(arr, sz) {
    var out = [];
    var numberOfChunks = ceiling(arr.len() / sz);
    for (var i = 1; i <= numberOfChunks; i++) {
        var tempArray = [];
        var offset = (i - 1) * sz;
        // every chunk holds sz elements except the last, which takes whatever remains
        var count = i == numberOfChunks ? arr.len() - offset : sz;
        for (var x = 1; x <= count; x++) tempArray.append(arr[offset + x]);
        out.append(tempArray);
    }
    return out;
}
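Swapping in this version should be transparent to calling code; for example, it should return the same chunks as the earlier implementations:

writeDump(chunk([1, 2, 3, 4, 5, 6, 7], 3)); // [[1, 2, 3], [4, 5, 6], [7]]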
This revised function dramatically improved performance, handling 100,000 elements in about 80-100ms, roughly 50% faster than my original code from three months ago. I realized the slowdown was primarily due to the use of the slice method, which is what prompted this optimization.