Recently I have been going back and looking at some of my older code and finding ways to make it more efficient. I started by looking at the code I wrote to handle chunking arrays. You can go back and look at the post if you like, but to speed things along:
Here is the legacy code from three months ago, which uses slice to chunk arrays:
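The original listing is not reproduced here, but a slice-based chunk function along these lines is a minimal sketch of the approach (the names `chunk`, `arr`, and `sz` are assumptions, not the original code):

```javascript
// Sketch of a slice-based chunker: walk the array in steps of sz
// and let slice copy out each piece.
function chunk(arr, sz) {
  const result = [];
  for (let i = 0; i < arr.length; i += sz) {
    // slice copies the next sz elements, or however many remain
    result.push(arr.slice(i, i + sz));
  }
  return result;
}
```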
This code works great on small sets of data, but as the dataset grows, each iteration takes longer. At 100,000 elements, this function takes anywhere from three to seven seconds.
In web time, that is longer than the average person's attention span.
Here is a function that is almost identical, but instead of using Array's slice, it calculates the starting offset and takes the next sz elements (or however many elements are left in the array) using a loop.
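The description above can be sketched as follows; this is an illustrative reconstruction under the assumptions stated in the post (chunk size variable named `sz`), not the original listing:

```javascript
// Sketch of the loop-based chunker: compute the starting offset of
// each chunk, then copy the next sz elements (or however many are
// left in the array) element by element instead of calling slice.
function chunk(arr, sz) {
  const result = [];
  const nChunks = Math.ceil(arr.length / sz);
  for (let c = 0; c < nChunks; c++) {
    const offset = c * sz;                          // starting offset of this chunk
    const len = Math.min(sz, arr.length - offset);  // elements remaining
    const piece = new Array(len);
    for (let i = 0; i < len; i++) {
      piece[i] = arr[offset + i];                   // manual copy, no slice
    }
    result.push(piece);
  }
  return result;
}
```

Preallocating each piece with `new Array(len)` and filling it in a tight loop avoids the per-call overhead of slice on large inputs.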
On average the new chunk function can handle an array of 100,000 elements in about 80-100ms. Compared with the legacy function's three to seven seconds, that is roughly a 30-70x speedup.
Once I realized the slice version ran so much slower, I had to find the bottleneck, and slice itself turned out to be the culprit.
If you are a glutton for punishment, here are my benchmarks.