Neat to see a different side of the coin. In our lab, individual measurements can take as long as a year. \sqrt(N) scaling, when constrained by human realities, presents a wall beyond which we cannot pass without experimental innovation.
As the derivative of \sqrt(N) is 1/(2\sqrt(N)), your first measurement teaches you the most, and every measurement teaches you less than the last. In general, we measure as much as we must, double the size of the dataset as a consistency check, and move on. The allocation of time is one of the most important decisions an experimenter makes.
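A minimal sketch of that diminishing-returns point, assuming the precision of an N-measurement average grows like \sqrt(N) (so relative error shrinks like 1/\sqrt(N)); the marginal gain from one more measurement approaches the derivative 1/(2\sqrt(N)):

```python
import math

# Precision of an N-measurement average grows like sqrt(N), so the
# marginal gain of measurement N+1 is sqrt(N+1) - sqrt(N), which
# shrinks toward the derivative 1/(2*sqrt(N)) as N grows.
gains = [math.sqrt(n + 1) - math.sqrt(n) for n in range(1, 101)]

# Each measurement teaches you less than the last:
assert all(a > b for a, b in zip(gains, gains[1:]))
```

The first measurement buys a full unit of precision; by the hundredth, one more measurement buys only about a twentieth of a unit.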
Ah. Well I talk about the cost of data acquisition for a reason.
I've seen a number of businesses that have a current body of active users, and that body does not change fast. So when they run an A/B test, before long their active users are all in it, and before much longer, those users who would have done X have done X, and data stops piling up. In that case there is a natural amount of data to collect, and you've got to stop at that point and do the best you can.
Businesses are as alike as snowflakes - I am happy to talk about generalities but in the end you have to know what your business looks like and customize to that.
However, consider this. To go from 95% to 99% confidence takes 73% more data collection. So for 73% more data, your error rate drops from 5% to 1%: 5x fewer mistakes.
To go from 95% to 99.9% confidence takes 182% more data. So for less than 3x the data, your error rate drops from 5% to 0.1%: 50x fewer mistakes.
My point remains. Confidence improves very, very rapidly.
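The arithmetic behind those figures can be checked directly. A sketch, assuming a two-sided z-test where required sample size scales with the square of the critical z-value:

```python
from statistics import NormalDist

def z(conf):
    """Two-sided critical z-value for a given confidence level."""
    return NormalDist().inv_cdf(1 - (1 - conf) / 2)

# Sample size scales with z^2, so the extra data needed to move
# between confidence levels is the ratio of squared z-values.
extra_99  = z(0.99)  ** 2 / z(0.95) ** 2 - 1   # ~0.73 -> 73% more data
extra_999 = z(0.999) ** 2 / z(0.95) ** 2 - 1   # ~1.82 -> 182% more data
```

A 73% increase in data cuts the error rate fivefold (5% to 1%); not even tripling the data cuts it fiftyfold (5% to 0.1%).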