Adaptive sampling-based trust-region optimization has emerged as an efficient approach for solving nonlinear and nonconvex problems in noisy derivative-free environments. Algorithms in this class proceed by iteratively constructing local models from objective function estimates, each obtained with a carefully chosen number of calls to the stochastic oracle. In this paper, we introduce a refined version of this class of algorithms that reuses information from previous iterations. This refinement reduces the computational burden without sacrificing consistency or worsening the work complexity required to attain the same level of optimality, as we demonstrate through numerical experiments using the SimOpt library.
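To make the general template concrete, the following is a minimal illustrative sketch of an adaptive sampling trust-region iteration in one dimension; it is not the paper's algorithm. The oracle, the noise level `sigma`, and the sample-size rule tying the number of replications to the trust-region radius `delta` are all assumptions made for illustration.

```python
# Illustrative sketch (assumed details, not the paper's algorithm): a
# derivative-free trust-region loop on a noisy 1-D objective, where the
# number of oracle calls per point adapts to the trust-region radius.
import random

random.seed(0)

def oracle(x):
    """Noisy evaluation of f(x) = (x - 2)^2 (hypothetical test problem)."""
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.1)

def estimate(x, delta, min_n=5, sigma=0.1):
    """Sample-average estimate; the sample size grows as the trust region
    shrinks, so estimation error stays small relative to model accuracy."""
    n = max(min_n, int((sigma / delta ** 2) ** 2) + 1)
    return sum(oracle(x) for _ in range(n)) / n

def trust_region_step(x, delta):
    # Build a quadratic interpolation model through x - delta, x, x + delta.
    f_m = estimate(x - delta, delta)
    f_c = estimate(x, delta)
    f_p = estimate(x + delta, delta)
    g = (f_p - f_m) / (2 * delta)            # model gradient
    h = (f_p - 2 * f_c + f_m) / delta ** 2   # model curvature
    # Minimize the model within [-delta, delta] (Cauchy-style step).
    if h > 1e-12 and abs(g / h) <= delta:
        s = -g / h
    else:
        s = -delta if g > 0 else delta
    cand = x + s
    # Accept or reject based on estimated decrease; update the radius.
    if estimate(cand, delta) < f_c:
        return cand, min(2 * delta, 1.0)
    return x, delta / 2

x, delta = 5.0, 1.0
for _ in range(30):
    x, delta = trust_region_step(x, delta)
```

After the loop, `x` lies near the minimizer at 2; the variant studied in the paper would additionally reuse estimates gathered in earlier iterations instead of resampling every point from scratch.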