The adaptive-sampling trust-region method, ASTRO-DF, is a prominent algorithm for stochastic derivative-free optimization. Its salient feature is an easy-to-understand-and-implement strategy of maintaining ``just enough'' replications when evaluating points throughout the search, which guarantees almost-sure convergence to a first-order critical point. To reduce ASTRO-DF's dependence on the problem dimension and boost its finite-time performance, we present two key refinements: (i) local models with diagonal Hessians constructed on interpolation points drawn from a coordinate basis, and (ii) direct search over the interpolation points whenever possible. We demonstrate that refinements (i) and (ii) retain the convergence guarantees while matching existing iteration-complexity results. Uniquely, our $\mathcal{O}(\epsilon^{-2})$ iteration-complexity result holds without placing assumptions on the quality of the iterative models or on their independence from the function estimates. Numerical experiments on a testbed of problems and comparisons against popular existing algorithms reveal the computational advantage that the proposed refinements confer on ASTRO-DF.