We consider solving convex problems that satisfy a quadratic growth condition (QGC) in a distributed setting with no central server. Such problems arise widely in distributed machine learning applications. When the growth parameter c is known, we propose distributed accelerated gradient methods with restarts, named PDACA and DACA for the constrained and unconstrained settings respectively. For practical problems where c is unavailable, we design the mPDACA and mDACA methods for the constrained and unconstrained settings respectively, introducing novel distributed mechanisms that update estimates of the growth parameter c using only local quantities derived from local proximal operators or local gradients. We further derive theoretical guarantees, together with gradient-computation and communication complexities, for all four proposed algorithms. Extensive numerical experiments on logistic regression over different communication topologies showcase the utility of our algorithms in comparison with baseline methods.
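For context, a common statement of the quadratic growth condition, which we take as a stand-in for the paper's formal assumption, is that the objective grows at least quadratically with the distance to the solution set:
\[
f(x) - f^\star \;\ge\; \frac{c}{2}\,\mathrm{dist}\big(x, \mathcal{X}^\star\big)^2 \quad \text{for all feasible } x,
\]
where $f^\star$ denotes the optimal value, $\mathcal{X}^\star$ the set of minimizers, and $c > 0$ the growth parameter (the symbols $f^\star$ and $\mathcal{X}^\star$ are illustrative notation, not taken from the abstract). QGC is weaker than strong convexity, yet it is the structural property that allows restarted accelerated methods to attain linear convergence rates.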