Traditional power analysis focuses narrowly on statistical significance, overlooking other inferential risks concerning the uncertainty of estimates under hypothetical replications of a study. To evaluate inferential risks related to effect size estimation, Gelman and Carlin (2014) proposed the “Design Analysis”, in which Type M [magnitude] error (i.e., the average overestimation of the actual effect) and Type S [sign] error (i.e., the probability of estimating the effect in the wrong direction) are considered together with statistical power. These errors highlight the less obvious consequences of underpowered studies for effect estimates, helping researchers make informed choices both in the design phase of a new study and in the interpretation of previous results in the literature. We introduce PRDA (Prospective and Retrospective Design Analysis), an R package available on CRAN that allows researchers to perform a “Design Analysis” in the case of Pearson’s correlation between two variables or mean comparisons (one-sample, paired, two-sample, and Welch’s t-test; see Altoè et al., 2020 and Bertoldo et al., 2020). Given a plausible effect size (or a prior distribution for it), researchers can conduct a Prospective Design Analysis to evaluate, in the planning stage of a study, the sample size required to obtain a given level of power. A Retrospective Design Analysis, instead, is performed to evaluate the inferential risks associated with studies that have already been conducted. The main aim of the PRDA package is to enhance researchers’ reasoning about inferential risks, avoiding automated decisions.
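To make the quantities above concrete, power, Type S error, and Type M error can all be approximated by simulating hypothetical replications of a study. The sketch below, written in Python purely for illustration (it is not the PRDA API, whose functions are in R; the function name and parameters here are hypothetical), implements a Monte Carlo version of Gelman and Carlin’s (2014) computation under a normal approximation, given a plausible true effect and its standard error.

```python
import random
from statistics import NormalDist

def design_analysis(true_effect, se, alpha=0.05, n_sims=200_000, seed=1):
    """Monte Carlo sketch of a retrospective design analysis (Gelman & Carlin,
    2014) under a normal approximation. Returns (power, type_s, type_m):
    - power:  probability that a replication reaches significance
    - type_s: probability a significant estimate has the wrong sign
    - type_m: average exaggeration of a significant estimate (>= 1 means
      significant results overestimate the true effect on average)
    Illustrative only; not the PRDA package's interface."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    significant = []                               # significant replications
    for _ in range(n_sims):
        est = rng.gauss(true_effect, se)           # one hypothetical replication
        if abs(est) / se > z_crit:                 # two-sided z-test
            significant.append(est)
    power = len(significant) / n_sims
    type_s = sum(e * true_effect < 0 for e in significant) / len(significant)
    type_m = (sum(abs(e) for e in significant)
              / len(significant)) / abs(true_effect)
    return power, type_s, type_m
```

For example, a small true effect measured with a large standard error (an underpowered design) yields low power, a non-negligible Type S error, and a Type M error well above 1, whereas a precisely measured effect gives power near 1 and a Type M error near 1.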