Choosing a Research Design

Lessons from the Grameen America Formative Evaluation

This post is one in a series highlighting MDRC’s methodological work. Contributors discuss the refinement and practical use of research methods being employed across our organization.

This post discusses the process we used to select a research design for the evaluation of the Grameen America program, a microfinance model that provides loans to low-income women in the United States who are seeking to start or expand a small business. The first step was to determine whether the program was strong enough to study: before embarking on an impact evaluation, we needed to be sure that there was something worth evaluating and that the conditions were in place to make the study feasible.[1] For example, in the case of Grameen America, it was important to assess whether participants were receiving loans and persisting in the program before evaluating the program’s effects. Once the study’s worth and feasibility were established, we used the information we had gathered to determine the most appropriate research design.

Choosing a research design

The process of choosing a research design is not always easy; each design has particular requirements. The table below lists several questions that can help guide the decision, given the context of the program and study. The answers show how the options were ruled out in the Grameen America evaluation. Working with Grameen, we considered the following research designs:

  • A longitudinal tracking study follows study participants over time and collects data to measure their outcomes.

  • Random assignment divides study participants into a “treatment group” (or “program group”) that is eligible to receive program services and a “control group” that is not eligible. Comparing the outcomes of the groups over time allows us to estimate the impacts of the program.

  • In a regression discontinuity design, researchers take advantage of a threshold in the program eligibility criteria (for example, a test score or income threshold). Individuals above (or below) the threshold serve as the treatment group and individuals below (or above) the threshold serve as the comparison group. The estimated impact is defined only for individuals very close to the threshold. The validity of the design is based on the assumption that at the threshold, the design is equivalent to a random assignment design.

  • Propensity score matching is a method for identifying a comparison group that has observed characteristics similar to those of the treatment group.

  • A comparative interrupted time series uses longitudinal data for a treatment group and a matched comparison group to estimate the effects of an intervention. The analysis compares the two groups’ deviations from their baseline trends after the intervention.
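Of these designs, random assignment is the most straightforward to reason about: because assignment to the two groups is random, a simple difference in mean outcomes is an unbiased impact estimate. The sketch below illustrates the idea with simulated data; the sample size, outcome values, and the built-in 150-unit program effect are all invented for illustration and have nothing to do with actual Grameen America results.

```python
# Minimal sketch of a random assignment impact estimate,
# using simulated data (all numbers are invented for illustration).
import random
import statistics

random.seed(42)

def simulated_outcome(in_program: bool) -> float:
    """A simulated outcome (say, monthly income); the +150 'true'
    program effect is an assumption baked into the simulation."""
    base = random.gauss(1000, 200)
    return base + (150 if in_program else 0)

# Randomly assign 500 study participants to treatment or control.
assignments = [random.random() < 0.5 for _ in range(500)]
treatment = [simulated_outcome(True) for a in assignments if a]
control = [simulated_outcome(False) for a in assignments if not a]

# Because assignment was random, the groups are comparable on average,
# and the impact estimate is simply the difference in mean outcomes.
impact = statistics.mean(treatment) - statistics.mean(control)
print(round(impact, 1))  # close to the simulated true effect of 150
```

The quasi-experimental designs in the list above all add assumptions to recover this same comparison without randomization, which is why they can be more sensitive to how those assumptions hold in practice.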

A literature review also revealed considerable debate about the effects of microfinance. Several previous studies of microfinance programs used quasi-experimental research designs, including some of those we considered, that were sensitive to statistical assumptions. Others used random assignment, but implemented it in such a way that there was only a small difference between the percentages of treatment and control group members receiving microloans (known as the “treatment contrast”), making the results ambiguous. These studies underscored the importance of using a rigorous design and of setting up the study to have a large treatment contrast.
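The treatment contrast matters because the overall treatment-control difference is diluted when many control group members also receive loans. Under the standard adjustment for this (often attributed to Bloom, 1984), the measured difference is roughly the effect of receiving a loan multiplied by the difference in receipt rates. The numbers below are hypothetical, chosen only to show the dilution:

```python
# Hypothetical illustration of why a small treatment contrast
# makes results ambiguous: the measured (intent-to-treat) effect
# is the effect on loan recipients scaled by the receipt-rate gap.
EFFECT_ON_BORROWERS = 200.0  # invented "true" effect of receiving a loan

def intent_to_treat(treat_rate: float, control_rate: float) -> float:
    """ITT estimate = effect on borrowers x treatment contrast."""
    return EFFECT_ON_BORROWERS * (treat_rate - control_rate)

# Large contrast (90% vs. 5% receive loans): the effect is visible.
print(round(intent_to_treat(0.90, 0.05), 1))  # 170.0
# Small contrast (50% vs. 40%): the same effect nearly disappears.
print(round(intent_to_treat(0.50, 0.40), 1))  # 20.0
```

With a small contrast, even a genuinely effective program produces a measured difference too small to distinguish reliably from zero.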

The recommendation

Given the specifics of the Grameen America program, we concluded that the non-random assignment designs were either infeasible or unlikely to produce a reliable evaluation of the program’s effects on poverty. Only a random assignment study could provide enough rigor in this case.[2]

This recommendation came with the acknowledgment that random assignment can pose operational challenges, and it requires careful design choices. A future Reflections in Methodology post will discuss how the random assignment design was implemented and the numerous adjustments we made to fit the research within the context of the Grameen America program.


[1] MDRC’s Self-Employment Investment Demonstration did this type of up-front work — often called a formative evaluation — and was one of the inspirations here.

[2] MDRC has used a similar research design selection process in other studies. In several cases, MDRC ended up implementing a quasi-experimental design. Future posts will describe two of these designs — comparative interrupted time series and regression discontinuity — in more detail and provide examples of how they were applied.