Current conversations around web-survey panelist quality lean heavily on simple heuristics like non-differentiation (commonly called "straight-lining"). The development and application of such heuristics usually ignore intersections with the scientific understanding of measurement error generally, and cognitive processing theory specifically. Using a sample of web-survey panelists, this experiment manipulated four questionnaire design variables (number of scale points, question order, question placement, and question difficulty), and compared non-differentiation across these conditions. Significantly greater non-differentiation occurred in questions with more cognitively taxing requests, and in items rated later in the questionnaire. The use of 5-point and 11-point scales elicited significantly different levels of non-differentiation, but the direction of the difference is less clear. Beyond holding implications for questionnaire design practice, the results also question the reliability and validity of a non-differentiation heuristic for identifying poor-quality web panelists.