
I am curious about the first steps in the process you outlined: 1) formulate a data analysis and presentation goal and 2) generate a series of “data questions.” How do they arrive at that goal? Are they thinking of the audience and their information needs? This may be part of step 1, but I think it could also be an initial step in itself, maybe Step 0: Identify the audience, their information needs (what questions do they want/need answered), the problem or challenge they are facing, their data proficiency, etc. I think that can help clarify the goal(s), the question(s), and the subsequent visualizations. From this perspective, I would also say that much of the time the intended audience's question(s) ARE the data questions. However, the audience may not know how to formulate these well, and the visualization designer or analyst will need to prompt them for clarity, precision, and revision. This is an iterative, time-consuming process.

I agree that people don't intuitively know how to formulate good data questions, and this is not something that is taught (well, or at all) in K-12 education. Students' and working professionals' initial data questions are often vague, not measurable, and/or simply unanswerable. I think people need to be taught the characteristics of good data questions (e.g., well-defined, answerable, actionable, unbiased, relevant, etc.), with examples, and given opportunities to critique and improve sample data questions as well as their own. They should also be taught how to dialogue with the audience/stakeholder so they can refine a bad data question into a good one.

I have found in my own teaching, as you have, that a lot of the time there is a lack of alignment between the audience's needs/stated purpose, the questions, and the visualization/analysis. I am a stickler for alignment. I think it's useful to give students samples (descriptions of audience, statements of purpose, questions, data visualizations), some that are aligned and some that lack alignment, along with a rubric or set of criteria, and ask them to critique the samples first (and justify their evaluations), and then use the same rubric/criteria to evaluate their own work. This can help them internalize what alignment looks like and apply it to their work.

These are just a few of my ideas and some things that I have done in my own work. Not sure if there's anything new or different here, but I thought I'd share them.
