The whole point of running a needs assessment survey is to get information. Before administering the survey, you should check your work so far to make sure it will get you what you want. You would hate to administer the survey, get confused or angry emails from participants who didn’t understand your questions, and then have to rework the survey and run it a second time. To reduce the chances of that happening, there are two tasks you need to complete to improve the quality of the data you will get from your needs assessment survey.
The first task is to review your questions and match them to the goals set by the scope of the survey we talked about in the first post of this series.
Go through the survey question by question and note which item in the scope each question relates back to. As you align questions to your scope, mark each scope item on the scope document with a check mark or, even better, the numbers of the questions that address it. When you are done, you will have a record of how the items are connected that you can show stakeholders to demonstrate that you have asked questions about each scope item. Cross-referencing questions to scope items will also help with data analysis after the survey has been administered.
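If you are tracking this cross-reference electronically rather than on paper, the idea can be sketched in a few lines of code. This is a minimal illustration only; the scope items and question numbers below are made up, and your own scope document would supply the real ones.

```python
# Hypothetical scope items for a training needs assessment.
scope_items = [
    "S1: current skill levels",
    "S2: preferred training formats",
    "S3: scheduling constraints",
]

# Map each survey question number to the scope item(s) it addresses.
# (These assignments are invented for illustration.)
question_to_scope = {
    1: ["S1: current skill levels"],
    2: ["S1: current skill levels", "S2: preferred training formats"],
    3: ["S2: preferred training formats"],
}

# Invert the mapping: for each scope item, which questions cover it?
scope_to_questions = {item: [] for item in scope_items}
for question, items in question_to_scope.items():
    for item in items:
        scope_to_questions[item].append(question)

# Report coverage, flagging any scope item no question addresses.
for item, questions in scope_to_questions.items():
    status = ", ".join(f"Q{q}" for q in sorted(questions)) or "NOT COVERED"
    print(f"{item} -> {status}")
```

Running a check like this makes gaps obvious: in the made-up data above, scope item S3 has no question attached to it, which is exactly the kind of hole you want to find before administering the survey, not after.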
The second task is to beta test your survey.
Beta testing involves having someone take your survey and then seeing what they thought you were asking. When I taught creative writing, one of the things I would have my students do is put their writing assignment away for a day or two. When they came back and re-read the work, they often had moments of confusion about their own writing. We make mental connections between things we write that dissolve with time. When we come back to it later, we say “What in the Sam Hill was I talking about?” Running a beta test with someone else doing the reading has the same effect – you get questioned about phrases, questions, or answers you thought were perfectly clear.
When choosing beta testers, try to find people with a background similar to your real participants who either know nothing about your survey scope or can set aside their prior knowledge and answer as one of your real participants would. Talk to your beta testers after they have completed the survey to clarify their impressions and decide whether you need to change the survey based on their reactions. However, never run a beta test on potential real survey participants and then have them take the entire survey again to record their answers for your data set. Their second attempt at the survey will always be somewhat influenced by their first.
Remember, don’t take their reactions personally. Their comments are about your survey, not about you as its developer. You asked for their feedback so you can improve the quality of the data you receive from the real survey. Use the feedback to develop a dynamite survey!
One more note on beta testing.
If your survey is going to be administered digitally (for example, using Google Docs or Survey Monkey), you would be wise to beta test it twice. The first round would use a paper version so you can get a sense of how well the questions and answers work. The second round, with different people so they aren’t influenced by what they already know, would use the interface you intend to use for the real survey. Your questions may be perfectly worded, but if the interface confuses the participants, you may get bad data that leads to bad decisions about training.