Appendix 3: Gap analysis process

This appendix presents the process adopted to complete the gap analysis. It shows an example of how the gap analysis was undertaken and finishes with lessons learned from that process.

The process

To help identify the information needs around our enduring questions and how well existing data informed the questions, we analysed the strengths, gaps, overlaps, and deficiencies of our data.

For this work, we asked subject matter and end-user experts to assess, for each of the questions and for each of the datasets:

  • How well does this dataset inform us about that question?
  • Given all the datasets, how well informed is that question overall?

Experts were given a spreadsheet with the questions along the top and the datasets listed down the side. They were asked to grade each dataset as zero, low, medium, or high to indicate how well they thought that dataset informed that question. Where a grade couldn’t be assigned (for example, where the expert didn’t know enough about a particular dataset), they left the cell blank.

Spreadsheets from all the experts were then combined, with a text string in each cell indicating the cumulative grading. The string had the format B:0:L:M:H, where each position listed the identifying numbers assigned to the organisations that gave that grade (blank, zero, low, medium, or high) to that dataset or question. The overall scores were listed at the bottom of the spreadsheet in the same way.
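
The sketch below (in Python, and purely illustrative rather than the tooling actually used) shows one way such a combined cell could be built. The data structure, grade codes, and organisation numbers are assumptions made for the example.

    # Minimal sketch of combining experts' grades for one cell into a B:0:L:M:H string.
    # Assumes grades are coded "0", "L", "M", "H", or "" (blank); organisation numbers
    # and the data layout are illustrative, not the actual Stats NZ format.

    def combine_cell(grades_by_org: dict[int, str]) -> str:
        """List, for each grade category, the organisations that gave that grade."""
        buckets = {"B": [], "0": [], "L": [], "M": [], "H": []}
        for org_number, grade in sorted(grades_by_org.items()):
            key = grade if grade in buckets else "B"  # blank or unrecognised -> B
            buckets[key].append(str(org_number))
        return ":".join(",".join(buckets[k]) for k in ("B", "0", "L", "M", "H"))

    # Example: organisations 1 and 3 graded this cell high, 2 left it blank, 4 graded it low
    print(combine_cell({1: "H", 2: "", 3: "H", 4: "L"}))  # -> "2::4::1,3"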

Using the experts’ scores, the following factors were used to assess how well the questions were informed:

  • the number of organisations assigning each of the five grades in the ‘overall’ scoring row for this question
  • the average scores across all datasets, for all organisations and each grading category
  • the maximum grade given for each question by each organisation
  • the weighted sum of the number of organisations scoring low (weight = 1), medium (weight = 3), and high (weight = 5) across all the questions and datasets.

These indexes were used to suggest an overall classification of the level at which each question was informed (low, medium, or high).
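
As a rough illustration of how these indexes could be computed for a single question, the Python sketch below codes the grades numerically and uses the weights listed above. It is a simplified stand-in for the spreadsheet calculations, not the actual implementation; the data layout and the numeric coding of grades are assumptions.

    # Minimal sketch of the scoring indexes for one question. Only the weights
    # (low = 1, medium = 3, high = 5) come from the text above; everything else
    # about the layout is an illustrative assumption.
    from collections import Counter

    WEIGHTS = {"L": 1, "M": 3, "H": 5}          # zero and blank contribute 0
    ORDER = {"0": 0, "L": 1, "M": 2, "H": 3}    # for finding the maximum grade

    def question_indexes(grades_by_org, overall_by_org):
        """grades_by_org: org -> {dataset: grade}; overall_by_org: org -> overall grade."""
        # Number of organisations assigning each grade in the 'overall' scoring row
        overall_counts = Counter(g if g else "B" for g in overall_by_org.values())
        # Weighted sum of low/medium/high gradings across all datasets and organisations
        all_grades = [g for grades in grades_by_org.values() for g in grades.values()]
        weighted_sum = sum(WEIGHTS.get(g, 0) for g in all_grades)
        # Average score across all datasets and organisations (blanks excluded)
        scored = [WEIGHTS.get(g, 0) for g in all_grades if g]
        average = sum(scored) / len(scored) if scored else None
        # Maximum grade given by each organisation across the datasets
        max_by_org = {org: max((g for g in grades.values() if g in ORDER),
                               key=ORDER.get, default="blank")
                      for org, grades in grades_by_org.items()}
        return {"overall_counts": overall_counts, "weighted_sum": weighted_sum,
                "average_score": average, "max_grade_by_org": max_by_org}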

The gap analysis spreadsheets were also used to assess how useful each dataset was in informing all the questions. The process here was to count the various grades across a row and then look for the highest number of ‘highs’ or ‘mediums’. A search for the datasets that generally produced zeros or lows was also used to highlight datasets that were not useful in informing these questions. However, this is not an evaluation of the value of the datasets, which may still successfully provide data for their intended purpose.
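
A similar row-wise count can flag the datasets that mostly attract zeros or lows. The Python sketch below is illustrative only; the ‘mostly uninformative’ rule of thumb is an assumption, as the report does not specify a cut-off.

    # Minimal sketch of summarising one dataset's row of grades across all questions.
    # Grade codes and the 'mostly uninformative' rule of thumb are illustrative assumptions.
    from collections import Counter

    def dataset_summary(row):
        """row: the grades ('0', 'L', 'M', 'H', or '' for blank) a dataset received
        across every question, pooled over all responding organisations."""
        counts = Counter(g for g in row if g)   # ignore blanks
        return {
            "highs": counts["H"],
            "mediums": counts["M"],
            "lows": counts["L"],
            "zeros": counts["0"],
            "mostly_uninformative": counts["0"] + counts["L"] > counts["H"] + counts["M"],
        }

    # Example: a dataset graded mostly zero or low across the questions
    print(dataset_summary(["0", "0", "L", "", "M", "0"]))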

In the example below (see figure 1) on the climate change topic, 10 organisations responded to the request to undertake the gap analysis.

The indexes showed that the first two paleoclimate datasets, the New Zealand Paleontological Database and the New Zealand Fossil Record File, had 23 zeros each against the climate change questions, indicating they may not be that useful in informing them.

Conversely, the Agricultural Production Survey had 11 ‘high’ scores, showing it is very useful in answering the climate change questions.

Question A ‘How is New Zealand’s climate changing?’ was highly informed, with six organisations agreeing that it was highly informed overall, and four organisations not providing a grade (ie the overall score was blank).

Figure 1

Gap analysis process spreadsheet for climate change


Lessons learned

We learned several lessons from the gap analysis.

We found that some experts graded each dataset by its ‘value’ rather than how well the dataset informed the questions. That is, they said ‘this dataset is highly valuable’ rather than ‘this dataset tells us a lot about the question’. For example, knowing where the petroleum reserves are is highly valuable, but only tells a little about the question ‘Where and what are New Zealand’s mineral resources?’ This meant there were more ‘highs’ in the columns than were represented in the overall score.

The enduring questions are complex. Often, an enduring question would be made up of multiple questions. This made it very hard to earn overall high scores. There may have been instances where part of an enduring question was well informed, but not all of it.

The different scoring indexes we used in the gap analysis process showed different results. This made it hard for us to assign an overall score. This issue is largely a reflection of the ‘value’ problem described above. We found that the most useful indicator was the ‘overall’ score, that is, whether a question had a low, medium, or high overall score.

Despite these limitations, the gap analysis process showed how well current information informed the supplementary enduring questions. This was reflected in the comments made in the workshops, where the shortcomings of the analysis were acknowledged and the conversation moved on. As the analysis was primarily there as a conversation starter, it was fit for purpose: it initiated thinking and discussion in the workshops on the prioritised initiatives.
