US Federal IT professionals recognise value, but face difficulties
Despite growing attention to the challenge of coping with big data, most government agencies in the USA are under-equipped to take advantage of the opportunities that analysing enormous volumes of data can provide, a survey has revealed.
This problem is unlikely to be specific to the United States. As the world generates ever more data, it is useful not just for businesses but also for local and national government institutions to analyse big data and act on it, and we can assume similar challenges will emerge across Europe.
President Obama recently announced the “Big Data Research and Development Initiative”, which underlines the advantages of harnessing big data for the benefit of local and national government. However, Federal IT professionals agree that the applications needed to extract useful insights from this data are still lacking.
According to a study from NetApp – called “the Big Data Gap” – US federal IT professionals mostly agree that big data can present valuable learning opportunities, but the “promise of big data is locked away in unused or inaccessible data”. Federal IT professionals also agree that effectively analysing big data can improve overall agency efficiency, speed up decision making and sharpen forecasting.
Much of this useful data is, unfortunately, locked away, according to NetApp. The report claims that just under a third of agency data is unstructured and therefore significantly more difficult to analyse – and this glut of unstructured data has been growing over the past two years. According to NetApp, agencies are also unclear about which department owns which data: 42 percent reported that IT departments own it, while 28 percent reported that the data belongs to the department that generated it.
Nine out of 10 government IT professionals recognised that there are some fairly large hurdles to jump when working with big data: 57 percent of those surveyed said, for example, that datasets have grown too large to work with using the infrastructure available to them.