Foresight is Becoming Data-Rich
The rise of big data is another easy way to grasp the new nature of foresight practice. Accelerating digitization creates vast new data pools, waiting to be better displayed to show current conditions, and analyzed for trends and hidden relationships. Books like Visualize This (2011) and Data Points (2013) help with visualization, and Naked Statistics (2014) helps with statistical thinking, as do the resources we listed earlier under statistical consulting.
When data in any field becomes fully representative of the system being studied, such that marginal new data falls into predictable types, classification systems, maps, and models begin to stabilize. GIS, CRM, SFA, ERP and other business automation domains were among the pioneering data frontiers of the twentieth century. These domains remain innovative, yet today, smartphones, wearables, the internet of things (machine-to-machine interaction), home automation, quantified self/fitness/health, sentiment analysis, social networks, conversational interfaces, and algorithmic trading are just a few of our new data frontiers.
On the governance front, the emergence of The Program, the NSA’s massive intelligence analytics platform for warrantless wiretapping and mapping relationships between all human beings on Earth, foreign and domestic, only became technically feasible in the early 2000s. Only in the last two decades did our world become sufficiently data-rich, and our machines sufficiently powerful, to do all this data-crunching affordably. See Frontline’s excellent United States of Secrets (2014) for a recent account.
The NSA’s domestic surveillance program became politically possible only after 9/11, for a president willing to subvert the law (as US presidents have tended to do, perhaps since the founding of the republic). How much top-down versus bottom-up (citizen-run) surveillance we should allow within the US, given our current levels of global and domestic development, is of course a complex political issue. Perhaps the most obvious failure of democracy is that our politicians have ducked even putting this issue up for public debate. What isn’t debatable is that every nation and even large corporations are now playing the data accumulation game, and good rules need to be established. Currently, intelligence agencies, marketers, hackers, criminals, and others do this with wide variation in legal justification, oversight, and transparency.
We’ve described the rapidly emerging field of predictive analytics, well introduced in books like Predictive Analytics (2013) and Big Data (2014). Every big company now has an internal or external data science team working to help it better model itself and its customers, anticipate their needs, and find hidden efficiencies in the data of how we all presently live, work, think, talk, buy, and behave. Almost all of this data (roughly 95%) is untagged and unstructured, and machine learning is one of several emerging methods for structuring and contextualizing it so we can better use it in our digital systems. See The Age of Context: Mobile, Sensors, Data, and the Future of Privacy (2013) for a quick introduction to the way our digital tools increasingly learn our contexts as they try to anticipate and serve us, and to some of the privacy challenges and solutions emerging as that happens.
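To make the idea of machine learning "tagging" unstructured data concrete, here is a minimal, purely illustrative sketch: a tiny Naive Bayes classifier that learns from a few hand-labeled text snippets and then attaches a structured label (here, a hypothetical sentiment tag, as in the sentiment analysis frontier mentioned above) to new raw text. The training examples and labels are invented for illustration; real predictive analytics systems use far larger datasets and more sophisticated models.

```python
from collections import Counter, defaultdict
import math

class NaiveBayesTagger:
    """Toy multinomial Naive Bayes: learns to attach a label
    (i.e., impose structure) on raw, untagged text."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def tag(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior for the label
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            v = len(self.vocab)
            # log likelihood of each word, with add-one smoothing
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / (n + v))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical toy training data
tagger = NaiveBayesTagger()
tagger.train("great product love it", "positive")
tagger.train("terrible service very disappointed", "negative")

print(tagger.tag("love this product"))            # likely "positive"
print(tagger.tag("terrible very disappointed"))   # likely "negative"
```

The same pattern, trained on millions of labeled examples rather than two, is how untagged streams of reviews, posts, and messages get converted into structured, queryable data.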