The Sherlock syndrome and the Darwinian data disruption
By Eric Hunter, Director of Knowledge, Innovation & Technology Strategies, Bradford & Barthel
It happened over the weekend - an entire weekend spent watching the BBC television show Sherlock. I realised that everything I've been saying and writing about big data, analytics and predictive analytics is encapsulated in one individual's perception of reality - Sherlock Holmes.
"Data, data, data, I cannot make bricks without clay" - Sherlock Holmes in The Adventure of the Copper Beeches
I've spoken on this concept from Hong Kong to London. We all have the ability to see the world as Sherlock does through the application of big data. We are in unprecedented territory in data gathering and analysis, and in the speed at which we can interpret, predict and put a constant stream of data into effect. However, unlike the mighty Holmes, we do not have to limit the amount of data we store in order to keep it on hand at all times. More importantly, we do not need to limit the amount of data we are able to interpret.
An example I like to present when speaking is an image from a scene within the King Kong movie remake by Peter Jackson. The image has a Tyrannosaurus rex on one side facing off with King Kong on the other, and standing between those two is a tiny Naomi Watts. The question I ask is, pretending you know nothing of the movie or the outcome, who ultimately survives in this scenario? The answer, of course, is Naomi Watts.
The point of this example is to illustrate how difficult it is to predict future trends and outcomes through emerging technologies. What seems obvious at first glance is not always as it appears. Currently, we are experiencing a market disruption of this kind through what I call the Darwinian data disruption.
Ryan McClead, a knowledge thought leader I spoke with at Managing Partner's 'Knowledge Management in the Legal Profession' conference in New York last autumn, best described this through an analogy with the human brain; I've built on that analysis many times since.
The neural network of the human brain, which rapidly develops and grows in complexity through early childhood and into adulthood, closely mirrors the evolution of the internet from its infancy through its growth across the globe. As social connections, searches and their ramifications increase in complexity, so do the computations, algorithms and data at hand, advancing in exponential leaps year on year.
As this continues, it will not be long before the human brain is surpassed in complexity. To see this clearly, consider Holmes in the BBC show: how he views the world around him, and how the visual interface presented to the viewer conveys the extreme complexity of the analysis he is able to perform. Essentially, big data and interpretive analytics give all of us the ability to harness Holmes' analytical tools, particularly his ability to see, cross-reference and observe everything. Big data allows us to see trends far more clearly and quickly than ever before.
Embedded technologies and biotechnology are leading us towards not only an always-online world but also an evolving, relative, infinite cloud of interpretative information with the capacity for total recall. Wherever we look, what we see will not only be recorded but also interpreted and analysed. What will our day-to-day perceptions, learning, work life, professional contacts and social lives be like then?
I'll reveal the full impact of the Sherlock syndrome in my forthcoming Managing Partner article, 'Darwinian disruption: How big data analytics will disrupt big business', and in my Managing Partner report, The Sherlock Syndrome: Big Data, Predictive Analytics & the Darwinian Business Disruption, due to be published this autumn.