Machine Intelligence in a Digital Economy: The ‘killer app’?

By Professor Alan W Brown

Machine Intelligence could well be the integrative mechanism that transforms data into genuine sources of new value. Could it be the ‘killer app’ for the digital economy?


The computer industry has a long history of investigation into Artificial Intelligence (AI), the attempt to have computers mimic human behavior. AI has long passed its fiftieth birthday (John McCarthy coined the term for the conference he organized in 1956), yet progress on intelligent data interpretation and machine learning remains, perhaps surprisingly, relatively embryonic. Or perhaps not so surprisingly: several important developments have only recently opened up the possibility for computers to begin to emulate human cognitive capabilities. Modern algorithmic software lies at the heart of AI and machine learning, and these algorithms can now simulate non-routine, cognitive tasks that help humans make meaningful decisions in all aspects of their business and social lives.

Machine Intelligence (MI) thus holds out the promise of making sense of large volumes of data by exploiting a combination of machine learning and artificial intelligence to yield entirely new sources of value. MI encompasses natural language processing, image recognition, and other algorithmic techniques to extract patterns from data, learn from them by assessing what they mean, and act upon them by connecting information together. MI is now possible because it can build upon:

  • a core set of cheap hardware capabilities provided in massive data centres that support large-scale data management (Data Lakes);
  • virtualized storage and compute power accessible over the Internet (Cloudification); and
  • managed distribution networks for architecting efficient systems that stitch together all the pieces of these complex systems (Interconnectivity).

Discussions in this area are complicated by the lack of a common vocabulary and by a set of terms that continues to evolve. Furthermore, common usage of some of these terms shifts as new ideas emerge. Here we offer a broad view of the key terms as commonly used.

Machine Learning

Machine learning is a form of data analysis that creates an evolving model of a problem from the data being analyzed. A set of algorithms processes the data with increasing accuracy as it is classified and assessed. In this sense the computer system learns from data and gains new insights without being explicitly re-programmed. As pointed out by SAS, by gleaning insights from this data – often in real time – organizations are able to work more efficiently or gain an advantage over competitors in domains such as financial services (fraud detection), government (cyber threat analysis), healthcare (medical diagnosis), and transportation (congestion avoidance).
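
To make the idea of learning from data concrete, here is a minimal sketch in Python. The tools (scikit-learn) and the synthetic fraud-style data are our own illustrative choices, not anything prescribed by the examples above.

```python
# A minimal sketch of "learning from data", using Python and scikit-learn
# (our choice of tools; no particular library is named in the text above).
# The synthetic "transactions" stand in for the fraud-detection example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 fake transactions with two numeric features and a fraud label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)          # the "learning" step: fit a model to the data
print("held-out accuracy:", model.score(X_test, y_test))

# When new labelled data arrives, refitting updates the model's behaviour
# without its decision logic being explicitly re-programmed.
```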

Big Data

Big data has become the general way in which we now refer to the challenge of a world in which massive amounts of information are generated every day from a broad range of devices: sensors used to gather information in the home, social media interactions, stored digital photos and videos, customer records and online order details, and performance data from instrumented mechanical devices to name a few. IBM highlights the key characteristics of big data in four dimensions: volume (creation and management of very large datasets), variety (heterogeneous collections of structured and unstructured items), velocity (speed of generation and processing of data streams) and veracity (assessing quality, timeliness, and accuracy of data).

Artificial Intelligence

Artificial intelligence refers to computer systems able to exhibit behaviors or perform tasks that normally require human intelligence. It is most frequently associated with cognitive tasks such as visual perception, speech recognition, decision-making, and translation between languages. More broadly, however, Accenture defines artificial intelligence as a collective term for multiple technologies that enable information systems and applications to sense, comprehend and act. That is, computers are enabled (1) to perceive the world and collect data; (2) to analyze and understand the information collected; and (3) to make informed decisions and provide guidance based on this analysis in an independent way.
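
As a purely illustrative sketch of that three-stage framing, the toy Python example below wires up a sense, comprehend and act pipeline; the thermostat scenario and every name in it are assumptions made for illustration, not part of Accenture's definition.

```python
# A toy skeleton of the sense / comprehend / act stages. The thermostat
# scenario, function names, and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Reading:
    room_temp_c: float


def sense() -> Reading:
    """(1) Perceive the world and collect data (here, a hard-coded reading)."""
    return Reading(room_temp_c=26.5)


def comprehend(reading: Reading) -> str:
    """(2) Analyze and understand the information collected."""
    return "too_hot" if reading.room_temp_c > 24.0 else "comfortable"


def act(assessment: str) -> str:
    """(3) Make an informed decision and offer guidance without human input."""
    return "switch cooling on" if assessment == "too_hot" else "do nothing"


if __name__ == "__main__":
    print(act(comprehend(sense())))   # prints: switch cooling on
```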

Machine Intelligence

Machine intelligence extends notions of artificial intelligence and machine learning through computing techniques that allow systems to predict future actions and behaviors. As Numenta describes, rule-based models and data pattern analysis can be augmented with behavioral models that characterize normal and abnormal activities. This is essential in many fast-evolving situations that involve many data sources, such as weather prediction, modeling virus propagation, and social media analysis.
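
To illustrate what characterizing normal and abnormal activity can look like in practice, here is a simple sketch of a streaming anomaly detector in Python. It uses a basic rolling z-score check rather than Numenta's own technology, and its window size and threshold are arbitrary illustrative choices.

```python
# A simple stand-in for modelling "normal" behaviour in a data stream and
# flagging deviations from it. This is a basic rolling z-score check, not
# Numenta's own technology; the window size and threshold are arbitrary.
from collections import deque
import math


def detect_anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value) for points far outside recent 'normal' behaviour."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in recent) / window) or 1e-9
            if abs(x - mean) / std > threshold:
                yield i, x
        recent.append(x)


# Example: a steady signal with one abrupt spike, which gets flagged.
signal = [10.0] * 200
signal[120] = 42.0
print(list(detect_anomalies(signal)))   # [(120, 42.0)]
```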

The Emergence of Machine Intelligence

Machine Intelligence is being seen as the ‘New New Thing’ by entrepreneurs and investors across the globe. CB Insights reported almost 400 deals in AI companies in 2015, and investment in such start-ups more than tripled between 2013 and 2015 to reach $2.4B. As this focus expands, Deloitte has recently predicted that investment in Machine Intelligence could grow to $50B over the next five years.

What is perhaps more significant is the interest being taken in this fast-expanding area by digital leaders such as Amazon, Google and Facebook. Given the vast amount of data that each of these companies has acquired, covering every aspect of our individual lifestyles, the race is now on to monetize this asset beyond pure advertising revenues. For example, Apple has acquired the Seattle-based machine learning start-up Turi for $200 million. Google has acquired the UK-based DeepMind for $500 million. Intel intends to acquire the Irish computer vision chip-maker Movidius. All these acquisitions point to a deliberate focus on data mining and machine intelligence.

The maturing of this sector of information technology reinforces the progress being made at the R&D level. Huge strides in computer hardware and software provide fertile conditions for a commercial breakthrough, as witnessed by the increase in start-up activity, especially in global innovation hubs such as Silicon Valley, London, and Tel Aviv.

Big companies are also investing in their own internal projects. IBM Watson is perhaps the most visible and long-standing Machine Learning programme. Since first coming to public attention in 2011 by winning the popular US game show Jeopardy!, Watson has developed to the point that it can now store and interpret data in many different fields, from medicine and disease diagnostics to food recipes. Google is building an open-source library under its ‘TensorFlow’ project that could ultimately be applied across the entire field of human knowledge. What is especially significant here is the rapid flow of R&D discoveries from academia into commercial environments, large and small. Companies such as IBM are betting their future survival on the speed at which they can turn these ideas into client success.

So, what do you think? Will the rise of the machines be the stuff of fiction, Hollywood – or something even stranger? One thing is sure – here at CoDE we’ll be digging below the surface of these developments, sorting out truth from fiction, and looking to provide insights that help accelerate their impact in business and society.


About the Author: Kris Henley

Communications and Outreach for Surrey Business School's Centre for the Digital Economy, a newly founded research centre exploring the implications of the Digital Economy for business, government and society.