There was a lot of talk about AI and machine learning at IBC 2018 – not least by us – as expected. This technology is making strides in production environments, but have you ever wondered what the difference is between the two?
In the parlance of computer science, “machine learning” uses statistical techniques to give computer systems the ability to progressively improve their performance on a specific task by collecting and analyzing data, without being explicitly programmed for that improvement. Machine learning is closely related to computational statistics, the basis of prediction through the use of computers. It is sometimes conflated with data mining, in which systems focus on exploratory data analysis; that approach is sometimes referred to as unsupervised learning.
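That idea of “progressively improving without being explicitly programmed” can be sketched in a few lines. The toy model below is never told the rule behind its data (here, an invented rule y = 3x); it simply adjusts one weight to shrink its prediction error on each example. The data, learning rate, and epoch count are all made up for illustration.

```python
# A minimal sketch of learning from data: a one-parameter model that
# improves at predicting y from x by repeatedly nudging its weight in
# the direction that reduces its prediction error.

def train(samples, epochs=50, lr=0.01):
    w = 0.0  # initial guess for the weight
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y      # how wrong the current model is
            w -= lr * error * x    # adjust the weight to reduce the error
    return w

# Examples generated from a hidden rule (y = 3x) the model never sees.
data = [(x, 3 * x) for x in range(1, 6)]
w = train(data)
print(round(w, 2))  # converges close to 3.0 after training
```

The model ends up near the true coefficient purely from examples, which is the essence of the statistical approach described above.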
Artificial intelligence (AI), on the other hand, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. AI research is the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with human minds, such as problem solving.
What counts as AI is changing all the time: as computers become increasingly capable, tasks once considered to require “intelligence” are often removed from the definition. Optical character recognition, for example, is no longer considered “artificial intelligence” because it has become routine technology.
In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computing power, the availability of large amounts of data, and better theoretical models. The techniques of AI have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.
While we can consider AI to be a blanket term that encompasses many media applications, the reality of our industry is that most of these software applications are not truly “intelligent,” and must be taught to do their particular function. In a real sense, we are applying machine learning tools, which have no intelligence other than the data we as trainers supply, to process or manage mundane, repetitive chores.
A clear example often discussed in our industry is the review and metadata annotation of all the assets stored in a deep archive through years of programming. A software tool assigned this task must “learn” to identify the images, distinguishing the important from the inconsequential, and must have a clear set of objects to note. Cloud-based systems that leverage crowd input can be useful for modern media applications, such as recognizing a location or a celebrity.
However, a unique library of historical media may have no external reference or crowd-sourced knowledge, and therefore must be personally “taught” to recognize the criteria for evaluation and annotation before it can be a useful tool. As media executives and broadcasters, we must recognize that regardless of what we call this service, it is really machine learning we are implementing, and there is an effort required to train the computer to become a useful tool in our applications.
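That “teaching” step can be illustrated with a deliberately simple sketch. Here a hypothetical curator labels a handful of archive frames (represented as invented two-number feature vectors), and a nearest-centroid classifier then annotates new frames against those learned criteria. The feature values and labels are fabricated for this example; a real annotation tool would work from far richer features.

```python
# Hypothetical sketch of training an annotation tool on curator-labeled
# examples, then using it to tag an unseen archive frame.

def centroids(labeled):
    """Average the feature vectors for each human-supplied label."""
    sums = {}
    for features, label in labeled:
        total, count = sums.setdefault(label, ([0.0] * len(features), 0))
        sums[label] = ([t + f for t, f in zip(total, features)], count + 1)
    return {label: [t / n for t in total] for label, (total, n) in sums.items()}

def annotate(features, model):
    """Tag a new frame with the label of the closest learned centroid."""
    return min(model, key=lambda label: sum(
        (f - c) ** 2 for f, c in zip(features, model[label])))

# Curator-supplied training examples: (invented features, label).
training = [
    ([0.9, 0.1], "studio interview"),
    ([0.8, 0.2], "studio interview"),
    ([0.1, 0.9], "location footage"),
    ([0.2, 0.8], "location footage"),
]
model = centroids(training)
print(annotate([0.85, 0.15], model))  # → studio interview
```

The point is not the algorithm but the workflow: without that human-supplied training set, the tool has no criteria at all, which is exactly the effort the paragraph above describes.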