Image analysis and neural networks – what happens when you teach machines to see?

Many technologies follow a ‘hockey stick’ curve. A great number of them seem to be around for years: they are talked about within the IT industry, white papers are written about them, but it takes a long time for them to actually reach the threshold of public consciousness.

This is certainly true for neural networks. This computational approach, loosely modeled on the way a biological brain solves problems, has been discussed, at least in theory, since as early as the 1940s. It is only recently, however, that neural networks have not only become a reality but have also found real, value-adding use cases.

Despite the early theoretical start, practical neural networks have only existed since the early 90s. In the early days, they were limited both by the available hardware and by an extremely restricted understanding of the field and its potential. This situation has changed dramatically over the last two decades: neural networks have been transformed into an extremely versatile tool thanks to advanced learning algorithms and the speed and processing power of modern hardware.

Image analysis is one field that is benefitting greatly from the computational capabilities of neural networks, with enormous potential to reach far beyond the practical limits of traditional programming. In fact, many of us already encounter basic forms of image processing daily, through the real-time face recognition offered by Facebook and by many digital cameras.

This capability is an astonishing leap forward from a technology perspective.

Most people can easily recognize their friends in photographs, but for a computer, this recognition is actually an amazingly difficult problem to solve. Because it is easy for us, we simply don’t appreciate the magnitude of the problems our brains solve based on visual input. The size of the problem only becomes obvious when we try to write a computer program to recognize people – that is when we finally start to appreciate quite how many characteristics are needed to describe faces in terms of algorithms.

In the early days of image analysis, before the advent of neural networks, we used programs written specifically for each problem to recognize image features. For example, we deployed a form of character recognition based on skeletonization, a process that reduces an image to a ‘skeleton’ which largely preserves the characteristics of the original image while eliminating most of its original pixels. To analyze a skeletonized image, the processing focused on the points where the extracted skeleton crossed or forked. Programmers had to write code detailing exactly how the system should recognize each element.
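
As a rough illustration of this older, hand-crafted approach, the sketch below skeletonizes a toy binary image and counts end points and fork points – the kind of features a programmer would then write explicit recognition rules against. It assumes NumPy, SciPy and scikit-image are available; the toy image and the neighbour-counting heuristic are illustrative, not a production recipe.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

# A toy 10x10 binary "stroke": a thick diagonal bar standing in for a character.
image = np.zeros((10, 10), dtype=bool)
for i in range(10):
    image[i, max(0, i - 1):min(10, i + 2)] = True

# Reduce the thick stroke to a one-pixel-wide skeleton.
skeleton = skeletonize(image)

# Classic skeleton analysis counts the 8-neighbours of each skeleton pixel:
# exactly one neighbour marks an end point, three or more mark a fork.
neighbours = convolve(skeleton.astype(int), np.ones((3, 3), dtype=int),
                      mode="constant") - skeleton.astype(int)
end_points = np.argwhere(skeleton & (neighbours == 1))
fork_points = np.argwhere(skeleton & (neighbours >= 3))
print(f"{len(end_points)} end points, {len(fork_points)} fork points")
```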

Neural networks, on the other hand, use an approach much closer to that of our own brains: they look for patterns. Also, instead of requiring specific code for each individual parameter, neural networks can be “trained” to look for a specific characteristic or pattern. This learning can then be applied to new problems, without having to specify every single new element in code.
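
As a minimal sketch of this “train, don’t hand-code” idea – assuming scikit-learn is installed, and using an illustrative network size rather than a tuned one – the example below fits a small neural network to scikit-learn’s built-in 8x8 digit images and simply lets it discover the patterns from labelled examples.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labelled 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No per-digit rules are coded: the network learns the patterns from examples.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"Test accuracy: {net.score(X_test, y_test):.2f}")
```

The same fitted model can then be applied to digits it has never seen, which is the point made above about reusing what has been learned without specifying every new element in code.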

As we’ve mentioned, this technology is already being applied today, most commonly in modern photography. Digital cameras can identify faces, which makes focusing easier, and can even identify smiling faces, or the faces of selected people, which helps us by automatically arranging portraits by subject. This is useful, but there are other, more far-reaching applications for the same technology: cameras are also capable of detecting health conditions or fatigue. This means it is possible to analyze the expressions of people engaged in activities that require high levels of concentration and reliability, such as driving cars, trucks or buses – and to provide early warning of any anomalies, such as a lapse in concentration, that may have serious consequences.
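
As a hedged sketch of camera-style face detection, the snippet below uses OpenCV’s bundled Haar-cascade detector – a classical method standing in for whatever a given camera or app actually uses, which may well be a neural network. The file name "photo.jpg" is a placeholder, not a file from this article.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade with the library.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one bounding box (x, y, w, h) per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```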

Although applications like these are potentially very interesting, in the short term the most important deployment of image processing by neural networks is in the area of manufacturing automation. When you give robots eyes, they become autonomous systems that can boost factory automation and increase productivity by taking over many simple, monotonous jobs.

For example, robotic eyes can monitor goods coming off a production line, analyzing each item for anomalies. For a human, this is a mind-numbing, error-prone task precisely because it is so boring. An image-processing robot, however, never tires.

Neural networks and their growing ability to recognize patterns are also being adopted in the financial services industry. By analyzing trading patterns, they can identify illegal activity such as insider trading. Similar forms of anomaly detection are also being used to identify cyberattacks or malware infections.
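
A minimal sketch of the underlying idea, under the assumption that “normal” behaviour can be learned and deviations flagged: the toy autoencoder below (built with scikit-learn’s MLPRegressor) is trained only on synthetic “normal” trades and scores new trades by how poorly it reconstructs them. The data, the two features and the network size are all illustrative assumptions, not a description of any real surveillance system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic "normal" trades: (price, volume) pairs standing in for real features.
rng = np.random.default_rng(0)
normal_trades = rng.normal(loc=[100.0, 10.0], scale=[5.0, 2.0], size=(500, 2))

scaler = StandardScaler().fit(normal_trades)
X = scaler.transform(normal_trades)

# A tiny autoencoder: the network learns to reproduce normal trades only.
autoencoder = MLPRegressor(hidden_layer_sizes=(4, 1, 4), max_iter=3000,
                           random_state=0).fit(X, X)

def anomaly_score(trades):
    """Mean squared reconstruction error; unusual trades score high."""
    Z = scaler.transform(trades)
    return np.mean((autoencoder.predict(Z) - Z) ** 2, axis=1)

# An ordinary trade versus one with an unusually large volume.
print(anomaly_score(np.array([[101.0, 11.0], [100.0, 80.0]])))
```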

Just a few years ago, leading scientific journals would typically reject papers on neural networks, because the subject matter was not widely regarded as science. This situation has clearly changed. However, many fundamental issues relating to neural networks remain unsolved, so it is impossible to predict how the technology will continue to evolve. For example, it is still unclear what the optimal depth of a deep learning net should be, or what the ideal parameters for training a network are. Also, neural networks are opaque in many respects, as they are not auditable or verifiable. Unlike traditional applications, where it is possible to examine the source code and improve or change it, it is impossible to see inside a neural network to understand what has actually been learned or to identify where possible issues lie.

What we do know now is that the technology unquestionably provides increasingly valuable results, although we are still far from having a mathematical framework we can rely on. One thing we can be sure of, however, is that neural networks in general, and image analysis in particular, will remain some of the most interesting and important topics in cutting-edge information technology for years to come. Watch this space.

The robots will be watching!

2 Comments

  • Turgut Haspolat
    January 4, 2017

    The biological brain is built from highly parallel connections; the majority of neurons are connected directly to thousands of other neurons. In effect, the human brain behaves as a massively parallel computing unit in which many instructions are executed simultaneously. But you would need a different kind of computing system to tackle this, because of the electrical patterns that hold all the information about whatever has stimulated the human brain. https://www.linkedin.com/pulse/3-endeavors-artificial-intelligence-stupidity-turgut-haspolat

  • Tim Moody
    January 5, 2017

    Thanks for the interesting post and happy new year! Image recognition is one of a number of technologies falling under the AI "banner" which will be moving from the realms of science into mainstream computing adoption in the next 12-24 months. I see the point around the auditability of AI decision making as critical as we move into broader adoption, but have seen little discussion of how this will be achieved. I am personally not particularly bothered if Google miscategorises an image, but I am worried if I am rejected for a loan application by a bot, or if manufactured goods I have bought fail because the bot had not been trained properly. Whether in practice this is better or worse than the current situation is an interesting discussion.
