A chess computer, a self-driving car, Siri and Google answering spoken questions, spam filters and even a calculator that solves a problem for you.
They all help us in our daily lives and use machine learning to do so.
Machine learning, a form of artificial intelligence (AI), is concerned with designing machines/devices that can learn from data.
They do this with the help of algorithms that are programmed by humans.

Recognizing Motion

Recognizing motion is not as easy as it sounds.
To begin with, we need to “tell” the software that a certain group of changing pixels in the image is a moving human.
And that person can move in different ways, for example by crawling, walking or cycling.
After that, with the help of the right algorithm, the software is able to fill in the countless possible positions between walking and crawling.
Even harmless, repetitive movements, such as swaying trees and shrubs, waving flags and noise in the image, can be filtered out by the intelligent software itself.
It is possible to instruct the software to ignore the part of the image where the bush is moving, but that also means that nothing is detected in that area at all.
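
To make “a certain group of changing pixels” concrete, here is a minimal sketch in Python (using only NumPy) of frame differencing with an ignore mask for a region such as a moving bush. The threshold, frame sizes and region coordinates are invented example values, not anything specified in this article.

```python
import numpy as np

def detect_motion(prev_frame, frame, ignore_mask=None, threshold=30, min_pixels=50):
    """Flag motion when enough pixels change between two grayscale frames.

    ignore_mask: boolean array, True where changes should be ignored
    (for example a bush that is always swaying).
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > threshold          # pixels that changed noticeably
    if ignore_mask is not None:
        changed &= ~ignore_mask         # drop changes inside the ignored region
    return changed.sum() >= min_pixels  # "a group of changing pixels"

# Two synthetic 100x100 grayscale frames: a bright "person" appears in the second.
prev_frame = np.zeros((100, 100), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:60, 40:50] = 255

# Ignore the top-left corner, as if a bush were swaying there.
ignore = np.zeros((100, 100), dtype=bool)
ignore[:20, :20] = True

print(detect_motion(prev_frame, frame, ignore_mask=ignore))  # True: motion outside the bush
```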

Patterns in motion

The next step is to teach the software how to distinguish a burglar from an innocent person.
Searching for suspicious persons in the recorded image is easy if we know the time of the burglary.
And by letting the software do its job at times when the presence of people in the field of view is unusual, we can go a long way in detecting unwanted activities in a timely manner.
But we also want to recognize (suspicious) behavior.
Patterns in movement can be analyzed.
To do this, we can use specific algorithms, such as loitering detection and trip wires.
These algorithms have been devised, programmed and made widely available by humans.
In the future, more and better algorithms will be added that contribute to correct recognition.
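
To give an idea of what such algorithms boil down to, here is a hedged sketch of a trip wire check and a simple loitering rule, working on a track of (x, y) positions recorded per frame. The wire position, zone and frame limits are made-up example values.

```python
def crosses_trip_wire(track, x_wire=50.0):
    """Trip wire: did the track cross a vertical line at x = x_wire?"""
    xs = [x for x, _ in track]
    return any((a < x_wire) != (b < x_wire) for a, b in zip(xs, xs[1:]))

def is_loitering(track, zone, max_frames=100):
    """Loitering: did the object stay inside the zone for too many consecutive frames?

    zone: (x_min, y_min, x_max, y_max)
    """
    x_min, y_min, x_max, y_max = zone
    inside_run = 0
    for x, y in track:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            inside_run += 1
            if inside_run > max_frames:
                return True
        else:
            inside_run = 0
    return False

# A track that crosses x = 50 and then lingers inside a small zone.
track = [(40 + i, 30) for i in range(20)] + [(60, 30)] * 120
print(crosses_trip_wire(track))                    # True
print(is_loitering(track, zone=(55, 25, 65, 35)))  # True
```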

The Cloud Plays an Important Role

An important tool in developing intelligence is the Cloud.
Images can be analyzed remotely via central and very powerful servers.
New insights and algorithms can then be added online to the local machine.
A shared database of objects and behaviors provides an enormous acceleration of development.
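
The sketch below illustrates that update loop: an edge camera periodically checks a central registry for a newer set of detection rules and pulls them in. The registry is represented by a plain dictionary purely as a stand-in for a real cloud service; the names and fields are invented for illustration.

```python
# Stand-in for a cloud-side registry of shared algorithms and settings.
# In a real system this would sit behind an API on central servers.
CLOUD_REGISTRY = {
    "version": 3,
    "rules": {"trip_wire_x": 50.0, "loiter_max_frames": 100},
}

class EdgeCamera:
    """Local device that keeps its detection rules in sync with the cloud."""

    def __init__(self):
        self.rules_version = 0
        self.rules = {}

    def sync_with_cloud(self, registry):
        """Pull newer rules if the cloud has a higher version number."""
        if registry["version"] > self.rules_version:
            self.rules = dict(registry["rules"])
            self.rules_version = registry["version"]
            return True   # updated
        return False      # already up to date

camera = EdgeCamera()
print(camera.sync_with_cloud(CLOUD_REGISTRY))  # True: new rules pulled in
print(camera.sync_with_cloud(CLOUD_REGISTRY))  # False: nothing new
print(camera.rules)
```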

Humans deliver the intelligence

So the intelligence in the camera system (read: the algorithms) still comes from us, the human beings.
By adding more and more of our intelligence to the software, it will become smarter and smarter.
Teaching the software all of this takes time.
Therefore, only relatively simple intelligence is currently available.
Is it a person, a tree or a car?
Is the car driving or stationary?
Are people standing still or running?
The algorithm for detecting suspicious behaviour does not yet exist.
It is difficult to develop a single algorithm that can detect all forms of suspicious behaviour, because it depends very much on the situation on the ground.
That is why customization is needed.
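
To show how “relatively simple” this intelligence still is, here is a sketch that labels a tracked object as stationary, walking or running purely from its average speed. The speed thresholds are invented example values; in practice they are exactly the kind of per-site tuning (camera distance, lens, mounting height) that the customization mentioned above comes down to.

```python
import math

def average_speed(track, fps=25.0):
    """Average speed in pixels per second over a track of (x, y) positions per frame."""
    if len(track) < 2:
        return 0.0
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(track, track[1:]))
    return dist * fps / (len(track) - 1)

def classify_movement(track, still_limit=5.0, run_limit=120.0, fps=25.0):
    """Very simple intelligence: stationary, walking or running?

    still_limit and run_limit are pixels-per-second thresholds that would
    need to be tuned per camera, which is where the customization comes in.
    """
    speed = average_speed(track, fps)
    if speed < still_limit:
        return "stationary"
    if speed > run_limit:
        return "running"
    return "walking"

print(classify_movement([(10, 10)] * 50))                        # stationary
print(classify_movement([(10 + i, 10) for i in range(50)]))      # walking  (25 px/s)
print(classify_movement([(10 + 8 * i, 10) for i in range(50)]))  # running (200 px/s)
```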

Software processes data faster than humans

So humans are still smarter and more flexible than the most intelligent software, both in recognizing things and, especially, in assessing situations.
The strength of the intelligent software is mainly the speed with which enormous amounts of information (data) can be processed.
And software is never distracted or tired.
Self-driving cars still make mistakes because the learning process is still in full swing, but eventually the car will react to a dangerous situation faster than we can.
Even while it looks for a better radio station for us at the same time.

Future

Ultimately, the development of hardware and intelligent software will help us more and more efficiently in analysing behaviour and in searching for the right camera images.
Machine learning in progress…

But humans are still smarter…

The human brain is still more intelligent than any machine or any device.
Ideally, humans and machines should work together to compensate for each other’s weaknesses.
The report ‘Preparing for the Future of Artificial Intelligence’ from late 2016 (USA) mentions a study in which a computer and a pathologist had to review images of cells from lymph nodes to determine whether they were cancerous.
The computer had an error rate of 7.5%, the pathologist 3.5%.
When the pathologist was assisted by the computer, the error rate was only 0.5%!