In the previous post, we talked about the living, which, in contradistinction to the non-living, can create a mental model of the surrounding world, predict, act based on the prediction, and correct the model based on the results of the action or simply on observation.
Modern methods of data analysis allow computers to build a model of the world around them too. Machine learning began with supervised learning algorithms, which compare new data with data that has already been labeled. For example, thousands of pictures of cats would first be loaded into a database. A program would then analyze new pictures and use the supervised learning algorithm to compare them with the pictures of cats in the database, looking for similarity and calculating the probability that a new picture is a picture of a cat.
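The idea can be sketched in a few lines. Here a nearest-neighbor classifier stands in for the supervised algorithm, and made-up two-number feature vectors stand in for the cat pictures; the features and values are illustrative assumptions, not real data.

```python
import math

# Toy labeled "database": each picture reduced to two made-up features.
labeled = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not cat"),
    ((0.2, 0.1), "not cat"),
]

def classify(new_point, k=3):
    """Compare a new sample with the labeled ones and take a majority vote
    among the k nearest neighbors."""
    nearest = sorted(labeled, key=lambda item: math.dist(new_point, item[0]))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

print(classify((0.85, 0.85)))  # → cat: its nearest neighbors are labeled "cat"
```

A real system would use thousands of images and learned features, but the shape of the process is the same: similarity to already-labeled examples drives the answer.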
There are also unsupervised learning algorithms that look for all possible patterns and calculate their “density.” Some of those patterns may turn out to be images of cats, but there is no label. This is closer to what humans do, looking for patterns all the time, everywhere. Only later may we learn what a particular pattern is called.
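A minimal illustration of finding patterns without labels is clustering. The sketch below runs a tiny k-means on unlabeled one-dimensional points (the values and the choice of two starting centers are assumptions for the example); the algorithm discovers two dense groups on its own, without ever being told what they are.

```python
# Unlabeled observations; two dense groups are hidden in the data.
points = [0.1, 0.15, 0.2, 0.9, 0.95, 1.0]

def kmeans_1d(data, centers, iters=10):
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = {c: [] for c in centers}
        for p in data:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (drop empty clusters).
        centers = [sum(pts) / len(pts) for pts in clusters.values() if pts]
    return sorted(centers)

print(kmeans_1d(points, centers=[0.0, 1.0]))  # two centers, near 0.15 and 0.95
```

No label ever says “this group is cats”; the algorithm only reports that two distinct dense patterns exist.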
The third group of basic machine learning algorithms is called reinforcement learning. It is a version of supervised learning where the teacher is the environment or a model of it. Such systems find probable solutions – balancing exploration (of the unknown) against exploitation (of the known) – using a feedback loop. This is even closer to the decision-making process humans follow.
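The exploration–exploitation balance and the feedback loop can be shown with an epsilon-greedy two-armed bandit, a standard reinforcement learning toy problem. The payout probabilities and the 10% exploration rate below are assumptions chosen for the sketch.

```python
import random

random.seed(0)

true_payouts = [0.3, 0.7]   # the environment's hidden reward probabilities
estimates = [0.0, 0.0]      # the agent's current beliefs about each arm
counts = [0, 0]
EPSILON = 0.1               # fraction of the time we explore

for step in range(1000):
    if random.random() < EPSILON:
        arm = random.randrange(2)              # explore: try a random arm
    else:
        arm = estimates.index(max(estimates))  # exploit: use what we know
    # Feedback from the environment: a reward of 1 or 0.
    reward = 1 if random.random() < true_payouts[arm] else 0
    counts[arm] += 1
    # Update the belief about this arm with a running average.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print([round(e, 2) for e in estimates], counts)
```

The environment never hands the agent a labeled answer; it only rewards or withholds, and the agent's model of “which arm pays” emerges from that feedback loop.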
All these algorithms have been known for a long time, but their power could be harnessed only with large amounts of data. Only after Google, Amazon, and the other giants of the Internet had accumulated enough data about their users did they let this genie out of the bottle and order it to work for them and generate more revenue. Other companies started to use machine learning too.
Applied to artificial intelligence, these algorithms made it possible to imitate human thinking reasonably well. For the purpose of modeling, it is assumed that there are three levels of consciousness.
The first level is where the input data are processed automatically, all the time. Here predictions are generated and compared with the incoming data from outside. They are compared not bit by bit, but only on a few key aspects, which makes the computation far less resource-consuming. Nature figured this out millions of years ago. Systems of artificial intelligence work quite well on this level.
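The saving from comparing key aspects instead of everything can be made concrete. In this sketch the “key aspects” are three summary features of an observation; the feature choice is an assumption for illustration.

```python
# Compare a prediction with incoming data on a few key aspects,
# not bit by bit (the choice of features here is illustrative).
def key_aspects(frame):
    """Reduce a large observation to a handful of summary features."""
    return (min(frame), max(frame), sum(frame) / len(frame))

predicted = [0.2, 0.4, 0.6, 0.8] * 1000   # the model's expected input
observed  = [0.2, 0.4, 0.6, 0.8] * 1000   # what actually arrived

# Comparing 4000 numbers element by element is expensive; comparing
# three summary features is cheap and often enough to detect surprise.
surprise = key_aspects(predicted) != key_aspects(observed)
print(surprise)  # → False: the observation matched the prediction
```

When the summaries disagree, the system knows its prediction failed and can escalate to a closer look, which is the economical strategy the paragraph describes.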
The next level is where decisions are made. Many artificial intelligence systems work on this level too. Some of them work on the next level—the third one—as well. That is where a system is able to observe its own behavior. On this level, a system can pass the Turing test without a hitch: talking to such a system through an intermediary, we cannot distinguish between a human and a robot.
Does it mean that such a robot is conscious?
A robot can acquire experience (data) but does not have the same deep connection with the world that a living organism has. AI is constructed starting from a certain level of imitation, so its consciousness is limited by that level. In its behavior and speech, a robot can look very much like a human. But its connection with the world lacks the biological component that allows the same depth of connection.
However, we continue to study ourselves, and one day we will get to the basics of our own consciousness. Then the model used by AI will be comparable in complexity and depth to whatever happens in our brains. Maybe then AI will be able to evolve.
We shall see. Literally. Because it is going to happen in our lifetime.
Send your comments using the Contact link or in reply to my newsletter.
If you do not receive the newsletter, subscribe via the Subscribe link under Contact.