Friday, 1 August 2014

Google Translate and How Different Are We?


Have a go at the video. It's quite short.

The idea that stuck with me in this, the one that has always sat at the back of my head, was that of a computer learning things like we do. Yes, we are the ones who taught it to learn, but once it knows how to learn-- forgive me for repeating what every sci-fi movie has ever said-- what can it not learn?

It is often said that every thought, every action, and every tiny little thing that goes on in our heads can be narrowed down to two simple things-- a yes, or a no. So we are not really that unpredictable. It requires a lot of learning, immense study and time. Perhaps one lifetime will not be enough. But I think that, even if not immediately, our actions can definitely be predicted very accurately. After all, every little thought that comes to our mind is a direct or indirect result of some external factor. The particular way that the thought arrives depends on our attitude.

Attitude is not unpredictable either.

To understand this further, let us think of ourselves as computers that can learn really fast.

You've got to give that computer information, which it can interpret, process and store. It uses that information for the problems it will be given in the future. And let us assume that these computers come with some default information that sometimes overrides the knowledge they have learned from their environment.

The computer has got no say in what type of information it has by default. And the stickier part is that this default information can be masked or overridden, but it cannot be deleted, because it has come with the computer as a package. You destroy the default info, you destroy the computer. You can't eat the cake and have it too.

The default info has a very peculiar characteristic: it decides the way the computer interprets data. View it like a translator, and an unreliable one. How it got the grammar it has depends on a whole other scenario involving its parent computers, and we will ignore that for now.

So it comes down to this:

The computer exists. That is for sure.

It has some info by default from the time it was manufactured.

This default info is a set of masks, or screens through which the computer interprets data.

Raw data is input into the computer by whatever is around it, inadvertently or intentionally.

The computer learns what it has been given as input, and also observes and judges the environment that is giving the inputs.

It can also recognize the type of environment by the input it receives, and, conversely, the type of input it might get from the environment it is in.

This knowledge is stored, again, in a form that the default information sees fit.

The default info cannot be erased. It can be overridden by priorities or hidden by viruses, but it does not die.

What the info has screened, it has screened. A part of that data can never be changed, although a large part of it can be re-understood by temporarily disabling the unreliable translator.

Every re-translating system is built on an algorithm that is a solution to a problem, which has arisen from inputs that the computer has received, which are completely random. Which means, every new translator that the computer might come up with is, ultimately, random.
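If it helps to see the metaphor laid out, here is a tiny toy sketch in Python of the setup above. Every name in it (Translator, Computer, receive, build_new_translator) is made up for illustration; it is the metaphor written out as code, not a claim about how real minds or real AI work.

    import random

    class Translator:
        # An interpretive filter: it decides how a raw input is understood.
        def __init__(self, bias):
            self.bias = bias  # a single number standing in for "attitude"

        def interpret(self, raw_value):
            # The same raw input gets stored differently depending on the bias.
            return raw_value * self.bias

    class Computer:
        def __init__(self, default_bias):
            # The default translator ships with the machine and can never be deleted.
            self.default = Translator(default_bias)
            self.override = None   # a learned translator may mask the default one
            self.memory = []       # knowledge, stored as already-translated data

        def active_translator(self):
            return self.override if self.override is not None else self.default

        def receive(self, raw_value):
            # Every input passes through whichever translator is currently active.
            self.memory.append(self.active_translator().interpret(raw_value))

        def build_new_translator(self):
            # A new translator grows out of the (random) inputs seen so far,
            # so it is itself, ultimately, random.
            learned_bias = sum(self.memory) / len(self.memory) if self.memory else 1.0
            self.override = Translator(learned_bias + random.uniform(-0.1, 0.1))

    # The environment feeds the computer random situations.
    c = Computer(default_bias=0.7)
    for _ in range(20):
        c.receive(random.random())
    c.build_new_translator()   # the default is masked, never erased

The only point of the sketch is the one above: the default translator can be masked by a learned one, but it never leaves the machine.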

If you devote your entire life to understanding one particular computer, you can. Because nothing in the list above is indeterminable. Everything can be determined. It's just insanely hard, because, to predict what the computer might do next, based on its AI, you need to:

Figure out the default program by understanding the way it has translated a lot of data, and find out the patterns that repeat.

Figure out the quality of AI it has, whether it is a primitive one, or one that makes updates to itself.

Be alert, so that when the computer decides to disable the default translator and generates a new one based on the data it has received, you can observe the change in the translator system. Note that the new translator is also affected by the previous translator. It is especially concentrated on doing what the default one could not. It is also likely that, in an attempt to overcome the limitations of the default translator, the computer has gone and generated a new one that does everything the previous one cannot, but forgot to include the things it can.

The next thing that you should do is, once you think you have enough information, plot the data you have on a graph and trace the curve.

Keep on drawing curves until you have many curves.

Put all the curves together and observe them. If you can predict the general path of any one of the curves, then you can find out what the computer might do next.

If you can find out what the next curve might look like, based on the hundreds of curves you have, then you can find out what the next translator is going to look like.

If you can find out what the next translator might look like, and if it turns out to be correct, congrats, you have figured out the computer.
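For the graph-drawing steps above, here is a rough sketch, again in Python, with numbers invented purely for illustration. Each "session" is a handful of situation/response pairs you have logged; numpy.polyfit traces a curve through them, and the last lines guess the next curve by extending the drift in the fitted coefficients. It is only one crude way of doing what the steps describe.

    import numpy as np

    # Each "session" is a handful of (situation, response) pairs logged while
    # watching the computer; the numbers are made up purely for illustration.
    sessions = [
        [(0, 0.1), (1, 0.4), (2, 0.9), (3, 1.7)],
        [(0, 0.2), (1, 0.5), (2, 1.1), (3, 1.9)],
        [(0, 0.2), (1, 0.6), (2, 1.2), (3, 2.1)],
    ]

    # Trace one curve per session: fit a quadratic through the logged points.
    curves = []
    for session in sessions:
        x, y = zip(*session)
        curves.append(np.polyfit(x, y, deg=2))   # the coefficients describe the curve

    # Put all the curves together and look at how the coefficients drift.
    coeffs = np.array(curves)
    drift = coeffs[-1] - coeffs[0]

    # Guess the next curve by extending that drift one more step.
    predicted_next_curve = coeffs[-1] + drift / (len(coeffs) - 1)
    print("predicted coefficients of the next translator:", predicted_next_curve)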

If you find all this very confusing and do not know what I am talking about, here is a little glossary.

Computer: Human.
Default prog/default translator/default info: Attitude that comes by birth.
New translators: The attitudes that we adopt by compulsion of situation.
Artificial Intelligence/AI: Human intelligence.
Inputs: Situations.

Note that you cannot decide or predict what inputs the computer might be receiving. You can, however, if you have figured out the computer pretty well, find out what environment it might put itself in, and how it might react to a set of situations. So the prediction only works if you can guess pretty correctly what situations it might be getting. Otherwise, all the information you have is just an explanation of what it has already done.

The computer still exists, and it still thinks it can control everything in its way. Here we are, laughing at the way the little thing thinks it has free will.

So, are we really different from computers? You can say having a default prog makes us different, but what I say is, since we are a different species than computers, we just have a different build, but we are equally predictable. I am not saying our lives are predictable, but how we respond to the lives we have is predictable.

This is not depressing to me. I imagine a big someone watching every move in my life, and gathering data. I would want to take deliberately unreasonable, out-of-character steps just to mislead him. And that would make me the hardest machine to predict, ever. At what point do you think he will realize that even though we do not have free will, the fact that we think we do makes all the difference? I don't care. I don't want to be plotted on a graph and be predictable.