thoughts on artificial intelligence

Posted by MahFreenAmeh on Dec. 20, 2012, 2:02 p.m.

he-lo 6-4-digits

it is i, the person who is a person. ok, introductions aside, because i pretty much hate those in any universe.

so i need to come here and write a bunch of stuff down so i have a repository of information that once it's written… okay, let's just start

I've been thinking about the representation of a CPU as a very constrained version of a human brain. After speaking with Josh@Dreamland about it, we came to the agreement that both a human brain and a CPU are ultimately just finite state machines, designed to process the current state and respond to it accordingly.

I, however, have a little bit of a disagreement when it comes to the particulars. Namely, I don't believe the brain to be a finite state machine of any sort. While, yes, it's true that there are only a finite number of configurations for a given aggregate of modifiable variables, that doesn't mean the machine processing them has to have a finite number of states. That would be horribly inefficient, like converting a non-deterministic finite automaton into a deterministic one: you have to store an explicit transition for every possible set of states the automaton could be in, which in the worst case means exponentially many.
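To make that concrete, here's a rough sketch of the classic subset construction, where every DFA state is a *set* of NFA states, so an n-state NFA can blow up to 2^n DFA states. The names are just my placeholders:

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accepting):
    # delta maps (state, symbol) -> set of possible next states (the
    # non-determinism). Each DFA state below is a frozenset of NFA states.
    start_set = frozenset([start])
    dfa_states = {start_set}
    dfa_delta = {}
    worklist = [start_set]
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            # Union of everywhere the NFA could go: this has to be stored
            # for every reachable set of states, hence up to 2**n DFA
            # states for an n-state NFA.
            nxt = frozenset(chain.from_iterable(
                delta.get((s, symbol), ()) for s in current))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in dfa_states:
                dfa_states.add(nxt)
                worklist.append(nxt)
    dfa_accepting = {s for s in dfa_states if s & accepting}
    return dfa_states, dfa_delta, start_set, dfa_accepting
```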

My theory is that the human mind is not a finite state automaton so much as a dynamic state automaton. It takes into account all of the past input it has been given to dynamically generate an algorithm that weighs all the necessary variables, predicts what may happen if one changes based on past experience, and uses that to work out what the most positive outcome is.
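I don't have a formal definition for that yet, but here's a loose sketch of the idea, assuming the "dynamic" part just means the transition rule is derived from accumulated history rather than from a fixed table (all the names here are hypothetical):

```python
from collections import Counter, defaultdict

class DynamicStateAutomaton:
    def __init__(self):
        self.history = []                     # every (state, input) ever seen
        self.outcomes = defaultdict(Counter)  # (state, input) -> next states seen

    def observe(self, state, inp, next_state):
        # Accumulate experience instead of consulting a fixed table.
        self.history.append((state, inp))
        self.outcomes[(state, inp)][next_state] += 1

    def predict(self, state, inp):
        # "Dynamically generate the algorithm": the predicted next state is
        # whatever past experience says is most likely, if anything.
        seen = self.outcomes.get((state, inp))
        return seen.most_common(1)[0][0] if seen else None
```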

Now, I decided to speak with another friend of mine, someone more versed in circuitry, engineering, and low-level hardware, who gave me his perspective: in the end, a CPU is basically a single Arithmetic Logic Unit, whereas in the brain every individual neuron or cluster of neurons can act as an ALU. He posited that to accurately model a human brain, one would need either a quantum computer with millions of qubits, or possibly a neural network of many computers interacting together, with each node acting as a neuron. I thought that was a pretty interesting concept, but it's really not relevant to what I'm talking about here.

What came of all of this was a way I thought of for training any given AI. While it's true that there are many training methods, usually involving feeding a pipeline of data to the AI and then telling it what that data means, I've come up with a slightly different process, which I'll outline now. It happens in a few stages.

The initialization stage:

Initialize the AI, including the neural network representing its brain, whether its nodes are small software emulations of neurons or whole computers each standing in for a neuron.

Basically, the point is to get the AI to a tabula rasa: no training yet, so it can make no logical assumptions or inferences about the state of anything.
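As a sketch of what that blank slate might be, assuming each "neuron" is nothing more than a row of weights (the sizes and names are arbitrary):

```python
import random

def init_brain(n_neurons, n_inputs):
    # Small random weights and nothing else: no training, no assumptions,
    # no inferences. A tabula rasa.
    return [[random.uniform(-0.01, 0.01) for _ in range(n_inputs)]
            for _ in range(n_neurons)]

brain = init_brain(n_neurons=64, n_inputs=8)
```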

The training stage:

Training is what comes next, and it's obviously the most crucial stage. In this model, though, data is not fed to the artificial intelligence pre-categorized. Instead, the AI is given an environment that possesses a state, just as its own inner workings possess a state. The environment also exposes a series of methods and variables which, when interacted with, cause a state change, either in the AI's internal state or in the environmental state. The objective is to have the AI randomly perturb the different variables, change the state of the world around it, observe any state changes, and record them; I think an example will do better.
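First, roughly what such an environment might look like in code. The variable names ("time", "luminosity") anticipate the worked example in the next paragraph, and none of this is a real API, just a hypothetical sketch:

```python
class Environment:
    def __init__(self):
        # The external state: everything outside the AI.
        self.state = {"time": 0.0, "luminosity": 1.0}

    def exposed_variables(self):
        return list(self.state)

    def perturb(self, var, delta):
        # Change one exposed variable; the handler may ripple side effects
        # into other variables. Returns the state before and after so the
        # AI can diff them.
        before = dict(self.state)
        self.state[var] += delta
        if var == "time":
            # Side effect: advancing time moves the sun, dimming or
            # brightening the world.
            self.state["luminosity"] = max(0.0, 1.0 - (self.state["time"] % 24) / 24)
        return before, dict(self.state)
```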

Let's say you have two environments: external and internal. External is everything outside the AI, and internal is obviously everything inside it. Say the external environment exposes one state variable, "time". If the AI perturbs it randomly, then as a side effect of the method that handles perturbing time, the "luminosity" value of the external environment should also increase or decrease, given that the AI is theoretically in an outdoor setting where advancing time has side effects like the sun going down and causing a noticeable difference. After every perturbation made in training mode, the AI tries to infer what conditions changed in each state, then presents a query back to the operator asking to ratify the assumption with either a true or a false response. True means commit it to training; false means ignore it. This is pretty much like being a child: you don't necessarily know much about the world, but you interact with it, and you rely on a parent to ratify your decisions so you know you are safe and acting within the lines of how a human should act.
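Put together, one training step might look like this, reusing the hypothetical Environment from the sketch above; the t/f prompt is the operator ratifying or rejecting the inference:

```python
import random

def training_step(env, ratified, rejected):
    # Perturb a random exposed variable and diff the state.
    var = random.choice(env.exposed_variables())
    before, after = env.perturb(var, delta=1.0)
    # Every other variable that changed is a candidate inference:
    # "perturbing var also changes other".
    for other, old in before.items():
        if other != var and after[other] != old:
            inference = (var, other)
            answer = input(f"does perturbing {var} change {other}? (t/f) ")
            if answer.strip().lower() == "t":
                ratified.append(inference)   # commit it to training
            else:
                rejected.append(inference)   # ignore it

# The child stage: the operator stays in the loop for every step.
ratified, rejected = [], []
env = Environment()
for _ in range(5):
    training_step(env, ratified, rejected)
```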

The production stage:

After enough training, the AI is meant to be pushed to a production environment, the analogue of a human maturing toward adulthood and maximum brain density. By this point, enough training has been performed that the AI should have learned how to make proper inferences, and it no longer needs the operator to ratify its assumptions; enough of its past inferences have been ratified that it can estimate the likelihood of a new inference being rational, assign a risk level to accepting it, and use that to decide whether to take the action or hold back.
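A crude way to turn that ratification history into a risk level might look like this; the risk_tolerance threshold and the helper names are my own placeholders:

```python
def confidence(inference, ratified, rejected):
    # How often has this exact inference been ratified in the past?
    yes = ratified.count(inference)
    no = rejected.count(inference)
    total = yes + no
    return yes / total if total else 0.0  # no history -> assume nothing

def should_act(inference, ratified, rejected, risk_tolerance=0.2):
    # Risk of acting is 1 - confidence; proceed only within tolerance.
    return (1.0 - confidence(inference, ratified, rejected)) <= risk_tolerance
```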

Right now it's just a bunch of ideas buzzing around in my head, but I'm busy trying to think of a way to properly implement it with the technology I have available, i.e., not very much! But if anyone has any input, I'd love to discuss it with you. Conceptually this would work best as a very focused AI, in that you only expose methods and variables that have something to do with your problem domain and have it ignore anything else; e.g., feed it the syntactic rules of a language, let it start making inferences, and after long enough you may have a somewhat accurate language parser! Nonetheless, ideas. :3

Comments

firestormx 9 years, 1 month ago

I didn't even read all that, because the background is distracting, but the first thing that neurologists need to explain to programmers is that the brain does not behave like a computer.

MahFreenAmeh 9 years, 1 month ago

firestormx: for that reason i'll remove the background and have it stop doing that, it is kinda noisome.

also, if neurologists need to explain to us programmers that the brain does not behave in that way, then what way does the brain behave in?

firestormx 9 years, 1 month ago

It behaves like a brain, the most complex biological process currently known to man. =3

MahFreenAmeh 9 years, 1 month ago

that doesn't really answer my question though. that's like saying a computer behaves like a computer. or a child behaves like a child. of course they do, it's fitting into the archetype that its existence created for itself. lol. but i guess that is one way to put it

flashback 9 years, 1 month ago

You. Link to the homepage and other blogs. Restore. Now.

firestormx 9 years, 1 month ago

The point I was making was that it can't really be compared to something.

But comparing it to a computer is like comparing the stomach to a computer. Activity in the stomach includes monitoring levels of such-and-such acid and this-or-that enzyme, and releasing different things into the stomach to compensate. A computer would do something similar, but in a totally different way.

Similarly, if you "teach" AI to observe things, it can then take those observations into account when it receives input, and output something influenced by those observations.

The brain does something similar, but in a completely different way.

MahFreenAmeh 9 years, 1 month ago

flashback: maybe in a minute.

firestormx: comparing the brain to a computer isn't really like comparing the stomach to a computer. while a stomach is a contained system with certain operational parameters it goes by, it's controlled by something higher up. the brain, the nervous system, whatever. it doesn't really matter what controls it; the stomach is really just periphery. the stomach may release certain enzymes or proteins which cause certain reactions to occur, but a lot of that is autonomic, last i checked, driven by the brain.

if it does something similar, but in a completely different way, then what is the completely different way? how does it differ? if i'm going to be refining this theory and thought process into a working model at any point in the future, i need to consider all possibilities

flashback 9 years, 1 month ago

In a minute will not suffice. If you do not correct your CSS, I will remove it.

firestormx 9 years, 1 month ago

Well, probably the simplest difference is that the brain doesn't work in binary; it works via synaptic connection strengths and action potentials.

I should probably read your blog, so I can give you something more constructive to work with…

A quick google search might help a bit. Here's a "neuroscience for kids" link:

http://faculty.washington.edu/chudler/bvc.html

MahFreenAmeh 9 years, 1 month ago

flashback: if in a minute will not suffice, then you really need to change your concept of timing, because due to having to sift through all of the css, find the right area to change, then change it, and test it repeatedly until it works, well, it may take more than just a minute. :B

firestormx: not sure if condescending or actually trying to help