thoughts on artificial intelligence

Posted by MahFreenAmeh on Dec. 20, 2012, 2:02 p.m.

he-lo 6-4-digits

it is i, the person who is a person. ok, introductions aside, because i pretty much hate those in any universe.

so i need to come here and write a bunch of stuff down so i have a repository of information that once it's written… okay, let's just start

I've been thinking about the representation of a CPU as a very constrained version of a human brain. After speaking with Josh@Dreamland about it, we came to the agreement that both a human brain and a CPU are ultimately just finite state machines, designed to process the current state and respond to it accordingly.

I do, however, have a bit of a disagreement when it comes to the particulars. Namely, I don't believe the brain is a finite state machine of any sort. While, yes, it's true that there are only a finite number of configurations for a given aggregate of modifiable variables, that doesn't mean the machine processing them has to have a finite number of states. That would be horribly inefficient, like converting a non-deterministic finite automaton into a deterministic one: the deterministic machine needs a state for every set of states the non-deterministic one could possibly be in, which can blow up exponentially.
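To make that concrete, here's a rough sketch of the subset construction I'm gesturing at; the toy NFA and its transition table are just an illustration, not anything meaningful:

```python
from itertools import chain

# Toy NFA over {'a', 'b'}: transitions map (state, symbol) -> set of next states.
nfa = {
    (0, 'a'): {0, 1},
    (0, 'b'): {0},
    (1, 'b'): {2},
}
nfa_start = {0}

def subset_construction(nfa, start):
    """Convert an NFA into an equivalent DFA by tracking *sets* of NFA states.
    The resulting DFA can have up to 2^n states for an n-state NFA, which is
    the blow-up mentioned above."""
    dfa = {}
    worklist = [frozenset(start)]
    seen = {frozenset(start)}
    while worklist:
        current = worklist.pop()
        for symbol in {sym for (_, sym) in nfa}:
            nxt = frozenset(chain.from_iterable(
                nfa.get((s, symbol), set()) for s in current))
            dfa[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    return dfa

print(len(subset_construction(nfa, nfa_start)))  # number of DFA transitions built
```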

My theory is that the human mind is not a finite state automaton so much as a dynamic state automaton. It takes into account all of the past input it has been given to dynamically generate an algorithm that considers all the necessary variables, predicts what may happen if one of them changes based on past experience, and uses that to work out what the most positive outcome is.
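Very roughly, the "dynamic" part could be sketched like this: keep a history of observed transitions and use it to guess what a change will do next time. The class and names here are made up purely for illustration, not a claim about how a brain actually does it:

```python
from collections import defaultdict, Counter

class DynamicPredictor:
    """Toy sketch: remember past (variable, change) -> observed effect pairs and
    predict the most common effect seen so far."""
    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, variable, change, effect):
        self.history[(variable, change)][effect] += 1

    def predict(self, variable, change):
        effects = self.history[(variable, change)]
        return effects.most_common(1)[0][0] if effects else None

p = DynamicPredictor()
p.observe("x", +1, "y increases")
print(p.predict("x", +1))  # 'y increases'
```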

Now, I decided to speak with another friend of mine, someone more versed in circuitry, engineering, and low-level hardware, who gave me his perspective that, in the end, a CPU is basically a single arithmetic logic unit, whereas in the brain every individual neuron or cluster of neurons can act as an ALU. He posited that in order to accurately model a human brain, one would need either a quantum computer with millions of qubits, or possibly a neural network consisting of many computers interacting together, with each node acting as a neuron. I thought that was a pretty interesting concept, but it's really not relevant to what I'm talking about here.

What came of all of this was a way I thought of for training a given AI: while there are many existing training methods, usually involving feeding a pipeline of data to the AI and then telling it what that data means, I've come up with a slightly different process, which I'll outline now. It exists in a few stages.

The initialization stage:

Initialize the AI, including the neural network representing its brain, whether that's a series of small emulated neurons, whole computers each acting as a neuron, or something else entirely.

Basically, the point is to get the AI to a tabula rasa so that it has no training, and can make no logical assumptions or inferences about the state of anything.
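A minimal sketch of what I mean by that stage (the class, sizes, and weight ranges are placeholders, not a claim about what the real thing would look like):

```python
import random

class Neuron:
    """Placeholder neuron: a few untrained connection weights and no memory."""
    def __init__(self, n_inputs):
        # Small random weights = no built-in assumptions about the world.
        self.weights = [random.uniform(-0.01, 0.01) for _ in range(n_inputs)]

def initialize_brain(n_neurons=16, n_inputs=4):
    """Build the blank slate: a network with no training and no recorded inferences."""
    return {
        "neurons": [Neuron(n_inputs) for _ in range(n_neurons)],
        "ratified_inferences": [],  # filled in during the training stage below
    }

brain = initialize_brain()
```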

Training is what comes next, and that's obviously the most crucial stage. In this model, however, data is not sent to the artificial intelligence and categorized for it. Instead, it is given an environment that possesses a state, just as the inner workings of the artificial intelligence possess a state. It also has a series of methods and variables exposed to it which, when interacted with, cause a change in some state, either in the AI's internal state or in the environmental state. Ideally, the objective is to have the AI randomly perturb the different variables, change the state of the world around it, observe any state changes, and record them; I think an example would explain it better.

Let's say you have two environments: external and internal. External is everything outside of the AI, and internal is, obviously, everything inside. Let's say within the external environment there is one state variable exposed, "time". If the AI perturbs that randomly, then as a side effect of the method that handles perturbing time, it should also increase or decrease the "luminosity" value of the external environment, given that the AI is theoretically in an outdoor setting where advancing time causes side effects such as the sun going down and producing a noticeable difference. After every perturbation made in training mode, the AI tries to make some inferences about what conditions changed in each state, then presents a query back to the operator asking it to ratify the assumption with either a true or a false response. True means commit it to training; false means ignore it. This is pretty much like the stages of being a child as a human: you don't necessarily know much about the world, but you interact with it, and rely on a parent to ratify your decisions so that you know you are safe and acting within the lines of how a human should act.
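Here's roughly that loop as a sketch, using the time/luminosity example; the environment and the operator prompt are simplified stand-ins for what a real setup would need:

```python
import random

# External environment: one exposed variable, "time". Advancing it has the side
# effect of changing "luminosity", which the AI does not know ahead of time.
external = {"time": 12, "luminosity": 0.9}

def perturb_time(env, delta):
    env["time"] = (env["time"] + delta) % 24
    env["luminosity"] = 1.0 if 6 <= env["time"] <= 18 else 0.1  # crude day/night

def training_step(env, ratified):
    before = dict(env)
    delta = random.choice([-1, 1])
    perturb_time(env, delta)
    # Record every observed state change as a candidate inference and ask the
    # operator to ratify it before committing.
    for key in env:
        if env[key] != before[key]:
            inference = (f"time {'+' if delta > 0 else '-'}1", f"{key} changed")
            answer = input(f"Does '{inference[0]} -> {inference[1]}' seem right? (y/n) ")
            if answer.strip().lower() == "y":  # operator ratifies -> commit
                ratified.append(inference)

ratified = []
for _ in range(5):
    training_step(external, ratified)
print(ratified)
```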

After enough training, the AI is pushed to a more production-like environment, which has an analogue in a human maturing toward adulthood, maximum brain density, and so on. At this point, enough training has been performed that the AI should have learned how to properly make inferences, and it no longer needs the operator to ratify assumptions; by this point, it should have had enough inferences ratified in the past that it can estimate the likelihood of a new inference being rational, assign a risk level to accepting it, and then use that to decide whether to act on it or not.
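That ratification history could be turned into a crude confidence score, something like the following. This assumes the training stage also kept a list of rejected assumptions, and the threshold and scoring are arbitrary placeholders:

```python
def inference_confidence(inference, ratified, rejected):
    """Estimate how likely a new inference is to be rational, based on how often
    similar inferences were ratified vs. rejected during training."""
    similar_yes = sum(1 for r in ratified if r[0] == inference[0])
    similar_no = sum(1 for r in rejected if r[0] == inference[0])
    total = similar_yes + similar_no
    return similar_yes / total if total else 0.5  # unseen cases: maximum uncertainty

def should_act(inference, ratified, rejected, risk_tolerance=0.8):
    """Act autonomously only when the estimated confidence clears the risk threshold."""
    return inference_confidence(inference, ratified, rejected) >= risk_tolerance

# Example with made-up history:
ratified = [("time +1", "luminosity changed")] * 4
rejected = [("time +1", "luminosity changed")]
print(should_act(("time +1", "luminosity changed"), ratified, rejected))  # True (0.8 >= 0.8)
```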

Right now it's just a bunch of ideas buzzing around in my head, but I'm trying to think of a way to properly implement it with the technology I have available, i.e., not very much! But if anyone has any input, I'd love to discuss it with you. It would work well conceptually as a very focused AI, in that you can expose to it only the methods and variables that have something to do with your problem domain and have it ignore everything else. For example, feed it the syntactic rules of a language, let it start making inferences, and after long enough you may have a somewhat accurate language parser! Nonetheless, ideas. :3

Comments

firestormx 11 years, 4 months ago

Actually trying to help, lol.

MahFreenAmeh 11 years, 4 months ago

o ok good, i'm pretty used to most people being somewhat condescending by now! lol. i'll actually take a look over that and see what it says; it's obvious the brain works in different ways, but I guess what I'm trying to do is abstract a new learning process based on an assumption I made about how the brain works. it may be wrong, it may be right, but any extra input will help in changing it if it's wrong! lol

firestormx 11 years, 4 months ago

Well, there's a lot to how the brain works. It's not as simple as binary. My suggestion, if you really, really want to know, is to read A User's Guide To The Brain by John Ratey, and Britannica's book on the brain as well. They'll give you a general idea of the brain. I'm sure there are lots of books specifically about the differences between brains and computers as well.

I tried to read over your idea, and part of it didn't make sense (mainly due to my lack of focus here at work), so I can't help too much. You don't really put a lot of focus onto the brain itself, so it's coo'.

Josea 11 years, 4 months ago

Of course the brain isn't a finite state machine; your brain can understand things an FSM cannot (context-free grammars, for example).

MahFreenAmeh 11 years, 4 months ago

firestormx: it's no surprise to me that my argument and statement aren't very focused. as of right now, they're basically just the architecture and scheme for a progressively enhanced ideological set. what part of it didn't make sense to you?

josea: i'm not trying to really say that it is an FSM of any sort, though at one point i found myself starting to believe it! but, something

Josea 11 years, 4 months ago

I find it very hard to believe that we will ever be able to simulate any sort of human-like intelligence on a computer. Computers are inevitably limited by whatever a Turing machine can do, but the brain seems to be able to understand things a Turing machine can't.

…but that's only whatever comes out of my head right now, I've never really given much thought to the topic.

firestormx 11 years, 4 months ago

MFA: Most of it didn't make sense. Not because of your writing, but because I just don't understand parts of it.

Quote:
I find it very hard to believe that we will ever be able to simulate any sort of human-like intelligence on a computer. Computers are inevitably limited by whatever a turing machine can do, but the brain seems to be able to understand things a turing machine can't.
I never thought of something as obvious as that. Some new, non-computer-based technology will likely come along down the line.

Juju 11 years, 4 months ago

The brain cannot be an FSM because FSMs are discrete. Human behaviour, if any meaningful metrics can be devised, has continuous properties. Even with many states, each state is necessarily non-continuous with its neighbours.

Alert Games 11 years, 4 months ago

First of all, I don't like your text color/font aside from your comment box, which is awesome.

But in terms of AI, I think it is possible to create a form of AI, but not to mimic a simple brain. The brain is organic and uses reinforcement chemicals and signals to guide behavior.

However, it would be interesting to create abstract architectures to give slightly imperfect solutions to see what it could come up with. Sounds more complicated than a perfect solution though.

MahFreenAmeh 11 years, 4 months ago

@firestormx: okay, I was just making sure that it wasn't something that I said that was misleading or confusing, :B

@Alert well what color/font do you suggest I use then? but that bit aside, that's one of the things I more or less discussed with my friend; he believes the first true AI will be implemented either on a quantum computing device or through a bio-neural connection, so that the neural network is an actual network of neurons. as for abstract architectures and models, the only way the species as a whole can come up with the right idea is if everyone collaborates and shares their ideas; that way, foolish and impractical ideas can be weeded out easily, and there's a constant, decentralized search for the truth. and how can an imperfect solution be more complicated than a perfect one? ideally, a perfect solution should be able to handle any possible input, and for inputs it doesn't recognize, it should be able to qualify the input and then calculate a response to it. that sounds pretty complicated to me!

also, another bit that I feel I should mention: as far as biology goes, I've completely thrown it out the window and ignored it in this post. I'm not trying to build a model for an AI that can manage a system as complex and dynamic as a human body. In theory, given enough training using the model I described, it might be possible to expose a biological environment to it and allow it to examine its state, but that's not the point. The brain is a very complex piece of machinery, but that's also due to the areas that had to evolve in it to control all of the periphery attached to it, namely every organ, etc., in the body. I guess my model is ultimately trying to model how the brain learns and acquires knowledge, but at the same time this brain will be nothing like a human brain, in that it won't possess any knowledge of the structure of its own self, only that its own self can effect a change on the world outside of it. I feel that disclaimer was necessary.