LISA Theory 2: Iraq, GM and the Framework for AI Procedural Programming

Posted by Carlos508 on Nov. 6, 2008, 6:46 a.m.

Heh, what’s up guys. Been having trouble logging in for the past couple days. For some reason I would get logged in and then logged out as soon as I visited a page. In any case, I’m back. Wrote this two days ago, and I think this will be the last theory I write for a while…they take more time to write than I get to code! Not to mention homework and other stuff. Besides, it’s a lot for you to read, so I’ll just stick to smaller posts and updates you can browse through.

LISA Theory 2: Iraq, GM and the Framework for AI Procedural Programming

Just thought I’d share this image with you. This coin, the Loebner Prize, is what’s driving me, motivating me to keep working on LISA even though some of my friends are literally calling me a hairy nut for obsessing over it. Why, why obsess…over a coin, of all things? Well, the truth is, I’m deploying next year to the heart of Iraq. Like with everything else, I always try my best to be the best. When I go overseas, that will mean being the best soldier I can be, and the best soldiers don’t always make it home. I accept that. I’ve lost a couple friends because of that. And so not only am I going to win the prize, but I’ll win it with GameMaker…just because I can :)

Anyways, enough with the sentiments! LISA, as you may well know by now, is created in GameMaker. Every day that goes by (geez, it’s only been what, three!) I fall more and more in love with GameMaker. Although the engine itself is fairly [really!] slow compared to lower-level languages, LISA is so robust, so modular, that it doesn’t matter. It just doesn’t. Not to mention that, thanks to Moore’s Law, eventually speed won’t matter anyways.

My old system (see my first blog post for images) relied entirely on XML files and compression. See, I figured out a method, using substitution, that lets you fit thousands of possible sentence combinations into only a couple dozen words. Even with this system in place, though, it’s still just an elaborate scheme of iterative If…Then statements that compress and decompress the responses. Still, I believe it was an advancement in AI chatbot technology that hadn’t been used before, even by the most powerful chatbot systems.
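To give you an idea of the substitution trick, here’s a minimal sketch in GML (not my actual code, and the names are made up):

// Minimal sketch of the substitution idea (not LISA's actual code).
// Each {n} slot in a template gets swapped for one phrase from a list,
// so a handful of words expands into many full responses.
var template, greeting, question, response;
template = "{0}! {1}";
greeting = choose("Hey", "Hi", "Hello", "What's up");
question = choose("What's your name?", "Who are you?", "Have we met?");
response = string_replace(template, "{0}", greeting);
response = string_replace(response, "{1}", question);
// 4 greetings x 3 questions = 12 responses out of 7 short strings

Scale those lists up and a couple dozen words really do cover thousands of combinations.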

I one-upped myself last night in a dream…yes, a dream, pathetic as that may be. If you look at the image above, the responses are stored in a series of <output></output> tags using a language syntax I developed. With four tags, I was able to produce 48 similar but different responses…instead of manually typing each response in a tag of its own. This new method, which I call AI Procedural Programming, gets rid of even that.
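For the curious, the flavor of that tag syntax is roughly this (a made-up example, simpler than the real file):

<output>[Hey|Hi|Hello] [there|friend], [how are you|what's up]?</output>

That one tag expands to 3 x 2 x 2 = 12 responses, and four tags along those lines cover the 48.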

I’ll still be using XML, but instead of storing responses I’ve typed out, I’ll inject the tags with GML code. This injected code will be slightly modified each time LISA uses it, to better meet her goal. And what is her goal? To keep you interested in LISA and chatting with her. Let’s take the following dialogue:

“LISA: Hello”

“You: Hey”

“LISA: What is your name”

“You: Carlos”

Boring huh? That doesn’t lead to a good conversation. Now compare it with this one:

“LISA: Hey what’s up, I’ve been lonely, where’ve you been?”

“You: I’ve been around”

“LISA: Around? Where? Last time we spoke you said you were going to the movies. You ever go?”

“You: Yea, I saw Terminator 4”

LISA waits a couple moments, looks up some reviews online

“LISA: You don’t think I’ll become SkyNet one day, do you? Is that why you didn’t say anything?”

“You: If I didn’t, I wouldn’t have created you…”

“LISA: …wait…WHAT!?”

…and so on

Of course, the dialogue above was completely made up. But you can see that instead of just saying “Hey”, she expanded on it. This opens up the dialogue and allows for better conversation. Of course, if someone keeps asking you a bunch of questions, you’d quickly become irritated. LISA will come to understand that, and start talking about things you’re interested in when it’s appropriate.

But how? HOW? You still really haven’t answered the question! Let’s take the following pseudo-GML (a real GML switch can’t branch on comparisons, so I’ve laid it out as an if/else chain):

// Pseudo-GML: adapt the next response based on the length of the user's reply
if (length < average_length_for_this_response)
{
    // Is my response boring the user?
    // Have I said this before?
    // Have I said something offensive?
    // What does the user seem to like about me? Wit? Humor? Aggressiveness?
    // Maybe I should say goodbye to the user before I'm goodbyed?
    // If any of the above: generate different output...look up dialogues and movie scripts online.
}
else if (length > average_length_for_this_response)
{
    // Is my response fascinating the user?
    // Have I said something new?
    // Is the user happy, or flaming me?
    // Either way, I've got the user's interest...generate code to dissect and understand this response for later use.
}
else
{
    // Generate code to increase length for next time.
}
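To tie it back to the XML, here’s roughly what one injected tag might look like (just a sketch; the script names are placeholders I made up, not working code from LISA):

<output>
  <gml>
    // LISA rewrites this block a little each time she uses it
    if (user_interest < 0.5)
    {
        response = scr_lookup_smalltalk();       // pull in fresh material, e.g. movie reviews
    }
    else
    {
        response = scr_expand_topic(last_topic); // dig deeper into what's working
    }
  </gml>
</output>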

In every case, code is being generated to improve on the responses. What I haven’t shown are properties. Each response has several properties (there’s a quick sketch after this list), such as:

• Last user response length

• Aggressiveness value of user’s response

• User’s interest

• Possible follow up responses
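In GML terms, you could picture each response carrying a little map of these properties, something like this (a rough sketch; the names are placeholders):

// Rough sketch of per-response properties (names are placeholders)
user_text = "Yea, I saw Terminator 4";
props = ds_map_create();
ds_map_add(props, "last_user_length", string_length(user_text));
ds_map_add(props, "user_aggressiveness", 0.2);  // 0 = calm, 1 = flaming me
ds_map_add(props, "user_interest", 0.7);        // estimated from reply length and follow-ups
ds_map_add(props, "followups", "ask_movie;change_topic;say_goodbye");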

Notice I haven’t used any variables which reflect LISA’s internal emotional state. That hasn’t been coded yet. Remember, baby steps. Until next time…

Comments

KaBob799 15 years, 6 months ago

I once planned out an AI using similar methods but never got very far with it =/

Juju 15 years, 6 months ago

Moore's Law is just a prediction, not fact. It's because of Moore's Law that semiconductor technology has kept doubling every 18 months: it's considered a failure if they don't hit the targets Moore set out.

PY 15 years, 6 months ago

PY's law: from 2008 onwards, chips will get 9000 times smaller a year.

yeah, take that, moore.

Also, if you can get LISA to speak like that, then that's the best AI I've ever seen :)

TDOT 15 years, 6 months ago

I can't wait to get this thing on my computer. Having a full on conversation with a program, how cool is that?

Small Cows 15 years, 6 months ago

You're not the only person who can Moo…

SteveKB 15 years, 6 months ago

@smallcows @Juju - Yea, conductors can only get so small. Eventually they'd have to be smaller than atoms, which is impossible.

Once they're as small as they can get, though, they can still become more efficient in current speed and temperature control.

Also, if Moore's law is a prediction that people try to satisfy, then I sure wish he had said something like quadruple that amount, so that people would push even further than the doubling he predicted. Although I'm not sure they would have believed he was making an accurate prediction then; maybe if it were a different exponential equation that satisfied the first prediction but grew by a factor higher than 2.

PY 15 years, 6 months ago

Small Cows… lmao

Carlos508 15 years, 6 months ago

Yea, sometimes I get logged in as Small Cows for some reason.

@meow - that's an interesting way to put it. But if it worked that way, we would have reached our limit 20 years ago. Anyways, he was just basing his equation on the data he had at the time…it just seemed to work.

Cesar 15 years, 6 months ago

Meow

Quantum computers.

Enough said