Aliza [message #97189]
Mon, 13 November 2023 09:34
Wayne Parham
I've just released a fun little chatbot. As a nod to Weizenbaum's Eliza program, I've called it "Aliza."
If you don't know "Eliza" or who Joseph Weizenbaum was, ask Aliza.
You can reach it at any of these locations:
Right now, it has just the "standard" training included in gpt-3.5, which covers data gathered up until around June 2021. So it can answer a lot of questions and is reasonably accurate on many topics. But it lacks information on some things, and where it lacks information, it will literally make stuff up. So keep that in mind when you interact with it.
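For anyone curious about the plumbing, a bot like this is basically a thin wrapper around a chat-completion API call. Here's a minimal sketch using the OpenAI Python client - the system prompt is illustrative, not Aliza's actual configuration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are Aliza, a friendly chatbot."},
            {"role": "user", "content": "Who was Joseph Weizenbaum?"},
        ],
    )
    print(response.choices[0].message.content)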
I've done this largely as an experiment in transformer-based large language model AI. What I specifically want to know is how well I can train it in acoustics, and in particular on the details of Pi Speakers and recommended setups.
It already "knows" a lot about Pi Speakers, things like the fact that most models use waveguides, that I am the one that designed them, that they are high-efficiency designs and so on. But it doesn't know things like the part number of the waveguide, the proper use of flanking subs, the descriptions of the models, etc. So I will spend some time over the next few months fine-tuning a GPT dataset to give it this information.
The tricky thing about transformer-based large language models is that they know only words. The phrase "a picture is worth a thousand words" falls on deaf ears here. Well, not exactly: a transformer will gobble up that phrase, and it can spit it back out to you quite elegantly, but it has no idea what you are talking about. My point is that these chatbots have no "mental model" of the world, only mental models of words. So teaching them concepts that are best described with pictures is more difficult. You have to use your words.
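To make that concrete, here's a sketch of what the model actually "sees" when given that phrase, using the tiktoken tokenizer library and assuming the gpt-3.5-turbo encoding:

    import tiktoken

    # The tokenizer gpt-3.5-turbo uses to turn text into integer IDs
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

    tokens = enc.encode("a picture is worth a thousand words")
    print(tokens)                             # just a list of integers
    print([enc.decode([t]) for t in tokens])  # the text fragment behind each ID

There is no image anywhere in that pipeline - the phrase comes in as a handful of integers and goes back out the same way.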
More about AI, if you're interested:
Re: Aliza [message #97284 is a reply to message #97281]
Wed, 27 December 2023 09:13
Wayne Parham
There are a lot of similar technologies in chatbots these days. Most are transformers using a large language model - a type of deep neural network built on attention rather than the recurrent networks that preceded it. Some use multi-modal models, with access to text, images, and sometimes sound. Getting the individual modes to tie together is tricky, but the goal is better accuracy.
Some think that will cause emergent behavior - and I agree, that's the whole point of any of these kinds of systems - but I still think we have some additional hurdles to cross. I think we need models that learn continuously rather than being trained once before use. Emergent behavior is always the result of complex systems, but how closely that emergent behavior resembles true intelligence depends, in my opinion, on an understanding of concepts, and I think that will require both multi-modal approaches and a continuous learning mechanism. Only then can it gain experience and eventually, perhaps, self-reflection.
But back to your question: most of the "widgets" or automated assistants use either a rules-based approach or a limited database targeted at the desired subject. Aliza is a "generative" system, meaning its goal is to create new content based on the data it has been trained on. I have it dialed way back to "temperature 0.0," a sampling setting that makes its output as deterministic as possible - it always picks the most probable next word instead of sampling from the runners-up.
At this setting, if it doesn't "know" something, it will still make inferences - actually, it always makes inferences, or rather tries to combine words and phrases that often appear together - but the point is that the matches have to be closer than they would if the temperature were set higher.
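For the curious, temperature is just a request parameter. A sketch comparing the two extremes, again assuming the OpenAI Python client - the prompt is illustrative:

    from openai import OpenAI

    client = OpenAI()

    def ask(question, temperature):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
            temperature=temperature,
        )
        return response.choices[0].message.content

    # Deterministic: always picks the most probable next token
    print(ask("Describe a two-way loudspeaker.", temperature=0.0))

    # Looser matching: samples lower-probability tokens more often
    print(ask("Describe a two-way loudspeaker.", temperature=1.2))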
I have also started "fine-tuning" training in cases where I found it inaccurate, and I will continue to do that as the need arises. So please provide feedback if you find inaccuracies.
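Mechanically, that fine-tuning step is two API calls: upload the dataset, then start a training job against it. A sketch, assuming the OpenAI Python client - the filename is hypothetical:

    from openai import OpenAI

    client = OpenAI()

    # Upload the JSONL training file (format shown earlier in the thread)
    training_file = client.files.create(
        file=open("pi_speakers_finetune.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Kick off a fine-tuning job on top of the base model
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)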