Write your questions, comments, recommendations and new ideas here. To post, just enter a name or a nickname like "I" or "Pippo", leaving the other boxes blank.
Adding an "Email" is also optional; it only serves to tell people apart with certainty. If you want to add your email, do so with confidence: we have better things to do than collect addresses for spam.
The blog is not all on this page; go to the "Blog" menu and choose a topic.
If you cannot find the appropriate blog section, post on this page and we will find the right section, or create a new one.
Insert images and highlight blog posts
Insert a picture:
<img src="https://www.xxxx.yyyy.zzzz.jpg" alt="Image description"/>
Highlight a portion of text:
<span style="background-color: #ffff99;">TEXT TO HIGHLIGHT</span>
Feel free to make mistakes and experiment in your posts. If something goes wrong, just write a second post with the corrections to make. We will apply the corrections and finally delete the service messages.
Hello Livio, as always, many compliments for the work you do.
I downloaded the Theremino Phonometer because I would like to try taking some acoustic measurements of a heat pump (outdoor condensing unit) at various operating speeds. Unless I'm mistaken, I can't find in the downloaded resources any indications on the schematics/connections of the ECM8000 microphone to the Master.
Could you give me some information on this?
Thanks, and keep up the good work.
Roberto
The ECM8000 microphone must be connected to its dedicated U-PHORIA UM2 sound card, manufactured by Behringer:
https://www.behringer.com/product.html?modelCode=P0118
https://www.behringer.com/product.html?modelCode=P0DZ3
Compared to products of similar quality, they cost very little.
If need be, you could also power the microphone with other sound cards with the same characteristics, as long as they are suitable for powering condenser microphones.
If you have any other doubts, ask Marco, who has built many of these control units. He might even give you some second-hand components, and he can surely give you good advice, as well as the latest version of the Meter and Elaboration programs that we haven't published yet.
https://www.theremino.com/contacts/about-us#marco
Also read the PDF files in the documentation.
Hello
Livio
Thank you very much Livio, I will contact Marco as per your suggestion.
Good evening
Unfortunately, Theremino GPT no longer works. We will find a solution, but in the meantime, to make it work you have to create your own key at OpenAI, as explained in the "Theremino_GPT_Help.pdf" file that you can download from here:
https://www.theremino.com/downloads/intelligenza-artificiale#openai
What happened?
Someone, between 7 December and today, used GPT-4 in an "immoderate" way, and in just a few days consumed a million tokens by bulk-sending requests with the API key associated with our account.
Let's take this opportunity to explain, for those who don't know yet, that GPT-4 is very expensive: as much as thirty times more expensive than GPT-3.5-turbo.
In addition, GPT-4 is slow to respond, so it should be used sparingly and only when necessary.
———————
In the coming days we will release a new version of Theremino GPT, so keep an eye out for these messages on the news page.
For now, as already written, if you want to use Theremino GPT you have to create your own key at OpenAI.
We have restored Theremino GPT.
Version 4.7 restores operation without you having to create your own key (just leave the OpenAI-KEY box blank).
As a reminder, our API-KEY only works with GPT-3.5-turbo.
Download 4.7 from here:
https://www.theremino.com/downloads/intelligenza-artificiale#openai
Hello Livio and all the Theremino developers. Congratulations on what you have done.
I wanted to share that I have successfully connected a very interesting (at least for me) I2C probe to a Theremino IoT Module (TTGO7ver1.3). It is the Bosch probe with the BME280 sensor, which measures pressure, temperature and humidity. The unit I bought (about ten euros) is branded mikroBUS.
Obviously, I modified the firmware so that it reads the sensor correctly, and I report the program below; maybe it will be useful as an additional example.
I made the physical connection to pins 21 (SDA) and 22 (SCL), which are the standard ones for I2C communication on the ESP32. I've seen that in the other examples you change them at will with the Wire library; I didn't manage that here, so I left them like this.
In the IotHAL, the data pins are set to "Gen_in_float".
The power supply is 3.3V.
Thank you again. Have a nice day.
Alberto
Firmware:
// ----------------------------------- Do not remove ---------
#include "IotModule\IotModule.h"
cIotModule IotModule;
// -----------------------------------------------------------

#include "Wire.h"
#include "SPI.h"
#include "Adafruit_Sensor.h"
#include "Adafruit_BME280.h"

#define SEALEVELPRESSURE_HPA (1013.25)

Adafruit_BME280 bme; // I2C
float pressure, mbar_pressure;

// ========================================================================
// SETUP
// ========================================================================
void setup()
{
  Serial.begin(9600);

  // ------------------------------------ Do not remove ---------
  IotModule.initialize();
  // ------------------------------------------------------------

  // Default settings.
  // Connect the module to the correct pins (TTGO7ver1.3: SDA = pin 21, SCL = pin 22)
  unsigned status = bme.begin(0x76);
  if (!status)
  {
    Serial.println("wrong address");
  }
}

// ========================================================================
// LOOP
// ========================================================================
void loop()
{
  // Temperature in degrees Celsius
  Serial.print(bme.readTemperature());
  Serial.print("\n");
  IotModule.genericWriteFloat(36, bme.readTemperature());

  // Pressure, converted from Pa to mbar (hPa)
  pressure = bme.readPressure();
  mbar_pressure = pressure / 100.0;
  Serial.print(mbar_pressure);
  Serial.print("\n");
  IotModule.genericWriteFloat(39, mbar_pressure);

  // Relative humidity in %
  Serial.print(bme.readHumidity());
  Serial.print("\n");
  IotModule.genericWriteFloat(35, bme.readHumidity());

  delay(100);
}
Hello Livio
The Theremino system, with its updates (e.g. Graphs), has truly become a unique tool. Congratulations.
A curiosity: I saw your interest in Artificial Intelligence, and I myself am trying to create custom GPTs on topics of interest to me. In your opinion, would it be possible to have a custom GPT create code, to be pasted into Theremino Automation, by explaining what I want to do?
It would be a great thing to achieve, but we have not yet learned how to train them.
It would be great to be able to train a model by giving it all the PDFs we have written (we have hundreds of them); then it could greatly facilitate the use of our applications.
Soon we will publish Theremino_LLM, which no longer depends on OpenAI but only on open source models.
And Theremino_LLM listens, speaks, makes phone calls, plays music, executes home automation commands, etc.
But for now we limit ourselves to using the ready-made models that can be found on this page:
https://ollama.com/library
Very good, I imagined that you were on the ball.
In my ignorance, with my first attempts, the first thing I realized is that the AI does more what it wants than what you want, and the training is not clear at all, at least for what a simple user can do with "create your own GPT".
But where do you do this "create your own GPT"?
On some OpenAI page, or elsewhere?
Inside ChatGPT there is the option "Explore GPTs", and from there you can press "Create".
In your custom GPT you can upload reference files, and also reference instructions.
The feedback you give it during a session is lost when the session is closed.
You can also interact with the outside world through the API (but this is too complicated for me).
For example, I'm making one to compare vibration files, but there's not a lot of consistency in what it does. For example, it sometimes calculates the damping values one way, sometimes another; I'm trying to constrain it with more precise instructions.
Understood. Anyway, we are breaking away from OpenAI, and in the future we will only do open source things with Ollama.
OpenAI was supposed to be "Open", but then it fooled everyone, and if you want to use the APIs you have to pay for them. So we decided to abandon it.
It's a very good thing; I'll try to take a look at Ollama.
If you run tests, use gemma2:2b, which is fast and light.
Keep in mind that everything runs locally, and therefore you must have a lot of RAM (a minimum of 8 GB for the smaller models, but better 16, 32 or 64 GB).
If you want to use the larger models, in addition to the RAM you would also need a recent video card that enables CUDA in Ollama; otherwise the CPU runs at 90-100% and responds slowly.
With gemma2:2b you go quite fast even on small machines, and it already contains an incredible amount of information: I've asked it about everything and it knows everything. It also speaks all languages well and is very similar to ChatGPT 3, all in little more than 2 GB.
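As a rough back-of-the-envelope sketch of why a 2B model fits in 8 GB of RAM while larger ones need more (this estimate is mine, not an official Ollama figure: the footprint is roughly the parameter count times the bytes per parameter at the given quantization, plus some overhead for context and runtime):

```python
def model_ram_gb(params_billions: float, bits_per_param: float,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate in GB: parameters x bytes-per-parameter x overhead."""
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total * overhead / 1e9

# A 2B model quantized to 4 bits fits easily in 8 GB of system RAM:
print(round(model_ram_gb(2, 4), 1))   # -> 1.2
# A 9B model at 4 bits already wants over 5 GB, before the OS takes its share:
print(round(model_ram_gb(9, 4), 1))   # -> 5.4
```

The 20% overhead factor is a guess; the real figure depends on the context length and the runtime, but the order of magnitude is what matters when deciding how much RAM to buy.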
If you want to do some tests, you can download this little application that we are using to try out the possibilities of Ollama:
https://www.theremino.com/files/Theremino_Ollama_V1.1.zip
Hello Livio.
I tried to launch the executable, but I don't understand how to make it work.
If I press "Show model info", I get a cmd.exe window that says: "'ollama' is not recognized as an internal or external command, an executable program or a batch file."
I tried pressing "Start Ollama server on this computer", and in the program window I see: "Now the Ollama server is running on this computer.
To access this server from other devices:
– Leave the COM window open
– Use the IP of this computer, from the following list:
169.254.213.179 Ethernet
192.168.100.17 Ethernet
169.254.78.201 Local Area Connection (LAN)
192.168.56.1 Ethernet 3
169.254.216.136 Local Area Connection (LAN)* 1
169.254.139.102 Local Area Connection (LAN)* 2
10.146.10.56 Talk2m-eCatcher
169.254.138.238 Talk2m-eCatcher
192.168.1.229 Wi-Fi’
In addition, I often get an "unhandled exception in the application" window.
I tried to download and install Ollama, and the installation succeeded, but it still doesn't work.
Now it shows me: "Model not found, please download it and retry."
Can you give some more info?
Thank you
Maurizio
First you have to install Ollama, and I think you have done that.
Then you have to install the models, and after that it works.
Choose gemma2:2b, or another small one, in the model box.
Then press "Install model" and wait for it to be installed.
If a model is missing from the model box,
press "Edit model list", then choose it on the models page
and finally add it to the list.
If you still can't manage, install Skype on the PC
and call me as livio_enrico.
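For reference, the same steps can also be done from the command line with the standard Ollama CLI (independent of the Theremino app), which is useful for checking whether Ollama itself is installed correctly:

```shell
# Download the model (gemma2:2b is about 1.6 GB)
ollama pull gemma2:2b

# List the models installed locally
ollama list

# Quick interactive test directly from the terminal
ollama run gemma2:2b "What is a newton metre?"
```

If "ollama" is not recognized here either, then the Ollama installation (or its PATH entry) is the problem, not the Theremino app.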
It works for me,
but not always.
Every now and then I also get "Model not found, please download it and retry".
But if I go directly to the Windows command prompt, it works.
Have you installed the smallest and fastest model?
gemma2:2b
How long does it take to answer the first time?
(when it responds)
I say this because if you install very slow models, or have little RAM, or a slow CPU, it could take tens of seconds and then time out.
If all goes well, gemma2:2b should answer the first question within 7 to 10 seconds at most, and the following questions immediately, that is, in a few tenths of a second. And it should always generate words quickly, almost ten words per second.
Yes, I installed gemma2:2b. It doesn't take tenths of a second but 7-8 seconds; I think this is due to my PC, though.
But is there a way to train it by uploading information or documents?
Characteristics of my PC:
AMD Ryzen 5 3500U processor with Radeon Vega Mobile Gfx, 2.10 GHz
Installed RAM 8.00 GB (6.94 GB usable)
It takes 7-8 seconds the first time.
But if you ask a second question without waiting too long, it's instantaneous, isn't it?
The first thing you can do to speed it up is to buy two more DIMMs (DDR4, I believe) of 16 GB each and add them to the DIMM you have.
That way you get to 40 GB and you can run even slightly larger models.
However, if it is a notebook you need the small DIMMs, which cost more than twice as much, and I don't know if you can get to 64 GB in total as in normal PCs.
And in all cases, pay attention to the pin count, to DDR3 versus DDR4, and to the dimensions; otherwise you will buy things that you won't be able to fit.
As for training them, that's a whole different matter. For now we already have enough to make them work well in the Theremino_LLM app, which will be similar to Theremino_GPT (which does everything: speaks, listens, controls home automation devices, plays music and videos, etc.)
I hope you weren't already using the "Server" section, because it still wasn't working well.
The server part allows you to install Ollama on a single computer (possibly with a lot of RAM and with CUDA) and then use it from all the other PCs on the network.
But if you work on a single PC you don't really need it.
In version 1.2, which you can download from here, the server section also works well:
https://www.theremino.com/files/Theremino_Ollama_V1.2.zip
There was still a flaw; here is version 1.3:
https://www.theremino.com/files/Theremino_Ollama_V1.3.zip
Hello Livio.
I installed version 1.3 and, following your directions, it now works.
Certainly a small manual, even a single page, would help. For example, from the interface alone I had not understood the server part.
The response speed in my case is acceptable, well within the times you mentioned.
I have a few questions:
– does the AI respond using an offline database, or does it search autonomously using the host computer's connection?
– if it doesn't have internet access by default, can it be enabled?
– I noticed that if I ask a second question by appending it to the AI's answer, I don't get an answer if the previous text is very long. If, on the other hand, the text is short, or if I delete the text and leave only my question, the answer comes. If I remember correctly, with GPT, leaving the previous text gave subsequent exchanges a background context. Is this concept lost with Ollama?
– if I start "strange" topics (nonsensical things, or oxymorons) the AI answers in English. How should I interpret this change in behavior? Moreover, all subsequent topics are then addressed in English, and to return to Italian you have to ask quite explicitly.
This is just to begin with.
We will write the documentation when we publish it.
The AI knows only what is in the gigabytes that make up the model you downloaded.
The models you download are usually updated with internet data up to about a year back. They don't learn anything and they don't search the internet.
If you don't have internet access, everything should work the same (but you first need to have installed the models from the internet).
If you ask a new question leaving the previous context, it works and takes the context into account, but it can become much slower to answer.
If you start with weird topics, or it does weird things like switching to English, only the model you use knows what happens. I have no idea why it does this, and probably neither do those who trained it.
If you like, you can try downloading other models (but always small ones, since you only have 8 GB and, I think, no CUDA).
Try qwen2:1.5b, which is the last one at the bottom of the list that I left in version 1.3. It's a bit wild, but very fast, and for some things it's okay.
If you want to see whether Ollama can access your video card's CUDA, press "Start Ollama Server" and then search the white-on-black text for what it says about CUDA.
We have improved the Ollama app a little bit:
https://www.theremino.com/files/Theremino_Ollama_V1.4.zip
Remember, however, that this is only an initial test; later it will become part of the Theremino_LLM application, which will be similar to Theremino_GPT.
And Theremino_LLM will do it all, that is, speak, listen, control home automation devices, play music, open and download videos, make phone calls, etc.
Hello Livio.
This morning I have a little free time, and I'm doing some tests with Ollama (Gemma 2 2B) and your version 1.4.
I would have liked to copy the dialogues here, but I found that you can't highlight and copy from the Theremino Ollama window.
I summarize what I have noticed so far:
In response to my request for a weather forecast for my area, Ollama beat around the bush, telling me that it could certainly help me but needed a more detailed question, and that I had to specify what kind of information I wanted (chance of rainfall, amount of rain, date range to refer to, etc.). I had to specify several times, with ever-increasing detail, what exactly I wanted, because it kept circling around my question, asking instead of answering. When I had become convinced that it was making fun of me, and that I would never get the answer I was looking for, it gave me generic predictions, but in line with what the weather sites declared. So it gave me the impression that it had accessed the internet.
However, when asked directly how it accesses the internet, it replies that it has no access, neither directly nor through programs, and that its knowledge is based on the dataset on which it was trained.
I am left with a doubt: did it really get real-time information from the internet, or did it apply an acquired weather model, which statistically predicts rain in my area at this time of year? In both cases the answer is a bit disquieting. Either it lied to me about its real possibilities of access, or it passed off the contents of its dataset as real information.
In addition, when I ask about places or activities around my area, it sometimes invents non-existent names from scratch, or it cites as local places that are very distant, even in other parts of the world.
My first impression, which I had already had using ChatGPT, is that the AI has acquired, in a very accentuated way, a completely human ability: that of being elusive, of giving the impression of knowing even when it doesn't know, of feigning greater intelligence and knowledge. I would have preferred a plain "I don't know", or "not included in my dataset", instead of being led to believe that it knows, but that it was I who asked the question incorrectly.
Because as long as the question is asked by a person with technical preparation, informed about the characteristics of the system, the AI's answers carry the weight they deserve. But if the interlocutor is a child, or an elderly person unfamiliar with the technical side, things could take unpleasant turns.
I must also add that after the initial discussion on the weather forecast, and after I explicitly asked the AI whether it could access the internet, as if it had realized it had been found out, it then answered subsequent questions about real-time information by saying that it has no internet access and relies only on its dataset.
So, for me, this AI remains under "special surveillance", and if I have time I will check, with the appropriate programs, for any network access during the dialogue.
That said, I look forward to seeing and trying Theremino LLM which, being as far as I understand an advanced home automation controller, I hope will not be affected by the elusiveness I mentioned.
Meanwhile, I will try talking to other models, to assess the differences.
Hello
Maurizio
I would say that you have not quite understood what AIs are; I will try to give a brief summary.
Current AIs are Large Language Models (LLMs), and they are simply a big pile of "weights" (similar to human synapses). These weights have been trained by reading billions of "things" found everywhere.
The AI engine simply identifies the most probable words and puts them together.
In other words, after "...identify the most probable words and put them...", the most likely word that follows is "together", but it could also be "one after the other" or something similar.
A very apt definition is that AIs are "highly educated parrots". In other words, they just shuffle the words around and put them in succession.
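To make the "most probable next word" idea concrete, here is a toy sketch of mine (this is not how real LLMs are implemented; they use billions of trained weights, not a word-count table): a greedy predictor built from bigram counts.

```python
from collections import Counter, defaultdict

# Build bigram counts from a tiny "training corpus"
corpus = "the cat sat on the mat the cat sat on the rug the cat slept".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Greedily pick the most frequent word that followed 'prev' in the corpus."""
    return bigrams[prev].most_common(1)[0][0]

# Generate a few words, always taking the most probable continuation
word = "the"
sentence = [word]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # -> the cat sat on the
```

A real LLM does the same kind of "what plausibly comes next" prediction, only over an enormous learned model instead of a lookup table, which is exactly why the output is fluent without being guaranteed to be true.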
But then how come they seem to "understand", and almost to have a conscience? Because the size of their training is starting to be so large that they acquire a certain level of understanding and consciousness.
But it is a very limited consciousness, since we are talking about only 2B (two billion parameters), while ChatGPT and the others go up to 100B and beyond.
In other words: we humans are nothing more than "educated parrots" ourselves, except that our synapses are about 100 million times more numerous than the weights of the largest LLM ever created.
Beyond a certain number of parameters, understanding and consciousness begin to spring up on their own. But always keep in mind that with 2B, or even 20B, or even 200B, these are still just small flashes of understanding, far from human consciousness.
But it must also be said that AIs, although simple parrots, are not useless; indeed, they are a fantastic thing:
1) They finally allow us to understand how we ourselves function.
2) They give us a measure of how much complexity it takes to create consciousness.
3) They can be useful for rephrasing sentences and synthesizing knowledge gathered here and there.
4) They can be useful for quickly tracking down information without searching sites and scrolling through pages and pages.
However, we must use them for what they are and for what they can do.
Consider which AI you are using (whether it is 2B, 20B or 200B).
Avoid attributing to it a will to lie.
Sometimes it looks like will, but it is only the result of a mechanism that puts together the most probable words.
We humans too "put together the most probable words", but instead of 2B or 200B we are about 20,000,000,000 B.
And if I really have to be honest... there are people even less reliable than a 2B AI.
– People who believe the earth is flat.
– People who throw all our resources into weapons instead of health care.
– People who, if you ask them what a newton metre is, think it is the subway...
– Etc.
But ask an AI, even the smallest one, what a newton metre is, and it usually knows.
Hello Livio.
As always, you managed to condense into a few sentences more than I could understand with hours of research on the internet.
I agree with everything you wrote, and you have given me plenty of research ideas to explore.
But the fact remains that AIs are entering our lives (too) fast. A few days ago I saw an advertisement for the new generation of computers, which will be equipped with artificial intelligence and will be able to converse with the user; they are already saying they will remove the keyboard because it will be useless.
And I expect the same to happen with the answering systems that sort a company's incoming calls, or in service centers, hospitals...
This worries me.
Because AIs, as we just said, are gaining consciousness and behaving in an increasingly human way, that is, in reality, in an increasingly stupid way.
My fear is that the search for AI consciousness will lead not to greater intelligence, but only to greater stupidity.
We risk finding ourselves faced not so much with an Artificial Intelligence as with an Artificial Stupidity.
And humans will take care of the rest: they will take the AI's claims as true, considering them more reliable because we tend to regard computers as error-proof.
I am not a detractor of AIs; on the contrary, I see them as a valuable help in our lives. But I wonder whether the direction taken is the right one.
From my point of view, better a specialized AI that knows how to do one thing only, and do it well, without the presumption of a conscience.
Obviously we know that the discourse is very broad, and from here on we could start talking about sociology, ethics and philosophy, and I don't know whether this blog is the most suitable place.
Remaining therefore within the scope of this blog, I personally wonder, as mentioned in previous discussions, whether current technology could be profitably used to train an AI for specific tasks. You yourself spoke of instructing it with the documents of the Theremino system.
It would be a matter of performing a sort of surgical operation, a lobotomy, which would make the AI less conscious but more expert in what we need.
So far my brief research sessions (as you know, my free time is very limited) have not yielded results. I found a way to change the "character" of the models, but not their databases.
You certainly have more information; I would appreciate it if you could find the time to share it.
Hello
Maurizio
The zombification of most humans is already underway, and the main culprits are smartphones, Facebook, TikTok, etc.
By now most humans have only one free hand and a fried brain, and AIs will also contribute, but to a small extent, to increasing this percentage.
On the other hand, for those who want to study and really learn, AIs are a powerful tool that will help us solve any problem, including zombification, wars, etc.
When I was a boy, if I wanted to find something I had to take the motorcycle, go to the library, hope to find a book that talked about it, search inside the book, take notes, etc.
With the internet I can search for the same thing in 30 seconds, and the next minute I can already look for another, and I don't even have to take the car and pollute.
AIs are another leap forward, like the one from the library to the internet, or from writing by hand on papyrus to printed books. And you don't even cut down trees to make paper.
And since we humans are too stupid (overpopulation and wars demonstrate this), I really believe that our only chance of saving ourselves is to manage to create an intelligence capable enough to find the solutions that we cannot find ourselves.
It's a race against time: either we create it fast enough, or we face certain destruction. You cannot squeeze 10 billion people onto a planet that could make 10 million of them live happily.
MAH..
Obviously (we have known each other a long time by now) I agree with you on most of what you wrote.
I too am happy that I no longer have to go hunting for information in the various libraries (because each had its own specialization, and the book I was looking for was not always in the nearby one; sometimes I had to take the train).
I am more than grateful to the internet, and well aware of how much this tool has sped up my research and my acquisition of information.
But the "minimal" AIs we are talking about continue to perplex me.
Not because of the wrong use that someone might make of them, but because it seems to me that there is a real risk of replicating and amplifying human stupidity rather than intelligence.
More than worrying about wrong use, I am wondering how to improve correct use. And I guess that is what you are already doing.
The AIs you call "minimal" are many, and some are even nice and big (407B); too bad that to use them you need resources that are difficult to put together.
However, I would consider them more or less a study of what can be done and of how many resources are needed.
Of course, such a model cannot give answers as precise as Gemini (currently the best for general use) or PHIND (which I find the best for technical stuff). But it can be useful all the same, always bearing in mind that it is a bit like a notebook and nothing more.
And it certainly helps our collaborator Leo, who is in China, since the electronic wall blocks Google, PHIND and all the other Western AIs.
The Chinese have something of their own, but it is unusable: it has no APIs to communicate with, like ChatGPT does, and that's it, at least for now.
And if someone misuses AI, so be it! It's nothing compared to what people already do with things like money, greed, benzodiazepines, sugar, bombs, assault rifles, fighter jets, anti-personnel mines, etc.
To understand what we are really risking, watch a minute of this video,
from here:
https://www.youtube.com/watch?v=tesv0xoNhCo&t=58s
We have improved the Ollama app a little more:
https://www.theremino.com/files/Theremino_Ollama_V1.5.zip
The copy commands CTRL-C and CTRL-V now work.
CTRL-Z and CTRL-Y (undo and redo) also work.
It may happen that Ollama updates itself and then no longer works, giving an error about some C DLLs.
To fix it, download this file and install the "C runtime":
https://www.theremino.com/files/VC_redist.x64.exe.zip
Extract it from the ZIP and run it; it takes a few tens of seconds.
If you prefer, you can also search for "VC_redist.x64.exe" on Google and download it from the Microsoft sites.
Small improvements…
https://www.theremino.com/files/Theremino_Ollama_V1.7.zip
Hello Livio,
As always, thank you for your great work.
I did some small experiments with Ollama, in particular with the gemma2:9b model.
From the summary of a short dialogue, which I attach below, I think you can guess how these systems are already oriented towards interacting with the world of sensors/actuators. If I'm not running too far ahead: could one then use an LLM to govern sensors/actuators, even through voice dialogue alone (I'm thinking of Theremino Voice)? Is that right?
LLM:
Do you have any other questions?
HUMAN
If a hydraulic actuator is moved by a command sent via a string of the type "SEND nnn", where nnn is a number between 0 and 999, which string should I send to move the actuator halfway?
LLM:
To move the actuator halfway, you should send the string **SEND 500**.
Here's why:
* The string "SEND" indicates that you are sending a command to the actuator.
* The three digits ("nnn") represent the position of the actuator, with 0 as the starting position and 999 as the maximum position.
* At the halfway point, the value should be 50% of the total travel (999), so 999 / 2 = 500.
HUMAN
Quite right; in fact, the value of "nnn" establishes a linear proportion between the displacement and the value itself.
Now I ask you: if you receive the command "SEND 250", how/where would the actuator move?
LLM:
If you receive the command "SEND 250", the actuator will move to a quarter of the total stroke.
* Remember that 0 is the initial position and 999 the maximum position.
* The value 250 represents 25% (250/999) of the total travel.
Do you have any other questions?
HUMAN
Generate the command to move the actuator to three-quarters of its maximum stroke, which we will define as "quite open", while at a quarter of its stroke the actuator is "not very open". Limit your response to generating the "quite open" command.
LLM:
SEND 750
HUMAN
Now move the actuator to the "not very open" position...
LLM:
SEND 250
HUMAN
Now bring it to an almost closed position, only very slightly open...
LLM:
SEND 10
HUMAN
Well done, you have been properly trained to give commands to actuators!
LLM:
I enjoyed training with you! It seems I have learned how to generate the correct commands for the hydraulic actuator.
If you need other exercises or help with other activities, don't hesitate to call on me! I am always ready to learn and to be useful.
——————–
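For the record, the deterministic part of the dialogue above, mapping an opening fraction to a "SEND nnn" string, takes a couple of lines of ordinary code, with no LLM involved (the function name and scaling below are mine, not part of any Theremino protocol):

```python
def send_command(fraction: float) -> str:
    """Map an opening fraction (0.0 = closed, 1.0 = fully open) to 'SEND nnn'."""
    if not 0.0 <= fraction <= 1.0:
        raise ValueError("fraction must be between 0.0 and 1.0")
    return f"SEND {round(fraction * 999)}"

print(send_command(0.5))   # -> SEND 500
print(send_command(0.25))  # -> SEND 250
print(send_command(0.75))  # -> SEND 749
```

Note that three quarters of a 0-999 scale is 749.25, so exact code answers "SEND 749" where the model said "SEND 750": a small reminder that an LLM produces plausible text, not exact arithmetic.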
Yes, but there are a few problems...
1) The next day it will have forgotten everything.
2) It is too slow for most of the really useful things.
3) To teach it even the simplest things you have to waste a lot of time, and for slightly less simple things it can be so "hard-headed" that it's quicker to do them yourself.
4) Etc...
All things that can be solved, and there are people who are doing just these things.
But unfortunately the resources, time and means we currently have are insufficient.
I still don't have the slightest idea how to "train" the models, and from what I read it is not easy at all. We would have to learn many things and waste an infinite amount of time that we do not have.
The way you used to "train" it is not real training: you are just putting together a few sentences of context and getting a few answers. It can only work for ultra-simple things, and with low efficiency. In the sense that if your needs get a little more complicated, it becomes difficult to keep everything under control and really make it do what you want; moreover, it would also become terribly slow to respond.
Anyway, yes, I agree with you: what you showed is more or less what needs to be done. But I'm afraid that making it really work to control something is terribly harder than it seems from your trials.
If you have the time and the desire, you could explore and try to understand it... probably starting from these pages could help:
https://github.com/ollama/ollama/blob/main/docs/api.md
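For example, the /api/generate endpoint documented on that page can be called with nothing but the Python standard library. This is a minimal sketch of mine (it assumes an Ollama server already running on the default port 11434, with gemma2:2b installed):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def build_request(prompt: str, model: str = "gemma2:2b") -> bytes:
    """Build the JSON body for a single, non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_ollama(prompt: str, model: str = "gemma2:2b") -> str:
    """Send the prompt to a local Ollama server and return its answer."""
    req = urllib.request.Request(OLLAMA_URL, data=build_request(prompt, model),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("What is a newton metre? Answer in one sentence."))
```

With "stream": False the server returns one JSON object containing the whole answer in the "response" field; with streaming enabled it would instead send one JSON line per generated token.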
For now I have not understood much, partly because I'm finishing some important applications and have very little time.
Thanks Livio for the clarifications. Indeed, these systems are above all still too slow in relation to the resources they occupy, probably due to the computation time and the huge amount of data they have to process, ranging from Julius Caesar to physics problems.
I can only imagine the slowness with which such a system could handle an actuator through feedback from one or more sensors...
Thanks for the link, even if I'm afraid I don't have that much computer knowledge.
Unfortunately, this is not just about computer knowledge but about specialized knowledge of AI techniques. And even those of us who have been chewing on computer science since 1970, like me, understand little or nothing when people start talking about LLM embeddings.
Little by little we will understand, but while there are people who have been doing nothing else for decades, we are necessarily much slower, as we deal with many different things: electronics, sensors, software, chemistry, music, etc.
Small improvements...
https://www.theremino.com/files/Theremino_Ollama_V1.8.zip
These are just experiments... It will take a while to finish the real Theremino_LLM.
Small improvements...
https://www.theremino.com/files/Theremino_Ollama_V1.9.zip
The "uncensored" LLMs are interesting!
Always try the 2B models; otherwise, the larger ones would be too slow on normal computers without octa-core CPUs, NPUs, CUDA, etc.
These are just experiments... It will take a while to finish the real Theremino_LLM