Whatever will we do if we can’t count on artificial “intelligence” with certainty? Will we need to go back to ordinary intelligence? What is the world coming to?
Public service announcement: ChatGPT is *not intelligent*. It doesn’t draw logical conclusions based on ideas; it generates text that a human would be statistically likely to produce, by processing a huge database of human-written text, tweaked by human tuning of sample outputs.
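To make the “statistically likely” point concrete, here’s a toy sketch in plain Python. The probability table is invented purely for illustration and has nothing to do with ChatGPT’s real model; the point is that the sampler weighs continuations only by how often they appear, never by whether they’re true.

```python
# Toy illustration (invented numbers, not anything from OpenAI): a next-word
# sampler that weighs continuations only by how often they appeared in its
# made-up training counts -- truth never enters the picture.
import random

# Hypothetical frequencies for words that followed "rectifiers are"
# in an imaginary corpus.
next_word_probs = {
    "polarized": 0.6,        # common phrasing, and true for diode rectifiers
    "not polarized": 0.3,    # also appears in the corpus, but false
    "obsolete": 0.1,
}

def sample_next(probs):
    """Pick a continuation weighted by frequency, not by correctness."""
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words], k=1)[0]

print("Rectifiers are", sample_next(next_word_probs))
# Roughly 3 times in 10 this prints the false claim, and nothing in the
# sampling step ever checks it against a datasheet.
```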
ChatGPT is only a replacement for a junior engineer… They perform a search based on what you ask, come back with a minimal half-working solution combined with a ton of unearned confidence, and you have to verify and hand-hold everything. In the end, you’re just better off doing it all yourself.
But, because it does well conversationally and acts so sure of itself, many people are fooled into thinking that it’s really smart. Not much different than dealing with a know-it-all type person.
Wow. I’m a bit speechless.
The current generation of chatbots is fine for entertainment but should NOT be relied on for accurate information about anything.
No tool is your friend if you’re not using it correctly.
Everyone knows rectifiers aren’t polarized, duh.
Ok, for those who did not dig into this chatbot trend, here’s the single thing to be aware of:
those AIs are not trained to filter for truth; they are trained to say something *plausible*.
Which means that the answer to any question is very, very, very likely to be false.
ChatGPT is just unguided plagiarism on a mass scale. It plagiarizes misinformation a lot.
It’s your uncle who knows how to sound credible but doesn’t know anything beyond a superficial level.
Hot tip: don’t use a gimmick system to source information that is clearly published and well documented by the component manufacturer.
Selenium rectifiers are not your friend either; they make quite the terrible smell when they inevitably fail.
Well, it’s half right.
Bing AI provides sources, which has been pretty helpful for verifying answers.
You have to verify everything ChatGPT says.
Erm, then how do they rectify anything?