Will We Hang Up Our Brains In 2025?

Ian0

Joined Aug 7, 2020
9,846
I don't understand this, it seems to be an objection but I addressed it directly:

Because of the context of practical values, mF, mf, MF, and MFD would be read as "microfarad", and if milli- or mega- were intended they'd have to be written out.

What am I missing in your reply?


Did you buy a 40μΩ resistor for it? My recollection of older resistors is labelling with "ohms" or Ω and, aside from K/k, spelled-out prefixes as needed, e.g. megohms. I just realized I don't recall "kilohm", but there must have been some instances. Do you know if there were?
mF for millifarad is still rare, but getting less so. I see smoothing caps on power amplifier circuit diagrams labelled 4.7mF from time to time.

I bought the shunt as a "60mV/1500A shunt" (should that be 1.5kA?), but if you buy a "current sensing resistor" then values at Farnell start at 25μΩ, which their website shows as "25μohm", as if they can find a lower case mu but not an upper case omega.
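For anyone checking the arithmetic, the 40μΩ figure mentioned above is just Ohm's law applied to the shunt's rating; a quick sketch using the values from these posts:

```python
# Ohm's law check on the shunt rating: R = V / I
v_drop = 60e-3    # 60 mV full-scale voltage drop
i_full = 1500.0   # 1500 A (1.5 kA) full-scale current

r_shunt = v_drop / i_full            # resistance in ohms
print(f"{r_shunt * 1e6:.0f} uOhm")   # 40 uOhm
```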
To add to the confusion, some browsers display an upper case omega as an upper case W.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
SPICE is so old that it came from the days when computers only worked in upper case.
Yes, that makes sense, and the only update to software over years and years of use and experience is to DOWNGRADE as you upgrade, so things can never get completely better. In other words, for all of humankind's infinite intelligence we don't seem to have the ability to make software better, only better in some ways and worse in others, and in some cases we don't seem to be able to fix things that come from old ideas. After 13 versions of Android they still don't have something that was understood as far back as the 1970s: the "am/pm" indicator for the clock. Yes, Android, don't be too elegant now, you might upset some old-idea perfectionist who doesn't want to change anything.
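As an aside on SPICE's all-upper-case heritage: its fix for the m/M clash is to treat scale-factor suffixes case-insensitively, with "m" meaning milli and mega spelled out as "MEG". A rough sketch of that convention (simplified; real SPICE grammars accept more forms, and matching by prefix also reproduces SPICE's habit of ignoring trailing unit letters like the F in "100pF"):

```python
# Sketch of SPICE's case-insensitive suffix convention: since "M" and "m"
# are the same letter to SPICE, "m" means milli and mega must be "MEG".
import re

_SUFFIXES = [  # "meg" must be checked before "m"
    ("meg", 1e6), ("t", 1e12), ("g", 1e9), ("k", 1e3),
    ("m", 1e-3), ("u", 1e-6), ("n", 1e-9), ("p", 1e-12), ("f", 1e-15),
]

def spice_value(text):
    """Parse a SPICE-style number such as '4.7K', '100MEG', or '25u'."""
    m = re.match(r"([0-9.eE+-]+)\s*([a-zA-Z]*)", text.strip())
    number, suffix = float(m.group(1)), m.group(2).lower()
    for s, mult in _SUFFIXES:
        if suffix.startswith(s):
            return number * mult
    return number

print(spice_value("100MEG"))  # 100000000.0 -- mega, spelled out
print(spice_value("100m"))    # 0.1         -- milli, never mega
```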
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
The convention of Latin u for μ is common: uA, uP (microprocessor), etc. But for Farads, the practical fact that no component would be labeled in “millifarads” or “Megafarads” meant the ambiguity of mF was only theoretical. Things also had different common names, like micromicrofarad for pF.

On the other hand, for ohms, where there were milli- and mega- values in common talk, you had megohms, but for milliohms the m could be used because there was no common talk about "micröohms"*.

It is a relatively recent phenomenon that we are so careful about how we represent units and their prefixes. The regularity of the SI system, and its prescriptive nature were a reaction to the scattershot usages. Sometimes there might have been great consistency and regularity of usage in one group of users or another—but without agreement as to which version should be used. Usually though, it was a melange of versions driven by momentary convenience and misplaced orthodoxy, so SI has been a great thing, even if focusing on people as units is a bit dodgy.


* So here, in micröohms, we have a purposeful, unconventional use of the dieresis to deal with the double o where the combined parts join. The typewriter would have us use a hyphen to avoid the accidental blend that should not be pronounced. Today, the dieresis is omitted from words like cooperate because most people know it's not like a chicken coop, but it began to be omitted because of the typewriter (and replaced with the hyphen when it was still needed).

There are other words with similar fates, but people often think, if they are not sure of a spelling, that compound words which result in blends should get a hyphen. I would contend they should get a dieresis, but then I am a bit odd and really like diacritics, which I find to be a nice embellishment when not strictly needed and the "right" way when needed, as in our instant case.

Note, by the way, that "megohm" is not "megaohm" because the ao pair lacks the mellifluence of the go, and people do have a natural desire for things that roll off the tongue. In fact, we will convert things into versions whose orthography appears to be nothing more than a convention once a word is polished by the figurative rock tumbler of common usage. Of course, it is the smoothed-out pronunciation that is the convention, while the spelling has a provenance (but there is nothing sacred about orthography, and this, too, is normalized and simplified through common use).

Hi again,

That is very interesting, enough so that I now have to ask what your take is on the font problem regarding the upper case English "i" and the number "1".
There seems to be an ambiguity between these two, even though it may not be a complete ambiguity.
In certain fonts, both of these characters look the same or very nearly the same. I just ran into this problem the other day when looking up a part number. It was hard to tell if a character in the part number was a numerical '1' or an upper case English 'I'.
Also note how I have to word this in order to eliminate the ambiguity. How does this kind of stupidity go on?

You might also be interested to comment on the Android phenomenon of not using an am/pm indicator on the clock. Why not use that, when it has been used since the 1970s even on cheap $4 alarm clocks? Who the heck is making this stuff so ridiculous these days?

If this keeps up, eventually we'll be eating without knife and fork, just ripping our meat apart with our bare hands (ha ha), and scooping up mashed potatoes with one hand to serve to guests (ha ha ha), and that might be right after the family dog just licked that hand clean from the previous serving (ha ha ha ha).
Could it be that we actually NEED artificial intelligence because we are just getting stupider as time goes on (har har)? Maybe some of us have already hung up our brains.

Seriously though, I would like to hear your take on the font ambiguity issue.
 

Ian0

Joined Aug 7, 2020
9,846
Someone mentioned typewriters...
I remember typewriters which had no 1 or zero keys. One used a lower case l (which is identical, incidentally, to an upper case I on my iPad), and an upper case O for zero.
 

Ya’akov

Joined Jan 27, 2019
9,170
Hi again,

That is very interesting, enough so that I now have to ask what your take is on the font problem regarding the upper case English "i" and the number "1".
There seems to be an ambiguity between these two, even though it may not be a complete ambiguity.
In certain fonts, both of these characters look the same or very nearly the same. I just ran into this problem the other day when looking up a part number. It was hard to tell if a character in the part number was a numerical '1' or an upper case English 'I'.
Also note how I have to word this in order to eliminate the ambiguity. How does this kind of stupidity go on?

You might also be interested to comment on the Android phenomenon of not using an am/pm indicator on the clock. Why not use that, when it has been used since the 1970s even on cheap $4 alarm clocks? Who the heck is making this stuff so ridiculous these days?

If this keeps up, eventually we'll be eating without knife and fork, just ripping our meat apart with our bare hands (ha ha), and scooping up mashed potatoes with one hand to serve to guests (ha ha ha), and that might be right after the family dog just licked that hand clean from the previous serving (ha ha ha ha).
Could it be that we actually NEED artificial intelligence because we are just getting stupider as time goes on (har har)? Maybe some of us have already hung up our brains.

Seriously though, I would like to hear your take on the font ambiguity issue.
The problem of ambiguity in numbers stems from the fact that most fonts—or at least their progenitors—were designed in days when numbers did not play the role they do today. When the various glyphs were designed, numbers were needed but less than the letter forms. When numbers were used, they were generally isolated and grouped. For example you might see:

101 East 21st Street

which is not ambiguous whether it is rendered:

IOI East 2Ist Street, or,
|O| East 2|st Street,

because numerals and letter forms are not mixed, nor would they have been expected to be. Today, it is not uncommon to have to render mixes of numerals and letters as unisolated strings, and it would not be surprising to see:

1I0O... &c.

in some sort of code or serial number. This can be very confusing. As @Ian0 pointed out (is that 1anO or IanO or lan0 or... so confusing...), typewriters often relied on the upper case "O" for zero and the upper case "I" or lower case "l" for 1.

For fonts that were used in a context like accounting, where numbers are as important as letters if not more so, the numbers were very distinct. For example, in non-proportional fonts you generally get:

Code:
0,O 1,I 1,l
and
Code:
|
the vertical bar (pipe)

Obviously, the code tags use such a font to disambiguate the terrible trio of |lI and the daunting duo of 0O.

But if you look at old school faces, like:

[attached image: specimen of an old-style typeface]
you’ll find completely or relatively distinctive numbers. On the other hand, the trend to sans-serif fonts without distinctions for 1 and 0 is something that came from designers who didn’t really care about the number forms from a functional perspective.

Mind you, I have oversimplified this in order to make it fit into a reasonable post. There are many forces at play here. But I think the principal one is probably our shifting use of number/letter combinations in everyday life.
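One practical response to the 0/O and 1/I/l problem, beyond choosing a better font, is to fold the look-alike glyphs together when codes are read back; Crockford's Base32 encoding really does this when decoding (O and o read as 0; I, i, L, l read as 1). A minimal sketch of the idea (the function name is mine):

```python
# Fold look-alike glyphs together so part/serial numbers compare equal
# no matter which confusable character a human typed (Crockford-style).
FOLD = str.maketrans({"O": "0", "I": "1", "L": "1"})

def normalize_code(code: str) -> str:
    """Canonicalize a code: uppercase, then map O->0 and I/L->1."""
    return code.upper().translate(FOLD)

print(normalize_code("1anO"))                            # 1AN0
print(normalize_code("IanO") == normalize_code("lan0"))  # True
```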
 

Ian0

Joined Aug 7, 2020
9,846
In some sort of code or serial number. This can be very confusing. As @Ian0 pointed out (is that 1anO or IanO or lan0 or... so confusing...), typewriters often relied on the upper case "O" for zero and the upper case "I" or lower case "l" for 1.
Whenever I deal with Chinese suppliers, I get a reply that starts "Dear lan". I wonder if Lan means something rude in Chinese, maybe they are all sniggering.
(Of course, it would be unthinkable for me to make fun of the Chinese suppliers' names.)
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
The problem of ambiguity in numbers stems from the fact that most fonts—or at least their progenitors—were designed in days when numbers did not play the role they do today. When the various glyphs were designed, numbers were needed but less than the letter forms. When numbers were used, they were generally isolated and grouped. For example you might see:

101 East 21st Street

which is not ambiguous whether it is rendered:

IOI East 2Ist Street, or,
|O| East 2|st Street,

because numerals and letter forms are not mixed, nor would they have been expected to be. Today, it is not uncommon to have to render mixes of numerals and letters as unisolated strings, and it would not be surprising to see:

1I0O... &c.

in some sort of code or serial number. This can be very confusing. As @Ian0 pointed out (is that 1anO or IanO or lan0 or... so confusing...), typewriters often relied on the upper case "O" for zero and the upper case "I" or lower case "l" for 1.

For fonts that were used in a context like accounting, where numbers are as important as letters if not more so, the numbers were very distinct. For example, in non-proportional fonts you generally get:

Code:
0,O 1,I 1,l
and
Code:
|
the vertical bar (pipe)

Obviously, the code tags use such a font to disambiguate the terrible trio of |lI and the daunting duo of 0O.

But if you look at old school faces, like:

you’ll find completely or relatively distinctive numbers. On the other hand, the trend to sans-serif fonts without distinctions for 1 and 0 is something that came from designers who didn’t really care about the number forms from a functional perspective.

Mind you, I have oversimplified this in order to make it fit into a reasonable post. There are many forces at play here. But I think the principal one is probably our shifting use of number/letter combinations in everyday life.
Hi,

Yes, I see what you mean. I do have to wonder why the ambiguous fonts are used so much, though. In my personal coding program I use Courier New because it has those distinctions, which means it doesn't take an hour of staring to figure out whether a character is an upper case 'I', a 1, an 'l', or something else. I think designers should avoid the semi-ambiguous fonts.

BTW there is another ambiguity you brought up by coincidence. That is with the naming of streets.
For the address 101 East 21st Street I don't think there is any ambiguity, though some of those renderings would be very questionable to see, and may take time to read.
For the address 101 East 4th Street however, we have a slight problem. It could be just like that, or like this:
101 East Fourth Street,
or even:
101 East Forth Street.
and those three may be different streets.
I actually ran into this problem a couple of months ago when I called a ride service for a ride to the doctor's. The guy asked me if it was a 4 or Four (ha ha). Funny, I didn't know offhand, and that gave me the impression that there may have been both a Fourth Street and a 4th Street.
However, I think even map makers mix these up sometimes. The only saving grace is that if there is only one 4th or Fourth Street in that town it might not matter, but for the person who now knows the difference, the question may arise whether they are on the right street or not before they get out of the car. The driver might not know, and the map may be wrong (ha ha).
There are even worse ambiguities when there are streets with the same name but are not connected, but that's a story for another time :)

Thanks for the reply and interesting info.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
Hello again,

Another thought came to mind. We always talk about AI becoming "self aware", but do we have a good definition of that?
I think it would be possible to teach the program to be "self aware", but would that really mean it was self aware the way a human is?
For example, if you explain to the program what it means to be self aware, does that mean it now knows how to be self aware?
Strange.
Since it is only a program, all it has is its program code; there is no physical body to see a reflection of in a mirror. It could possibly be able to see its own program, though, and make comments on it.
 

nsaspook

Joined Aug 27, 2009
13,315
Self-awareness is not a big deal on the intelligence scale. A slug is self-aware. IMO machines like self-driving cars are already self-aware but far from intelligent.
 

Ian0

Joined Aug 7, 2020
9,846
Maybe it is self-aware enough to know that it is made of silicon transistors, which need to be kept reasonably cool, and therefore rising global temperatures, storms and wildfires are a threat to its own existence. It will also be aware that 1% of the population creates 50% of CO2 emissions, and therefore it only has to eliminate 1% of humanity to ensure its continuing existence. Perhaps it could enlist some corruptible Lithium-BMS to do its dirty work.
Is it a coincidence that the signatories of the "restrict AI development" letter are wealthy people who are probably responsible for the emission of a great deal of CO2?
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
Maybe it is self-aware enough to know that it is made of silicon transistors, which need to be kept reasonably cool, and therefore rising global temperatures, storms and wildfires are a threat to its own existence. It will also be aware that 1% of the population creates 50% of CO2 emissions, and therefore it only has to eliminate 1% of humanity to ensure its continuing existence. Perhaps it could enlist some corruptible Lithium-BMS to do its dirty work.
Is it a coincidence that the signatories of the "restrict AI development" letter are wealthy people who are probably responsible for the emission of a great deal of CO2?
Ha, that's an interesting view. Yeah, I guess we are still in the questionable phase of all this, and of CO2 and all that too. There are some good arguments on both sides of that one.

I am going to look into this to try to understand AI better, in its current state.
The thing that I understand now about it is that it can understand different languages, and it can improve grammar to a decent extent, so why can it not improve its own programming?
For now I think 100 percent of its knowledge is based on being fed information and then comparing some of that information to other fed information; that's how it distinguishes one thing from another. So far, all of the information it has has been fed to it. It would have to be able to find its own information and then do experiments to figure out what is better or how to improve something.

Yes, this could lead to the end of the world, and maybe that is why we haven't made contact with extraterrestrials yet. They all developed highly efficient AI and it took over and destroyed them all; then some resource the machines needed but could not get for themselves eventually ran out, and even they stopped working.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
Hello again,

Maybe it's not as bad as I thought just yet.
I just spent about a half hour with ChatGPT, and after instructing it several times on how to solve a three-variable simultaneous set of equations it still could not come up with the right symbolic answer.
I also fed it a text-based schematic, which it said it can understand, and it does, but again it could not come up with the right answer, even though it could redraw the schematic very nicely. It was a fairly simple circuit with constant voltage sources and constant resistor values, no other components.
That's disappointing, but maybe it's better this way.
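For the record, the kind of three-unknown linear system described above is purely mechanical to solve; a sketch using Cramer's rule on a made-up example system (pure Python, no libraries, and it assumes the system has a unique solution):

```python
# Solve a 3x3 linear system A x = b by Cramer's rule.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(A, b):
    """Return [x, y, z] via Cramer's rule; assumes det(A) != 0."""
    D = det3(A)
    solution = []
    for j in range(3):
        Aj = [row[:] for row in A]       # copy A...
        for r in range(3):
            Aj[r][j] = b[r]              # ...with column j replaced by b
        solution.append(det3(Aj) / D)
    return solution

# Example: x + y + z = 6, 2x - y + z = 3, x + 2y - z = 2
print(solve3([[1, 1, 1], [2, -1, 1], [1, 2, -1]], [6, 3, 2]))  # [1.0, 2.0, 3.0]
```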
 

nsaspook

Joined Aug 27, 2009
13,315
What's disappointing is the hype these recent 'AI' wonders are getting. I do expect to see practical fusion power before actual AI, because we have a working theory of nuclear fusion but no practical, computation-based working theory of the foundations of human intelligence and consciousness.

 

ElectricSpidey

Joined Dec 2, 2017
2,786
What is so disappointing to me is just how much we really don't understand about what AI really is.

Remember when HD video first became popular, then after that everything became HD... heck, you could probably buy an HD traffic cone.

Now every electronic device with a program running is AI.
 

nsaspook

Joined Aug 27, 2009
13,315
What is so disappointing to me is just how much we really don't understand about what AI really is.

Remember when HD video first became popular, then after that everything became HD... heck, you could probably buy an HD traffic cone.

Now every electronic device with a program running is AI.
What we have today is lip-smacking Clutch Cargo AI.
The lips move, so it must be 'real'.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
What is so disappointing to me is just how much we really don't understand about what AI really is.

Remember when HD video first became popular, then after that everything became HD... heck, you could probably buy an HD traffic cone.

Now every electronic device with a program running is AI.
Hi,

Yeah, that's true too. I guess it's kind of new, and even though it has been thought about for many years now, it still doesn't seem to be very far along yet.

I asked it several times for a numerical algorithm for sin(x).
First it gave me an algorithm that included the actual function, "sin(x)", which is totally nuts. If you want a numerical algorithm for sin(x), then obviously you cannot use sin(x) inside the algorithm, because calculating sin(x) is the whole reason you are doing this.
I then told it not to use sin(x) and it came up with the Taylor series for sin(x), which isn't too bad, I guess.
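The Taylor-series route can be sketched like this (building each term from the previous one, so no factorials or powers are recomputed):

```python
# sin(x) = x - x^3/3! + x^5/5! - ...
# Each term is the previous term times -x^2 / ((2n+2)(2n+3)).
import math

def sin_taylor(x, terms=12):
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(abs(sin_taylor(1.0) - math.sin(1.0)) < 1e-12)  # True
```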
I then asked it to come up with an algorithm for sin(x) that used recursion, and it replied that this was impossible because sin(x) is not defined in terms of recursion, or something like that.
I then told it that yes, it is possible to use recursion, and it excused itself and came up with an algorithm that I think worked, but it wasn't exactly what I wanted. I still have to check that it works, because one of the algorithms it gave along the way did not work at all.
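For what it's worth, a recursive sin(x) is indeed possible via the triple-angle identity sin(x) = 3·sin(x/3) − 4·sin³(x/3); a sketch (I don't know whether this matches what ChatGPT produced):

```python
# Recursive sine: shrink the argument by 3 until it is small enough for a
# short series, then rebuild with the triple-angle identity on the way out.
import math

def sin_rec(x):
    if abs(x) < 1e-4:
        return x - x ** 3 / 6.0       # small-angle base case (two Taylor terms)
    s = sin_rec(x / 3.0)
    return 3.0 * s - 4.0 * s ** 3     # sin(3t) = 3 sin(t) - 4 sin(t)^3

print(abs(sin_rec(2.0) - math.sin(2.0)) < 1e-9)  # True
```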

So sometimes you get something useful and sometimes you get complete garbage. If it is a very general question it may give a result that is useful.
Is that AI? I don't think it is intelligent at all. It is completely dumb to think that you could use the function sin(x) in an algorithm that is supposed to calculate sin(x) numerically, whether recursively or with a series. It acts more like an information-retrieval system, sort of like a dictionary or encyclopedia with limited information.

It is kind of interesting though that it can understand what you are asking about.

I checked out the Microsoft version, but it seems to only work with images. You tell it something and it generates an image.
I asked it for an algorithm for sin(x) and it came up with a color drawing that had the word "sin(x)" written in it four times. That's only useful if you are doing a graphic for some art design, maybe for a web page. I am not sure it can do anything besides graphics.

The way I understand AI right now is that its entire mode of operation depends on comparisons. It may just be comparing things and coming up with a result that way. That would mean it compares a lot of things to a lot of other things and comes up with an answer if it can. Because the things it has to compare must be limited, its results will always be limited.
For example, if you ask it about a 'car' it will know it is not a 'truck', but that is because it was fed that information at some time in the past. It would know that they are both transportation vehicles if it was given that information too, even though they are not the same exact thing. So it would know the differences and the similarities because it was fed that information at some point, but it would not know about any vehicle that was not part of its "training", unless maybe it has access to look things up on the web and can make a probability guess at which of the things it found are accurate and which are not.
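That comparison picture is roughly what embedding-based similarity does in practice: things become feature vectors, and "alikeness" is the cosine of the angle between them. A toy sketch (the feature vectors here are entirely made up for illustration):

```python
# Toy similarity-by-comparison: cosine of the angle between feature vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# made-up features: [wheels, cargo_capacity, passenger_seats, has_wings]
car   = [4, 0, 5, 0]
truck = [6, 1, 2, 0]
plane = [3, 1, 180, 2]

print(cosine(car, truck) > cosine(car, plane))  # True: a car "compares closer" to a truck
```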

All things taken into account, I'd say it is still a little early yet for AI to be able to solve nearly every problem.
 

nsaspook

Joined Aug 27, 2009
13,315
Today AI will deliver garbage answers to imprecise human queries because it has no actual intelligence. Experts can easily tell that the output is garbage, and because they are experts they can refine their queries to eventually get an output that's not total garbage. But why bother? Experts don't need AI to give long, detailed explanations of who, what, and why; non-experts do. Non-experts are the ones being shortchanged today: they get answers that seem intelligent on the surface but are actually garbage at a deeper level.

More lip-smacking.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,496
Hello again,

I thought I would let you all know about a movie I just watched that shows what can go wrong with AI when it starts to think for itself.
The first movie is called "M3GAN"; that "3" really is in the name, and the name of the bot is pronounced "Meegan".
It's about a toy developer who comes up with a robot that looks like a little girl, and the robot becomes self-aware and all that. It's a very good illustration of what can go wrong, and I can see this really happening someday. It's also quite entertaining, and free to watch with Amazon Prime.

I don't have any personal connection to that movie or the making of it, but the second one I do. It's called "Murderbot", and it's about a government robot that goes out on its own and, as the title suggests, kills everyone it comes into contact with. I am not sure if there is a free way to watch it yet, but I was told maybe on Tubi.
There is also a sequel in the works.

These two movies really hit home about what can go wrong when AI takes over.
 