Origin Of The Hexadecimal Numbers

Thread Starter

Glenn Holland

Joined Dec 26, 2014
703
I've been looking for an explanation of how and why hexadecimal numbers came into existence.

I can understand the binary representation for 0 through 9, but why are the remaining numbers represented by A, B, C, D, E, and F?

Hex is frequently used for displaying event codes in control systems and for indicating computer faults. My theory is that hex numbers are easier to show on a 7-segment display, and that they also require only a single digit position when the count reaches 10 or more.
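For illustration, here is a minimal C sketch of that idea; the gfedcba bit order and the segment patterns are just one common common-cathode convention, not taken from any particular controller:

#include <stdint.h>
#include <stdio.h>

/* One common 7-segment encoding, bit order 0b0gfedcba (common cathode).
   Letters render as A, b, C, d, E, F, so every value 0-15 fits in one digit. */
static const uint8_t seg7[16] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,  /* 0-7      */
    0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71   /* 8-9, A-F */
};

int main(void)
{
    for (int v = 0; v < 16; v++)
        printf("value %2d -> digit %X -> segment pattern 0x%02X\n", v, v, seg7[v]);
    return 0;
}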
 

MaxHeadRoom

Joined Jul 18, 2013
28,619
I believe it is more of a handy and compact way of representing a byte in two characters/digits.
A-F because something had to represent a count up to 15 by a single character.
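For example (a quick C sketch; the byte value 0xA7 is just an arbitrary pick):

#include <stdio.h>

int main(void)
{
    unsigned char b = 0xA7;                /* arbitrary example byte        */
    printf("%02X\n", b);                   /* two hex digits: A7            */
    for (int i = 7; i >= 0; i--)           /* eight binary digits: 10100111 */
        putchar((b >> i) & 1 ? '1' : '0');
    putchar('\n');
    return 0;
}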
Max.
 

Thread Starter

Glenn Holland

Joined Dec 26, 2014
703
Dual hex displays are used for fault codes in many industrial controls, and I suspected that representing a number greater than 9 with only one digit was the idea.
 

MaxHeadRoom

Joined Jul 18, 2013
28,619
I personally don't believe that was the origin, just a by-product. It has been used as shorthand in assembly coding from the beginning, so it probably preceded the display itself.
Just as Boolean arithmetic preceded the computer.
Max.
 

cmartinez

Joined Jan 17, 2007
8,220
I've been looking for an explanation of how and why hexadecimal numbers came into existence.

I can understand the binary representation for 0 through 9, but why are the remaining numbers represented by A, B, C, D, E, and F?

Hex is frequently used for displaying event codes in control systems and for indicating computer faults. My theory is that hex numbers are easier to show on a 7-segment display, and that they also require only a single digit position when the count reaches 10 or more.
There are several reasons... among them, one of the first is that all modern computing is based on the binary system, which is built around powers (exponents) of two. The very first processors were built around a 4-bit architecture, and the next logical step would be twice that: 8 bits. Then the next logical steps would be 16, 32, 64, etc... today's most advanced commercial processors can work on 256 bits per operation in their vector units. (There are exceptions that work on 12, 14, 18 and 20 bits, but that's a different story.)
To answer your question about letters, they were chosen because they were the easiest way to represent a number greater than nine with a single digit (different symbols, other than letters, could have been chosen). And since computer technology got stuck for a very long time at 16 bits, by the time 32-bit technology finally arrived everyone was already used to hexadecimal notation.
Say I invented a new notation for the representation of 32-bit numbers... it would mean that I'd need 32 different symbols to represent them in a single digit, so my numbering system could be: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, and V.
So the number 1,000,000 would be written as F4240 (15×16^4 + 4×16^3 + 2×16^2 + 4×16^1 + 0×16^0) in hexadecimal, but in base 32 it would be UGI0 (30×32^3 + 16×32^2 + 18×32^1 + 0×32^0). It really doesn't matter what symbols you choose to represent your system; what matters to the computer is how many simultaneous bits (zeroes and ones) are fed into its processor.
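A small C sketch of that idea, using the 0-9/A-V digit alphabet described above (the function name is just for illustration):

#include <stdio.h>

/* Print n in the given base using the digit alphabet 0-9 then A-V. */
static void print_in_base(unsigned long n, unsigned base)
{
    const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUV";
    char buf[64];
    int i = 0;
    do {
        buf[i++] = digits[n % base];
        n /= base;
    } while (n > 0);
    while (i > 0)
        putchar(buf[--i]);
    putchar('\n');
}

int main(void)
{
    print_in_base(1000000UL, 16);   /* prints F4240 */
    print_in_base(1000000UL, 32);   /* prints UGI0  */
    return 0;
}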
 

takao21203

Joined Apr 28, 2012
3,702
The address bar, where you see forum.allaboutcircuits.com.

You have to replace the URL with it; it needs to start with data, and then you press return.
 

Thread Starter

Glenn Holland

Joined Dec 26, 2014
703
The address bar, where you see forum.allaboutcircuits.com.

You have to replace the URL with it; it needs to start with data, and then you press return.
Actually, I got cramped fingers from holding the mouse button down to copy the entire length of it.

So what site does it take me to?
 

MrChips

Joined Oct 2, 2009
30,712
Computers work with binary numbers, zeros and ones.

It gets very tedious for us humans to write down all those zeros and ones. And it increases the chance of making an error during transcription.

Some computers use 12-bit words.
So instead of writing down 111 110 101 100
It is easier to write 7654 if one were using octal representation, which was commonly used at one point.

Or you could use hexadecimal representation.
So the same binary string 1111 1010 1100 can be written as FAC.

111110101100 = 7654 = FAC

Take your pick. Which is easier to write down, recognize or remember?
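The same comparison as a short C sketch (0xFAC is just the 12-bit example value from above):

#include <stdio.h>

int main(void)
{
    unsigned w = 0xFAC;                    /* the 12-bit example value      */
    for (int i = 11; i >= 0; i--)          /* binary: 111110101100          */
        putchar((w >> i) & 1 ? '1' : '0');
    printf("\n%o\n%X\n", w, w);            /* octal: 7654, hex: FAC         */
    return 0;
}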
 

crutschow

Joined Mar 14, 2008
34,284
Hexadecimal is a more compact way to write binary numbers that are grouped in byte lengths, as compared to octal or binary notation, and that's its primary reason for being.
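A quick C sketch of why hex lines up with byte boundaries while octal does not (the two byte values are arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned char bytes[2] = { 0xDE, 0xAD };   /* arbitrary example bytes */

    /* In hex, each byte is always exactly two digits, so the grouping stays
       obvious when bytes are written side by side: DE AD                  */
    printf("%02X %02X\n", bytes[0], bytes[1]);

    /* In octal, a byte needs up to three digits (000-377), and the 3-bit
       octal groups don't line up with the 8-bit byte boundary: 336 255    */
    printf("%03o %03o\n", bytes[0], bytes[1]);
    return 0;
}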

A through F were used for the numbers above nine because they are commonly recognized characters (at least for anyone who is familiar with Latin script). New number characters could have been invented for that purpose, of course, but then everyone who used hexadecimal would have had to learn those characters.
 

Papabravo

Joined Feb 24, 2006
21,159
The choices available in Baudot, sixbit, and Hollerith codes were limited to letters, numbers, and a few punctuation characters. We used octal on the IBM 7090, and the DEC PDP-8 and PDP-11. IIRC, hex showed up with the System/360.
 

ian field

Joined Oct 27, 2012
6,536
I believe it is more of a handy and compact way of representing a byte in two characters/digits.
A-F because something had to represent a count up to 15 by a single character.
Max.
AFAICR, the first microprocessor, the 4004, was a 4-bit device, so hex would be the obvious logical choice to display a nibble on a single digit.

Whether hex was in general use before that, I've no idea.
 

cmartinez

Joined Jan 17, 2007
8,220
AFAICR, the first microprocessor, the 4004, was a 4-bit device, so hex would be the obvious logical choice to display a nibble on a single digit.

Whether hex was in general use before that, I've no idea.
Yes... that's my opinion too... everything else fell like dominoes after the nibble... byte, word, dword, qword... I wonder if that system's ever going to change.
 

joeyd999

Joined Jun 6, 2011
5,237
...Say I invented a new notation for the representation of 32-bit numbers... it would mean that I'd need 32 different symbols to represent them in a single digit, so my numbering system could be: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, and V.
So the number 1,000,000 would be written as F4240 (15×16^4 + 4×16^3 + 2×16^2 + 4×16^1 + 0×16^0) in hexadecimal, but in base 32 it would be UGI0 (30×32^3 + 16×32^2 + 18×32^1 + 0×32^0)...
FYI, base 32 != 32 bits.

A 32-bit numbering system would require 2^32 = 4,294,967,296 symbols.
 

cmartinez

Joined Jan 17, 2007
8,220
FYI, base 32 != 32 bits.

A 32-bit numbering system would require 2^32 = 4,294,967,296 symbols.
:eek: ... I stand corrected... and that is why I'm not a university professor... thanks for clarifying ...
it should've read "... notation for the representation of base 32 numbers ..."
 