8086 segmentation: Can I expand from 20 bits to 24 bits while keeping compatibility?

Thread Starter

AnalogDigitalDesigner

Joined Jan 22, 2018
121
Hi friends,

If I modify the Intel 8086 segmentation unit so that it shifts the segment left by 8 bits instead of only 4, would I still be able to run the same programs that were written for the 4-bit shift?

Basically I am designing an 8086 from scratch as a hobby project, and I want to make the address bus 24 bits rather than 20.

Of course the physical addresses will change when the width increases to 24 bits, but what I want to know is whether any of the programs written for the 8086 assumed 20-bit addressing in some way.

I am not sure what could happen! Programs of course use logical segment:offset addressing, but could it be that they use the overlap quirk to their advantage somewhere?

I am hoping to install CP/M, MS-DOS, or MINIX on this system, and I wonder if any of these OSes use the overlapping quirks somewhere.

Because by expanding from 20 bits to 24 bits I will have much less overlap, any address that was referenced assuming the original overlap will not work.

With 20-bit addressing and the 4-bit shift, consecutive segment values start only 16 bytes apart, so a segment overlaps the next one after just 16 physical addresses. With 24-bit addressing and an 8-bit shift, consecutive segments start 256 bytes apart before any overlap. If a program assumed that overlap begins after those 16 physical addresses, it would not work under the new scheme.
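
To make the question concrete, here is roughly how I understand the address calculation in both cases (the segment:offset values below are just made-up examples, not from any real program):

/* Stock 8086: physical = (segment << 4) + offset, truncated to 20 bits.
   Proposed design: physical = (segment << 8) + offset, truncated to 24 bits. */
#include <stdint.h>
#include <stdio.h>

static uint32_t phys_20(uint16_t seg, uint16_t off)   /* stock 8086 */
{
    return (((uint32_t)seg << 4) + off) & 0xFFFFF;     /* wraps at 1 MB */
}

static uint32_t phys_24(uint16_t seg, uint16_t off)   /* my proposed design */
{
    return (((uint32_t)seg << 8) + off) & 0xFFFFFF;    /* wraps at 16 MB */
}

int main(void)
{
    /* Two different segment:offset pairs that alias to the same byte on a real 8086... */
    printf("%05X %05X\n", (unsigned)phys_20(0x1234, 0x0005),
                          (unsigned)phys_20(0x1000, 0x2345));   /* both print 12345 */

    /* ...but land at different physical addresses with the 8-bit shift. */
    printf("%06X %06X\n", (unsigned)phys_24(0x1234, 0x0005),
                          (unsigned)phys_24(0x1000, 0x2345));   /* 123405 vs 102345 */
    return 0;
}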

So the final question is: do programs ever assume overlap of physical addresses, or do they never do this and always use proper addressing?
 

philba

Joined Aug 17, 2017
959
Look at the 80286 for how they kept 8086 compatibility but increased memory address space beyond 1M. Each segment is still restricted to 64K but the segment regs can point anywhere in the physical 16M address space.
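
Roughly, in 286 protected mode the physical address comes from a descriptor-table lookup instead of a plain 4-bit shift. Something like this simplified sketch (just the idea, not the actual descriptor format or microcode):

#include <stdint.h>

/* Very simplified 286-style descriptor: a 24-bit base plus a 16-bit limit. */
typedef struct {
    uint32_t base;    /* 24-bit physical base, can be anywhere in 16 MB */
    uint16_t limit;   /* segment size, at most 64K */
} descriptor_t;

/* The segment register holds a selector; its upper bits index the descriptor table. */
uint32_t phys_286(const descriptor_t *table, uint16_t selector, uint16_t offset)
{
    const descriptor_t *d = &table[selector >> 3];  /* low 3 bits are TI/RPL flags */
    /* the real CPU raises a fault if offset exceeds d->limit */
    return (d->base + offset) & 0xFFFFFF;           /* 24-bit physical address */
}

In real mode the 286 still does the old (segment << 4) + offset calculation, which is how unmodified 8086 programs keep running.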

You could build external MMU circuitry. There were some people doing that prior to the 286 but it was very kludgy and unprotected. An utter hack. Heck, even the 286 was damn kludgy. Lots of little segments? Managing segmented memory? Ugh. Been there, done that, never again. There's a reason why that approach went into the dust bin when the 386 was introduced.

DOS makes all sorts of assumptions about access to the ROM BIOS. It's always there at a specific address. Go back and read about the cloning of the IBM PC to understand what it took to get DOS to run on a new design.

On your final question, in the 286 world overlapping segments were quite possible but too much effort to figure out, so most if not all compilers just punted. I think PL/M-286 tried (thank god that sank beneath the waves, and may god bless John Crawford for guiding Intel into the 32-bit world). The compilers and OSes dumped the management of segments on the programmers, who fled to 32-bit architectures in droves. Messing with segments got complicated very fast - look at the near and far directives and try to imagine building any kind of application that mixed them. That was why RISC (and eventually ARM) gained a lot of traction. If Intel hadn't rolled Motorola with Orange Crush, it would have been a completely different story. I'm sure in most OSes there were assumptions about the interrupt vectors and BIOS ROM space overlap.
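
For instance, with a 16-bit DOS compiler like Turbo C, mixing the two looked roughly like this (the far keyword and MK_FP() are compiler extensions, not standard C - just a sketch of the idea):

#include <dos.h>   /* Turbo C / Borland style header providing MK_FP() */

void demo(void)
{
    char near_buf[16];   /* near object: a 16-bit offset in the default data segment */
    char far *video = (char far *)MK_FP(0xB800, 0x0000);  /* far pointer: explicit segment:offset */

    near_buf[0] = 'A';   /* plain near access */
    video[0]    = 'A';   /* far write into text-mode video RAM at B800:0000 */
    video[1]    = 0x07;  /* attribute byte: light grey on black */
}

And whether a plain char * was near or far depended on the memory model (tiny/small/compact/large), which is exactly where mixing them got hairy.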
 

MrChips

Joined Oct 2, 2009
30,706
Yeah. Memory segmentation was one of the dumbest things Intel did.
It set back computer advancement by about 25 years.
 

philba

Joined Aug 17, 2017
959
To be fair, segmentation was kind of the de facto computing model before the 1980s. Intel was just following the herd on that one. The VAX showed the benefits of a clean 32 bit architecture and Xerox PARC championed RISC. Those were key pillars of modern computing architectures. In fact, it was a VAX 11/780 that Intel bought in like 1980 that Crawford used as a model for the 386. It is a testament to Intel management at the time to allow a compiler developer to be the 386 architect. A lot of the EEs at Intel in those days referred to Software Engineers as "software bunnies".
 

nsaspook

Joined Aug 27, 2009
13,079
To be fair, segmentation was kind of the de facto computing model before the 1980s. Intel was just following the herd on that one. The VAX showed the benefits of a clean 32 bit architecture and Xerox PARC championed RISC. Those were key pillars of modern computing architectures. In fact, it was a VAX 11/780 that Intel bought in like 1980 that Crawford used as a model for the 386. It is a testament to Intel management at the time to allow a compiler developer to be the 386 architect. A lot of the EEs at Intel in those days referred to Software Engineers as "software bunnies".
I had some of the early-stepping 386DX chips in an old Compaq machine. It was a buggy nightmare trying to boot early versions of Linux on it because the chip was riddled with 32-bit processing bugs. I swapped it out with a later stepping and boom, it ran like a top.
 

nsaspook

Joined Aug 27, 2009
13,079
Hmm, sounds like someone foisted an A0 stepping of the chip on you.
Maybe. 386 protected mode was totally broken on the early steppings.
http://www.os2museum.com/wp/deskpro-386-at-30/
The first 32-bit PC operating systems also didn’t take too long to materialize (386 XENIX and other 32-bit UNIX variants), although early Intel 386 chips had major bugs making it very difficult or impossible to run a 32-bit protected-mode OS with paging. And as mentioned above, it took another decade before the mainstream moved to a 32-bit OS.
 

Thread Starter

AnalogDigitalDesigner

Joined Jan 22, 2018
121
Look at the 80286 for how they kept 8086 compatibility but increased memory address space beyond 1M. Each segment is still restricted to 64K but the segment regs can point anywhere in the physical 16M address space.

You could build external MMU circuitry. There were some people doing that prior to the 286 but it was very kludgy and unprotected. An utter hack. Heck, even the 286 was damn kludgy. Lots of little segments? Managing segmented memory? Ugh. Been there, done that, never again. There's a reason why that approach went into the dust bin when the 386 was introduced.

DOS makes all sorts of assumptions about access to the ROM BIOS. It's always there at a specific address. Go back and read about the cloning of the IBM PC to understand what it took to get DOS to run on a new design.

On your final question, in the 286 world overlapping segments were quite possible but too much effort to figure out, so most if not all compilers just punted. I think PL/M-286 tried (thank god that sank beneath the waves, and may god bless John Crawford for guiding Intel into the 32-bit world). The compilers and OSes dumped the management of segments on the programmers, who fled to 32-bit architectures in droves. Messing with segments got complicated very fast - look at the near and far directives and try to imagine building any kind of application that mixed them. That was why RISC (and eventually ARM) gained a lot of traction. If Intel hadn't rolled Motorola with Orange Crush, it would have been a completely different story. I'm sure in most OSes there were assumptions about the interrupt vectors and BIOS ROM space overlap.


Bummer.
 