
Thursday, October 16, 2008

Intel 8086

The 8086[1] is a 16-bit microprocessor chip designed by Intel and introduced to the market in 1978; it gave rise to the x86 architecture. The Intel 8088, released in 1979, was essentially the same chip but with an external 8-bit data bus (allowing the use of cheaper and fewer supporting logic chips[2]), and is notable as the processor used in the original IBM PC.


Background

In 1972, Intel launched the 8008, the first 8-bit microprocessor[3]. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which also proved to be fairly general-purpose. The device needed several additional ICs to produce a functional computer, in part due to its small 18-pin "memory package", which precluded a separate address bus (Intel was primarily a DRAM manufacturer at the time).

Two years later, in 1974, Intel launched the 8080[4], employing the new 40-pin DIL packages originally developed for calculator ICs to enable a separate address bus. Its extended instruction set was source-compatible (though not binary-compatible) with the 8008, and it also included some 16-bit instructions to make programming easier. The 8080, often described as the first truly useful microprocessor, was nonetheless soon replaced by the 8085, which ran from a single 5 V power supply instead of the three voltages required by earlier chips.[5] Other well-known 8-bit microprocessors that emerged during these years were the Motorola 6800 (1974), Microchip PIC16X (1975), MOS Technology 6502 (1975), Zilog Z80 (1976), and Motorola 6809 (1977).

The first x86 design

The 8086 was originally intended as a temporary substitute for the ambitious iAPX 432 project, in an attempt to draw attention away from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and, at the same time, to outdo the successful Z80 (designed by former Intel employees). Both the architecture and the physical chip were therefore developed quickly (in a little more than two years[6]), using the same basic microarchitecture elements and physical implementation techniques as the 8085, released a year earlier, of which the 8086 would also serve as a continuation. Marketed as source compatible, it was designed so that assembly language for the 8085, 8080, or 8008 could be automatically converted into equivalent (suboptimal) 8086 source code, with little or no hand-editing. This was possible because the programming model and instruction set were (loosely) based on the 8085. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly basic 16-bit capabilities of the 8080/8085. New kinds of instructions were added as well, among them self-repeating string operations and instructions to better support nested ALGOL-family languages such as Pascal.
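Those self-repeating operations survive unchanged in today's x86 processors as the REP-prefixed string instructions, which repeat a one-byte operation CX times entirely in hardware. As a rough illustration (GCC/Clang extended inline assembly on an x86 machine; the helper name rep_copy is invented for this sketch, not anything from the 8086 documentation), a block copy using REP MOVSB can be written in C like this:

    /* A minimal sketch of an 8086 self-repeating string operation:
       REP MOVSB copies CX bytes from [SI] to [DI] as one
       hardware-repeated instruction. x86/x86-64 targets only. */
    #include <stddef.h>
    #include <stdio.h>

    static void rep_copy(void *dst, const void *src, size_t n)
    {
        /* "+D" = DI (destination), "+S" = SI (source), "+c" = CX
           (count): exactly the registers the 8086 string
           instructions are hard-wired to use. */
        __asm__ volatile ("rep movsb"
                          : "+D" (dst), "+S" (src), "+c" (n)
                          :
                          : "memory");
    }

    int main(void)
    {
        char buf[16] = { 0 };
        rep_copy(buf, "8086", 5);   /* copies "8086" and its NUL */
        printf("%s\n", buf);        /* prints: 8086 */
        return 0;
    }

On the original chip this replaced an explicit load/store/decrement/branch loop with a single two-byte instruction, which is why the same idiom still appears in modern memcpy implementations.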

The 8086 was sequenced[7] using a mix of random logic and microcode and was implemented in depletion-load nMOS circuitry with approximately 20,000 active transistors (29,000 counting all ROM and PLA sites). It was soon moved to a new, refined nMOS manufacturing process called HMOS (for High-performance MOS) that Intel had originally developed for manufacturing fast static RAM products[8]. This was followed by HMOS-II, HMOS-III, and eventually a CMOS version. The original chip measured 33 mm², and the minimum feature size was 3.2 μm.

The architecture was defined by Stephen P. Morse and Bruce Ravenel. Peter A. Stoll was lead engineer of the development team and William Pohlman the manager. While less known than the 8088 chip, the legacy of the 8086 is enduring; references to it can still be found on most modern computers in the form of the PCI vendor ID, which for all Intel devices is 8086 (hexadecimal). It also lent its last two digits to Intel's later extended versions of the design, such as the 286 and the 386, all of which eventually became known as the x86 family.
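That vendor ID is easy to verify. The following C sketch assumes a Linux system with the usual sysfs layout (the program itself is illustrative, not something from the article): it lists the vendor ID of each PCI device and flags Intel's 0x8086.

    /* List PCI vendor IDs via Linux sysfs; Intel devices report 0x8086. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *d = opendir("/sys/bus/pci/devices");
        if (!d) { perror("opendir"); return 1; }
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            char path[512];
            snprintf(path, sizeof path,
                     "/sys/bus/pci/devices/%s/vendor", e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            char vendor[16] = "";
            if (fgets(vendor, sizeof vendor, f)) {
                vendor[strcspn(vendor, "\n")] = '\0';   /* strip newline */
                printf("%-12s vendor=%s%s\n", e->d_name, vendor,
                       strcmp(vendor, "0x8086") == 0 ? " (Intel)" : "");
            }
            fclose(f);
        }
        closedir(d);
        return 0;
    }

On a typical Intel-based machine, most lines of output will carry the 0x8086 tag, thirty years of hardware generations later.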
