Intel 8080 and 8085

The 8080 was the first true microprocessor; its production began in the first half of 1974. The first version of the 8080 had some flaws, so six months later it was replaced by the 8080A, which in fact is often called simply the 8080. This processor is still being produced and still finds uses. It was repeatedly cloned around the world; in the USSR it carried the designation KP580BM80A. Modern Intel processors for the PC still easily reveal their kinship to this, in some sense, relic product. I have not written code for this processor myself, but being well acquainted with the architecture of the Z80, I will venture some comments.

The 8080 instruction set, like those of other Intel processors for the PC, can hardly be called ideal, but it is universal, quite flexible, and has some very attractive features. The 8080 compared favorably with its competitors, the Motorola 6800 and the MOS Technology 6502, in its large, if somewhat clumsy, set of registers. The 8080 gave the user an 8-bit accumulator; the 16-bit HL register, a semi-accumulator and at the same time a fast index register; a 16-bit stack pointer; and two more 16-bit registers, BC and DE. The BC, DE, and HL registers could also be used as six byte-wide registers. In addition, the 8080 supported an almost full set of status flags: carry, sign, zero, and even parity and auxiliary carry. Some commands from the 8080 instruction set were speed champions for a long time. For example, the XCHG instruction exchanges the contents of the 16-bit DE and HL registers in just 4 clock cycles, which was extremely fast! A number of other instructions, although they did not set such bright records, were also among the best for a long time.
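The pairing just described can be modeled in a few lines. Below is a minimal Python sketch (the `Pair` class and its names are illustrative, not any real emulator's API) of how BC, DE, and HL are visible both as 16-bit pairs and as six 8-bit halves, with XCHG as a plain swap of DE and HL:

```python
# Illustrative model of 8080 register pairs: each pair is two 8-bit
# halves that can also be read and written as one 16-bit word.

class Pair:
    def __init__(self, hi=0, lo=0):
        self.hi, self.lo = hi & 0xFF, lo & 0xFF

    @property
    def word(self):
        return (self.hi << 8) | self.lo

    @word.setter
    def word(self, v):
        self.hi, self.lo = (v >> 8) & 0xFF, v & 0xFF

hl, de = Pair(), Pair()
hl.word = 0x1234                      # like LXI H,1234h
de.word = 0xABCD                      # like LXI D,ABCDh
hl.word, de.word = de.word, hl.word   # XCHG: swap DE and HL
assert (hl.hi, hl.lo) == (0xAB, 0xCD) and de.word == 0x1234
```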

In addition to the 64 KB main memory, the 8080 can use two more address spaces: 256 I/O ports and a separate 64 KB stack space. The successors of the 8080, the 8085 and the Z80, did not inherit the latter feature.

This processor was used in the first 'almost personal computer', the Altair 8800, which became very popular after a magazine publication in early 1975. By the way, in the USSR a similar publication happened only in 1983, and one comparable to it in impact only in 1986.

The first almost PC

Intel's 8080 became the basis for the first mass professional operating system, CP/M, which occupied a dominant position among microcomputers for professional work until the mid-80s.

Now about the shortcomings. The 8080 required three supply voltages: -5, +5, and +12 volts. Working with interrupts was rather clumsy: it required a dedicated controller, and non-maskable interrupts were not supported at all. In general, the 8080 was rather slow compared with the competitors that soon appeared; the 6502 could be up to 3 times faster at the same clock frequency. In the instruction set, the presence of 6 senseless instructions (variants of MOV A,A) is slightly irritating: they could have been left undocumented, saving opcode space for new operations. The instruction for decimal correction can be used only after addition; after subtraction, special code must be used for decimal correction, usually consisting of 4 instructions. There are no non-rotating shifts. There is no instruction to reset the carry flag; instead, you must use some binary bitwise logical operation (AND, OR, XOR), which is theoretically far-fetched and unnatural and in practice complicates work with the carry. This drawback was inherited by the 8085 and the Z80. Here it can only be noted that the 68k architecture has even more artificial drawbacks when working with the carry. Working with the port and stack address spaces requires additional external logic on the 8080; the 8085 and Z80 can use ports directly.
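Since there is no "clear carry" instruction (only STC to set the carry and CMC to complement it), 8080 programmers clear CY with an operation like ORA A or ANA A, which leaves the accumulator unchanged but always resets the carry. A small Python sketch of that idiom (the function is a hand-written model, not real tooling):

```python
# Model of the 8080's ORA A idiom: logical operations always reset CY,
# and OR-ing the accumulator with itself does not change its value,
# so ORA A serves as the de facto "clear carry" instruction.

def ora_a(a, cy):
    """Return (accumulator, carry) after executing ORA A."""
    return (a | a) & 0xFF, 0   # value preserved, carry forced to 0

a, cy = 0x5C, 1
a, cy = ora_a(a, cy)
assert a == 0x5C and cy == 0   # A untouched, CY cleared
```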

However, the architecture of the 8080 embodied, as it turned out, a correct vision of the future, namely of a fact still unclear in the 70s: that processors would become faster than memory. The 8080's BC and DE registers are a prototype of modern caches with manual control rather than general-purpose registers. The 8080 could run at 2 MHz while its competitors could only run at 1 MHz, which narrowed the performance gap between them.

At first, the 8080 was sold at the very high price of $360. This was a kind of reference to the large IBM System/360 computers: Intel seemed to be saying that by buying the 8080, you could get something similar to a very expensive mainframe.

It is hard to call the 8080 a 100% 8-bit processor. Indeed, its ALU is 8 bits wide, but there are many 16-bit instructions that work faster than their 8-bit counterparts, for some instructions there are no 8-bit analogues at all, the XCHG instruction is, both in essence and in timing, 100% 16-bit, and there are real 16-bit registers. Therefore I venture to call the 8080 partially 16-bit. It would be interesting to compute this processor's bit index from the set of its features, but as far as the author knows, no one has yet done such work.

The author of this text does not know why Intel abandoned direct support of 8-bit PCs with its processors. Intel has always been distinguished by the complexity and ambiguity of its policies. Its connection with politics is illustrated, in particular, by the fact that Intel has long had fabs in Israel, and until the end of the 90s this was secret. Intel practically did not try to improve the 8080; only the clock frequency was raised, to 3 MHz. In fact, the 8-bit computer market was handed to Zilog with its Z80 processor, which was related to the 8080, and the Z80 was able to withstand the main competitor, the Terminator 6502, quite successfully. By the end of the 70s Zilog was a company with huge capabilities, with almost unlimited funding from Exxon and even two brand-new fabs; this was really a lot: Motorola, with a billion-dollar business, also had only two chip factories at the time. Interestingly, in the mid-80s, when the importance of the 6502 became rather insignificant, Zilog also rapidly lost its own significance. The 8080 and 8085 were usually used as controllers and as such could be successfully sold at a higher price. The presence of the Z80 allowed Intel to distance itself from the competition among 8-bit processors for computers, where the 6502 strongly drove prices down.

In the USSR and Russia, the domestic clone of the 8080 became the basis of many computers that remained popular until the early 90s: the Radio-86RK, Mikrosha, the multicolor Orion-128, the Vector, and the Corvette. Eventually, cheap and improved ZX Spectrum clones based on the Z80 won the clone wars.

This is a real PC

In early 1976, Intel introduced the 8085 processor, compatible with the 8080 but significantly superior to its predecessor. The -5 and +12 volt supplies became unnecessary, the clock frequency ranged from 3 to a very solid 6 MHz, and the instruction set was extended with very useful instructions: 16-bit subtraction, a 16-bit right shift taking only 7 cycles (very fast), a 16-bit rotate left through the carry flag, loading a 16-bit register with an 8-bit offset (usable with the stack pointer too), writing the contents of HL to the address in DE, and the analogous reading of HL via the address in DE. All of the instructions mentioned, except the right shift, execute in 10 cycles, which is sometimes significantly faster than their counterparts or their emulation on the Z80. A few more instructions and even two new processor status flags were added: an overflow flag and a flag equal to the XOR of the overflow and sign flags. The exact purpose of the second flag, typical for signed arithmetic, became known only in 2013, 37 years after the appearance of the 8085! This flag allows checking the signed "greater than or equal" and "less than" relationships in a single test, but checks of the paired relationships also require an additional check of the zero flag. Many instructions working on byte data were accelerated by 1 clock cycle. This was very significant, as many systems with the 8080 or Z80 used wait states, which, due to the extra cycles on the 8080, could stretch execution time by almost a factor of two. For example, in the aforementioned Vector computer, register-register instructions took 8 cycles, while with an 8085 or Z80 the same instructions would execute in only 4. The XTHL instruction became faster by two cycles and jump instructions by as many as three.
With the new instructions, one can write memory block copy code that runs faster than the Z80's LDI/LDD instructions! The 8085 also usually executes 8080 programs somewhat faster than the Z80 does. However, some instructions, for example subroutine calls, 16-bit increment and decrement, loading of SP, PUSH, and conditional returns, became slower by a cycle.
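The second new flag was just described as the XOR of the overflow and sign flags; under that description, a single test of the flag decides signed "less than" after a subtraction. A Python sketch, assuming 8-bit two's-complement SUB (this models the flag's arithmetic, not an 8085 emulator), checking the property exhaustively:

```python
# Sketch of the 8085's undocumented flag: K = V xor S after a
# subtraction, where V is signed overflow and S is the sign bit.

def k_flag_after_sub(a, b):
    r = (a - b) & 0xFF
    s = (r >> 7) & 1                      # sign flag of the result
    v = (((a ^ b) & (a ^ r)) >> 7) & 1    # signed overflow of a - b
    return v ^ s                          # 1 means a < b, signed

# Exhaustive check: the flag agrees with signed "less than"
# for all 8-bit operand pairs.
for a in range(256):
    for b in range(256):
        sa = a - 256 if a >= 128 else a
        sb = b - 256 if b >= 128 else b
        assert k_flag_after_sub(a, b) == (1 if sa < sb else 0)
```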

The 8085 has a built-in serial I/O port and improved interrupt support: in addition to the external-controller method inherited from the 8080, support for one non-maskable and three maskable interrupts was added, which allows doing without a separate interrupt controller where appropriate. The port and the interrupt management are accessed via the SIM and RIM instructions; only these two of the new instructions were officially documented. However, interrupt handling itself remained as minimalistic as on the 8080: on an interrupt, the processor does not even save the status word; this saving must be written explicitly in the code. As already noted, signed arithmetic on the 8085 remained somewhat incompletely realized, but its realization was more complete than in the Z80. The 8085's 16-bit arithmetic also lacked several very desirable instructions, including addition with carry and subtraction with borrow, which were added in the Z80. On the 8085, when adding, for example, 32-bit integers, you need a conditional branch to account for the carry; this, by the way, also resembles the IBM mainframes.
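That carry-propagation dance can be sketched in Python: the 16-bit add (DAD) produces a carry out, but with no 16-bit add-with-carry the carry from the low halves must be folded in with a conditional branch (here an `if`, standing in for a JNC/INX sequence). Names are illustrative:

```python
# Model of 32-bit addition on an 8085 built from 16-bit adds.  DAD
# produces a carry; lacking a 16-bit add-with-carry, the programmer
# propagates it into the high halves with a conditional branch.

def add32(x, y):
    xl, yl = x & 0xFFFF, y & 0xFFFF
    xh, yh = (x >> 16) & 0xFFFF, (y >> 16) & 0xFFFF
    low = xl + yl                  # first DAD: low halves
    carry = low >> 16              # carry flag out of that DAD
    low &= 0xFFFF
    high = (xh + yh) & 0xFFFF      # second DAD: high halves
    if carry:                      # JNC skip / INX on the 8085
        high = (high + 1) & 0xFFFF
    return (high << 16) | low

assert add32(0x0001FFFF, 0x00000001) == 0x00020000
assert add32(0xFFFFFFFF, 0x00000001) == 0x00000000  # wraps mod 2^32
```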

However, I can only repeat the phrase "for unknown reasons": Intel refused to promote the 8085 as a main processor for PCs. Only in the 80s did some fairly successful 8085-based systems appear. The IBM System/23 Datamaster appeared in 1981; it was a predecessor and almost a competitor of the IBM PC. Then, in 1982, a very fast computer with excellent graphics, the Zenith Z-100, was released, in which the 8085 ran at 5 MHz. In 1983, the Japanese company Kyocera created the very successful Kyotronic 85 laptop, versions of which were also produced by other companies: Tandy as the TRS-80 Model 100, NEC as the PC-8201A, Olivetti as the M-10. In total, perhaps more than 10 million of these computers were sold! In Russia in the early 90s, there were attempts to improve some systems, for example the Vector computer, on the basis of the domestic clone ИM1821BM85A. Surprisingly, the main processor of the Sojourner rover, which reached the surface of Mars in 1997, was an 8085 at 2 MHz! Such success of the 8085 in the 1980s is largely due to the fact that its low-power variant, the 80C85, was ready in early 1979. The aforementioned TRS-80 Model 100, the first real pocket computer, could work up to 20 hours on a single set of batteries! It is possible that, had there been no ARM, the 80C85 would have been actively used in mobile computers in the 90s.

In fact, Intel ceded the field to the Z80 in the 70s. A few years later, in the battle for the 16-bit market, Intel behaved quite differently, starting a lawsuit to ban sales of the V20 and V30 processors in the United States. Interestingly, these processors of the Japanese company NEC could switch into full binary compatibility with the 8080, which made them the fastest processors of the 8080 architecture.

Another Intel secret is its refusal to publish the extended instruction set, which included support for the two new flags. It was first published in an article in 1979, and then by some manufacturers of these processors; however, the published information about the new flags was rather incomplete. What were the reasons for this strange refusal? One can only guess. Could Zilog have been playing the role that AMD may once have played, creating the appearance of competition, while a fully documented 8085 could have brought Zilog down? Or was it about keeping the instruction set closer to that of the 8086, then being designed? The latter seems doubtful: the 8086 was released more than two years after the 8085, and it is hard to believe that its instruction set was already known in 1975. In any case, compatibility with both the 8080 and 8085 on the 8086 is achievable only with the use of a macro processor that sometimes replaces one 8080 or 8085 instruction (POP/PUSH PSW, Jcc, Ccc, Rcc, XTHL, LDHI, LDSI, LHLX, SHLX) with several of its own. Moreover, the two published new 8085 instructions (SIM and RIM) cannot be implemented on the 8086 at all. It is believed that the refusal occurred only because of the difficulty of carrying the new 8085 flags over to 8086 code; indeed, such a transfer, with support for bitwise operations on the processor status word, turns out to be extremely cumbersome. But 7 of the 10 new instructions have no direct relation to the new flags and could have been published without creating any difficulties for 8086 compatibility. We can also assume that Intel was dissatisfied with the implementation of signed arithmetic in the 8085 and decided that it was better to hide it than to prepare the ground for constant criticism. Although in that case, the seven other instructions could still have been published, hiding only the flags and three instructions.

It is especially difficult to explain why Intel did not publish information about the new instructions even after the release of the 8086. Most likely, it was marketing: against the background of the artificially worsened specifications of the 8085, the 8086 looked more spectacular.

I would venture to suggest yet another version. The 8085 was very difficult, almost impossible, to extend into a real 16-bit processor. The 6502, on the contrary, with almost half of its opcodes unused, could easily be extended to 16 bits. Therefore, it was important for Intel to create a trend of switching to 16-bit architectures without binary compatibility with 8-bit ones. By rejecting the new useful functionality of the 8085, Intel seemed to be saying that 8-bit was bad and no longer important and that one should switch to 16-bit. Something similar happened around the 32-bit architecture, when Intel created a false trend by developing the complex and unpromising Intel 8800, a.k.a. the iAPX 432.

The aforementioned 8080 clone was produced in the USSR from 1977; this was record-breakingly fast cloning, just three years after the appearance of the original. Interestingly, the technology used was somewhat different from Intel's. Thanks to the speed of cloning, the USSR could even export these chips; this was the only such case. But the cloning of the 8085 was stuck until the early 90s, although from 1988 the USSR produced its own improved 8080, the KP580BM1, which could be clocked up to 5 MHz and used only 5 volts. This processor, while compatible with the 8080, has many additional features: it can use an additional 64 KB for data, an additional register with the functionality of HL, and several hundred new instructions, including powerful 16-bit ones. Overall, it was significantly superior to both the 8085 and the Z80.

Edited by Richard BN

