
Big chip, bad mojo
(Or, Why Mac OS X on Intel is still a bad idea)

By SoupIsGood Food, (soup@macedition.com), March 18, 2002

The 64-bit system is becoming more and more important in the desktop space, as it allows the system software and applications to use staggering amounts of RAM with ease. This comes in handy for non-linear film and video editing, digital imaging and 3D rendering applications – all places where the Macintosh enjoys a commanding industry presence.


If and when Apple needs to move to a 64-bit architecture, IBM has already figured out the hows and whys of PowerPC on 32-bit and 64-bit platforms. Like the makers of Sparc, MIPS and PA-RISC, IBM has added 64-bit features to its existing RISC architecture, keeping backwards compatibility and making it trivial to add 64-bit features to new software. If Apple moves to an Intel or AMD chip, the path forward is no longer clear, and a wrong choice could mean yet another switch in the near future – a death knell for a company with such a small installed base and developer community as Apple. The prospects of a 64-bit future in the PC world are currently grim.

Intel’s Great White Hope, the underwhelming Itanium, after better than five years in the pipe and two years behind schedule, has sold just over 2500 chips since its May introduction. Sun moves more than twice as many UltraSparc IIIs in a month. What’s worse, IBM bought almost all of those shiny new Itaniums to build a huge, experimental supercomputer running Linux. Not counting IBM’s science project, you’re looking at just a couple hundred chips in the hands of a wildly disinterested industry.

Itanium was supposed to be the end of RISC, the great and bold new way of microprocessor design that would make everything else on the market obsolete. Now it’s a bad joke in the Unix industry, and has everyone in the Wintel workstation/server space whistling past the graveyard. How did such an important new technology fail so spectacularly?

There’s one born every minute

There came a time in a plucky Hewlett-Packard executive’s life when he was called upon to take a gamble. This was in the mid-nineties, at the leading edge of the great dot-com bubble, when Microsoft’s Windows NT was lauded by know-nothing PC experts as being better than the entrenched Unix workstation and server standard. Much mouth music was made about NT on workstation-class systems from NEC running MIPS processors and on Digital Equipment Corporation systems running the mighty Alpha processor. Despite the utter lack of headway these RISC Windows boxes made against regular Unix workstations, our hero decided he had read the weather, and detected the winds of change. His name? Rick Belluzzo – and boy, did he ever take a sucker’s bet.

Hewlett-Packard was enjoying success in the Unix market with its HP9000 line of servers and workstations based on HP’s own homegrown 64-bit PA-RISC processor and Visualize-fx graphics system. These chips rivaled DEC Alpha systems for processing power, and SGI workstations for graphics performance, making them a favorite in the CAD/CAM and engineering fields, and a darling of the enterprise database market. Not content with its success, the big brains behind the PA-RISC had a new idea cooking that promised to be as revolutionary as RISC itself – VLIW.

RISC is fast, because it only uses a small set of general instructions but executes them very, very quickly, and can do many operations at the same time. There are drawbacks, such as certain operations that are very tricky to code and require a lot of instructions to piece together, eating loads of processor cycles. Modern RISC designs add a few extra instructions for special operations not easily handled by a pure RISC approach; for instance, the “rotate” instruction that allows the relatively plebeian PowerPC G4 to trounce even the nigh-legendary Alpha in certain applications. The G4 also uses the Velocity Engine, which gives the processor even more specialized functions that can be run with a single instruction. This approach is sometimes called “Post-RISC,” but it’s still RISC at its core.

Another big bottleneck in the RISC approach to performance is figuring out which operation should be executed in which order, and lining everything up so it flows smoothly. A processor does a whole heckuva lot of work trying to figure out what it should be doing. Lots of effort goes into managing the pipeline, doing branch predictions and other tricks so the processor is always doing work, rather than waiting to be told what to do.

VLIW, short for Very Long Instruction Word, seemed like a solution. It relieved the processor of all those hassles by having the compiler software figure that stuff out ahead of time. This put serious demands and restrictions on systems software engineers, making high-level language compiler design insanely difficult and “writing to the metal” with assembly instructions outright impossible. This was an acceptable trade-off, as voluminous, repetitive RISC assembly language programming had already turned most software engineers into firm high-level language converts. The gain with the VLIW approach would be an intense speed boost, as the processor could break long, complex “instruction words” into different operations and send them where they needed to be, when they needed to be there. By way of comparison, RISC processors use small, fixed-size instruction words – typically 32 bits, as on the PowerPC – each encoding only one operation at a time. What’s more, VLIW easily allows the processor to imitate another processor using a technique called “code morphing,” so HP could build a new chip that ran all of the software for PA-RISC systems without rewriting it – backwards compatibility being critical when moving to a new platform. VLIW (called EPIC by HP engineers, for Explicitly Parallel Instruction Computing) was a bold idea for HP’s R&D powerhouse, and a perfect followup to its resoundingly successful PA-RISC chip line. But then Rick Belluzzo, the pointy-haired boss in charge of HP’s hardware development, decided to screw it all up.

Sold down the river

He did this by approaching Intel, offering them the “crown jewels” of HP’s chip R&D in exchange for a substantial piece of Intel’s gigantic desktop market share. The thinking was that Unix was a dead end, destined to be replaced on the desktop and in the server room by Windows NT. Belluzzo wagered he could assure HP’s place in that future by providing Intel with a 64-bit architecture that would surpass its RISC rivals, while incorporating backwards compatibility with the old x86 processor line through code-morphing. This wouldn’t be the last wager he made, and if it were on horses instead of the computer industry, his bookie would be sending the boys around to break a few fingers.

Intel immediately demanded to take the lead in designing the new chip architecture, code-named “Merced” and designated “IA-64” by Intel’s marketing machine. The trouble was, its designers were no damn good at it. Delay after delay took its toll, and serious friction was rumored to exist between the R&D departments of the two companies, usually instigated by Intel’s ineptness. Whatever the case, Intel’s Keystone Cops approach to R&D stalled the project and weighed down the performance potential until it was nothing special compared to RISC systems. HP had to hurriedly restart its PA-RISC engineering effort to keep from being overwhelmed by the strong and unflinching engineering effort Sun was pouring into the UltraSPARC. Did Rick Belluzzo have to answer for tying his company to such an expensive and embarrassing albatross?

Heck no – he was now the CEO of Silicon Graphics, Inc.

SGI had a slump in its business, and Ed McCracken, the CEO who had overseen its transition from a tiny garage operation founded by a professor and a couple of Stanford University students to the maker of the world’s leading high-performance workstation and graphics systems – a multi-billion-dollar journey – was forced to resign. Remember when Apple’s board of directors made John Sculley fall on his sword for a $100 million loss, and replaced him with Michael Spindler, who then pushed the company a few billion dollars into the red? Same story with SGI ... only starring Rick Belluzzo instead of “Iron Mike.”

The first thing Belluzzo did was alienate his entire installed base by announcing SGI was going to make like his old company HP, and move entirely to Windows on IA-64 and Pentium systems. While the Pentium systems announced by McCracken were quickly moved to market, and laughed out of it just as quickly, the IA-64 chips were still more than three years from their eventual, lackluster introduction. Eager to burn his bridges, Belluzzo spun off SGI’s microprocessor division, MIPS, into a separate company and killed all further development on high-end MIPS chips, regarded as some of the most potent RISC silicon in the industry.

So SGI, after years of hemorrhaging money and talent, is on the long, slow road to turnaround. It has mended its ties with MIPS, started development on new MIPS-based systems and seems to have lost interest in the IA-64 and Windows pipe dream. This is because Belluzzo is now an executive at Microsoft, and in a position where he can’t really hurt anything.

Now the second generation of Itanium, the processor code-named McKinley, is the first of the IA-64 designs Intel and HP feel they can sell people. It won’t live up to the speed expectations: it will be slower than its RISC rivals while being the second-largest processor ever manufactured. The result is a pricey chip that produces an insane amount of heat and gulps down power like no other processor – all for a part that runs x86 programs slower than current Pentiums and has no solid industry support for its 64-bit mode.

This is the sad and sordid history of Intel’s Itanium, the much-talked-about 64-bit successor to the Pentium hegemony. Once, the pundits spoke of it as the end of the RISC/Unix market, a technological powerhouse that would leave its rivals in the dust. Things didn’t work out that way.

Here comes the Hammer

In the meantime, AMD’s been taking names and kicking ass, making consumer-grade processors that are faster and cheaper than anyone else’s. It’s the darling of home hobbyists, mom-and-pop PC screwdriver shops and small-scale workstation/server vendors, but largely ignored by the large PC manufacturers. Still, AMD is comfortable in its niche, and makes a moderate amount of money in it. Realizing that the needs of large databases and high-performance Linux systems are rapidly outgrowing 32-bit processors, AMD planned for the 64-bit future with its Hammer architecture. Instead of moving to an entirely new processor design that would be incompatible with old software without a special compatibility mode, AMD extended the existing x86 architecture: 32-bit x86 operating systems and applications benefit from the speed of the new chip without having to be rewritten, and new software can take advantage of the special 64-bit operations without significant changes – unlike with the Itanium’s 64-bit mode. Think of a sort of MMX- or AltiVec-type enhancement to current programs so they can take advantage of a 64-bit memory space. Itanium’s code morphing can be used to run old x86 instructions, but to make full use of the Itanium’s speed and 64-bit-ness, software will need to be rewritten and compiled explicitly for the IA-64 chip from the ground up – a serious disadvantage when compared to AMD’s elegant solution.

Unfortunately, AMD is a very small player in a very large field. The Intel brand carries a ton of mindshare, and sells a lot of boxes to corporate environments while cutting very attractive high-volume deals with OEMs. AMD, while having an upgrade path guaranteed to make the most of a 64-bit migration, just doesn’t have the market presence to drive a wholesale move to its way of doing things. It’s possible that the 64-bit features will go unused, and be eventually dropped due to a profound lack of interest. Microsoft doesn’t have a 64-bit native version of Windows developed for the Hammer architecture yet, and it isn’t clear it ever will.

Regardless, Intel is scared silly of losing its lead to any other chip manufacturer, so it has to confront the looming failure of the Itanium. There is a rumored “Plan B” code-named Yamhill, which will take an approach identical to AMD’s: Build an x86 chip with extra 64-bit functions. In the fine tradition of the microprocessor industry, these extensions will in all likelihood be proprietary and exclusive to Intel chips, rather than the same ones AMD has planned. Yamhill is still just speculation, as Intel has so much money, manpower and industry credibility invested in the Itanium. It would be chowing down on a huge helping of crow pie for Intel to admit that plucky little AMD got it right, while the industry leader frittered away years and billions on a dead end.

An uncertain future

So, the x86 path to 64-bit-ness will probably trifurcate within the next two years. Hammer will be popular in Linux server farms and workstations. Popular free software applications that can benefit from a 64-bit chip, like Postgres, MySQL, Apache and Gimp, will be compiled to take advantage of the Hammer extensions. Commercial applications and operating systems, like Windows, Solaris, Oracle and Photoshop, will largely ignore it. They’ll largely ignore the as-yet vaporous Yamhill extensions, too ... and will until they’ve been in place on Pentium chips for a year or more. IA-64’s just not a player, unless we see significant numbers of Itaniums moved by Compaq and HP, and even then, largely in Tru64 or HP-UX Unix systems rather than Windows boxen.

Microsoft will keep on keeping on with 32-bit Windows on regular-style x86 while tentatively supporting a new 64-bit architecture with native operating systems like the 64-bit version of Windows XP for the Itanium, but won’t port its Windows applications. Just like it “supported” NT on RISC hardware. Sure, it made the OS itself available, but nothing else, leaving the vendors to scramble for an emulation layer on which to run essential x86 Windows software, negating the performance advantage of the non-Intel systems. Microsoft does not like diversity, and has an ingrained all-or-nothing mindset. It’s impossible to tell which way to jump with the new 64-bit desktop architectures, and a wrong answer could cost hundreds of millions in development and marketing missteps. It is entirely possible that Microsoft, like it did with Windows NT on RISC, will just let everything that isn’t a bone-standard 32-bit Pentium wither on the vine and die. It’s the safer course for a monopolist – all good things come from the status quo.

IBM and Motorola’s desktop PowerPC upgrades have been slow and steady, much to the frustration of Mac aficionados who can’t help but envy the ludicrously high clock speeds of x86 chips. Yet, despite the claims of certain misleading benchmark suites, the PowerPC remains well within the real-world performance range of its desktop rivals, even exceeding them in some applications, as we have seen. PowerPC likewise provides an unparalleled platform for mobile systems, critical for the PowerBook and iBook and an area where AMD in particular has shown no interest. More to the point, IBM has already engineered a way forward to 64-bit systems, a key strategic move for a platform famed for its content creation capabilities. Moving from PowerPC to any of the three contenders for 64-bit dominance in the Wintel world at this time is sheer folly; there are no guarantees any one of them will be what’s powering 95 percent of the desktops shipped in two years. As frustrating as it is to lag behind the x86 world, PowerPC still offers Apple the best strategic position for the future.
