With the Intel transition, two issues have cropped up quite a bit -- when Intel will deliver 64-bit chips, and what happens to Altivec.
Both of these are of no small importance to some key Apple customers, namely the High-Performance Computing, or HPC, folks -- the Virginia Tech cluster, the COLSA cluster, and others. These people need both 64 bits and the advantages that Altivec brings over Intel's SSE. Without a good answer to those questions, they aren't going to be buying whatever the follow-on to the Xserve turns out to be.
If they were just going to buy Intel servers, there'd be little sense in buying them from Apple, since Apple would have no clear advantage. (These are not average users. They have no problem custom-coding solutions.)
The 64-bit question is going to be answered -- probably later this year. Intel's been pretty straightforward about moving its x86 chips to 64-bit. Not à la Itanium, mind you -- that's a big server chip, and a very different architecture -- but simply getting the chips Apple would want to use to 64-bit. That's going to happen. It's just a matter of when the new models can leave the fabs, and that's not going to be a problem for much longer.
The real question, for me at least, has been ''whither Altivec?'' Intel has had a SIMD (Single Instruction, Multiple Data) vector unit for some time now in the form of SSE. However, SSE has always had a bit of a problem compared to Altivec.
With SSE, processing a 128-bit SIMD instruction took two steps: one to handle the lower 64 bits, and one for the upper. Altivec, on the other hand, could handle the entire operation in one step. That's a rather large efficiency gain for Altivec, and one that, as Intel's clock-speed advantage shrank, was rather hard for SSE to overcome. That's especially true for code that makes heavy use of SIMD, such as the kind you see in the HPC world.
However, this past March Intel revealed the answer to that particular problem.
This latest SIMD implementation is called ''Advanced Digital Media Boost''. (They must be hiring Microsoft to do their branding). It fixes that efficiency issue.
With Advanced Digital Media Boost (aka ADMB, because the full name is just ridiculously tedious), the new architecture will be able to handle an entire 128-bit SIMD instruction in one step. That's going to really help Apple in moving its HPC customers to Intel whenever the follow-on to the Xserve shows up.
Yes, every one of Apple's competitors will have access to the same chips, so logically, the HPC people could still dump Apple for Dell/Microsoft. However, the big reason for resisting a move off the G5 for HPC was the 64-bit/Altivec issue. Moving to 32-bit/SSE would have caused them a world of problems that not even OS X could have overcome. With a 64-bit/ADMB implementation, though, there's no loss of performance or capabilities, and you still have the advantages that OS X brings, like Xgrid.
It's not just the HPC folks who benefit from this. Videographers, Photoshop jockeys, Mathematica users, gamers -- all of them will see gains from this new architecture that they aren't going to see on the 32-bit implementations Apple is currently shipping.
In addition, Intel is on a fast track to regularly increase the number of cores in its chips. Company execs are also looking toward quad-core chips in 2007, and eight-core chips not long afterwards.
So the rest of 2006 -- especially the Apple Worldwide Developers Conference in August -- should be quite interesting for OS X users at all levels.