As president and chief operating officer of Intel, Paul Otellini oversees the day-to-day running of the largest manufacturer of PC chips in the world. Alongside Microsoft, his company rode two decades of PC industry growth to become arguably the most successful company in Silicon Valley.
But with the economy in a spin and PC sales in a slump, Intel, like many other IT vendors, is having to seek new areas to ensure its continued growth. Armed with its Itanium chip, it may have found an answer in the market for high-end servers, where it hopes to undercut rivals such as Sun Microsystems Inc. and IBM Corp.
This time it has two software partners at its side: Microsoft, its longtime ally, and Linux, viewed by many as one of the most tangible threats to Microsoft's Windows operating system.
Otellini was a keynote speaker at the OracleWorld conference recently, where Oracle pushed clusters of Intel-based servers running Linux as the most cost-effective way to run its database. In this interview, he talked about a potentially bright future for the open source OS, but seemed wary of offending Intel's longtime partner, Microsoft, in the process.
Q: We're hearing a lot from Oracle recently about the advantages of running its software on Intel-based servers running Linux and the potential cost savings there for customers. What's your prognosis for Linux over the next year, and what sort of an opportunity does it present for Intel?
Oracle is working with a variety of operating systems. They've demonstrated HP-UX results, Linux, and Windows as well. We work with Oracle principally because they are multi-OS; that's one of the advantages.
When coming from Solaris or HP-UX or even AIX and going to anything but Linux, you have a long porting activity. Porting to Linux is very quick; it's a matter of days. So what we're finding is that for companies with a lot of homegrown applications, like Wall Street, the airlines, the auto industry, insurance, financial services, where they do a lot of in-house application development and have done so for years in a Unix environment, it's a very logical choice to pick Intel and Linux to get a quick time to market and to lower their costs.
The other side of that is places that run their enterprise on a lot of shrink-wrapped software, like Intel itself. There you come at it having grown up in the Windows environment, and you deploy those applications.
So at Intel we have a mixed environment today. We use Linux for engineering servers and workstations, but Windows everywhere else. If I extrapolate our own experience, I don't see one operating system gaining momentum against the other.
Q: In terms of a business opportunity, how much do you see Linux driving your sales in the year ahead?
As I said earlier, there's a lot of displacement of proprietary RISC (reduced instruction set computing), and for us that's incremental sales. Someone who was on Sun can move to Intel, or someone who was on HP-UX can move to HP with Itanium. Those are all incremental sales for us, so that's good. I have no way of quantifying how big it is, though.
Q: Would you say that you're platform agnostic, that Linux and Windows offer you equal opportunities for growth?
I'm not going to go that far. We support multiple platforms. Our principal market is Windows-based, and Microsoft is our key partner.
Q: You must be paying close attention to the development of the Linux kernel, to be sure it evolves in a way that makes the operating system suitable for running enterprise applications and databases. To what extent is Intel involved in that development?
We do tools for all the operating systems that run on Intel. We are supporters of an organization called OSDL, the Open Source Development Lab, along with HP and IBM and Dell and a bunch of others.
Q: You support them with financial investments?
Yes. It's also some collaborative engineering. It's trying to get capabilities put into the kernel that are required to take advantage of our server architectures.
Q: Do you have developers inside of Intel doing work on the kernel and suggesting changes to Linus Torvalds (the creator of Linux, who oversees its development)?
Yes, but I don't want to overplay the relative weighting. The bulk of our software engineering work is on Windows, internally and externally.
Q: I'm not trying to get you to say you prefer Linux over Windows.
You're coming close! (laughter)
Q: We've heard a lot recently about customers running Oracle's software on Intel-based servers. What's the most popular Intel hardware for that?
It depends. The scale-out stuff (grouping servers together to achieve more computing power) is almost all Xeon DP (dual-processor) and MP (multiprocessor); the scale-up stuff (using multiple processors in a single system to boost computing power) is a mixture of that and Itanium. I would expect over time to have more Itanium than Xeon in scale up, and I'd also expect as we bring the costs down to have Itanium in scale out.
As long as 32-bit applications and operating systems predominate, the Xeon family will be by far the highest volume. Over time our 64-bit architecture will move up, and at some point in time, just as we went from 16-bit to 32-bit, so 64-bit will become the predominant architecture. But I don't know when that crossover will be.
Q: So when does Itanium take hold for scaling out?
I think next year.
Q: Besides moving to a 64-bit architecture, which gives you greater memory addressability, what are some of the other things you can do in hardware to boost the performance of databases and enterprise applications?
More cache. We also build in hardware transparency features where, if there's a hardware fault, it becomes (apparent) to the operating systems instantly, as opposed to having to go through the applications and notify memory or something else.
Q: Anything else? You have a lot of transistors on your chips these days, can you make use of some of those?
We can do multiple cores. Mike Fister, who runs our server group, talked about that at our developers' forum a couple of months ago. We're looking at other generations of Itanium that would implement multiple cores.
Q: The hyper-threading technology you have now gives you a 'virtual' dual-core processor. What conditions need to come about in order to make a true dual-core processor a viable product for you?
As you said, hyper-threading is the first step along the way because it's essentially dual-processing for free. And as more and more of the operating systems are threaded to take advantage of that, that sets the precondition for multiple cores to be useful.
Q: Could you do a dual-core processor on the 0.13-micron manufacturing process you use today?
You'd probably have to move to the next generation, but then you're making trade-offs in terms of cache size. Right now there's more performance from the incremental megabyte of cache than there is in shrinking the cache substantially and adding another core. When transistor counts get to where they'll be at about 90 nanometers (0.09 micron), we get to where we can start thinking about this in a cost-effective fashion.
Q: So would it be overstating it to say we should expect to see you do a dual-core processor when you hit 90 nanometer?
You'd be overstating it.
Q: What are some of the desktop applications on the immediate horizon that will drive the need for faster chips? It seems like every year you add another gigahertz. Last year Intel told us 2GHz was fantastic for doing multimedia computing, so what do we need 3GHz for?
Have you ever seen the program Stitcher? It stitches together two or three photographs into a panoramic view. That brings a 2GHz machine to its knees. It just hangs. 3GHz is not super-speed, but it's substantially faster. On the business side, I've talked about running background tasks for security, data mining and so forth. That stuff just sucks compute power.
Q: So you're confident that demand will keep pace with the performance you offer?
Yes. In general the hardware side of the business tends to move a little bit faster than the software, which I think is the natural order of things. That way you have a target to write to.
Q: Do you see a need for 64-bit computing on the desktop?
Not any time soon. We use that in workstations; there are a number of server-type applications that take advantage of the memory addressability. But there are very few desktop client applications that take advantage of even the full 32 bits today. Even the Pentium 4 has a 40-bit architecture that very few software developers use. Why? Because you don't have the need for memory addressability, and memory subsystems to populate it are terribly expensive.
If you plot the memory requirements of typical applications in terms of their growth, and plot that against the cost of memory subsystems coming down over time, you don't get a reasonable intersect point until very late this decade.
Q: So should I read into that that Intel won't have a 64-bit processor for the desktop until very late this decade?
You shouldn't read anything into that, I'm just commenting on the market.
Q: You've traditionally applied your most advanced manufacturing technologies to chips used in notebook computers. Will that stay true as you move deeper into handheld computers and cell phones?
It's a little bit different there. That wireless Internet-on-a-chip I talked about (during a keynote presentation here), by the time it comes out we'll be at 90 nanometer and it won't be on our most advanced technology, it will be on 0.13. But there's enough transistor budget there that we're able to deal with it. Basically it's all-digital; we didn't need the mixed-signal capabilities, so we're able to put it all on one chip.
Q: So when 90 nanometer comes along, Banias (a new processor design for notebooks due next year) will be the first thing to be manufactured on it?
A version of Banias will be one of the first chips.
Q: Apart from the Itanium chip family being a success, could you offer a couple of IT predictions for 2003?
The biggest thing I'd suggest you look at is wireless Web services. Everyone is aiming at Web services, that's kind of a no-brainer. But as you probe, you find out that most people are aiming at simply interconnecting their servers. That's good, it's a necessary precondition, but getting access to that data in an increasingly wireless fashion is essential. And if you don't develop those Web services applications to take advantage of that now, you just have to rewrite them in a year and a half.
Q: People talked a lot about wireless data services two years ago but it didn't really happen.
People talked about it in a different way. They talked about how 3G is going to save the world and we're going to get all these data services and it's going to fix the telecoms industry. I don't think that's what I'm talking about. I'm talking about simply being able to access through the Internet the services you need, even your Schwab (brokerage) account, on your PDA, on your phone. That's different from waiting for the data services model to find a home. This is taking advantage of existing data models.
Q: So maybe in the post-dot com world we're all a little more realistic, a little less ambitious?
I think we're more pragmatic, and the business model prevails.