Interview: CTO sees future of distributed computing

Given its market dominance and financial stability, Intel can afford to have a very long-term view of computing. The person charged with shaping that view is company CTO Pat Gelsinger, who will be a keynote speaker at InfoWorld's CTO Forum in San Francisco. In an interview with InfoWorld Editor in Chief Michael Vizard and Test Center Director Steve Gillmor, Gelsinger outlines a future of distributed computing that spans the globe, thanks to the ever expanding principles of Moore's Law.

Q: How should people perceive Intel today? Intel is providing the building blocks that enable the dramatic transformation converging the communication and computing environments. That's what Intel is driving. When you walk into the datacenter, you see some communications front ends, some Web front ends, some middle-tier servers, some back-end databases, and some storage devices, and then some stuff that hooks them together. Those are the five big elements of the datacenter today. Our strategy is to deliver the building blocks for all of the above. The aggregated datacenter that's hooked together with fiber IP networks is increasingly built on Intel platforms. That's our strategy, and I think most of the industry trends are very favorable to us.

Q: What longer-term trends does Intel have its eye on? We've had this view of what I'll call expanding Moore's Law for many, many years. Every 18 to 24 months, we double the number of transistors, and that gives us about 2x performance every two years. Life is good and the industry builds around that. But through our research activities, we're going to be expanding Moore's Law. We're going to take the capabilities of silicon and move them into an entirely new domain. Specifically, wireless communication will be directly integrated into silicon; sensor networks will be directly integrated into silicon; and silicon photonics, where we integrate optical connectivity and optical connections directly into silicon. All of this suggests some new opportunities, not just for us but also for applications that will emerge. For instance, when you fully integrate wireless communication in silicon, we term that "radio-free Intel." Radios just become free because we've integrated them into our standard silicon products, and chip sets and processors are able to communicate directly in a wireless fashion. Similarly, with silicon photonics, I'm able to drive and receive optical signals directly out of silicon. So we're going to be able to transform what a datacenter or what a server of the future would look like to deliver an enormous amount of communication capability and bandwidth into those platforms.
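The doubling cadence Gelsinger describes compounds quickly. As a rough illustration, assuming one doubling every two years (the starting count is a hypothetical round number, not Intel data):

```python
# Illustrative only: project a transistor count forward under the
# "double every two years" cadence described in the interview.

def projected_transistors(base_count: int, years: float,
                          doubling_period: float = 2.0) -> float:
    """Return the projected count after `years`, doubling once per period."""
    return base_count * 2 ** (years / doubling_period)

# A hypothetical 50 million transistors grows 32x over ten years of
# two-year doublings, i.e. to 1.6 billion.
print(projected_transistors(50_000_000, 10))
```

The same curve is what makes "performance roughly doubles every two years" shorthand for a 32x gain per decade.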

Q: What are some of the implications of this in terms of how we approach software? We're suggesting that some of the core technology breakthroughs that we're working on will allow us to begin ushering in the next step beyond the mid-term debates of Java and .Net compute models. Let's imagine that I could put sensors in every agricultural environment for the future, and now those sensors start to intelligently communicate back to nodes that tell us to turn on water, to turn on heaters for frost prevention, that give indications of all of the other environments around them. Every node can talk to every [other] node. We think it's going to usher in entirely new models of networks and communication, and it's not even clear that you want a heavyweight language like Java or .Net running on these kinds of very lightweight environments. Those are areas that we think some of these breakthroughs are allowing us to begin to research. We're not suggesting we have the answer of what that needs to be. What we are saying is that just due to the core technologies that we're driving down at the silicon level, we are going to deliver technologies that transform our view of computing and communications as we know it today. We have this view that machines today are reactive to human initiation. Web services start to make them a little bit proactive because a computer can initiate an action with another computer, so they collectively take some action. But for the most part, everything is still reactive to human intervention. As you go to sensor environments, you're actually going to get computers that are smarter and not dependent upon human initiation. So now you can start developing policies for these networks. You start developing agent models for these networks that deliver services to people, as opposed to people being forced to request those services.
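The policy-driven, proactive behavior Gelsinger sketches for sensor networks can be made concrete with a toy example: readings flow in, and a policy maps them to actions without any human request. All names and thresholds below are hypothetical illustrations, not anything Intel specified.

```python
# A minimal sketch of a policy-driven agricultural sensor network:
# nodes report readings; a policy engine initiates actions (irrigation,
# frost heaters) proactively, with no human in the loop.

from dataclasses import dataclass

@dataclass
class SensorReading:
    node_id: str
    temperature_c: float
    soil_moisture: float  # 0.0 (dry) to 1.0 (saturated)

def apply_policy(reading: SensorReading) -> list[str]:
    """Map one reading to zero or more actions (hypothetical thresholds)."""
    actions = []
    if reading.temperature_c <= 0.0:
        actions.append(f"{reading.node_id}: activate frost heater")
    if reading.soil_moisture < 0.2:
        actions.append(f"{reading.node_id}: open irrigation valve")
    return actions

readings = [
    SensorReading("field-7", temperature_c=-1.5, soil_moisture=0.5),
    SensorReading("field-9", temperature_c=4.0, soil_moisture=0.1),
]
for r in readings:
    for action in apply_policy(r):
        print(action)
```

The point of the sketch is the inversion Gelsinger describes: the network's policy delivers the service, rather than a person requesting it.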

Q: With the advent of Java and Microsoft .Net, what role do you see Intel playing in terms of advancing the coming generation of enterprise computing? The whole reason that run-time environments have emerged is to make software more productive and more portable. We look at those things from the silicon up, and we have a whole new set of potential things we can do to make run-time environments run great. That's the area of our focus. [What] we find is we can make caches work really great for Just-in-Time [JIT] applications. Another example might be garbage collection. All of these run-time environments have garbage collectors associated with them. We can add a few instructions, change the optimization event, and make garbage collectors run really great. That's really our job. We look up into these software environments, find those common elements of a core or primitives, and then we melt them into silicon. Some of that might be worth melting into drivers initially and then into firmware of the processor and eventually all the way into gates in the chip.
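For readers unfamiliar with the garbage-collection primitive Gelsinger wants to accelerate, here is a toy mark-and-sweep collector: mark every object reachable from the roots, then sweep away the rest. This is a teaching sketch only, not how any production Java or .Net collector works.

```python
# Toy mark-and-sweep garbage collection: the kind of runtime primitive
# that, per the interview, hardware could help accelerate.

class Obj:
    def __init__(self, name: str):
        self.name = name
        self.refs = []      # outgoing references to other objects
        self.marked = False

def mark(obj: Obj) -> None:
    """Recursively mark everything reachable from obj."""
    if not obj.marked:
        obj.marked = True
        for ref in obj.refs:
            mark(ref)

def collect(heap: list, roots: list) -> list:
    """Return the live objects; everything unmarked would be reclaimed."""
    for obj in heap:
        obj.marked = False
    for root in roots:
        mark(root)
    return [o for o in heap if o.marked]

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)               # a -> b is reachable; c is garbage
live = collect([a, b, c], roots=[a])
print([o.name for o in live])  # ['a', 'b']
```

The marking traversal is memory-bound pointer chasing, which is why cache behavior and a few dedicated instructions, as Gelsinger suggests, can matter so much for it.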

Q: What is the relationship between Web services and peer-to-peer computing? If we had this conversation a year and a half ago, we would have started out talking about peer-to-peer and what we're doing in silicon. Now we say Web services and what we're doing in silicon about it. In my view, they are all part of an inexorable march toward fully distributed computing. To me it's not clear if Web services is sort of today's vogue statement around that overall trend, or if in fact it is a stable point for development environments of the future. I sort of tend to think it's a little bit of a vogue, but it's all a consistent direction, just as peer-to-peer was and distributed services and now Web services are. And if you've noticed some of the most recent announcements around some of the Web services stuff, people are putting a grid layer on top of their Web services infrastructure. These topics are highly interrelated activities: some deal with the granularity of the application, some deal with the abstraction and the resource, and some deal with the model of the distributed network.

Q: In terms of accelerating the performance of things specifically, like the SOAP (Simple Object Access Protocol) stack in Web services, what might Intel be doing? People might put SOAP accelerators at the edge of the firewall. That may be a good place for people to do different caching elements and accelerators. We see things like ... offload engines coming into the platform, mostly in the NICs and the chip sets over time. That's where you expose more of the SOAP protocol. I don't see it getting all the way into the CPU instruction set for quite a while. But eventually, that could in fact be the case.
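To ground the acceleration discussion: a SOAP message is ordinary XML wrapped in a standard envelope, which is why edge accelerators and offload engines amount to fast XML parsing and caching. A minimal parse with Python's standard library (the `GetQuote`/`Symbol` payload is an invented example):

```python
# Parsing a minimal SOAP 1.1 envelope with the standard library.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

envelope = f"""\
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <GetQuote><Symbol>INTC</Symbol></GetQuote>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(envelope)
body = root.find(f"{{{SOAP_NS}}}Body")   # namespaced Body element
symbol = body.find("GetQuote/Symbol").text
print(symbol)  # INTC
```

Every hop that touches the envelope repeats this kind of parse, which is the work an offload engine in a NIC or chip set would take off the CPU.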

Q: What impact is the overhead associated with processing XML going to have on hardware? Most systems today would appear to be in need of an upgrade as companies move to embrace XML. Isn't it great? I just love these abstract data types and all this interpretive language stuff. You need more security in the network and you need more processing in the client. Man, I just love it. I'm being a little bit facetious. But what we would call the software spiral is alive and well, because now we have enough processing capabilities and enough network and bandwidth capability that you really can take this next step up in abstraction. And as you start to take advantage of that next layer of abstraction, you generate huge demands for additional computing. That's where computing enables new software models and those software models force demands on computing and then communication. That spiral is what we live and die for.
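The computing demand Gelsinger jokes about comes partly from XML's verbosity: the same record costs several times the bytes (and far more parse work) as a packed binary structure. A rough, illustrative comparison with hypothetical field names:

```python
# Comparing the footprint of one record as packed binary vs. XML.
import struct
import xml.etree.ElementTree as ET

# A 32-bit id plus a 64-bit float: 12 bytes in little-endian binary.
binary = struct.pack("<id", 42, 98.6)

xml_text = "<reading><id>42</id><value>98.6</value></reading>"
parsed = ET.fromstring(xml_text)

print(len(binary))             # 12
print(len(xml_text))           # ~4x the binary size
print(parsed.find("id").text)  # 42
```

The binary form is a fixed-offset read; the XML form requires tokenizing, tree building, and string-to-number conversion, which is the "next step up in abstraction" translating directly into cycles.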

Q: Based on everything you are saying here, where will the line between personal and business computing be at the end of the day? It becomes harder and harder to distinguish as you go forward. Our technology is touched by about a quarter of the planet today. What we want to do is have our technology touch the other three-quarters of the planet and usher in entirely new consumer and business applications for what we're doing. I don't care about next week's revenue. I don't care about next year's revenue. I want to create a $100 billion Intel that is connecting the Internet to every human on the planet and transforming both business and social life for everybody on the planet.
