Phil Hester joined AMD last September as the company’s new CTO, replacing Fred Weber in that position. Hester spent 23 years at IBM in several roles (including CTO of the PC division and VP of RS/6000 hardware development), leaving IBM in 2000 to launch Newisys, a company dedicated to building servers based on Opteron CPUs that was later acquired by Sanmina-SCI in 2003. In this exclusive interview with Hardware Secrets, Hester talks about AMD’s plans, strategies, and new CPUs, and a little bit about his professional life and experience.
What products is AMD currently developing that you can talk about?
I break this answer down into two major categories. The first is products for the PC market as we know it today. The second category is products for emerging opportunities, and in particular, emerging information technology devices for developing geographies.
In the traditional PC area we have products under development that are going to be optimized for the notebook, the desktop, and the server space. And there are common themes across all of those around power efficiency, price performance, reliability, and scalability. We also will be applying virtualization and security technologies across all three of those product areas. But obviously the implementation of a product in those areas is tuned for that particular market segment. For the mobile market we are optimizing for low power, lower-cost devices; we’re focusing on power efficient devices in the desktop space, and on scalable, power efficient, highly available designs in the server space.
Besides the traditional markets, AMD also has initiatives for new opportunities in the information device area, particularly around developing geographies. This includes the 50×15 initiative, which says that we want to see 50 percent of the world attached to the Internet by the year 2015, and are developing exploratory products that would be best suited for different markets to try to achieve that goal. So things like the PIC, or Personal Internet Communicator, and the One Laptop Per Child effort from MIT’s Nicholas Negroponte are good examples of specialized products that we’d be developing for those markets.
What can you tell us in terms of what AMD plans to develop, or at least can you tell us what you think the “hot” technologies are in the future?
Well, “hot” is an interesting choice of words, because one of the issues we are focusing on is power efficiency. You know, if you look back historically at the notebook computer space, literally from day one, the design of the microprocessor had to be thought through in terms of power efficiency. That directly determined how long the notebook would last on batteries, and that was something that was clearly visible to the end user.
So we started looking at the power efficiency issue by asking the question, ‘what does the end user care about, and what do we do to improve the end user experience?’ As a result, beyond our historical focus on power management in the notebook space, there will be even more focus on power efficiency in the desktop and the server space. If you look at enterprise computing and how much power is being consumed by desktop PCs, you’ll see that it represents a very important piece of the total power budget. The cost and construction of a datacenter is often limited not by its physical size but by the requirements of cooling these very high-powered servers. And there’s kind of an interesting rule of thumb here: if you look at a PC or a server in an enterprise, for every one watt of power that that device consumes, there are typically three watts of power going into the building. Those three watts break down as follows: one watt goes through the device itself, roughly 200 milliwatts goes to power distribution and the electrical supply, and the remaining power, almost two watts, goes to cooling.
So the point is that there is around a three-to-one lever in terms of energy savings at the company level: for every watt you can take out at the desktop or in the datacenter, the building’s power meter goes down by three watts. So that’s a very good way to deal with power issues; it’s also ecologically responsible and something that we’ve been supporting.
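Hester’s rule of thumb can be sketched as a back-of-the-envelope calculation. This is an illustrative model only; the coefficients are the approximate figures quoted above, not precise engineering constants:

```python
# Back-of-the-envelope model of the ~3:1 facility power rule of thumb
# described in the interview. The coefficients are Hester's rough
# approximations, not exact engineering values.

DEVICE_W = 1.0        # 1 W consumed by the PC or server itself
DISTRIBUTION_W = 0.2  # ~200 mW lost in power distribution and supply
COOLING_W = 1.8       # almost 2 W spent removing the resulting heat

def facility_watts(device_watts: float) -> float:
    """Total building power drawn per watt consumed at the device."""
    per_device_watt = DEVICE_W + DISTRIBUTION_W + COOLING_W  # ~3.0
    return device_watts * per_device_watt

# Saving 10 W at the desktop saves ~30 W on the building's meter.
print(facility_watts(10))  # → 30.0
```

In other words, any watt removed at the CPU is amplified three-fold at the company’s power meter, which is why CPU power efficiency matters beyond battery life.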
We have a number of initiatives going on, literally from the transistor design level up through the system design, and then working with the operating system vendors, all aimed at increased power efficiency at all levels in the system, whether it’s a notebook, a desktop, or a server.
In terms of other key technologies, we also believe that as digital media becomes even more pervasive, that things like privacy, security, and Digital Rights Management will be very important.
One other clear focus here is on parallelism and what’s referred to as multi-core technology. If you look at what’s on the market today, the higher-end CPUs are all dual-core, in other words two processors within the CPU. In the server space, from a software standpoint, parallel and multi-threaded software is fairly well-established, and can take immediate advantage of dual-core designs today and multi-core in the future.
That’s not necessarily the case for software in the desktop and mobile spaces, which for roughly the last 20-plus years has always been designed to run in a uniprocessor environment. So there’s a lot of work we’re doing with the software industry to make sure that we’re enabling multi-threaded, multi-CPU-aware software, so that particularly as multi-core technology evolves in the client space, the software technology will be there to effectively exploit it and give end users the improved experience they would expect.
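The kind of multi-CPU-aware software Hester describes can be illustrated with a minimal, generic sketch (nothing here is AMD-specific): a CPU-bound job split into chunks that run on separate cores, using Python’s standard library.

```python
# Minimal sketch of splitting a CPU-bound workload across cores.
# Generic illustration only; the workload (summing a range of
# integers) is a stand-in for real compute-heavy work.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [start, stop) — one worker's share."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=2):
    """Split [0, n) into per-worker chunks, one per core."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as a serial sum, but the work runs on two cores.
    print(parallel_sum(1_000_000))
```

Single-threaded code written for a uniprocessor sees no benefit from a second core; only once software is decomposed this way can a dual-core CPU deliver the improved experience Hester refers to.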
When do you think AMD’s CPUs with DDR2 support will reach the end user market?
That will happen in 2006.
Are there any plans to replace AMD’s current 64-bit microarchitecture with a new microarchitecture in the near future?
I break things down into two levels here. One level is the instruction set architecture, or the macro architecture. There we absolutely believe that full binary compatibility with the current software base is an absolute requirement; we would never break that compatibility, because of the benefits it gives to our customers and the users of PC technology.
At the macro architecture level, just as you’ve seen the x86 instruction set evolve in the past to include things like floating-point instructions and now extensions for virtualization and security, we would expect minor instruction-level extensions to continue to be made as the x86 architecture moves into new market segments.
A second-level discussion is around the internal plumbing, or what’s inside the chip. At the microarchitecture level, the optimizations you need to build a processor optimized for the notebook, desktop, and server spaces are different. As we go forward, we are developing new microarchitectures that are designed to very efficiently support the macro architecture instruction set, but implemented in a way that’s best tuned for the market segments they’ll be used in; so you’ll see one microarchitecture optimized for the mobile market, one for the desktop market, and one for the server market.
Does AMD have any roadmap for 65 nanometer CPUs?
What I said last November at our annual analyst meeting still holds. We have had 65nm preliminary silicon running in our Dresden Fab 36 since last June; we plan to begin 65nm volume production in the second half of 2006, and we plan to be substantially converted to 65nm in Fab36 by mid-2007.
Regarding the $100 notebook proposed by Negroponte, which will be based on an AMD CPU (I would guess from the Geode family): so far we don’t have any official statement from AMD. Can you comment?
We’re a founding member of One Laptop Per Child and believe that the effort is a very interesting and innovative approach to bringing Internet connectivity to more people around the world. The whole 50×15 effort is a very serious undertaking at AMD, from Hector Ruiz on down, and OLPC is a great example of that. It’s important to note that all of these efforts in the developing market economies are partnership-driven. We’re not trying to control this space, we’re trying to enable it, and to do that we reach out; we want to be inclusive as opposed to exclusive.
You came to AMD to replace Fred Weber, who was considered the father of the Athlon 64. This may have raised some eyebrows in the industry and even inside AMD. So far, what have you seen as your personal challenges as AMD’s new CTO?
Fred Weber and I have known each other for probably 15 years, and our paths have crossed many times. We have a deep mutual respect, and I personally respect all of Fred’s contributions in helping AMD get to where it is today.
The culture here at AMD is very friendly in terms of embracing new ideas and moving forward. What I hope I can do is help take AMD to the next level, build on the great base that Fred and the rest of the team have created, and help AMD grow into a true leader in the enterprise across all the market segments we compete in.
My biggest personal challenge is just getting up to speed in terms of all the different activities that AMD is engaged in, and the different groups within AMD in terms of their technical skills and their geographic diversity.
After 23 years at IBM, you truly know the mind of IBM. What are the main culture differences between IBM and AMD?
There’s a common misperception that IBM is homogeneous. If you ask people to describe IBM, you get that image of a large, bureaucratic, hierarchical organization, well-trained in a positive sense.
But if you actually get underneath it, each group at IBM has its own personality. Some elements of that image certainly exist: the wingtips, the professional but bureaucratic culture. Down here in Austin, though, I can tell you the culture was more the cowboy, UNIX culture.
While I was at IBM I had the opportunity of having at least ten different jobs, and each of those also had a different culture associated with it. And I also had both the challenge and the very helpful learning experience of living through the implosion and rebirth of IBM, and going through, if you will, the old IBM to the new IBM. I think it opened my eyes to how you can create a more empowered company culture that more effectively can get the right products to market.
For example, one of my jobs at IBM was as the technical leader of a group that went from less than $100 million of revenue to about $2 billion in around three years. So I got to see very explosive growth, and developed a very deep understanding of what I would refer to as “scalable process” in a collaborative culture. My last job at IBM was as the CTO of the PC group. When I left IBM and went to a startup, I got to see a startup culture firsthand for roughly two years, then the sale of the company, and then life in a midsized company after that.
So the short answer is, I’ve seen a huge range of cultures, group sizes, and technical challenges, from super-ultra-portable devices up through enterprise servers. I have a deep appreciation for how much culture matters.