Friday, May 24, 2024

Intel Offers Glimpse of Chip Future


MOUNTAIN VIEW, Calif. – Intel is not a company without a sense of irony. It chose the Computer History Museum, a place with a collection of relics from the past like the first Apple computer, old DEC PDP-11 minicomputers and even vacuum tube-powered “computers” (if you can call them that) to show off its latest science projects.

The experiments, which range far beyond the CPUs that Intel is known for, are part of an annual event known as Research@Intel Day. Some of these projects, noted Intel (NASDAQ: INTC) CTO and Research Fellow Justin Rattner, may never see the light of day. He said it’s the duty of Intel’s research arm, which he heads, to experiment, dream up new ideas, and not be afraid to fail.

“Generally, there’s a lot of pressure to solve short-term problems, what we call the tyranny of the urgent,” he said in his opening remarks. “But that’s not what we’re funded to do. We’re paid to take risks and find the exceptional opportunities that will take Intel into the future.”

His first news of the morning was a bit of rebranding, something Intel is doing a lot these days. Research had been under the umbrella of the Corporate Technology Group, but the group has been renamed Intel Labs. It has been reorganized into five units, all reporting to Rattner, who in turn reports to CEO Paul Otellini.

The units are Circuits and Systems; Future Technologies; Integrated Platforms; Microprocessors and Programming; and Intel China. The China group has been in operation for a decade, and Intel decided it had reached a level of maturity appropriate to bring it up to staff level.

Intel has both microprocessor and non-CPU projects in the works, many of which Rattner outlined. Intel has created an energy research group focused on “the other 98 percent of the problem,” as Rattner put it, since IT consumes about two percent of the total energy in this country.

Intel’s terascale high performance computing work is transitioning to exascale work, and Rattner promised more news in the coming weeks and months.

He also discussed a 3D Internet simulation that lets scientists enter a 3D virtual world to examine multi-dimensional scientific data and collaborate with other scientists around the world.

Projects on the show floor ranged from using solid state drives (SSDs) as cache drives in a server, to a next-generation terascale processor with the ability to partition cores in a system, to new C++ libraries to assist in writing parallel visual computing applications. Because they are all research projects, none had a release date.
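The article does not name the C++ libraries on display, but the problem they target is clear: data-parallel work such as image processing currently requires hand-rolled thread plumbing. As a rough sketch (using standard C++ threads rather than any Intel library, and a hypothetical `brighten_parallel` kernel), here is the kind of boilerplate such libraries aim to hide behind a simple parallel-for call:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical visual-computing kernel: brighten an 8-bit grayscale image
// by splitting the pixel buffer into per-thread chunks. Parallel libraries
// like those demoed would express this as a one-line parallel loop instead.
void brighten_parallel(std::vector<std::uint8_t>& pixels, int delta,
                       unsigned num_threads) {
    auto worker = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) {
            int v = pixels[i] + delta;
            pixels[i] = static_cast<std::uint8_t>(v > 255 ? 255 : v); // clamp
        }
    };
    std::vector<std::thread> pool;
    std::size_t chunk = pixels.size() / num_threads + 1;
    for (unsigned t = 0; t < num_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(begin + chunk, pixels.size());
        if (begin < end) pool.emplace_back(worker, begin, end);
    }
    for (auto& th : pool) th.join();  // wait for every chunk to finish
}
```

The per-chunk decomposition and join at the end are exactly the kind of repetitive, error-prone code a parallel visual-computing library would own on the programmer's behalf.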

More Moore, less power

In addition to Rattner’s talk and the show floor, there were a few informal sessions hosted by Intel researchers. Mike Mayberry, vice president of the Technology and Manufacturing group, discussed how Intel is keeping Moore’s Law on track.

“We’re responsible for what to build and how can we integrate it,” he told the small meeting group. “It’s a mix of ‘wouldn’t it be nice to do this’ and ‘would it be possible or not’ and can you make it viable in the marketplace?”

This branch of Intel is focused on long-lead projects. It began working with high-k metal gate technology in 1999 and had first silicon in 2003, but the technology didn’t hit the market until 2007. Now Mayberry says he’s looking at lithography, the process of printing the microprocessor, below 32nm in size.

Intel has gotten features as small as 15nm, but doing so pushes the limits of conventional lithography and is costly. For now, he foresees using a combination of extreme ultraviolet lithography and wet immersion lithography to make the next generation of processors.

Next was Paul Diefenbaugh, principal engineer at Intel Labs, who discussed Intel’s energy efforts, focused mostly on low-power devices like Atom processors and mobile Internet devices (MIDs).

“The problem for us is dramatic reductions in power consumption haven’t materialized,” he said. “We made some good advances but haven’t been able to get that 2x improvement in battery life. After a couple of years, we realized the problem was the platform.”

Rather than tackle the problem piece by piece, or as part of a subsystem, Intel finally took a holistic approach. In today’s platforms, frequent small bursts of activity keep the system in a state of high readiness, and therefore high power. The result was what he called Platform Power Management, which consolidates work into bursts of activity followed by long idle periods, rather than constant activity.
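The payoff of burst-and-idle scheduling can be illustrated with a toy power model (the wattage and overhead figures below are illustrative assumptions, not Intel data): the same amount of work costs less energy when coalesced into one wakeup per second than when scattered across many, because each wakeup carries fixed transition overhead that steals time from the low-power idle state.

```cpp
#include <cassert>

// Toy platform power model (illustrative numbers, not Intel figures):
// the platform draws 2000 mW while active and 50 mW while idle, and
// every wakeup adds a fixed transition overhead before useful work starts.
// Returns energy in millijoules consumed over one second.
double energy_mj(int wakeups_per_sec, double work_ms_per_sec,
                 double wakeup_overhead_ms) {
    const double active_mw = 2000.0;
    const double idle_mw = 50.0;
    // Total active time = useful work plus per-wakeup overhead.
    double active_ms = work_ms_per_sec + wakeups_per_sec * wakeup_overhead_ms;
    double idle_ms = 1000.0 - active_ms;
    return (active_mw * active_ms + idle_mw * idle_ms) / 1000.0;
}
```

Under this model, one coalesced 100 ms burst per second beats ten scattered 10 ms wakeups doing the same total work, which is the intuition behind letting the platform sit in long idle periods between bursts.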

Intel started this work for MIDs and smartphones, but Diefenbaugh said it is 100 percent applicable across the board, meaning it can be applied to desktops, notebooks and servers.
