Intel’s ‘Meteor Lake’ CPU power will be controlled by AI


Intel’s Meteor Lake chip will certainly be used as an engine for specific AI tasks on the PC. But Intel is applying AI to how the chip operates, too: specifically, to how it manages power and transitions between active and low-power states.

In 2008, Intel’s Centrino platform used a catchphrase to describe the company’s power philosophy: HUGI, or Hurry Up and Get Idle. It was an acknowledgment that a low-power processor needs to get all of its pending work done as quickly as possible, so that it can return to a low-power sleep state.

That hasn’t changed. AI — coincidentally or not, often referred to as Intel’s “Centrino moment” — is also factoring heavily into Meteor Lake’s power management, Intel executives said at the Hot Chips conference at Stanford University. (Originally, Intel referred specifically to Meteor Lake in its program synopsis, but settled on a more generic talk called “Intel Energy Efficiency Architecture” instead.)

In any event, the new AI power scheme will apply to future products, according to Efraim Rotem, responsible for client SoC architecture at Intel’s Design Engineering Group. In two months, Intel will launch its new client processors, which will use these new features, he said.

Intel showed how the new AI algorithm (orange) compared to the older algorithm in its Hot Chips presentation. Notice how power drops as a result.

Intel

The problem is a simple one. “We care very much about responsiveness when we interact with the computer,” Rotem said. “We want an immediate action, and we don’t want to wait too much.”

To enable more performance, the typical solution is to route more power to the processor, which can then run at a faster speed and get the job done faster. But the CPU then must figure out when the job is done, and the processor can transition to a low-power state. This is known as Dynamic Voltage and Frequency Scaling, or DVFS. “The question in power management…is how we figure out which is the right frequency to run,” Rotem said.
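The trade-off Rotem describes can be sketched numerically. The sketch below is not Intel’s algorithm; it uses an invented, quadratic power-versus-frequency curve and made-up idle power purely to show why "which frequency is right" is a genuine question:

```python
def energy_joules(work_cycles: float, freq_ghz: float,
                  idle_watts: float = 0.5, window_s: float = 3.0) -> float:
    """Energy over a fixed window: active power while working, idle power after.

    Uses a toy model in which dynamic power grows with the square of
    frequency (voltage roughly tracks frequency under DVFS).
    """
    active_watts = 2.0 * freq_ghz ** 2           # made-up power curve
    active_s = work_cycles / (freq_ghz * 1e9)    # seconds to finish the job
    idle_s = max(window_s - active_s, 0.0)
    return active_watts * active_s + idle_watts * idle_s

# The same 2-billion-cycle job at two frequencies over a 3-second window:
slow = energy_joules(2e9, 1.0)   # 1 GHz: 2 W for 2 s, then idle
fast = energy_joules(2e9, 3.0)   # 3 GHz: 18 W for ~0.67 s, then idle
```

With this particular toy power curve the slower frequency actually wins on energy; with a flatter curve or a higher idle draw, racing to idle wins instead. Picking the right point on that curve, moment to moment, is exactly the decision DVFS has to make.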

Intel first implemented the basics of this decision-making process in the 6th-gen “Skylake” core, with a technology known as Speed Shift. That technology intelligently shifted back and forth between an active high-power state and idle speeds. But Speed Shift used a standardized estimate for how humans opened and closed a web page, for example.

With Meteor Lake, Intel has shifted once again, to AI. Now, the algorithm “understands,” and can predict, how a user will open a web page, scan it, close it, and move on. The same algorithm has been applied to numerous other tasks. What’s different is that the algorithm taught itself, extracting patterns of behavior that are more finely detailed than what Intel previously programmed in.
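Intel hasn’t disclosed what its model looks like, but the general idea of a predictor trained offline on recorded behavior can be sketched generically. In the toy version below, a lookup table is built once from a trace and never updated at runtime, mirroring the offline training described; the trace and all names are invented for illustration:

```python
from collections import Counter, defaultdict

def train(trace: list, history: int = 2) -> dict:
    """Build a table mapping the last `history` observed states to the
    most likely next state, from a recorded trace."""
    counts = defaultdict(Counter)
    for i in range(history, len(trace)):
        counts[tuple(trace[i - history:i])][trace[i]] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

# Recorded trace: the user opens a page, scans it, closes it, goes idle.
trace = ["open", "scan", "close", "idle"] * 50
model = train(trace)                     # trained once, offline

prediction = model[("scan", "close")]    # predicts "idle": safe to power down
```

A predictor like this lets the power manager drop to a low-power state the moment a "close" follows a "scan," instead of waiting out a fixed timeout.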

That will improve Meteor Lake, bringing up to 35 percent more responsiveness — the reaction time in which the CPU can rev up into a high-power state, Rotem said. But knowing when to shift into a low-power state pays off, too, saving up to 15 percent more energy than before. Rotem drew a distinction between “energy” — power consumed over a span of time — and instantaneous power consumption alone.
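The distinction matters because energy is power multiplied by time. A quick worked example with made-up numbers shows why a high-power sprint can still be the energy-efficient choice:

```python
# Energy (joules) = power (watts) x time (seconds). Numbers are illustrative.
sprint = 28.0 * 0.1   # brief 28 W burst lasting 0.1 s
crawl = 5.0 * 1.0     # sustained 5 W draw for a full second
# The sprint draws far more power, yet consumes less total energy -- which is
# why finishing fast and dropping to idle ("hurry up and get idle") saves battery.
```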

The idea is to give the processor the energy budget it needs for the time that it needs it, and no more. Fielding questions from the audience, Rotem made clear that there’s room for improvement: the AI has been trained on specific scenarios, and trained offline. It has already been trained, and will not dynamically react to individual user preferences. Put another way, your PC will not learn how you act — well, not in this generation, at least. Rotem also suggested that different AI models could apply to different scenarios — gaming, for example.

Intel’s Rotem also suggested that the popular performance-per-watt metric could be out of date.

Intel

Rotem closed by suggesting something a bit controversial: that performance per watt, a key metric for energy-efficient architectures like Arm, didn’t matter any more. Most laptops spend just four minutes of a typical day in a high-power state, Rotem said, and desktops spend about 100 minutes in the same state. Over time, he said, the ratio between the thermal design power of a chip and the actual energy consumed over time will diminish, as the processors themselves become more efficient.

We know that Intel plans to talk more about its upcoming client processors at its Intel Innovation conference in San Jose on September 19. It sounds like energy efficiency could be one of the components of Meteor Lake.

CPUs and Processors

