Is There a Solution to Keep Moore's Law Alive?

The axiom that the number of transistors in a microchip would double about every year or two, making the brain behind computers more powerful and less expensive, appears to be nearing its end. But have researchers found a way to keep it going?

By Matt Day, The Seattle Times

Microsoft thinks it may have found a way to stave off the demise of Moore’s Law.

The maxim, named for Intel co-founder Gordon Moore, held that the number of transistors in a microchip would double about every year or two, making the brain behind computers more powerful and less expensive.

That phenomenon held for five decades and, with a bit of help from inflation, meant that by 2000, instead of just a calculator, $1,000 bought a laptop computer capable of handling millions of instructions per second.
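To make that compounding concrete, here is a rough back-of-the-envelope sketch, not drawn from the article: it assumes a two-year doubling period and a starting point of about 2,300 transistors, roughly the count in Intel's first microprocessor in 1971. The baseline and the specific years shown are illustrative assumptions only.

    # Rough illustrative model of Moore's Law-style doubling.
    # Assumptions (not article data): a two-year doubling period and a
    # ~2,300-transistor baseline, roughly Intel's first microprocessor (1971).

    START_YEAR = 1971
    START_TRANSISTORS = 2_300       # assumed baseline for illustration
    DOUBLING_PERIOD_YEARS = 2       # the article says "about every year or two"

    def projected_transistors(year: int) -> int:
        """Project a transistor count for a given year under the simple doubling model."""
        doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
        return round(START_TRANSISTORS * 2 ** doublings)

    for year in (1971, 1981, 1991, 2000, 2016):
        print(f"{year}: roughly {projected_transistors(year):,} transistors")

Under those assumptions the model lands in the tens of millions of transistors by 2000 and in the billions by the mid-2010s, which is the scale of growth the article is describing.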

Now, some technologists say, those gains are stalling, limited in part by the physical bounds of the raw materials that go into central processing units built by the likes of Intel and AMD.

“The question,” Doug Burger, a Microsoft distinguished engineer and researcher, said in an interview, “is what bag of tricks are we going to pull out to keep (Moore’s Law) going.”

Burger thinks his team may have the answer.

One of Microsoft’s major announcements this week at Ignite, the Redmond company’s annual information technology worker conference, was essentially a better supercomputer.

Burger, in a simulated demo on Monday, showed how field-programmable gate arrays (FPGAs), computer chips that can be reprogrammed for specific tasks after they leave the factory floor, were adding firepower to Microsoft’s network of on-demand computing power.

Using all of the power of Microsoft’s data centers worldwide, the company could translate all 5 million articles on the English-language Wikipedia in less than a tenth of a second.

Over the past two years, the company has quietly been installing FPGAs on new servers added to its global fleet of data centers. They’re currently helping to rank results in the Bing search engine and speed the performance of Microsoft’s Azure cloud-computing network.

Microsoft is alone among major cloud-computing players in widely deploying FPGA technology, Burger said, though Chinese search giant Baidu is experimenting with FPGA-powered machine-learning applications, and IBM and Oracle have used the devices on a smaller scale.

Amazon.com, Microsoft’s Seattle-area rival in the growing cloud-computing business, can’t match the FPGA-enabled horsepower at the same price, said Mark Russinovich, Microsoft’s chief technology officer for Azure.

“We started, I think, pretty early,” Burger said. “It took us four years to get it right.”

For Burger, a former University of Texas professor, this is just the beginning. The company aims for FPGAs, currently available only as a network-speed booster for the high end of Microsoft’s cloud-computing network, to eventually help power the rest of the hardware and machine-learning services the company rents to customers.

With that kind of computing power, the previously costly world of supercomputing could leave the domain of the well-funded laboratory or corporate server room.

Just like the jump from an ancient calculator to a modern laptop.

©2016 The Seattle Times. Distributed by Tribune Content Agency, LLC.