Google is using machine learning and artificial intelligence to wring even more efficiency out of its mighty data centers.

In a presentation today at Data Centers Europe 2014, Google’s Joe Kava said the company has begun using a neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them. Kava is the Internet giant’s vice president of data centers.

In effect, Google has built a computer that knows more about its data centers than even the company’s engineers. The humans remain in charge, but Kava said the use of neural networks will allow Google to reach new frontiers in efficiency in its server farms, moving beyond what its engineers can see and analyze.

Google already operates some of the most efficient data centers on earth. Using artificial intelligence will allow Google to peer into the future and model how its data centers will perform in thousands of scenarios.

In early usage, the neural network has been able to predict Google’s Power Usage Effectiveness with 99.6 percent accuracy. Its recommendations have led to efficiency gains that appear small, but can lead to major cost savings when applied across a data center housing tens of thousands of servers.
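PUE, the metric mentioned here, is the ratio of a facility's total energy use to the energy delivered to its IT equipment, so a value of 1.0 is the theoretical ideal. A minimal sketch of the calculation, using illustrative numbers rather than Google's actual figures:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt drawn goes to the servers themselves;
# everything above 1.0 is overhead (cooling, power distribution, lighting).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for a facility."""
    return total_facility_kw / it_equipment_kw

# Illustrative only: a facility drawing 1,120 kW overall while its IT gear uses 1,000 kW.
print(round(pue(1120.0, 1000.0), 2))  # 1.12
```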

Why turn to machine learning and neural networks? The primary reason is the growing complexity of data centers, a challenge for Google, which uses sensors to collect hundreds of millions of data points about its infrastructure and its energy use.

“In a dynamic environment like a data center, it can be difficult for humans to see how all of the variables interact with each other,” said Kava. “We’ve been at this (data center optimization) for a long time. All of the obvious best practices have already been implemented, and you really have to look beyond that.”

Enter Google’s ‘Boy Genius’

Google’s neural network was created by Jim Gao, an engineer whose colleagues have given him the nickname “Boy Genius” for his prowess analyzing large datasets. Gao had been doing cooling analysis using computational fluid dynamics, which uses monitoring data to create a 3D model of airflow within a server room.

Gao thought it was possible to create a model that tracks a broader set of variables, including IT load, weather conditions, and the operations of the cooling towers, water pumps and heat exchangers that keep Google’s servers cool.

“One thing computers are good at is seeing the underlying story in the data, so Jim took the information we gather in the course of our daily operations and ran it through a model to help make sense of complex interactions that his team – being mere mortals – may not otherwise have noticed,” Kava said in a blog post. “After some trial and error, Jim’s models are now 99.6 percent accurate in predicting PUE. This means he can use the models to come up with new ways to squeeze more efficiency out of our operations.”

How it Works

Gao began working on the machine learning initiative as a “20 percent project,” a Google tradition of allowing employees to spend a chunk of their work time exploring innovations beyond their specific work duties. Gao wasn’t yet an expert in artificial intelligence. To learn the fine points of machine learning, he took a course from Stanford University Professor Andrew Ng.

Neural networks mimic how the human brain works, allowing computers to adapt and “learn” tasks without being explicitly programmed for them. Google’s search engine is often cited as an example of this type of machine learning, which is also a key research focus at the company.

“The model is nothing more than a series of differential calculus equations,” Kava explained. “But you need to understand the math. The model begins to learn about the interactions between these variables.”

Gao’s first task was crunching the numbers to identify the factors that had the largest impact on the energy efficiency of Google’s data centers, as measured by PUE. He narrowed the list down to 19 variables and then designed the neural network, a machine learning system that can analyze large datasets to recognize patterns.
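The white paper describes a more elaborate multi-layer network, but the shape of the approach can be sketched with a minimal one-hidden-layer regression model trained by gradient descent to predict PUE from operational readings. Everything below is a synthetic stand-in: the data, the architecture, and the target are illustrative, not Google's 19 actual variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden = 200, 19, 8

# Synthetic stand-ins for normalized sensor readings and a PUE-like target.
X = rng.normal(size=(n_samples, n_features))
y = 1.1 + 0.01 * np.tanh(X @ rng.normal(size=n_features))

# One hidden layer of tanh units, trained with plain gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 1.1  # start the output bias near a typical PUE value

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden-layer activations
    return h, h @ w2 + b2          # activations, predicted PUE

_, pred = forward(X)
mse_start = float(np.mean((pred - y) ** 2))

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                           # prediction error
    grad_w2 = h.T @ err / n_samples          # output-layer gradients
    grad_b2 = err.mean()
    dh = np.outer(err, w2) * (1 - h**2)      # backprop through tanh
    grad_W1 = X.T @ dh / n_samples
    grad_b1 = dh.mean(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

_, pred = forward(X)
mse_end = float(np.mean((pred - y) ** 2))
print(mse_end < mse_start)  # training reduced the prediction error
```

Once trained on historical telemetry, a model like this can be queried with hypothetical operating conditions instead of experimenting on live equipment.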

“The sheer number of possible equipment combinations and their setpoint values makes it difficult to determine where the optimal efficiency lies,” Gao writes in the white paper on his initiative. “In a live DC, it is possible to meet the target setpoints through many possible combinations of hardware (mechanical and electrical equipment) and software (control strategies and setpoints). Testing each and every feature combination to maximize efficiency would be unfeasible given time constraints, frequent fluctuations in the IT load and weather conditions, as well as the need to maintain a stable DC environment.”
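In other words, once a trusted model exists, candidate setpoint combinations can be scored offline and only the most promising ones tried on real equipment. A toy illustration of that search, with a made-up surrogate model and invented setpoints (nothing here reflects Google's actual equipment or numbers):

```python
from itertools import product

def predicted_pue(chilled_water_c: float, n_cooling_towers: int) -> float:
    """Hypothetical surrogate model: predicted PUE for one setpoint combination.

    Pretends warmer chilled water and fewer running towers save energy,
    with a mild quadratic penalty away from a nominal water temperature.
    """
    return (1.20 - 0.005 * chilled_water_c + 0.01 * n_cooling_towers
            + 0.0002 * (chilled_water_c - 16) ** 2)

water_temps = [10, 12, 14, 16, 18]   # degrees C, illustrative setpoints
tower_counts = [2, 3, 4]

# Exhaustively score every combination against the model and keep the best.
best = min(product(water_temps, tower_counts),
           key=lambda s: predicted_pue(*s))
print(best)  # (18, 2): the combination with the lowest predicted PUE
```

With only two variables a brute-force sweep is trivial; the point of the quoted passage is that with 19 interacting variables and live constraints, this evaluation has to happen in a model rather than on the data center floor.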

Runs On a Single Server

As for hardware, the machine learning doesn’t require unusual computing horsepower, according to Kava, who says it runs on a single server and could even work on a high-end desktop.

The system was put to work inside several Google data centers. The machine learning tool suggested several changes that yielded incremental improvements in PUE, including refinements in data center load migrations during upgrades of power infrastructure, and small changes in the water temperature across several components of the chiller system.

“Actual testing on Google (data centers) indicates that machine learning is an effective method of using existing sensor data to model DC energy efficiency and can yield significant cost savings,” Gao writes.

The Machines aren’t Taking Over

Kava said that the tool may help Google run simulations and refine future designs. But not to worry — Google’s data centers won’t become self-aware anytime soon. While the company is keen on automation, and has recently been acquiring robotics firms, the new machine learning tools won’t be taking over the management of any of its data centers.

“You still need humans to make good judgments about these things,” said Kava. “I still want our engineers to review the recommendations.”

The neural network’s biggest benefits may be seen in the way Google builds its server farms in years to come. “I can envision using this during the data center design cycle,” said Kava. “You can use it as a forward-looking tool to test design changes and innovations. I know that we’re going to find more use cases.”

Google is sharing its approach to machine learning in Gao’s white paper, in the hope that other hyperscale data center operators may be able to develop similar tools.

“This isn’t something that only Google or only Jim Gao can do,” said Kava. “I would love to see this type of analysis tool used more widely. I think the industry can benefit from it. It’s a great tool for being as efficient as possible.”

SAN FRANCISCO: For decades, medical technology firms have searched for ways to let diabetics check blood sugar easily, with scant success. Now, the world's largest mobile technology firms are getting in on the act.

Apple, Samsung Electronics and Google, searching for applications that could turn nascent wearable technology like smartwatches and bracelets from curiosities into must-have items, have all set their sights on monitoring blood sugar, several people familiar with the plans say.

These firms are variously hiring medical scientists and engineers, asking US regulators about oversight and developing glucose-measuring features in future wearable devices, the sources said. 

The first round of technology may be limited, but eventually the companies could compete in a global blood-sugar tracking market worth over $12 billion by 2017, according to research firm GlobalData.

Diabetes afflicts 29 million Americans and cost the economy some $245 billion in 2012, a 41% rise in five years. Many diabetics prick their fingers as many as 10 times daily to check levels of a type of sugar called glucose.

Non-invasive technology could take many forms. Electricity or ultrasound could pull glucose through the skin for measurement, for instance, or a light could be shined through the skin so that a spectroscope could measure for indications of glucose.

“All the biggies want glucose on their phone,” said John Smith, former chief scientific officer of Johnson & Johnson's LifeScan, which makes blood glucose monitoring supplies. “Get it right, and there's an enormous payoff.”

Apple, Google and Samsung declined to comment, but Courtney Lias, director at the US Food and Drug Administration's chemistry and toxicology devices division, told Reuters a marriage between mobile devices and glucose-sensing is “made in heaven.”

In a December meeting with Apple executives, the FDA described how it may regulate a glucometer that measures blood sugar, according to an FDA summary of the discussion. 

Such a device could avoid regulation if used for nutrition, but if marketed to diabetics, it likely would be regulated as a medical device, according to the summary, first reported by the Apple Toolbox blog.

The tech companies are likely to start off focusing on non-medical applications, such as fitness and education. 

Even an educational device would need a breakthrough from current technology, though, and some in the medical industry say the tech firms, new to the medical world, don't understand the core challenges.

“There is a cemetery full of efforts to measure glucose in a non-invasive way,” said DexCom chief executive Terrance Gregg, whose firm is known for minimally invasive techniques. To succeed would require several hundred million dollars or even a billion dollars, he said.

Silicon Valley is already opening its vast wallet. 

Medtronic senior vice president of Medicine and Technology Stephen Oesterle recently said he now considers Google to be the medical device firm's next great rival, thanks to its funding for research and development, or R&D.

“We spend $1.5 billion a year on R&D at Medtronic, and it's mostly D,” he told the audience at a recent conference. “Google is spending $8 billion a year on R&D and, as far as I can tell, it's mostly R.”

Google has been public about some of its plans: it has developed a smart contact lens that measures glucose. In a blog post detailing plans for its smart contact lens, Google described an LED system that could warn of high or low blood sugar by flashing tiny lights. It has recently said it is looking for partners to bring the lens to market.

The device, which uses tiny chips and sensors that resemble bits of glitter to measure glucose levels in tears, is expected to be years away from commercial development, and skeptics wonder if it will ever be ready. 

Previous attempts at accurate non-invasive measurement have been foiled by body movement, and fluctuations in hydration and temperature. Tears also have lower concentrations of glucose, which are harder to track. 

But the Life Sciences team in charge of the lens and other related research is housed at the Google X facility, where it works on major breakthroughs such as the self-driving car, a former employee who requested anonymity said. 

Apple's efforts center on its iWatch, which is on track to ship in October, three sources at leading supply chain firms told Reuters. It is not clear whether the initial release will incorporate glucose-tracking sensors.

Still, Apple has poached executives and bio-sensor engineers from such medical technology firms as Masimo, Vital Connect, and the now-defunct glucose monitoring startup C8 Medisensors. 

“It has scooped up many of the most talented people with glucose-sensing expertise,” said George Palikaras, CEO of Mediwise, a startup that hopes to measure blood sugar levels beneath the skin's surface by transmitting radio waves through a section of the human body.

The tech companies are also drawing mainstream interest to the field, he said. “When Google announced its smart contact lens, that was one of the best days of my career. We started getting a ton of emails,” Palikaras said.

Samsung was among the first tech companies to produce a smartwatch, which failed to catch on widely. It since has introduced a platform for mobile health, called Simband, which could be used on smart wrist bands and other mobile devices. 

Samsung is looking for partners and will allow developers to try out different sensors and software. One Samsung employee, who declined to be named, said the company expects to foster noninvasive glucose monitoring. 

Sources said Samsung is working with startups to implement a traffic light system in future Galaxy Gear smartwatches that flashes blood-sugar warnings. 
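A "traffic light" display of this kind amounts to simple threshold classification of the sensor reading. A hypothetical sketch of the idea (the thresholds here are illustrative only, not medical guidance and not Samsung's actual design):

```python
def glucose_light(mg_dl: float) -> str:
    """Map a blood-glucose reading (mg/dL) to a warning colour.

    Threshold values are made up for illustration.
    """
    if mg_dl < 70 or mg_dl > 180:
        return "red"      # far out of range: alert immediately
    if mg_dl < 80 or mg_dl > 140:
        return "amber"    # drifting out of range: warn the wearer
    return "green"        # within the target range

print(glucose_light(95), glucose_light(150), glucose_light(65))
# green amber red
```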

Samsung Ventures has made a number of investments in the field, including in Glooko, a startup that helps physicians access their patients' glucose readings, and in an Israeli glucose monitoring startup through its $50 million Digital Health Fund.

Ted Driscoll, a health investor with Claremont Creek Ventures, told Reuters he's heard pitches from potentially promising glucose monitoring startups, over a dozen in recent memory.

Software developers say they hope to incorporate blood glucose data into health apps, which is of particular interest to athletes and health-conscious users. 

“We're paying close attention to research around how sugar impacts weight loss,” said Mike Lee, cofounder of MyFitnessPal.

After decades of false starts, many medical scientists are confident about a breakthrough on glucose monitoring. Processing power allows quick testing of complex ideas, and the miniaturization of sensors, the low cost of electronics, and the rapid proliferation of mobile devices have given rise to new opportunities. 

One optimist is Jay Subhash, a recently departed senior product manager for Samsung Electronics. “I wouldn't be at all surprised to see it one of these days,” he said.
