Google is using machine learning and artificial intelligence to wring even more efficiency out of its mighty data centers.
In a presentation today at Data Centers Europe 2014, Google’s Joe Kava said the company has begun using a neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them. Kava is the Internet giant’s vice president of data centers.
In effect, Google has built a computer that knows more about its data centers than even the company’s engineers. The humans remain in charge, but Kava said the use of neural networks will allow Google to reach new frontiers in efficiency in its server farms, moving beyond what its engineers can see and analyze.
Google already operates some of the most efficient data centers on earth. Using artificial intelligence will allow Google to peer into the future and model how its data centers will perform in thousands of scenarios.
In early usage, the neural network has been able to predict Google’s Power Usage Effectiveness (PUE) with 99.6 percent accuracy. Its recommendations have led to efficiency gains that appear small but can yield major cost savings when applied across a data center housing tens of thousands of servers.
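PUE is a standard industry metric: the total energy a facility consumes divided by the energy actually delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal illustration (the figures below are hypothetical, not Google's):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    the energy delivered to IT equipment. An ideal facility scores 1.0;
    everything above 1.0 is overhead (cooling, power distribution, etc.)."""
    return total_facility_kwh / it_equipment_kwh

# A hypothetical facility drawing 1,120 kWh to deliver 1,000 kWh to servers:
print(pue(1120, 1000))  # → 1.12
```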
Why turn to machine learning and neural networks? The primary reason is the growing complexity of data centers, a challenge for Google, which uses sensors to collect hundreds of millions of data points about its infrastructure and its energy use.
“In a dynamic environment like a data center, it can be difficult for humans to see how all of the variables interact with each other,” said Kava. “We’ve been at this (data center optimization) for a long time. All of the obvious best practices have already been implemented, and you really have to look beyond that.”
Enter Google’s ‘Boy Genius’
Google’s neural network was created by Jim Gao, an engineer whose colleagues have given him the nickname “Boy Genius” for his prowess in analyzing large datasets. Gao had been doing cooling analysis using computational fluid dynamics, which uses monitoring data to create a 3D model of airflow within a server room.
Gao thought it was possible to create a model that tracks a broader set of variables, including IT load, weather conditions, and the operations of the cooling towers, water pumps and heat exchangers that keep Google’s servers cool.
“One thing computers are good at is seeing the underlying story in the data, so Jim took the information we gather in the course of our daily operations and ran it through a model to help make sense of complex interactions that his team – being mere mortals – may not otherwise have noticed,” Kava said in a blog post. “After some trial and error, Jim’s models are now 99.6 percent accurate in predicting PUE. This means he can use the models to come up with new ways to squeeze more efficiency out of our operations.”
Gao began working on the machine learning initiative as a “20 percent project,” a Google tradition of allowing employees to spend a chunk of their work time exploring innovations beyond their specific work duties. Gao wasn’t yet an expert in artificial intelligence. To learn the fine points of machine learning, he took a course from Stanford University Professor Andrew Ng.
Neural networks mimic how the human brain works, allowing computers to adapt and “learn” tasks without being explicitly programmed for them. Google’s search engine is often cited as an example of this type of machine learning, which is also a key research focus at the company.
“The model is nothing more than a series of differential calculus equations,” Kava explained. “But you need to understand the math. The model begins to learn about the interactions between these variables.”
Gao’s first task was crunching the numbers to identify the factors that had the largest impact on energy efficiency of Google’s data centers, as measured by PUE. He narrowed the list down to 19 variables and then designed the neural network, a machine learning system that can analyze large datasets to recognize patterns.
“The sheer number of possible equipment combinations and their setpoint values makes it difficult to determine where the optimal efficiency lies,” Gao writes in the white paper on his initiative. “In a live DC, it is possible to meet the target setpoints through many possible combinations of hardware (mechanical and electrical equipment) and software (control strategies and setpoints). Testing each and every feature combination to maximize efficiency would be unfeasible given time constraints, frequent fluctuations in the IT load and weather conditions, as well as the need to maintain a stable DC environment.”
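Google has not published its model's internals beyond the white paper, but the basic idea — a small neural network trained by gradient descent to predict PUE from operating variables — can be sketched in miniature. Everything below is a toy illustration: the variable names, the synthetic relationship between them, and the network size are all invented for the example.

```python
import math
import random

random.seed(0)

def toy_pue(it_load, wet_bulb, pump_speed):
    # Synthetic "ground truth" relating operating variables to PUE.
    # This is an invented relationship, not Google's actual data.
    return 1.05 + 0.10 * wet_bulb + 0.05 * (1.0 - it_load) + 0.02 * pump_speed

# Training data: random operating points, features scaled to [0, 1].
data = [(random.random(), random.random(), random.random()) for _ in range(200)]
targets = [toy_pue(*x) for x in data]

# One hidden layer of 5 tanh units, trained by plain stochastic gradient descent.
H = 5
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 1.1  # start near a typical PUE value

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2, hidden

lr = 0.05
for epoch in range(500):
    for x, y in zip(data, targets):
        pred, hidden = forward(x)
        err = pred - y
        # Backpropagate the squared-error gradient through the network.
        for j in range(H):
            grad_h = err * w2[j] * (1 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]
            for i in range(3):
                w1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[0] - y) ** 2 for x, y in zip(data, targets)) / len(data)
print(f"training MSE: {mse:.6f}")
```

The real system reportedly used 19 input variables rather than three, but the training loop — fit the model to historical sensor data, then query it to explore operating setpoints without disturbing a live data center — is the same shape.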
Adobe is continuing its full-court press to convince photographers to move to its Creative Cloud subscription-based licensing model. Today’s announcement of Creative Cloud 2014 marks its biggest effort yet. New features in Photoshop, lots of new mobile goodies, and a permanent discounted subscription for photographers were highlighted by Adobe as it rolled out its newly branded 2014 edition of its Creative Suite for the Cloud.
Photoshop 2014: Path-based blurs and focus-based selection are headline features
While Adobe plans to continue to roll out incremental improvements to its Suite as they are ready, it has decided to provide annual milestone releases to make it easier for plug-in developers to have known release numbers for testing. Today’s CC 2014 launch features updates to all 14 Creative Suite applications, but two new features in Photoshop CC will be of most interest to photographers — path-based blurs and focus-based selections.
Adobe has previously provided a variety of tools for creative blurring of an image, including tools for simulating motion, but Photoshop 2014 takes the capability to a new level. Motion blurs can be made along a line, a radius, or just about any path that can be constructed using Photoshop’s curve construction tools. So in addition to simple motion, like a vehicle moving in a straight line, it is possible to mimic spinning wheels or even a vehicle on a swerving path.
Creating selections based on focus is also new in Photoshop 2014. You can tell Photoshop to select only areas that are in focus, and use that selection to create a mask for other commands. The magic is far from perfect, so you can further refine the selection using the usual set of Adobe tools, of course. This worked quite well for the demo images Adobe chose, which featured an in-focus subject in the foreground against a distant, out-of-focus background. We’ll see how accurate it is with real-world images now that the production version has been released.
Photoshop 2014 now has “experimental features,” including touch and high-dpi support for Windows
With this release of Photoshop CC, Adobe is also providing an experimental features capability. Users will be able to selectively activate features that otherwise would not have made it into the product. The most exciting of these for Windows users is support for high-dpi displays and touch gestures. The high-dpi support scales user interface elements by 200%, which will make Photoshop a lot less painful to use on high-resolution laptops and tablets. Touch gesture support includes standard Windows 8 gestures like pinch to zoom, and the new version offers improved stylus support.
Creative Cloud subscribers with an iPhone or iPad will also benefit from a new capability to manage their Adobe assets from their mobile device, using Adobe’s Creative Cloud app for iOS. All these goodies are available for immediate download from Adobe, or by using the integrated Update capability in your Creative Cloud applications.
Adobe is pushing hard into mobile — as long as you own an iPad
Apple, Samsung Electronics and Google, searching for applications that could turn nascent wearable technology like smartwatches and bracelets from curiosities into must-have items, have all set their sights on monitoring blood sugar, several people familiar with the plans say.
These firms are variously hiring medical scientists and engineers, asking US regulators about oversight and developing glucose-measuring features in future wearable devices, the sources said.
The first round of technology may be limited, but eventually the companies could compete in a global blood-sugar tracking market worth over $12 billion by 2017, according to research firm GlobalData.
Diabetes afflicts 29 million Americans and cost the economy some $245 billion in 2012, a 41% rise in five years. Many diabetics prick their fingers as much as 10 times daily to check levels of a type of sugar called glucose.
Non-invasive technology could take many forms. Electricity or ultrasound could pull glucose through the skin for measurement, for instance, or a light could be shined through the skin so that a spectroscope could measure for indications of glucose.
“All the biggies want glucose on their phone,” said John Smith, former chief scientific officer of Johnson & Johnson’s LifeScan, which makes blood glucose monitoring supplies. “Get it right, and there’s an enormous payoff.”
Apple, Google and Samsung declined to comment, but Courtney Lias, director at the US Food and Drug Administration’s chemistry and toxicology devices division, told Reuters a marriage between mobile devices and glucose-sensing is made in heaven.
In a December meeting with Apple executives, the FDA described how it may regulate a glucometer that measures blood sugar, according to an FDA summary of the discussion.
Such a device could avoid regulation if used for nutrition, but if marketed to diabetics, it likely would be regulated as a medical device, according to the summary, first reported by the Apple Toolbox blog.
The tech companies are likely to start off focusing on non-medical applications, such as fitness and education.
Even an educational device would need a breakthrough from current technology, though, and some in the medical industry say the tech firms, new to the medical world, don’t understand the core challenges.
“There is a cemetery full of efforts to measure glucose in a non-invasive way,” said DexCom chief executive Terrance Gregg, whose firm is known for minimally invasive techniques. To succeed would require several hundred million dollars or even a billion dollars, he said.
Silicon Valley is already opening its vast wallet.
Medtronic senior vice president of Medicine and Technology Stephen Oesterle recently said he now considers Google to be the medical device firm’s next great rival, thanks to its funding for research and development, or R&D.
“We spend $1.5 billion a year on R&D at Medtronic — and it’s mostly D,” he told the audience at a recent conference. “Google is spending $8 billion a year on R&D and, as far as I can tell, it’s mostly R.”
Google has been public about some of its plans: it has developed a smart contact lens that measures glucose. In a blog post detailing plans for its smart contact lens, Google described an LED system that could warn of high or low blood sugar by flashing tiny lights. It has recently said it is looking for partners to bring the lens to market.
The device, which uses tiny chips and sensors that resemble bits of glitter to measure glucose levels in tears, is expected to be years away from commercial development, and skeptics wonder if it will ever be ready.
Previous attempts at accurate non-invasive measurement have been foiled by body movement and fluctuations in hydration and temperature. Tears also have lower concentrations of glucose than blood, making them harder to track.
But the Life Sciences team in charge of the lens and other related research is housed at the Google X facility, where it works on major breakthroughs such as the self-driving car, a former employee who requested anonymity said.
Apple’s efforts center on its iWatch, which is on track to ship in October, three sources at leading supply chain firms told Reuters. It is not clear whether the initial release will incorporate glucose-tracking sensors.
Still, Apple has poached executives and bio-sensor engineers from such medical technology firms as Masimo, Vital Connect, and the now-defunct glucose monitoring startup C8 Medisensors.
“It has scooped up many of the most talented people with glucose-sensing expertise,” said George Palikaras, CEO of Mediwise, a startup that hopes to measure blood sugar levels beneath the skin’s surface by transmitting radio waves through a section of the human body.
The tech companies are also drawing mainstream interest to the field, he said. “When Google announced its smart contact lens, that was one of the best days of my career. We started getting a ton of emails,” Palikaras said.
Samsung was among the first tech companies to produce a smartwatch, which failed to catch on widely. It has since introduced a platform for mobile health, called Simband, which could be used on smart wristbands and other mobile devices.
Samsung is looking for partners and will allow developers to try out different sensors and software. One Samsung employee, who declined to be named, said the company expects to foster noninvasive glucose monitoring.
Sources said Samsung is working with startups to implement a traffic light system in future Galaxy Gear smartwatches that flashes blood-sugar warnings.
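A traffic-light warning scheme of the kind described is simple to sketch: map a glucose reading to green, amber, or red bands. The thresholds below are illustrative only, loosely based on common clinical ranges; they are not Samsung's actual design.

```python
def glucose_traffic_light(mg_dl: float) -> str:
    """Map a blood-glucose reading (mg/dL) to a traffic-light warning color.
    Thresholds are illustrative, roughly following common clinical guidance,
    and are hypothetical — not taken from any vendor's product."""
    if mg_dl < 70 or mg_dl > 180:
        return "red"    # hypo- or hyperglycemic range: alert the wearer
    if mg_dl > 140:
        return "amber"  # elevated: caution
    return "green"      # normal range

print(glucose_traffic_light(100))  # → green
print(glucose_traffic_light(155))  # → amber
print(glucose_traffic_light(60))   # → red
```

In a real product the hard part is not this mapping but the sensor behind it — getting a trustworthy non-invasive reading in the first place, which is exactly the breakthrough the article says remains elusive.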
Samsung Ventures has made a number of investments in the field, including in Glooko, a startup that helps physicians access their patients’ glucose readings, and in an Israeli glucose monitoring startup through its $50 million Digital Health Fund.
Ted Driscoll, a health investor with Claremont Creek Ventures, told Reuters he’s heard pitches from potentially promising glucose monitoring startups, over a dozen in recent memory.
Software developers say they hope to incorporate blood glucose data into health apps, which is of particular interest to athletes and health-conscious users.
“We’re paying close attention to research around how sugar impacts weight loss,” said Mike Lee, cofounder of MyFitnessPal.
After decades of false starts, many medical scientists are confident about a breakthrough on glucose monitoring. Processing power allows quick testing of complex ideas, and the miniaturization of sensors, the low cost of electronics, and the rapid proliferation of mobile devices have given rise to new opportunities.
One optimist is Jay Subhash, a recently departed senior product manager for Samsung Electronics. “I wouldn’t be at all surprised to see it one of these days,” he said.
What is A+ certification?
A+ certification is an industry-wide, vendor-neutral competency test for PC repair technicians.
What types of jobs are available with A+?
A partial list includes: PC repair technician, help desk analyst, desktop support specialist, and computer specialist. Additionally, many non-IT companies require A+ certification for jobs such as cable installer, postal equipment installer and repairer, and telecommunications installer.