How computers got shockingly good at recognizing images


Right now, I can open up Google Photos, type “beach,” and see my photos from various beaches I’ve visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature is based on a technology called deep convolutional neural networks, which allows software to understand images in a sophisticated way that wasn’t possible with prior techniques.
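To give a flavor of how such networks work: the basic building block is the 2-D convolution, in which a small filter slides over an image and produces a "feature map" of responses. In a trained network, many layers of learned filters are stacked; the sketch below instead uses a single hand-picked vertical-edge filter and a made-up 4×4 image, purely for illustration.

```python
# A minimal sketch of the 2-D convolution at the heart of a
# convolutional neural network. Real networks stack many layers of
# filters whose values are *learned* from labeled images; here the
# filter is hand-picked for illustration.

def convolve2d(image, kernel):
    """Slide `kernel` over `image` (no padding, stride 1) and
    return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

# A tiny 4x4 "image": dark left half, bright right half.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# A 3x3 vertical-edge filter: responds where brightness
# changes from left to right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong response along the dark/bright boundary
```

Early layers of a deep network learn simple filters like this edge detector; deeper layers combine their outputs into detectors for textures, shapes, and eventually whole objects such as "beach."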

In recent years, researchers have found that the accuracy of the software gets better and better as they build deeper networks and amass larger data sets to train them. That has created an almost insatiable appetite for computing power, boosting the fortunes of GPU makers like Nvidia and AMD. Google developed its own custom neural network chip, the Tensor Processing Unit, several years ago, and other companies have scrambled to follow Google's lead.

Over at Tesla, for instance, the company has put deep learning expert Andrej Karpathy in charge of its Autopilot project. The carmaker is now developing a custom chip to accelerate neural network operations for future versions of Autopilot. Or, take Apple: the A11 and A12 chips at the heart of recent iPhones include a “neural engine” to accelerate neural network operations and allow better image- and voice-recognition applications.


