Apple Passes Coca-Cola as Most Valuable Brand

Apple is the new most valuable brand in the world, according to a closely followed annual report.

The report, to be released on Monday, is from Interbrand, a corporate identity and brand consulting company owned by the Omnicom Group that has been compiling what it calls the Best Global Brands report since 2000. The previous No. 1 brand, Coca-Cola, fell to No. 3.

Not only has Apple replaced Coca-Cola as first among the 100 most valuable brands, based on criteria that include financial performance, but this is also the first time that the soft drink known for slogans like “It’s the real thing” has not been No. 1.

Apple’s arrival in the top spot was perhaps “a matter of time,” Jez Frampton, global chief executive at Interbrand, said in a recent interview. Apple was No. 2 last year, climbing from No. 8 in the 2011 report.

“What is it they say, ‘Long live the king’?” Mr. Frampton asked. “This year, the king is Apple.”

The 2013 report begins: “Every so often, a company changes our lives, not just with its products, but with its ethos. This is why, following Coca-Cola’s 13-year run at the top of Best Global Brands, Interbrand has a new No. 1 — Apple.”

The report estimates the value of the Apple brand at $98.3 billion, up 28 percent from the 2012 report. The value of the Coca-Cola brand also rose, by 2 percent to $79.2 billion, but that was not sufficient to give Coca-Cola a 14th year as Interbrand’s most valuable brand.

Although “Coca-Cola is an efficient, outstanding brand marketer, no doubt about it,” Mr. Frampton said, Apple and other leading technology brands have become “very much the poster child of the marketing community.”

That is underscored by the brand in second place in the new report: Google, which rose from fourth place last year. In fact, of the top 10 Best Global Brands for 2013, five are in technology: Apple; Google; Microsoft, No. 5, unchanged from last year; Samsung, 8, compared with 9 last year; and Intel, 9, compared with 8 last year.

Samsung’s ascent followed the company’s adoption of a new brand strategy called the Brand Ideal, which includes “a greater focus on social purpose,” Sue Shim, executive vice president and chief marketing officer at Samsung, said by e-mail. That reflected research indicating American consumers would switch brands to “one that was associated with improving people’s lives,” she added.

I.B.M. — No. 4 in 2013, down a notch from 2012 — is ranked as a business services brand. Otherwise, technology would account for six of the top 10.

“Brands like Apple and Google and Samsung are changing our behavior: how we buy, how we communicate with each other, even whether we speak with each other,” Mr. Frampton said. “They have literally changed the way we live our lives.”

Among other transformative technology brands that performed well in the new report was Facebook, which climbed to 52 from 69 last year, when it made its first appearance on the list.

However, not all technology brands fared well. BlackBerry, which tumbled last year to 93 from 56 in 2011, has disappeared from the list. And Nokia, which dropped to 19 from 14 in 2011, finished this year in 57th place — “the biggest faller” among the 100, Mr. Frampton said.

Among nontechnology brands, a notable addition to the list was Chevrolet, at 89, the first General Motors brand to rank among the Best Global Brands.

“It feels good to hit the list for the first time,” Alan Batey, global head of Chevrolet at G.M., said in a telephone interview. “It’s a great first step, but we’ve got a long way to go. There are a lot of big brands in front of us.”

The milestone reflects how General Motors has been “making a conscious effort to globalize Chevrolet,” Mr. Batey said, selling the brand in 140 countries in ads that play up attributes like “value for money and designs that move hearts and minds.”

Commonwealth, the creative agency for Chevrolet, “played a key role” in helping the brand make the list, he added. Commonwealth is part of the McCann Worldgroup division of the Interpublic Group of Companies.

Last year, when Coca-Cola finished atop the Best Global Brands list for the 13th consecutive time, an executive at the Coca-Cola Company acknowledged the streak but noted that “nothing lasts forever.”

A year later, the executive, Joseph V. Tripodi, executive vice president and chief marketing and commercial leadership officer, had this reaction: “Of course, we would like to remain on top of the list forever. That said, we are honored to continue to be included among such an esteemed group of global brands, and we congratulate Apple and Google, both valued partners of ours.”

“We’ve seen the value of technology brands rise as they create new ways for people to stay connected virtually,” Mr. Tripodi said by e-mail. “We understand this, as the lasting power of our brand is built on the social moment of sharing a Coca-Cola with friends and family.”

“Creating these simple moments and delivering on our brand promise each and every day remains our focus,” he added, “as we continue to grow the value of brand Coca-Cola for decades to come.”

If it is any consolation, Coca-Cola remains far ahead of Apple and Google in likes on Facebook fan pages. Coca-Cola has 73.2 million, compared with 9.8 million for Apple and 15.1 million for Google.

Deep Learning

Building a Brain

There have been many competing approaches to these challenges. One has been to feed computers information and rules about the world, which required programmers to laboriously write software familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.
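To make the rule-writing approach concrete, here is a minimal sketch in Python of the sort of hand-coded rule it required; the function, the threshold, and the test images are illustrative inventions, not taken from any system mentioned here.

```python
# A hand-written rule for spotting a vertical edge in a grayscale image,
# in the spirit of the laborious rule-coding approach described above.
# The threshold is an arbitrary value a programmer would tune by hand.

def has_vertical_edge(image, threshold=30):
    """Return True if any two horizontally adjacent pixels differ sharply.

    `image` is a list of rows, each a list of 0-255 intensity values.
    """
    for row in image:
        for left, right in zip(row, row[1:]):
            if abs(left - right) > threshold:
                return True  # the rule fires: declare an edge
    return False

print(has_vertical_edge([[200, 200, 40, 40]]))    # True: sharp boundary
print(has_vertical_edge([[100, 110, 120, 130]]))  # False: smooth gradient
```

Every such attribute had to be specified by hand, which is why the approach scaled so poorly to ambiguous, real-world data.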

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
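A single simulated neuron of this kind takes only a few lines of code. The sketch below is a generic illustration, assuming the common logistic (sigmoid) squashing function to produce the 0-to-1 response the passage describes; the inputs and the randomly assigned weights are made up.

```python
import math
import random

def sigmoid(x):
    """Squash any real number into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    """One simulated neuron: a weighted sum of its inputs, squashed to (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

random.seed(0)
inputs = [0.9, 0.1, 0.4]                           # e.g., digitized pixel values
weights = [random.uniform(-1, 1) for _ in inputs]  # random starting "weights"
print(neuron_output(inputs, weights, bias=0.0))    # a response between 0 and 1
```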

Some of today’s artificial neural networks can train themselves to recognize complex patterns.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
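The adjusting algorithm the passage mentions can be sketched with a delta-rule-style update: each weight is nudged in proportion to the error and to the input feeding it. This is a textbook stand-in for illustration, not necessarily the update any system described in the article used.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(inputs, weights, target, learning_rate=0.5):
    """If the neuron's output misses the target, nudge each weight
    in proportion to the error and its input (a delta-rule-style update)."""
    output = sigmoid(sum(i * w for i, w in zip(inputs, weights)))
    error = target - output
    return [w + learning_rate * error * i for w, i in zip(weights, inputs)]

# "Blitzing" the neuron with a labeled example over and over drives the
# weights toward values that recognize the pattern consistently.
weights = [0.0, 0.0]
for _ in range(200):
    weights = train_step([1.0, 0.5], weights, target=1.0)
print(sigmoid(1.0 * weights[0] + 0.5 * weights[1]))  # now close to 1.0
```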

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
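Hinton’s 2006 technique is usually described as stacking restricted Boltzmann machines; the sketch below substitutes a simpler autoencoder-style layer (each layer learns to reconstruct its own input) so the greedy, layer-by-layer structure is easier to see. The layer sizes and the random data are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_layer(data, hidden_size, steps=500, lr=0.1):
    """Greedily train one layer to reconstruct its own input
    (an autoencoder) and return its learned encoder weights."""
    n_in = data.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, hidden_size))  # encoder
    V = rng.normal(scale=0.1, size=(hidden_size, n_in))  # decoder
    for _ in range(steps):
        h = sigmoid(data @ W)        # this layer's features
        recon = sigmoid(h @ V)       # its attempt to rebuild the input
        err = recon - data
        # Propagate the reconstruction error within this layer only.
        d_recon = err * recon * (1 - recon)
        d_h = (d_recon @ V.T) * h * (1 - h)
        V -= lr * h.T @ d_recon / len(data)
        W -= lr * data.T @ d_h / len(data)
    return W

# Stand-in for digitized pixels: 100 examples, 16 raw inputs each.
data = rng.random((100, 16))

# Train the first layer on raw inputs, the next on the first layer's
# outputs, and so on: each layer builds on the features below it.
activations = data
for hidden_size in (8, 4):
    W = train_layer(activations, hidden_size)
    activations = sigmoid(activations @ W)

print(activations.shape)  # (100, 4): progressively higher-level features
```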

Like cats. Last June, Google demonstrated one of the largest neural networks yet, with more than a billion connections. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos. One simulated neuron in the software model fixated on images of cats. Others focused on human faces, yellow flowers, and other objects. And thanks to the power of deep learning, the system identified these discrete objects even though no humans had ever defined or labeled them.

What stunned some AI experts, though, was the magnitude of improvement in image recognition. The system correctly categorized objects and themes in the YouTube images 16 percent of the time. That might not sound impressive, but it was 70 percent better than previous methods. And, Dean notes, there were 22,000 categories to choose from; correctly slotting objects into some of them required, for example, distinguishing between two similar varieties of skate fish. That would have been challenging even for most humans. When the system was asked to sort the images into 1,000 more general categories, the accuracy rate jumped above 50 percent.

Big Data

Training the many layers of virtual neurons in the experiment took 16,000 computer processors—the kind of computing infrastructure that Google has developed for its search engine and other services. At least 80 percent of the recent advances in AI can be attributed to the availability of more computer power, reckons Dileep George, cofounder of the machine-learning startup Vicarious.

There’s more to it than the sheer size of Google’s data centers, though. Deep learning has also benefited from the company’s method of splitting computing tasks among many machines so they can be done much more quickly. That’s a technology Dean helped develop earlier in his 14-year career at Google. It vastly speeds up the training of deep-learning neural networks as well, enabling Google to run larger networks and feed a lot more data to them.
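The article does not spell out the mechanism, but a common way to split such training across machines is data parallelism: each worker computes a gradient on its own shard of the data, and the results are averaged into one update. The sketch below simulates four workers serially on a toy linear model; the model, data, and worker count are all illustrative, not a description of Google’s system.

```python
import numpy as np

rng = np.random.default_rng(1)

def worker_gradient(weights, inputs, targets):
    """One worker's gradient for a linear model, computed on its own shard."""
    preds = inputs @ weights
    return inputs.T @ (preds - targets) / len(inputs)

# A toy dataset split into 4 shards, standing in for 4 machines.
inputs = rng.random((400, 8))
true_w = rng.normal(size=8)
targets = inputs @ true_w
shards = list(zip(np.array_split(inputs, 4), np.array_split(targets, 4)))

weights = np.zeros(8)
for _ in range(300):
    # Each "machine" computes its gradient in parallel (simulated serially
    # here); averaging the shard gradients gives the full-batch update.
    grads = [worker_gradient(weights, x, y) for x, y in shards]
    weights -= 0.5 * np.mean(grads, axis=0)

print(np.allclose(weights, true_w, atol=0.05))  # True: workers trained jointly
```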

Already, deep learning has improved voice search on smartphones. Until last year, Google’s Android software used a method that misunderstood many words. But in preparation for a new release of Android last July, Dean and his team helped replace part of the speech system with one based on deep learning. Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms. Since it’s likelier to understand what was actually uttered, the result it returns is likelier to be accurate as well. Almost overnight, the number of errors fell by up to 25 percent—results so good that many reviewers now deem Android’s voice search smarter than Apple’s more famous Siri voice assistant.

For all the advances, not everyone thinks deep learning can move artificial intelligence toward something rivaling human intelligence. Some critics say deep learning and AI in general ignore too much of the brain’s biology in favor of brute-force computing.

One such critic is Jeff Hawkins, founder of Palm Computing, whose latest venture, Numenta, is developing a machine-learning system that is biologically inspired but does not use deep learning. Numenta’s system can help predict energy consumption patterns and the likelihood that a machine such as a windmill is about to fail. Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time. Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment. “Google’s attitude is: lots of data makes up for everything,” Hawkins says.

But if it doesn’t make up for everything, the computing resources a company like Google throws at these problems can’t be dismissed. They’re crucial, say deep-learning advocates, because the brain itself is still so much more complex than any of today’s neural networks. “You need lots of computational resources to make the ideas work at all,” says Hinton.

What’s Next

Although Google is less than forthcoming about future applications, the prospects are intriguing. Clearly, better image search would help YouTube, for instance. And Dean says deep-learning models can use phoneme data from English to more quickly train systems to recognize the spoken sounds in other languages. It’s also likely that more sophisticated image recognition could make Google’s self-driving cars much better. Then there’s search and the ads that underwrite it. Both could see vast improvements from any technology that’s better and faster at recognizing what people are really looking for—maybe even before they realize it.