What Makes Facebook’s App Center Work?

How does Facebook choose which applications to feature in its app center? The social network’s Wei Xu, Xin Liu, and T.R. Vishwanath explained the process in a note on the Facebook Engineering page.

The engineers said an average of 220 million Facebook users visit app center each month, and those users are 40 percent more likely to return the next day. App center became available globally Aug. 1.

The Facebook trio discussed their goals for app center, how they built a recommendation engine, how they determine whether apps are high-quality, and the algorithms the social network uses to populate app center. Here are some of the highlights:

The goal is for curation of the app center to be driven by quality and personalization, instead of editorialization. Just as with news feed, personalization in app center will improve over time as people and their friends engage with more apps.

To efficiently solve this problem, we built a recommendation engine directly into app center, so that, just as with news feed, each person would have a personalized experience. The recommendation engine powers the app center and helps it learn people’s preferences in order to serve them with app recommendations that are timely, socially relevant, and unique to them. This allows a more diverse set of apps to become discoverable, particularly those in harder-to-find or up-and-coming categories.

The system follows an aggregator-leaf architecture — very similar to that of a search engine. Because we have a lot of data, it is necessary to partition the objects into multiple subsets (shards), where each leaf node is responsible for only one subset. The aggregator acts as a central controller, receiving the recommendation request from the front-end Web server and distributing it to the leaf nodes. Each leaf node then finds a set of best candidates from the objects stored on the local machine and returns them to the aggregator. The aggregator then performs a final merge and returns the best results to the client.
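The aggregator-leaf pattern described above can be sketched in a few lines. This is an illustrative toy, not Facebook's code: each leaf ranks candidates from its own shard, and the aggregator fans the request out and merges the per-leaf results (in production the fan-out would be an RPC, not a loop).

```python
import heapq

class Leaf:
    """Holds one shard of the data: a dict mapping app_id -> score."""
    def __init__(self, shard):
        self.shard = shard

    def top_k(self, k):
        # Best candidates from this machine's local subset.
        return heapq.nlargest(k, self.shard.items(), key=lambda kv: kv[1])

class Aggregator:
    """Central controller: distributes the request, merges the results."""
    def __init__(self, leaves):
        self.leaves = leaves

    def recommend(self, k):
        candidates = []
        for leaf in self.leaves:  # in production, an RPC fan-out to shard servers
            candidates.extend(leaf.top_k(k))
        # Final merge across all shards.
        return heapq.nlargest(k, candidates, key=lambda kv: kv[1])

leaves = [
    Leaf({"word_game": 0.9, "farm_sim": 0.4}),
    Leaf({"photo_app": 0.8, "trivia": 0.7}),
]
agg = Aggregator(leaves)
print(agg.recommend(2))  # two best apps across all shards
```

Because each leaf only returns its local top k, the aggregator merges small lists rather than the full data set, which is what makes the sharded design cheap to query.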

After that, the front end collects user feedback, which is then integrated into the app recommendation engine. We scale this system in two ways: The first is to increase the number of shards so that we can handle more data. The second way is to have multiple replicas so that we can handle more traffic. Using replicas also adds redundancy to the system, which allows us to tolerate the failure of some machines.

In order to accurately measure quality, we developed a system that randomly surveys users, asking them to rate an app shortly after we detect that they have used it. Then, when we compute the average rating for an app, we include a confidence adjustment based on the number of ratings the app has received.
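One common way to implement a confidence adjustment like the one described above (the exact formula Facebook uses is not published) is a Bayesian-style average: shrink an app's raw average rating toward a global prior, with the prior's influence fading as the number of ratings grows.

```python
def adjusted_rating(ratings, prior_mean=3.0, prior_weight=10):
    """Confidence-adjusted average: with few ratings the result stays
    near the prior; with many ratings it approaches the app's own mean.
    prior_mean and prior_weight are illustrative assumptions."""
    n = len(ratings)
    if n == 0:
        return prior_mean
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)
```

With this scheme, an app with two 5-star ratings scores lower than an app with two hundred 4-star ratings, which matches the intuition that a small sample deserves less confidence.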

We found that the number of daily active users (i.e., the average number of users who used the app in a day) was a good measure of how large the app is, while the number of monthly active users could be inflated by spikes of activity during the month. So we settled on a formula for app quality that is primarily a function of its average rating, as well as average daily active users.
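A quality score in the spirit of that passage might weight the average rating most heavily and fold in daily active users as a secondary, log-damped signal. The weights and the normalization here are illustrative assumptions, not Facebook's actual formula.

```python
import math

def app_quality(avg_rating, avg_dau, rating_weight=0.8):
    """Hypothetical quality score: primarily the average rating (0-5 scale),
    with log-damped average daily active users as a secondary signal."""
    dau_signal = math.log10(1 + avg_dau) / 7.0  # roughly 1.0 at ~10M DAU
    return rating_weight * (avg_rating / 5.0) + (1 - rating_weight) * dau_signal
```

The log damping reflects the article's point: raw user counts can be inflated by spikes, so size should influence the score far less than a jump in average rating does.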

From the algorithmic point of view, the app center recommendation system has three major elements: candidate selection, scoring and ranking, and real-time updates.

The key to candidate selection is efficiency and high recall. We use several heuristics to choose promising candidates, the first being the selection of popular items based on a user’s demographic information. The second heuristic we use is the selection of social items, because we believe that people are generally interested in their friends’ activities. The third heuristic is to select items related to objects liked or interacted with by the user in the past.
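The three heuristics named above can be combined into a single candidate pool. This is a minimal sketch under obvious assumptions: the dicts stand in for real backend lookups, and the field names are hypothetical.

```python
def select_candidates(user, popular_by_demo, friend_apps, related_apps):
    """Union of three candidate sources, mirroring the heuristics above."""
    candidates = set()
    # 1. Popular items for the user's demographic bucket.
    candidates.update(popular_by_demo.get(user["demo"], []))
    # 2. Social items: apps the user's friends engage with.
    for friend in user["friends"]:
        candidates.update(friend_apps.get(friend, []))
    # 3. Items related to apps the user liked or used in the past.
    for app in user["history"]:
        candidates.update(related_apps.get(app, []))
    return candidates

user = {"demo": "18-24", "friends": ["alice"], "history": ["word_game"]}
popular = {"18-24": ["trivia_app"]}
friend_apps = {"alice": ["farm_sim"]}
related = {"word_game": ["crossword_app"]}
candidates = select_candidates(user, popular, friend_apps, related)
```

Using cheap set unions here fits the stated goal of efficiency and high recall: the point is to gather a broad pool quickly and leave precision to the scoring stage.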

Once we obtain a set of candidates, we fetch their features from local storage and calculate ranking scores for them. A good scoring function should be able to capture high-order interactions among three types of features.

The first type is explicit features we can obtain directly, like demographic information about the user. The second type is dynamic features, such as number of likes and impressions for objects. The third type — learned latent features — is more interesting. These features are learned from the user-object interaction history, which can capture user preference and object flavor.
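One standard way that learned latent features feed a scoring function (Facebook's actual model is not published) is matrix-factorization style: each user and each app gets a small learned vector, and their dot product estimates affinity. The vectors below are made-up examples.

```python
def latent_score(user_vec, app_vec):
    """Affinity estimate: dot product of learned latent vectors."""
    return sum(u * a for u, a in zip(user_vec, app_vec))

# Hypothetical 3-dimensional latent vectors learned from interaction history.
user = [0.9, 0.1, 0.3]       # this user leans toward word games
word_game = [0.8, 0.0, 0.2]  # an app whose "flavor" matches that lean
farm_sim = [0.1, 0.9, 0.4]
```

Here `latent_score(user, word_game)` exceeds `latent_score(user, farm_sim)`, so the word game would rank higher for this user even though neither preference was ever stated explicitly.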

Readers: How often do you visit app center?

David Cohen is editor of Adweek's Social Pro Daily. david.cohen@adweek.com
Publish date: October 3, 2012