Introducing Similarity Search at Flickr

At Flickr, we understand that the value in our image corpus is only unlocked when our members can find photos and photographers that inspire them, so we strive to enable the discovery and appreciation of new photos.

To further that effort, today we are introducing similarity search on Flickr. If you hover over a photo on a search result page, you will reveal a “…” button that exposes a menu that gives you the option to search for photos similar to the photo you are currently viewing.

In many ways, photo search is very different from traditional web or text search. First, the goal of web search is usually to satisfy a particular information need, while with photo search the goal is often one of discovery; as such, it should be delightful as well as functional. We have taken this to heart throughout Flickr. For instance, our color search feature, which allows filtering by color scheme, and our style filters, which allow filtering by styles such as “minimalist” or “patterns,” encourage exploration. Second, in traditional web search, the goal is usually to match documents to a set of keywords in the query. That is, the query is in the same modality—text—as the documents being searched. Photo search usually matches across modalities: text to image. Text querying is a necessary feature of a photo search engine, but, as the saying goes, a picture is worth a thousand words. And beyond saving people the effort of so much typing, many visual concepts genuinely defy accurate description. Now, we’re giving our community a way to easily explore those visual concepts with the “…” button, a feature we call the similarity pivot.

The similarity pivot is a significant addition to the Flickr experience because it offers our community an entirely new way to explore and discover the billions of incredible photos and millions of incredible photographers on Flickr. It allows people to look for images of a particular style, it gives people a view into universal behaviors, and even when it “messes up,” it can force people to look at the unexpected commonalities and oddities of our visual world with a fresh perspective.

What is “similarity”?

To understand how an experience like this is powered, we first need to understand what we mean by “similarity.” There are many ways photos can be similar to one another. Consider some examples.

It is apparent that all of these groups of photos illustrate some notion of “similarity,” but each is different. Roughly, they are: similarity of color, similarity of texture, and similarity of semantic category. And there are many others that you might imagine as well.

What notion of similarity is best suited for a site like Flickr? Ideally, we’d like to be able to capture multiple types of similarity, but we decided early on that semantic similarity—similarity based on the semantic content of the photos—was vital to facilitate discovery on Flickr. This requires a deep understanding of image content for which we employ deep neural networks.

We have been using deep neural networks at Flickr for a while for various tasks such as object recognition, NSFW prediction, and even prediction of aesthetic quality. For these tasks, we train a neural network to map the raw pixels of a photo into a set of relevant tags, as illustrated below.

Internally, the neural network accomplishes this mapping incrementally by applying a series of transformations to the image, which can be thought of as a vector of numbers corresponding to the pixel intensities. Each transformation in the series produces another vector, which is in turn the input to the next transformation, until finally we have a vector that we specifically constrain to be a list of probabilities for each class we are trying to recognize in the image. To be able to go from raw pixels to a semantic label like “hot air balloon,” the network discards lots of information about the image, including information about appearance, such as the color of the balloon, its relative position in the sky, etc. For similarity, instead of relying on that final output, we can extract an internal vector from the network before the final layer.

For common neural network architectures, this vector—which we call a “feature vector”—has many hundreds or thousands of dimensions. We can’t necessarily say with certainty that any one of these dimensions means something in particular as we could at the final network output, whose dimensions correspond to tag probabilities. But these vectors have an important property: when you compute the Euclidean distance between these vectors, images containing similar content will tend to have feature vectors closer together than images containing dissimilar content. You can think of this as a way that the network has learned to organize information present in the image so that it can output the required class prediction. This is exactly what we are looking for: Euclidean distance in this high-dimensional feature space is a measure of semantic similarity. The graphic below illustrates this idea: points in the neighborhood around the query image are semantically similar to the query image, whereas points in neighborhoods further away are not.

This measure of similarity is not perfect and cannot capture all possible notions of similarity—it will be constrained by the particular task the network was trained to perform, i.e., scene recognition. However, it is effective for our purposes, and, importantly, it contains information beyond merely the semantic content of the image, such as appearance, composition, and texture. Most importantly, it gives us a simple algorithm for finding visually similar photos: compute the distance in the feature space of a query image to each index image and return the images with lowest distance. Of course, there is much more work to do to make this idea work for billions of images.
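
To make this concrete, here is a minimal sketch of the brute-force version of this idea (tractable only for a small index), assuming a precomputed array of feature vectors called “features”; it is an illustration, not our production code:

    import numpy as np

    def most_similar(query_vec, features, top_k=10):
        # features: (N, D) array of feature vectors, one row per indexed photo.
        # Squared Euclidean distance from the query to every indexed vector.
        dists = ((features - query_vec) ** 2).sum(axis=1)
        # Return the indexes of the top_k photos closest to the query.
        return np.argsort(dists)[:top_k]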

Large-scale approximate nearest neighbor search

With an index as large as Flickr’s, computing distances exhaustively for each query is intractable. Additionally, storing a high-dimensional floating point feature vector for each of billions of images takes a large amount of disk space and poses even more difficulty if these features need to be in memory for fast ranking. To solve these two issues, we adopt a state-of-the-art approximate nearest neighbor algorithm called Locally Optimized Product Quantization (LOPQ).

To understand LOPQ, it is useful to first look at a simple strategy. Rather than ranking all vectors in the index, we can first filter a set of good candidates and only do expensive distance computations on them. For example, we can use an algorithm like k-means to cluster our index vectors, find the cluster to which each vector is assigned, and index the corresponding cluster id for each vector. At query time, we find the cluster that the query vector is assigned to and fetch the items that belong to the same cluster from the index. We can even expand this set if we like by fetching items from the next nearest cluster.
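
A rough sketch of this simple strategy (an illustration only, not our production system) might build the coarse index and fetch candidates like this, again assuming a precomputed (N, D) array of index vectors called “features”:

    import numpy as np
    from collections import defaultdict
    from sklearn.cluster import KMeans

    # Cluster the index vectors into coarse cells.
    coarse = KMeans(n_clusters=1024, n_init=4, random_state=0).fit(features)

    # Inverted index: cluster id -> list of photo ids assigned to that cluster.
    inverted_index = defaultdict(list)
    for photo_id, cluster_id in enumerate(coarse.labels_):
        inverted_index[cluster_id].append(photo_id)

    def candidates(query_vec, n_nearest_clusters=2):
        # Rank clusters by distance from the query to each centroid, then fetch
        # the items belonging to the nearest few clusters.
        d = ((coarse.cluster_centers_ - query_vec) ** 2).sum(axis=1)
        nearest = np.argsort(d)[:n_nearest_clusters]
        return [pid for c in nearest for pid in inverted_index[c]]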

This idea will take us far, but not far enough for a billion-scale index. For example, with 1 billion photos, we need 1 million clusters so that each cluster contains an average of 1000 photos. At query time, we would have to compute the distance from the query to each of these 1 million cluster centroids in order to find the nearest clusters, which is quite a lot. We can do better, however, if we instead split our vectors in half by dimension and cluster each half separately. In this scheme, each vector is assigned a pair of cluster ids, one for each half of the vector. If we choose k = 1000 to cluster both halves, we have k × k = 1000 × 1000 = 1,000,000 possible pairs. In other words, by clustering each half separately and assigning each item a pair of cluster ids, we get the same granularity of partitioning (1 million clusters total) with only 2 × 1000 = 2,000 distance computations, each over half the number of dimensions, for a total computational savings of 1000x. Conversely, for the same computational cost, we gain a factor of k more partitions of the data space, providing a much finer-grained index.
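
A toy sketch of the two-way split (again, just an illustration): each photo is keyed by a pair of cluster ids, and with k centroids per half we can address k × k cells while computing only 2k centroid distances.

    import numpy as np
    from sklearn.cluster import KMeans

    # Small k for a toy run; the example in the text uses k = 1000 per half,
    # i.e. 1,000,000 cells for only 2,000 centroid comparisons.
    k = 32
    D = features.shape[1]
    half = D // 2

    km_a = KMeans(n_clusters=k, n_init=4, random_state=0).fit(features[:, :half])
    km_b = KMeans(n_clusters=k, n_init=4, random_state=0).fit(features[:, half:])

    def cell_id(vec):
        # The pair of cluster ids addressing one of the k * k multi-index cells.
        a = km_a.predict(vec[None, :half])[0]
        b = km_b.predict(vec[None, half:])[0]
        return (a, b)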

This idea of splitting vectors into subvectors and clustering each split separately is called product quantization. When we use this idea to index a dataset, it is called the inverted multi-index, and it forms the basis for fast candidate retrieval in our similarity index. Typically the distribution of points over the clusters in a multi-index will be more unbalanced than in a standard k-means index, but this imbalance is a fair trade for the much higher-resolution partitioning that it buys us. In fact, a multi-index will only be balanced across clusters if the two halves of the vectors are perfectly statistically independent. This is not the case in most real-world data, but some heuristic preprocessing—like applying PCA and permuting the dimensions so that the cumulative per-dimension variance is approximately balanced between the halves—helps in many cases. And just like the simple k-means index, there is a fast algorithm for producing a ranked list of the clusters nearest to a query if we need to expand the candidate set.

After we have a set of candidates, we must rank them. We could store the full vector in the index and use it to compute the distance to each candidate item, but this would incur a large memory overhead (for example, 256-dimensional vectors of 4-byte floats would require 1 TB for 1 billion photos) as well as a computational overhead. LOPQ solves these issues by performing another product quantization, this time on the residuals of the data. The residual of a point is the difference vector between the point and its closest cluster centroid. Given a residual vector and the cluster indexes along with the corresponding centroids, we have enough information to reproduce the original vector exactly. Instead of storing the residuals directly, LOPQ product quantizes the residuals, usually with a higher number of splits, and stores only the cluster indexes in the index. For example, if we split the vector into 8 splits and each split is clustered with 256 centroids, we can store the compressed vector with only 8 bytes regardless of the original number of dimensions (though certainly a higher number of dimensions will result in higher approximation error). With this lossy representation we can produce a reconstruction of a vector from its 8-byte code: we simply take each quantization code, look up the corresponding centroid, and concatenate these 8 centroids together to produce a reconstruction. Likewise, we can approximate the distance from the query to an index vector by computing the distance between the query and the reconstruction. We can do this computation quickly for many candidate points by precomputing the squared distances from each split of the query to all of the centroids for that split. After computing this table, we can compute the approximate squared distance for an index point by looking up the precomputed squared distance for each of its 8 codes and summing them together. This caching trick allows us to quickly rank many candidates without resorting to distance computations in the original vector space.
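
The sketch below shows the flavor of this table-based ranking with 8 splits and 256 centroids per split. For simplicity it quantizes the vectors themselves rather than the residuals, and it is an illustration of the technique rather than our production code:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    D, M, K = 64, 8, 256              # dimensions, splits, centroids per split
    d = D // M                        # dimensions per split
    data = rng.normal(size=(10000, D)).astype(np.float32)

    # Train one codebook per split.
    codebooks = [KMeans(n_clusters=K, n_init=4, random_state=0)
                 .fit(data[:, m * d:(m + 1) * d]) for m in range(M)]

    # Encode: each vector becomes M one-byte codes (8 bytes total here).
    codes = np.stack([cb.predict(data[:, m * d:(m + 1) * d])
                      for m, cb in enumerate(codebooks)], axis=1).astype(np.uint8)

    def approx_distances(query):
        # M x K table: squared distance from each query split to every centroid.
        tables = np.stack([((cb.cluster_centers_ - query[m * d:(m + 1) * d]) ** 2).sum(axis=1)
                           for m, cb in enumerate(codebooks)])
        # The distance to an encoded point is just M table lookups, summed.
        return tables[np.arange(M)[:, None], codes.T].sum(axis=0)

    query = rng.normal(size=D).astype(np.float32)
    print(np.argsort(approx_distances(query))[:10])   # ten closest candidates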

LOPQ adds one final detail: for each cluster in the multi-index, LOPQ fits a local rotation to the residuals of the points that fall in that cluster. This rotation is simply a PCA that aligns the major directions of variation in the data with the axes, followed by a permutation that heuristically balances the variance across the splits of the product quantization. Note that this is the same preprocessing step that is usually performed at the top level for the multi-index. It tends to make the approximate distance computations more accurate by mitigating errors introduced by assuming that each split of the vector in the product quantization is statistically independent of the other splits. Additionally, since a rotation is fit for each cluster separately, these rotations adapt to the local data distribution.
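
For illustration, fitting such a local rotation for one cluster’s residuals might look like the following sketch (the variance-balancing permutation across splits is omitted):

    import numpy as np

    def local_rotation(residuals):
        # residuals: (n, D) differences between the points assigned to one
        # coarse cluster and that cluster's centroid.
        cov = np.cov(residuals, rowvar=False)
        _, eigvecs = np.linalg.eigh(cov)   # eigenvectors sorted by ascending eigenvalue
        R = eigvecs[:, ::-1].T             # rotation: largest variance first
        return residuals @ R.T, R          # rotated residuals and the rotation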

Below is a diagram from the LOPQ paper that illustrates the core ideas of LOPQ. K-means (a) is very effective at allocating cluster centroids, illustrated as red points, that target the distribution of the data, but it has other drawbacks at scale as discussed earlier. In the 2d example shown, we can imagine product quantizing the space with 2 splits, each with 1 dimension. Product Quantization (b) clusters each dimension independently and cluster centroids are specified by pairs of cluster indexes, one for each split. This is effectively a grid over the space. Since the splits are treated as if they were statistically independent, we will, unfortunately, get many clusters that are “wasted” by not targeting the data distribution. We can improve on this situation by rotating the data such that the main dimensions of variation are axis-aligned. This version, called Optimized Product Quantization (c), does a better job of making sure each centroid is useful. LOPQ (d) extends this idea by first coarsely clustering the data and then doing a separate instance of OPQ for each cluster, allowing highly targeted centroids while still reaping the benefits of product quantization in terms of scalability.

LOPQ is state-of-the-art for quantization methods, and you can find more information about the algorithm, as well as benchmarks, here. Additionally, we provide an open-source implementation in Python and Spark which you can apply to your own datasets. The algorithm produces a set of cluster indexes that can be queried efficiently in an inverted index, as described. We have also explored use cases that use these indexes as a hash for fast deduplication of images and large-scale clustering. These extended use cases are studied here.

Conclusion

We have described our system for large-scale visual similarity search at Flickr. Techniques for producing high-quality vector representations for images with deep learning are constantly improving, enabling new ways to search and explore large multimedia collections. These techniques are being applied in other domains as well to, for example, produce vector representations for text, video, and even molecules. Large-scale approximate nearest neighbor search has importance and potential application in these domains as well as many others. Though these techniques are in their infancy, we hope similarity search provides a useful new way to appreciate the amazing collection of images at Flickr and surface photos of interest that may have previously gone undiscovered. We are excited about the future of this technology at Flickr and beyond.

Acknowledgements

Yannis Kalantidis, Huy Nguyen, Stacey Svetlichnaya, Arel Cordero. Special thanks to the rest of the Computer Vision and Machine Learning team and the Vespa search team, which manages Yahoo’s internal search engine.

A Year Without a Byte

One of the largest cost drivers in running a service like Flickr is storage. We’ve described multiple techniques to get this cost down over the years: use of COS, creating sizes dynamically on GPUs, and perceptual compression. These projects have been very successful, but our storage cost is still significant.
At the beginning of 2016, we challenged ourselves to go further — to go a full year without needing new storage hardware. Using multiple techniques, we got there.

The Cost Story

A little back-of-the-envelope math shows storage costs are a real concern. On a very high-traffic day, Flickr users upload as many as twenty-five million photos. These photos require an average of 3.25 megabytes of storage each, totalling over 80 terabytes of data. Stored naively in a cloud service similar to S3, this day’s worth of data would cost over $30,000 per year, and continue to incur costs every year.
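
The arithmetic behind those figures, roughly; the exact total depends on unit conventions, price tier, and replication:

    photos_per_day = 25_000_000
    mb_per_photo = 3.25
    gb_per_day = photos_per_day * mb_per_photo / 1_000    # ~81,250 GB, i.e. over 80 TB

    s3_dollars_per_gb_month = 0.03                        # 2014-era list price
    dollars_per_year = gb_per_day * s3_dollars_per_gb_month * 12
    print(round(gb_per_day), round(dollars_per_year))     # ~81250 GB, roughly $29,000/yr at list price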

And a very large service will have over two hundred million active users. At a thousand images each, storage in a service similar to S3 would cost over $250 million per year (or $1.25 / user-year) plus network and other expenses. This compounds as new users sign up and existing users continue to take photos at an accelerating rate. Thankfully, our costs, and every large service’s costs, are different from storing naively at S3, but they remain significant.



Cost per byte has decreased, but bytes per image from iPhone-type platforms have increased. Cost per image hasn’t changed significantly.

Storage costs do drop over time. For example, S3 costs dropped from $0.15 per gigabyte-month in 2009 to $0.03 per gigabyte-month in 2014, and cloud storage vendors have added low-cost options for data that is infrequently accessed. NAS vendors have also delivered large price reductions.

Unfortunately, these lower costs per byte are counteracted by other forces. On iPhones, increasing camera resolution, burst mode and the addition of short animations (Live Photos) have increased bytes-per-image rapidly enough to keep storage cost per image roughly constant. And iPhone images are far from the largest.

In response to these costs, photo storage services have pursued a variety of product options. To name a few: storing lower quality images or re-compressing, charging users for their data usage, incorporating advertising, selling associated products such as prints, and tying storage to purchases of handsets.

There are also a number of engineering approaches to controlling storage costs. We sketched out a few and cover three that we implemented below: adjusting thresholds on our storage systems, rolling out existing savings approaches to more images, and deploying lossless JPG compression.

Adjusting Storage Thresholds

As we dug into the problem, we looked at our storage systems in detail. We discovered that our settings were based on assumptions about high write and delete loads that didn’t hold. Our storage is pretty static. Users only rarely delete or change images once uploaded. We also had two distinct areas of just-in-case space. 5% of our storage was reserved space for snapshots, useful for undoing accidental deletes or writes, and 8.5% was held free in reserve. This resulted in about 13% of our storage going unused. Trade lore states that disks should remain 10% free to avoid performance degradation, but we found 5% to be sufficient for our workload. So we combined our two just-in-case areas into one and reduced our free space threshold to that level. This was our simplest approach to the problem (by far), but it resulted in a large gain. With a couple of simple configuration changes, we freed up more than 8% of our storage.



Adjusting storage thresholds

Extending Existing Approaches

In our earlier posts, we have described dynamic generation of thumbnail sizes and perceptual compression. Combining the two approaches decreased thumbnail storage requirements by 65%, though we hadn’t applied these techniques to many of our images uploaded prior to 2014. One big reason for this: large-scale changes to older files are inherently risky, and require significant time and engineering work to do safely.

Because we were concerned that further rollout of dynamic thumbnail generation would place a heavy load on our resizing infrastructure, we targeted only thumbnails from less-popular images for deletes. Using this approach, we were able to handle our complete resize load with just four GPUs. The process put a heavy load on our storage systems; to minimize the impact we randomized our operations across volumes. The entire process took about four months, resulting in even more significant gains than our storage threshold adjustments.



Decreasing the number of thumbnail sizes

Lossless JPG Compression

Flickr has had a long-standing commitment to keeping uploaded images byte-for-byte intact. This has placed a floor on how much storage reduction we can do, but there are tools that can losslessly compress JPG images. Two well-known options are PackJPG and Lepton, from Dropbox. These tools work by decoding the JPG, then very carefully compressing it using a more efficient approach. This typically shrinks a JPG by about 22%. At Flickr’s scale, this is significant. The downside is that these re-compressors use a lot of CPU. PackJPG compresses at about 2MB/s on a single core, or about fifteen core-years for a single petabyte worth of JPGs. Lepton uses multiple cores and, at 15MB/s, is much faster than PackJPG, but uses roughly the same amount of CPU time.
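
The core-year figure follows directly from the throughput:

    petabyte_in_mb = 1_000_000_000                 # 1 PB expressed in MB (decimal units)
    packjpg_mb_per_second = 2
    seconds = petabyte_in_mb / packjpg_mb_per_second
    core_years = seconds / (365 * 24 * 3600)
    print(round(core_years, 1))                    # ~15.9 core-years per petabyte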

This CPU requirement also complicated on-demand serving. If we recompressed all the images on Flickr, we would need potentially thousands of cores to handle our decompress load. We considered putting some restrictions on access to compressed images, such as requiring users to login to access original images, but ultimately found that if we targeted only rarely accessed private images, decompressions would occur only infrequently. Additionally, restricting the maximum size of images we compressed limited our CPU time per decompress. We rolled this out as a component of our existing serving stack without requiring any additional CPUs, and with only minor impact to user experience.

Running our users’ original photos through lossless compression was probably our highest-risk approach. We can recreate thumbnails easily, but a corrupted source image cannot be recovered. Key to our approach was a re-compress-decompress-verify strategy: every recompressed image was decompressed and compared to its source before removing the uncompressed source image.
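
A minimal sketch of that safety check; lossless_compress and lossless_decompress are placeholders for whichever re-compressor is used (PackJPG, Lepton, etc.), not real APIs:

    def compress_and_verify(jpg_bytes, lossless_compress, lossless_decompress):
        compressed = lossless_compress(jpg_bytes)
        roundtrip = lossless_decompress(compressed)
        # Only allow the uncompressed source to be removed if the round trip is
        # byte-for-byte identical to the original image.
        if roundtrip != jpg_bytes:
            raise ValueError("round-trip mismatch; keeping the original")
        return compressed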

This is still a work in progress. We have compressed many images, but working through our entire corpus is a lengthy process, and we had already reached our zero-new-storage-gear goal by mid-year.

On The Drawing Board

We have several other ideas which we’ve investigated but haven’t implemented yet.

In our current storage model, we have originals and thumbnails available for every image, each stored in two datacenters. This model assumes that the images need to be viewable relatively quickly at any point in time. But private images belonging to accounts that have been inactive for more than a few months are unlikely to be accessed. We could “freeze” these images, dropping their thumbnails and recreating them when the dormant user returns. This “thaw” process would take under thirty seconds for a typical account. Additionally, for photos that are private (but not dormant), we could go to a single uncompressed copy of each thumbnail, storing a compressed copy in a second datacenter that would be decompressed as needed.

We might not even need two copies of each dormant original image available on disk. We’ve pencilled out a model where we place one copy on a slower, but underutilized, tape-based system while leaving the other on disk. This would decrease availability during an outage, but as these images belong to dormant users, the effect would be minimal and users would still see their thumbnails. The delicate piece here is the placement of data, as seeks on tape systems are prohibitively slow. Depending on the details of what constitutes a “dormant” photo, these techniques could comfortably reduce storage used by over 25%.

We’ve also looked into de-duplication, but we found our duplicate rate is in the 3% range. Users do have many duplicates of their own images on their devices, but these are excluded by our upload tools. We’ve also looked into using alternate image formats for our thumbnail storage. WebP can be much more compact than ordinary JPG, but our use of perceptual compression gets us close to WebP byte size and permits much faster resizes. The BPG project proposes a dramatically smaller, H.265-based encoding, but it has IP and other issues.

There are several similar optimizations available for videos. Although Flickr is primarily image-focused, videos are typically much larger than images and consume considerably more storage.

Conclusion



Optimization over several releases

Since 2013 we’ve optimized our usage of storage by nearly 50%. Our latest efforts helped us get through 2016 without purchasing any additional storage, and we still have a few more options available.

Peter Norby, Teja Komma, Shijo Joy and Bei Wu formed the core team for our zero-storage-budget project. Many others assisted the effort.

Personalized Group Recommendations on Flickr

There are two primary paradigms for the discovery of digital content. First is the search paradigm, in which the user is actively looking for specific content using search terms and filters (e.g., Google web search, Flickr image search, Yelp restaurant search, etc.). Second is a passive approach, in which the user browses content presented to them (e.g., NYTimes news, Flickr Explore, and Twitter trending topics). Personalization benefits both approaches by providing relevant content that is tailored to users’ tastes (e.g., Google News, Netflix homepage, LinkedIn job search, etc.). We believe personalization can improve the user experience at Flickr by guiding both new as well as more experienced members as they explore photography. Today, we’re excited to bring you personalized group recommendations.

Flickr Groups are great for bringing people together around a common theme, be it a style of photography, camera, place, event, topic, or just some fun. Community members join for several reasons—to consume photos, to get feedback, to play games, to get more views, or to start a discussion about photos, cameras, life or the universe. We see value in connecting people with appropriate groups based on their interests. Hence, we decided to start the personalization journey by providing contextually relevant and personalized content that is tuned to each person’s unique taste.

Of course, in order to respect users’ privacy, group recommendations only consider public photos and public groups. Additionally, recommendations are private to the user. In other words, nobody else sees what is recommended to an individual.

In this post we describe how we are improving Flickr’s group recommendations. In particular, we describe how we are replacing a curated, non-personalized, static list of groups with a dynamic group recommendation engine that automatically generates new results based on user interactions to provide personalized recommendations unique to each person. The algorithms and backend systems we are building are broad and applicable to other scenarios, such as photo recommendations, contact recommendations, content discovery, etc.


Figure: Personalized group recommendations

Challenges

One challenge of recommendations is determining a user’s interests. These interests could be user-specified, explicit preferences or could be inferred implicitly from their actions, supported by user feedback. For example:

  • Explicit:
    • Ask users what topics interest them
    • Ask users why they joined a particular group
  • Implicit:
    • Infer user tastes from groups they join, photos they like, and users they follow
    • Infer why users joined a particular group based on their activity, interactions, and dwell time
  • Feedback:
    • Get feedback on recommended items when users perform actions such as “Join” or “Follow” or click “Not interested”

Another challenge of recommendations is figuring out group characteristics, i.e., what type of group is it? What interests does it serve? What brings Flickr members to this group? We can infer this by analyzing group members, photos posted to the group, discussions, and the amount of activity in the group.

Once we have figured out user preferences and group characteristics, recommendation essentially becomes a matchmaking process. At a high level, we want to support 3 use cases:

  • Use Case # 1: Given a group, return all groups that are “similar”
  • Use Case # 2: Given a user, return a list of recommended groups
  • Use Case # 3: Given a photo, return a list of groups that the photo could belong to

Collaborative Filtering

One approach to recommender systems is to present content similar to what the user is currently interacting with. For example, Amazon’s “Customers who bought this item also bought” or LinkedIn’s “People also viewed.” Item-based collaborative filtering can be used for computing similar items.


Figure: Collaborative filtering in action

By Moshanin (Own work) [CC BY-SA 3.0] from Wikipedia

Intuitively, two groups are similar if they have the same content or same set of users. We observed that users often post the same photo to multiple groups. So, to begin, we compute group similarity based on a photo’s presence in multiple groups.  

Consider the following sample matrix M(Gi -> Pj) constructed from group photo pools, where 1 means the corresponding group (Gi) contains the image (Pj), and an empty cell (0) means the group does not contain the image.

Figure: sample group-photo matrix M(Gi -> Pj)

From this, we can compute M·M′ (where M′ is the transpose of M), which gives us the number of common photos between every pair of groups (Gi, Gj):

Figure: matrix of common photo counts, M·M′

We use modified cosine similarity to compute a similarity score between every pair of groups:

Figure: modified cosine similarity formula

To make this calculation robust, we only consider groups that have a minimum of X photos and keep only strong relationships (i.e., groups that have at least Y common photos). Finally, we use the similarity scores to come up with the top k-nearest neighbors for each group.
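
As a sketch (an illustration, not our Hadoop implementation), this computation on a sparse binary group-photo matrix might look like the following; MIN_PHOTOS and MIN_COMMON stand in for the unspecified X and Y thresholds, and plain binary cosine similarity stands in for the modified variant mentioned above:

    import numpy as np

    def group_neighbors(M, MIN_PHOTOS=50, MIN_COMMON=5, top_k=10):
        # M: scipy.sparse CSR matrix (num_groups x num_photos) of group photo pools.
        counts = np.asarray(M.sum(axis=1)).ravel()    # photos per group
        common = (M @ M.T).tocoo()                    # common-photo counts per pair

        neighbors = {}
        for gi, gj, c in zip(common.row, common.col, common.data):
            if gi == gj or c < MIN_COMMON:
                continue
            if counts[gi] < MIN_PHOTOS or counts[gj] < MIN_PHOTOS:
                continue
            # Cosine similarity for binary vectors: |Gi ∩ Gj| / sqrt(|Gi| * |Gj|).
            sim = c / np.sqrt(counts[gi] * counts[gj])
            neighbors.setdefault(gi, []).append((sim, gj))

        # Keep the top-k most similar groups for each group.
        return {g: sorted(s, reverse=True)[:top_k] for g, s in neighbors.items()}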

We also compute group similarity based on group membership, i.e., by defining a group-user relationship matrix (Gi -> Uj). It is interesting to note that the results obtained from this relationship are very different from those of the (Gi, Pj) matrix. The group-photo relationship tends to capture groups that are similar by content (e.g., “macro photography”). On the other hand, the group-user relationship gives us groups that the same users have joined but are possibly about very different topics, thus providing us with a diversity of results. We can extend this approach by computing group similarity using other features and relationships (e.g., autotags of photos to cluster groups by themes, geotags of photos to cluster groups by place, frequency of discussion to cluster groups by interaction model, etc.).

Using this we can easily come up with a list of similar groups (Use Case # 1). We can either merge the results obtained by different similarity relationships into a single result set, or keep them separate to power features like “Other groups similar to this group” and “People who joined this group also joined.”

We can also use the same data for recommending groups to users (Use Case # 2). We can look at all the groups that the user has already joined and recommend groups similar to those.

To come up with a list of relevant groups for a photo (Use Case # 3), we can compute photo similarity either by using a similar approach as above or by using Flickr computer vision models for finding photos similar to the query photo. A simple approach would then be to recommend groups that these similar photos belong to.

Implementation

Due to the massive scale (millions of users x 100k groups) of data, we used Yahoo’s Hadoop Stack to implement the collaborative filtering algorithm. We exploited sparsity of entity-item relationship matrices to come up with a more efficient model of computation and used several optimizations for computational efficiency. We only need to compute the similarity model once every 7 days, since signals change slowly.


Figure: Computational architecture

(All logos and icons are trademarks of respective entities)

 

Similarity scores and top k-nearest neighbors for each group are published to Redis for the quick lookups needed by the serving layer. Recommendations for each user are computed in real time when the user visits the groups page. The implementation of the serving layer takes care of a few aspects that are important from a usability and performance point of view:

  • Freshness of results: Users hate to see the same results offered every time, even if they are still relevant. We have implemented a randomization scheme that returns fresh results every X hours, while making sure that results stay static over a user’s single session (a sketch of one possible scheme follows this list).
  • Diversity of results: Diversity of results in recommendations is very important since a user might not want to join a group that is very similar to a group he’s already involved in. We require a good threshold that balances similarity and diversity. To improve diversity further, we combine recommendations from different algorithms. We also cluster the user’s groups into diverse sets before computing recommendations.
  • Dynamic results: Users expect their interactions to have a quick effect on recommendations. We thus incorporate user interactions while making subsequent recommendations so that the system feels dynamic.
  • Performance: Recommendation results are cached so that API response is quick on subsequent visits.
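
For example, the freshness behavior in the first bullet can be approximated by seeding a deterministic shuffle with the user id and a time bucket, so the ordering is stable within a bucket but rotates when the bucket changes (REFRESH_HOURS below is a placeholder for the unspecified “X hours”):

    import hashlib
    import random
    import time

    REFRESH_HOURS = 6   # placeholder; the real refresh interval is not given above

    def fresh_order(user_id, candidate_group_ids):
        bucket = int(time.time() // (REFRESH_HOURS * 3600))
        seed = hashlib.sha256(f"{user_id}:{bucket}".encode()).hexdigest()
        rng = random.Random(seed)
        shuffled = list(candidate_group_ids)
        rng.shuffle(shuffled)
        return shuffled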

Cold Start

The drawback to collaborative filtering is that it cannot offer recommendations to new users who do not have any associations. For these users, we plan to recommend groups from an algorithmically computed list of top/trending groups alongside manual curation. As users interact with the system by joining groups, the recommendations become more personalized.

Measuring Effectiveness

We use qualitative feedback from user studies and alpha group testing to understand user expectation and to guide initial feature design. However, for continued algorithmic improvements, we need an objective quantitative metric. Recommendation results by their very nature are subjective, so measuring effectiveness is tricky. The usual approach taken is to roll out to a random population of users and measure the outcome of interest for the test group as compared to the control group (ref: A/B testing).

We plan to employ this technique and measure user interaction and engagement to keep improving the recommendation algorithms. Additionally, we plan to measure explicit signals such as when users click “Not interested.” This feedback will also be used to fine-tune future recommendations for users.


Figure: Measuring user engagement

Future Directions

While we’re seeing good initial results, we’d like to continue improving the algorithms to provide better results to the Flickr community. Potential future directions can be classified broadly into 3 buckets: algorithmic improvements, new product use cases, and new recommendation applications.

If you’d like to help, we’re hiring. Check out our jobs page and get in touch.

Product Engineering: Mehul Patel, Chenfan (Frank) Sun, Chinmay Kini

We Want You… and Your Teammates

We’re hiring here at Flickr and we got pretty excited the other week when we saw Stripe’s post: BYOT (Bring Your Own Team). The sum of the parts is greater than the whole and all that. Genius <big hat tip to them>.

In case you didn’t read Stripe’s post, here’s the gist: you’re a team player, you like to make an impact, focus on a tough problem, set a challenging goal, and see the fruits of your labor after blood, sweat, and tears (or, maybe just brainpower). But you’ve got the itch to collaborate, to talk an idea through, break it down, and parallelize tasks or simply to be around your mates through work and play. Turns out you already have your go-to group of colleagues, roommates, siblings, or buddies that push, inspire, and get the best out of you. Well, in that case we may want to hire all of you!

Like Stripe, we understand the importance of team dynamics. So if you’ve already got something good going on, we want in on it too. We love Stripe and are stoked for this initiative of theirs, but if Flickr tickles your fancy (and it does ours :) consider bringing that team of yours this way too, especially if you’ve got a penchant for mobile development. We’d love to chat!

Email us: jobs at flickr.com


Photos by: @Chris Martin and @Captain Eric Willis

Introducing yakbak: Record and playback HTTP interactions in NodeJS

Did you know that the new Front End of www.flickr.com is one big Flickr API client? Writing a client for an existing API or service can be a lot of fun, but decoupling and testing that client can be quite tricky. There are many different approaches to taking the backing service out of the equation when it comes to writing tests for client code. Today we’ll discuss the pros and cons of some of these approaches, describe how the Flickr Front End team tests service-dependent libraries, and introduce you to our new NodeJS HTTP playback module: yakbak!

Scenario: Testing a Flickr API Client

Let’s jump into some code, shall we? Suppose we’re testing a (very, very simple) photo search API client:

Currently, this code will make an HTTP request to the Flickr API on every test run. This is less than desirable for several reasons:

  • UGC is unpredictable. In this test, we’re asserting that the response code is an HTTP 200, but obviously our client code needs to provide the response data to be useful. It’s impossible to write a meaningful and predictable test against live content.
  • Traffic is unpredictable. This photos search API call usually takes ~150ms for simple queries, but a more complex query or a call during peak traffic may take longer.
  • Downtime is unpredictable. Every service has downtime (the term is “four nines,” not “one hundred percent” for a reason), and if your service is down, your client tests will fail.
  • Networks are unpredictable. Have you ever tried coding on a plane? Enough said.

We want our test suite to be consistent, predictable, and fast. We’re also only trying to test our client code, not the API. Let’s take a look at some ways to replace the API with a control, allowing us to predictably test the client code.

Approach 1: Stub the HTTP client methods

We’re using superagent as our HTTP client, so we could use a mocking library like sinon to stub out superagent’s Request methods:

With these changes, we never actually make an HTTP request to the API during a test run. Now our test is predictable, controlled, and it runs crazy fast. However, this approach has some major drawbacks:

  • Tightly coupled with superagent. We’re all up in the client’s implementation details here, so if superagent ever changes their API, we’ll need to correct our tests to match. Likewise, if we ever want to use a different HTTP client, we’ll need to correct our tests as well.
  • Difficult to specify the full HTTP response. Here we’re only specifying the statusCode; what about when we need to specify the body or the headers? Talk about verbose.
  • Not necessarily accurate. We’re trusting the test author to provide a fake response that matches what the actual server would send back. What happens if the API changes the response schema? Some unhappy developer will have to manually update the tests to match reality (probably an intern, let’s be honest).

We’ve at least managed to replace the service with a control in our tests, but we can do (slightly) better.

Approach 2: Mock the NodeJS HTTP module

Every NodeJS HTTP client will eventually delegate to the standard NodeJS http module to perform the network request. This means we can intercept the request at a low level by using a tool like nock:

Great! We’re no longer stubbing out superagent and we can still control the HTTP response. This avoids the HTTP client coupling from the previous step, but still has many similar drawbacks:

  • We’re still completely implementation-dependent. If we want to pass a new query string parameter to our service, for example, we’ll also need to add it to the test so that nock will match the request.
  • It’s still laborious to specify the response headers, body, etc.
  • It’s still difficult to make sure the response body always matches reality.

At this point, it’s worth noting that none of these bullet points were an issue back when we were actually making the HTTP request. So, let’s do exactly that (once!).

Approach 3: Record and playback the HTTP interaction

The Ruby community created the excellent VCR gem for recording and replaying HTTP interactions during tests. Recorded HTTP requests exist as “tapes”, which are just files with some sort of format describing the interaction. The basic workflow goes like this:

  1. The client makes an actual HTTP request.
  2. VCR sits in front of the system’s HTTP library and intercepts the request.
  3. If VCR has a tape matching the request, it simply replays the response to the client.
  4. Otherwise, VCR lets the HTTP request through to the service, records the interaction to a new tape on disk and plays it back to the client.

Introducing yakbak

Today we’re open-sourcing yakbak, our take on recording and playing back HTTP interactions in NodeJS. Here’s what our tests look like with a yakbak proxy:

Here we’ve created a standard NodeJS http.Server with our proxy middleware. We’ve also configured our client to point to the proxy server instead of the origin service. Look, no implementation details!

yakbak tries to do things The Node Way™ wherever possible. For example, each yakbak “tape” is actually its own module that simply exports an http.Server handler, which allows us to do some really cool things. For example, it’s trivial to create a server that always responds a certain way. Since the tape’s hash is based solely on the incoming request, we can easily edit the response however we like. We’re also kicking around a handful of enhancements that should make yakbak an even more powerful development tool.

Thanks to yakbak, we’ve been writing fast, consistent, and reliable tests for our HTTP clients and applications. Want to give it a spin? Check it out today: https://github.com/flickr/yakbak

P.S. We’re hiring!

Do you love development tooling and helping keep teams on the latest and greatest technology? Or maybe you just want to help build the best home for your photos on the entire internet? We’re hiring Front End Ops and tons of other great positions. We’d love to hear from you!

Our Justified Layout Goes Open Source

We introduced the justified layout on Flickr.com late in 2011. Our community of photographers loved it for its ability to efficiently display many photos at their native aspect ratio with visually pleasing, consistent whitespace, so we quickly added it to the rest of the website.

Justified Example

It’s been through many iterations and optimizations, from back when we were primarily on the PHP stack to our lovely new JavaScript-based isomorphic stack. Last year Eric Socolofsky did a great job explaining how the algorithm works and how it fits into a larger infrastructure for Flickr specifically.

In the years following its launch, we’ve had requests from our front end colleagues in other teams across Yahoo for a reusable package that does photo (or any rectangle) presentation like this, but it’s always been too tightly coupled to our stack to separate it out and hand it over. Until now! Today we’re publishing the justified-layout algorithm wrapped in an npm module for you to use on the server, or client, in your own projects.

Install/Download

npm install justified-layout --save

Or grab it directly from Github.

Using it

It’s really easy to use. No configuration is required. Just pass in an array of aspect ratios representing the photos/boxes you’d like to lay out:

var layoutGeometry = require('justified-layout')([1.33, 1, 0.65][, config]);

If you only have dimensions and don’t want an extra step to convert them to aspect ratios, you can pass in an array of widths and heights like this:

What it returns

The geometry data for the layout items, in the same order they’re passed in.

This is the extent of what the module provides. There’s no rendering component. It’s up to you to use this data to render boxes the way you want. Use absolute positioning, background positions, canvas, generate a static image on the backend, whatever you like! There’s a very basic implementation used on the demo and docs page.

Configuration

It’s highly likely the defaults don’t satisfy your requirements; they don’t even satisfy ours. There’s a full set of configuration options to customize the output just the way you want. My favorite is the fullWidthBreakoutRowCadence option that we use on album pages. All config options are documented on the docs and demo page.

Compatibility

  • Latest Chrome
  • Latest Safari
  • Latest Firefox
  • Latest Mobile Safari
  • IE 9+
  • Node 0.10+

The future

The justified layout algorithm is just one part of our photo list infrastructure. Following this, we’ll be open sourcing more modules for handling data, handling state, reverse layouts, appending and prepending items for pagination.

We welcome your feedback, issues and contributions on Github.

P.S. Open Source at Flickr

This is the first of quite a bit of code we have in the works for open source release. If working on open source projects appeals to you, we’re hiring!

 

Configuration management for distributed systems (using GitHub and cfg4j)

Norbert Potocki, Software Engineer @ Yahoo Inc.

Warm up: Why configuration management?

When working with large-scale software systems, configuration management becomes crucial; supporting non-uniform environments gets greatly simplified if you decouple code from configuration. While building complex software/products such as Flickr, we had to come up with a simple, yet powerful, way to manage configuration. Popular approaches to solving this problem include using configuration files or having a dedicated configuration service. Our new solution combines the extremely popular GitHub and cfg4j library, giving you a very flexible approach that will work with applications of any size.

Why should I decouple configuration from the code?

  • Faster configuration changes (e.g. flipping feature toggles): Configuration can simply be injected without requiring parts of your code to be reloaded and re-executed. Config-only updates tend to be faster than code deployment.
  • Different configuration for different environments: Running your app on a laptop or in a test environment requires a different set of settings than a production instance does.
  • Keeping credentials private: If you don’t have a dedicated credential store, it may be convenient to keep credentials as part of configuration. They usually aren’t supposed to be “public,” but the code still may be. Be a good sport and don’t keep credentials in a public GitHub repo. :)

Meet the Gang: Overview of configuration management players

Let’s see what configuration-specific components we’ll be working with today:

Figure 1 –  Overview of configuration management components

Configuration repository and editor: Where your configuration lives. We’re using Git for storing configuration files and GitHub as an ad hoc editor.

Push cache: Intermediary store that we use to improve fetch speed and to ease load on GitHub servers.

CD pipeline: Continuous deployment pipeline pushing changes from repository to push cache, and validating config correctness.

Configuration library: Fetches configs from the push cache and exposes them to your business logic.

Bootstrap configuration: Initial configuration specifying where your push cache is (so that the library knows where to get configuration from).

All these players work as a team to provide an end-to-end configuration management solution.

The Coach: Configuration repository and editor

The first thing you might expect from the configuration repository and editor is ease of use. Let’s enumerate what that means:

  • Configuration should be easy to read and write.
  • It should be straightforward to add a new configuration set.
  • You most certainly want to be able to review changes if your team is bigger than one person.
  • It’s nice to see a history of changes, especially when you’re trying to fix a bug in the middle of the night.
  • Support from popular IDEs – freedom of choice is priceless.
  • Multi-tenancy support (optional) is often pragmatic.

So what options are out there that may satisfy those requirements? The three very popular formats for storing configuration are YAML, Java Property files, and XML files. We use YAML – it is widely supported by multiple programming languages and IDEs, and it’s very readable and easy to understand, even by a non-engineer.

We could use a dedicated configuration store; however, the great thing about files is that they can be easily versioned by version control tools like Git, which we decided to use as it’s widely known and proven.

Git provides us with a history of changes and an easy way to branch off configuration. It also has great support in the form of GitHub which we use both as an editor (built-in support for YAML files) and collaboration tool (pull requests, forks, review tool). Both are nicely glued together by following the Git flow branching model. Here’s an example of a configuration file that we use:

Figure 2 –  configuration file preview

One of the goals was to make managing multiple configuration sets (execution environments) a breeze. We need the ability to add and remove environments quickly. If you look at the screenshot below, you’ll notice a “prod-us-east” directory in the path. For every environment, we store a separate directory with config files in Git. All of them have the exact same structure and only differ in YAML file contents.

This solution makes working with environments simple and comes in very handy during local development or new production fleet rollout (see use cases at the end of this article). Here’s a sample config repo for a project that has only one “feature”:

Figure 3 –  support for multiple environments

Some of the products that we work with at Yahoo have a very granular architecture with hundreds of micro-services working together. For scenarios like this, it’s convenient to store configurations for all services in a single repository. It greatly reduces the overhead of maintaining multiple repositories. We support this use case by having multiple top-level directories, each holding configurations for one service only.

The Sprinter: Push cache

The main role of push cache is to decrease the load put on the GitHub server and improve configuration fetch time. Since speed is the only concern here, we decided to keep the push cache simple: it’s just a key-value store. Consul was our choice, in part because it’s fully distributed.

You can install Consul clients on the edge nodes and they will keep being synchronized across the fleet. This greatly improves both the reliability and the performance of the system. If performance is not a concern, any key-value store will do. You can skip using push cache altogether and connect directly to Github, which comes in handy during development (see use cases to learn more about this).

The Manager: CD Pipeline

When the configuration repository is updated, a CD pipeline kicks in. This fetches configuration, converts it into a more optimized format, and pushes it to cache. Additionally, the CD pipeline validates the configuration (once at pull-request stage and again after being merged to master) and controls multi-phase deployment by deploying config change to only 20% of production hosts at one time.

The Mascot: Bootstrap configuration

Before we can connect to the push cache to fetch configuration, we need to know where it is. That’s where bootstrap configuration comes into play. It’s very simple. The config contains the hostname, port to connect to, and the name of the environment to use. You need to put this config with your code or as part of the CD pipeline. This simple yaml file binding Spring profiles to different Consul hosts suffices for our needs:

Figure 4 –  bootstrap configuration

The Cool Guy: Configuration library


The configuration library takes care of fetching the configuration from push cache and exposing it to your business logic. We use the library called cfg4j (“configuration for java”). This library re-loads configurations from the push cache every few seconds and injects them into configuration objects that our code uses. It also takes care of local caching, merging properties from different repositories, and falling back to user-provided defaults when necessary (read more at http://www.cfg4j.org/).

Briefly summarizing how we use cfg4j’s features:

  • Configuration auto-reloading: Each service reloads configuration every ~30 seconds and auto re-configures itself.
  • Multi-environment support: for our multiple environments (beta, performance, canary, production-us-west, production-us-east, etc.).
  • Local caching: Remedies service interruption when the push cache or configuration repository is down and also improves the performance for obtaining configs.
  • Fallback and merge strategies: Simplifies local development and provides support for multiple configuration repositories.
  • Integration with Dependency Injection containers – because we love DI!

If you want to play with this library yourself, there’s plenty of examples both in its documentation and cfg4j-sample-apps Github repository.

The Heavy Lifter: Configurable code

The most important piece is the business logic. To best make use of a configuration service, the business logic has to be able to re-configure itself at runtime. Here are a few rules of thumb and code samples:

  • Use dependency injection for injecting configuration. This is how we do it using Spring Framework (see the bootstrap configuration above for host/port values):

  • Use configuration objects to inject configuration instead of providing configuration directly – here’s where the difference is:

Direct configuration injection (won’t reload as config changes)

Configuration injection via “interface binding” (will reload as config changes):

The exercise: Common use-cases (applying our simple solution)

Configuration during development (local overrides)

When you develop a feature, a main concern is the ability to evolve your code quickly. A full configuration-management pipeline is not conducive to this. We use the following approaches when doing local development:

  • Add a temporary configuration file to the project and use cfg4j’s MergeConfigurationSource for reading config both from the configuration store and your file. By making your local file a primary configuration source, you provide an override mechanism. If the property is found in your file, it will be used. If not, cfg4j will fall back to using values from the configuration store. Here’s an example (see the examples above for the complete code):

  • Fork the configuration repository, make changes to the fork and use cfg4j’s GitConfigurationSource to access it directly (no push
    cache required):

  • Set up your private push cache, point your service to the cache, and edit values in it directly.

Configuration defaults

When you work with multiple environments, some of them may share a configuration. That’s when using configuration defaults may be convenient. You can do this by creating a “default” environment and using cfg4j’s MergeConfigurationSource for reading config first from the original environment and then (as a fallback) from “default” environment.

Dealing with outages

Configuration repository, push cache, and configuration CD pipeline can experience outages. To minimize the impact of such events, it’s good practice to cache configuration locally (in-memory) after each fetch. cfg4j does that automatically.

Responding to incidents – ultra fast configuration updates (skipping configuration CD pipeline)

Tests can’t always detect all problems. Bugs leak to the production environment, and at times it’s important to make a config change as fast as possible to stop the fire. If you’re using a push cache, the fastest way to modify config values is to make changes directly within the cache. Consul offers a rich REST API and web UI for updating configuration in the key-value store.

Keeping code and configuration in sync

Verifying that code and configuration are kept in sync happens at the configuration CD pipeline level. One part of the continuous deployment process deploys the code into a temporary execution environment, and points it to the branch that contains the configuration changes. Once the service is up, we execute a batch of functional tests to verify configuration correctness.

The cool down: Summary

The presented solution is the result of work that we put into building huge-scale photo-serving services. We needed a simple, yet flexible, configuration management system. Combining Git, Github, Consul and cfg4j provided a very satisfactory solution that we encourage you to try.

I want to thank the following people for reviewing this article: Bhautik Joshi, Elanna Belanger, Archie Russell.

P.S. You can also follow me on Twitter, GitHub, LinkedIn or my private blog.

The 32 Days Of Christmas!

LEGO City Advent Calendar - Day 7

When you have thousands of photos, it can be hard to find the photo you’re looking for. Want to search for that Christmas cat you saw at last year’s party? And what if that party wasn’t on Christmas day, but sometime the week before? To help improve the search ranking and relevance of national, personal, and religious holiday photos, we first have to see when the photos were taken; when, for example, is the Christmas season?

Understanding what people are looking for when they search for their own photos is an important part of improving Flickr. Earlier this year, we began a study (which will be published at CHI 2016 under the same name as this post) by trying to understand how people searched for their personal photos. We showed a group of 74 participants roughly 20 of their own photos on Flickr, and asked them what they’d put into the Flickr search box to find those photos. We did this a total of 1492 times.

It turns out 12% of the time people used a temporal term in searches for their own photos, meaning a word connected to time in some way. These might include a year (2015), a month (January), a season (winter), or a holiday or special event (Thanksgiving, Eid al-Fitr, Easter, Passover, Burning Man). Often, however, the date and time on the photograph didn’t match the search term: the year would be wrong, or people would search for a photograph of snow the weekend after Thanksgiving with the word “winter,” despite the fact that winter doesn’t officially begin until December 21st in the U.S. So we wanted to understand that situation: how often does fall feel like winter?

To answer this, we mapped 78.8 million Flickr photos tagged with a season name to the date the photo was actually taken.

Seasons Tagged by Date

As you’d expect, most of the photographs tagged with a season are taken during that season: 66% of photos tagged “winter” were taken between December 22 and March 20. About 9% of these photos are off by two seasons: photos tagged “summer” that were taken between December 21st and March 20th, for example. We expect this may reflect antipodean seasons: while most Flickr users are in the Northern Hemisphere, it doesn’t seem unreasonable that 5% of “summer” photographs might have been taken in the Southern Hemisphere. More interesting, we think, are the off-by-one cases, like fall photographs labeled as “winter,” where we believe that the photo represents the experience of winter, regardless of the objective reality of the calendar. For example, if it snows the day after Thanksgiving, it definitely feels like winter.

On the topic of Thanksgiving, let’s look at photographs tagged “thanksgiving.”

Percentage of Photos Tagged "Thanksgiving"

The six days between November 22nd and 27th—the darkest blue area—cover 65% of the photos. Expanding that range to November 15th–30th covers 83%. Expanding to all of November covers 85%, and including October (and thus Canadian Thanksgiving, in gray in early October) brings the total to 90%. But that means 10% of all photos tagged “thanksgiving” are outside of this range. Every date in that image represents at least 40 photographs taken on that day between 2003 and 2014 inclusive, uploaded to Flickr, and tagged “thanksgiving”; the only white spaces are days that don’t exist, like February 30th or April 31st. Manually checking some of the public photos tagged “thanksgiving” on arbitrarily chosen dates found pumpkins or turkeys, autumnal leaves or cornucopias—all images culturally associated with the holiday.

Not all temporal search terms are quite so complicated; some holidays are celebrated and photographed on a single day each year, like Canada Day (July 1st) or Boxing Day (December 26th). While these holidays can be easily translated to date queries, other holidays have more complicated temporal patterns. Have a look at these lunar holidays.

Lunar Holidays Tagged by Date

Some events occur on a lunar or lunisolar calendar, like Chinese New Year, Easter, Eid (both al-Fitr and al-Adha), and Hanukkah. These events move around in a regular, algorithmically determinable, but sometimes complicated, way. Most of these holidays tend to oscillate, as a leap correction is added periodically to keep the lunar timing synchronized with the solar calendar. The Eids, however, follow the Hijri calendar, which has no such leap correction, so we see photos tagged “Eid” drift earlier year after year.

Some holidays and events, like birthdays, happen on every day of the week. But they’re often celebrated, and thus photographed, on Friday, Saturday, and Sunday:

Day of the week tagged Birthday

So to get back to our original question: when are photos tagged “Christmas” actually taken?

Days tagged with Christmas

As you can see, more photos tagged “Christmas” are taken on December 25th than on any other day (19%). Christmas Eve is a close second, at 12%. If you look at other languages, this difference practically goes away: 9.2% of photos tagged “Noel” are taken on Christmas Eve, and 9.6% are taken on Christmas; “navidad” photos are 11.3% on Christmas Eve and 12.0% on Christmas. But Christmas photos are taken throughout December. We can now set a threshold for a definition of the Christmas season: say that if at least 1% of the photos tagged “Christmas” were taken on a given day, we treat that day as more relevant to Christmas. By that definition, every day from December 1st to January 1st qualifies, with December 2nd barely scraping in. That makes…32 days of Christmas!

Merry Christmas and Happy Holidays—for all the holidays you celebrate and photograph.

PS: Flickr is hiring! Labs is hiring! Come join us!

Flickr’s experience with iOS 9

In the last couple of months, Apple has released new features as part of iOS 9 that allow a deeper integration between apps and the operating system. Among those features are Spotlight Search integration, Universal Links, and 3D Touch for iPhone 6S and iPhone 6S Plus.

Here at Flickr, we have added support for these new features and we have learned a few lessons that we would love to share.

Spotlight Search

There are two different kinds of content that can be searched through Spotlight: the kind that you explicitly index, and the kind that gets indexed based on the state your app is in. To explicitly index content, you use Core Spotlight, which lets you index multiple items at once. To index content related to your app’s current state, you use NSUserActivity: when a piece of content becomes visible, you start an activity to make iOS aware of this fact. iOS can then determine which pieces of content are more frequently visited, and thus more relevant to the user. NSUserActivity also allows us to mark certain items as public, which means that they might be shown to other iOS users as well.

For a better user experience, we index as much useful information as we can right off the bat. We prefetch all the user’s albums, groups, and people they follow, and add them to the search index using Core Spotlight. Indexing an item looks like this:

// Create the attribute set, which encapsulates the metadata of the item we're indexing
CSSearchableItemAttributeSet *attributeSet = [[CSSearchableItemAttributeSet alloc] initWithItemContentType:(NSString *)kUTTypeImage];
attributeSet.title = photo.title;
attributeSet.contentDescription = photo.searchableDescription;
attributeSet.keywords = photo.keywords;
attributeSet.thumbnailData = UIImageJPEGRepresentation(photo.thumbnail, 0.98);

// Create the searchable item and index it.
CSSearchableItem *searchableItem = [[CSSearchableItem alloc] initWithUniqueIdentifier:[NSString stringWithFormat:@"%@/%@", photo.searchContentType, photo.identifier] domainIdentifier:@"FLKCurrentUserSearchDomain" attributeSet:attributeSet];
[[CSSearchableIndex defaultSearchableIndex] indexSearchableItems:@[ searchableItem ] completionHandler:^(NSError * _Nullable error) {
    if (error) {
        // Handle failures.
    }
}];

Since we have multiple kinds of data – photos, albums, and groups – we had to create an identifier that is a combination of its type and its actual model ID.

Many users will have a large amount of data to be fetched, so it’s important that we take measures to make sure the app still performs well. Since searching is unlikely to happen right after the user opens the app (that’s when we start prefetching this data, if needed), all this work is performed on a low-priority NSOperationQueue. If we ever need to fetch images to be used as thumbnails, we request them with a low-priority NSURLSessionDownloadTask. These measures ensure that we don’t affect the performance of any operation or network request triggered by user actions, such as fetching new images and pages when scrolling through content.

Flickr provides a huge amount of public content, including many amazing photos. If anybody searches for “Northern Lights” in Spotlight, shouldn’t we show them our best Aurora Borealis photos? For this public content – photos, public groups, tags and so on – we leverage NSUserActivity, with its new search APIs, to make it all searchable when viewed. Here’s an example:

CSSearchableItemAttributeSet *attributeSet = [[CSSearchableItemAttributeSet alloc] initWithItemContentType:(NSString *) kUTTypeImage];
// Setup attributeSet the same way we did before...
// Set the related unique identifier, so it matches to any existing item indexed with Core Spotlight.     
attributeSet.relatedUniqueIdentifier = [NSString stringWithFormat:@"%@/%@", photo.searchContentType, photo.identifier];
        
self.userActivity = [[NSUserActivity alloc] initWithActivityType:@"FLKSearchableUserActivityType"];
self.userActivity.title = photo.title;
self.userActivity.keywords = [NSSet setWithArray:photo.keywords];
self.userActivity.webpageURL = photo.photoPageURL;
self.userActivity.contentAttributeSet = attributeSet;
self.userActivity.eligibleForSearch = YES;
self.userActivity.eligibleForPublicIndexing = photo.isPublic;
self.userActivity.requiredUserInfoKeys = [NSSet setWithArray:self.userActivity.userInfo.allKeys];
        
[self.userActivity becomeCurrent];

Every time a user opens a photo, public group, location page, etc., we create a new NSUserActivity and make it current. The more often a specific activity is made current, the more relevant iOS considers it. In fact, the more often an activity is made current by any number of different users, the more relevant Apple considers it globally, and the more likely it will show up for other iOS users as well (provided it’s public).

Until now we’ve only seen half the picture. We’ve seen how to index things for Spotlight search; when a user finally does search and taps on a result, how do we take them to the right place in our app? We’ll get to this a bit later, but for now suffice it to say that iOS will call application:continueUserActivity:restorationHandler: on our application delegate.

It’s important to note that if we want to make use of the userInfo in the NSUserActivity, iOS won’t give it back for free in this method. To get it, we have to make sure we assigned an NSSet of the keys we need to the requiredUserInfoKeys property of our NSUserActivity when we created it. Apple’s documentation also tells us that if you set the webpageURL property when eligibleForSearch is YES, you need to make sure it points to the web URL corresponding to your content; otherwise you might end up with duplicate results in Spotlight (Apple crawls your site for content to surface in Spotlight, and if it finds the same content at a different URL it will treat it as a different piece of content).

Universal Links

In order to support Universal Links, Apple requires that every domain supported by the app host an “apple-app-site-association” file at its root. This is a JSON file that describes which relative paths in your domains can be handled by the app. When a user taps a link from another app in iOS, and your app is able to handle that domain for the specific path, iOS will open your app and call application:continueUserActivity:restorationHandler:. Otherwise, your application won’t be opened and Safari will handle the URL instead. Here’s what such a file looks like:

{
    "applinks": {
        "apps": [],
        "details": {
            "XXXXXXXXXX.com.some.flickr.domain": {
                "paths": [
                    "/",
                    "/photos/*",
                    "/people/*",
                    "/groups/*"
                ]
            }
        }
    }
}

This file has to be hosted on HTTPS with a valid certificate. Its MIME type needs to be “application/pkcs7-mime.” No redirects are allowed when requesting the file. If the only intent is to support Universal Links, no further steps are required. But if you’re also using this file to support Handoff (introduced in iOS 8), then your file has to be CMS-signed with a valid TLS certificate.

At Flickr, we have a few different domains. That means that each one of flickr.com, http://www.flickr.com, m.flickr.com, and flic.kr must provide its own JSON association file, whether or not the files differ. In our case, the flic.kr domain actually does support different paths, since it’s only used for short URLs; hence, its “apple-app-site-association” file is different from the others.

On the client side, only a few steps are required to support Universal Links. First, “Associated Domains” must be enabled under the Capabilities tab of the app’s target settings. For each supported domain, an “applinks:” entry must be added. Here is how it looks for Flickr:

[Screenshot: the “applinks:” entries under Associated Domains in Xcode]

That is it. Now if someone receives a text message with a Flickr link, she will jump right to the Flickr app when she taps on it.

Deep linking into the app

Great! We have Flickr photos showing up as search results and Flickr URLs opening directly in our app. Now we just have to get the user to the proper place within the app. There are different entry points into our app, and we need to make the implementation consistent and avoid code duplication.

iOS has supported deep linking for a while, and so has Flickr. To support deep linking, apps can register to handle custom URLs (meaning a custom scheme, such as myscheme://mydata/123). The website corresponding to the app can then publish links directly to the app. For every custom URL published on the Flickr website, our app translates it into a representation of the data to be shown. This representation looks like this:

@interface FLKRoute : NSObject

@property (nonatomic) FLKRouteType type;
@property (nonatomic, copy) NSString *identifier;

@end

It describes the type of data to present, and a unique identifier for that type of data.

- (void)navigateToRoute:(FLKRoute *)route
{
    switch (route.type) {
        case FLKRouteTypePhoto:
            // Navigate to photo screen
            break;
        case FLKRouteTypeAlbum:
           // Navigate to album screen
            break;
        case FLKRouteTypeGroup:
            // Navigate to group screen
            break;
        // ...
        default:
            break;
    }
}

Now, all we have to do is to make sure we are able to translate both NSURLs and NSUserActivity objects into FLKRoute instances. For NSURLs, this translation is straightforward. Our custom URLs follow the same pattern as the corresponding website URLs; their paths correspond exactly. So translating both website URLs and custom URLs is a matter of using NSURLComponents to extract the necessary information to create the FLKRoute object.

As for NSUserActivity objects passed into application:continueUserActivity:restorationHandler:, there are two cases. One arises when the NSUserActivity instance was used to index a public item in the app. Remember that when we created the NSUserActivity object we also assigned its webpageURL? This is really handy because it not only uniquely identifies the data we want to present, but also gives us an NSURL object, which we can handle the same way we handle deep links or Universal Links.

The other case is when the NSUserActivity originated from a CSSearchableItem; we have some more work to do in this case. We need to parse the identifier we created for the item and translate it into a FLKRoute. Remember that our item’s identifier is a combination of its type and the model ID. We can decompose it and then create our route object. Its simplified implementation looks like this:

FLKRoute * FLKRouteFromSearchableItemIdentifier(NSString *searchableItemIdentifier)
{
    NSArray *routeComponents = [searchableItemIdentifier componentsSeparatedByString:@"/"];
    if ([routeComponents count] != 2) { // type + id
        return nil;
    }
    
    // Handle the route type
    NSString *searchableItemContentType = [routeComponents firstObject];
    FLKRouteType type = FLKRouteTypeFromSearchableItemContentType(searchableItemContentType);
    
    // Get the item identifier
    NSString *itemIdentifier = [routeComponents lastObject];
    
    // Build the route object
    FLKRoute *route = [FLKRoute new];
    route.type = type;
    route.identifier = itemIdentifier;
    
    return route;
}

Now we have all our bases covered and we’re sure that we’ll drop the user in the right place when she lands in our app. The final application delegate method looks like this:

- (BOOL)application:(nonnull UIApplication *)application continueUserActivity:(nonnull NSUserActivity *)userActivity restorationHandler:(nonnull void (^)(NSArray * __nullable))restorationHandler
{
    FLKRoute *route;
    NSString *activityType = [userActivity activityType];
    NSURL *url;
    
    if ([activityType isEqualToString:CSSearchableItemActionType]) {
        // Searchable item from Core Spotlight
        NSString *itemIdentifier = [userActivity.userInfo objectForKey:CSSearchableItemActivityIdentifier];
        route = FLKRouteFromSearchableItemIdentifier(itemIdentifier);
        
    } else if ([activityType isEqualToString:@"FLKSearchableUserActivityType"] ||
               [activityType isEqualToString:NSUserActivityTypeBrowsingWeb]) {
        // Searchable item from NSUserActivity or Universal Link
        url = userActivity.webpageURL;
        route = [url flk_route];
        
    }
    
    if (route) {
        [self.router navigateToRoute:route];
        return YES;
    } else if (url) {
        [[UIApplication sharedApplication] openURL:url]; // Fail gracefully
        return YES;
    } else {
        return NO;
    }
}

3D Touch

With the release of iPhone 6S and iPhone 6S Plus, Apple introduced a new gesture that can be used with your iOS app: 3D Touch. One of the coolest features it has brought is the ability to preview content before pushing it onto the navigation stack. This is also known as “peek and pop.”

You can easily see how this feature is implemented in the native Mail app. But you won’t always have a simple UIView hierarchy like Mail’s UITableView, where a tap anywhere on a cell opens a UIViewController. Take Flickr’s notifications screen, for example:

[Screenshot: the Flickr notifications screen]

If the user taps on a photo in one of these cells, it will open the photo view. But if the user taps on another user’s name, it will open that user’s profile view. Previews of these UIViewControllers should be shown accordingly. But the “peek and pop” mechanism requires you to register a delegate on your UIViewController with registerForPreviewingWithDelegate:sourceView:, which means that you’re working in a much higher layer. Your UIViewController’s view might not even know about its subviews’ structures.

To solve this problem, we used UIView’s method hitTest:withEvent:. As the documentation describes, it gives us the “farthest descendant of the receiver in the view hierarchy.” But not every hitTest will necessarily return the UIView that we want. So we defined a protocol, FLKPeekAndPopTargetView, that must be implemented by any UIView subclass that wants to support peeking and popping from it. That view is then responsible for returning the model used to populate the UIViewController that the user is trying to preview. If the view doesn’t implement this protocol, we check its superview, and we keep walking up the hierarchy until we find a conforming view or run out of superviews. This is how this logic looks:

+ (id)modelAtLocation:(CGPoint)location inSourceView:(UIView *)sourceView
{
    // Walk up hit-test tree until we find a peek-pop target.
    UIView *testView = [sourceView hitTest:location withEvent:nil];
    id model = nil;
    while(testView && !model) {
      
        // Check if the current testView conforms to the protocol.
        if([testView conformsToProtocol:@protocol(FLKPeekAndPopTargetView)]) {
            
            // Translate location to view coordinates.
            CGPoint locationInView = [testView convertPoint:location fromView:sourceView];
            
            // Get model from peek and pop target.
            model = [((id<FLKPeekAndPopTargetView>)testView) flk_peekAndPopModelAtLocation:locationInView];
            
        } else {
            //Move up view tree to next view
            testView = testView.superview;
        }
    }
    
    return model;
}

With this code in place, all we have to do is implement the UIViewControllerPreviewingDelegate methods in our delegate, perform the hitTest, and take the model out of the FLKPeekAndPopTargetView implementor. Here’s the final implementation:

- (UIViewController *)previewingContext:(id<UIViewControllerPreviewing>)previewingContext
              viewControllerForLocation:(CGPoint)location {
    
    id model = [[self class] modelAtLocation:location inSourceView:previewingContext.sourceView];
    UIViewController *viewController = nil;
    if ([model isKindOfClass:[FLKPhoto class]]) {
        viewController = // ... UIViewController that displays a photo.
    } else if ([model isKindOfClass:[FLKAlbum class]]) {
        viewController = // ... UIViewController that displays an album.
    } else if ([model isKindOfClass:[FLKGroup class]]) {
        viewController = // ... UIViewController that displays a group.
    } // ...
    return viewController;
    
}

- (void)previewingContext:(id<UIViewControllerPreviewing>)previewingContext
     commitViewController:(UIViewController *)viewControllerToCommit {
    
    [self.navigationController pushViewController:viewControllerToCommit animated:YES];
    
}

Last but not least, we added support for Quick Actions. Now the user can quickly jump into a specific section of the app just by pressing down on the app icon. Defining these Quick Actions statically in the Info.plist file is an easy way to implement this feature, but we decided to go one step further and define these options dynamically. One of the options we provide is “Upload Photo,” which takes the user to the asset picker screen. But if the user has Auto Uploadr turned on, this option isn’t that relevant, so we provide a different app icon menu option in its place.

Here’s how you can create Quick Actions:

NSMutableArray<UIApplicationShortcutItem *> *items = [NSMutableArray array];

[items addObject:[[UIApplicationShortcutItem alloc] initWithType:@"FLKShortcutItemFeed"
                                                  localizedTitle:NSLocalizedString(@"Feed", nil)]];

[items addObject:[[UIApplicationShortcutItem alloc] initWithType:@"FLKShortcutItemTakePhoto"
                                                  localizedTitle:NSLocalizedString(@"Upload Photo", nil)]];

[items addObject:[[UIApplicationShortcutItem alloc] initWithType:@"FLKShortcutItemNotifications"
                                                  localizedTitle:NSLocalizedString(@"Notifications", nil)]];

[items addObject:[[UIApplicationShortcutItem alloc] initWithType:@"FLKShortcutItemSearch"
                                                  localizedTitle:NSLocalizedString(@"Search", nil)]];

[[UIApplication sharedApplication] setShortcutItems:items];

And this is how it looks when the user presses down on the app icon:

[Screenshot: the Quick Actions menu on the Flickr app icon]

Finally, we have to handle where to take the user after she selects one of these options. This is yet another place where we can make use of our FLKRoute object. To handle the app opening from a Quick Action, we need to implement application:performActionForShortcutItem:completionHandler: in the app delegate.

- (void)application:(UIApplication *)application performActionForShortcutItem:(UIApplicationShortcutItem *)shortcutItem completionHandler:(void (^)(BOOL))completionHandler {
    FLKRoute *route = [shortcutItem flk_route];
    [self.router navigateToRoute:route];
    completionHandler(YES);
}

Conclusion

There is a lot more to consider when shipping these features with an app. For example, with Flickr, there are various platforms the user could be using. It is important to make sure that the Spotlight index is up to date to reflect changes made anywhere. If the user has created a new album and/or left a group from his desktop browser, we need to make sure that those changes are reflected in the app, so the newly-created album can be found through Spotlight, but the newly-departed group cannot.

All of this work should be totally opaque to the user, without hogging the device’s resources and deteriorating the user experience overall. That requires some considerations around threading and network priorities. Network requests for UI-relevant data should not be blocked because we have other network requests happening at the same time. With some careful prioritizing, using NSOperationQueue and NSURLSession, we managed to accomplish this with no major problems.

Finally, we had to consider privacy, one of the pillars of Flickr. We had to be extremely careful not to violate any of the user’s settings. We’re careful to never publicly index private content, such as photos and albums. Also, photos marked “restricted” are not publicly indexed since they might expose content that some users might consider offensive.

In this blog post we went into the basics of integrating iOS 9 Search, Universal Links, and 3D Touch in Flickr for iOS. In order to focus on those features, we simplified some of our examples to demonstrate how you could get started with them in your own app, and to show what challenges we faced.


Like this post? Have a love of online photography? Want to work with us? Flickr is hiring mobile, back-end and front-end engineers, in our San Francisco office. Find out more at flickr.com/jobs.

Perceptual Image Compression at Flickr

Archie Russell, Peter Norby, Saeideh Bakhshi

At Flickr our users really care about image quality. They also care a lot about how responsive our apps are. Addressing both of these concerns simultaneously is challenging; higher quality images have larger file sizes and are slower to transfer. Slow transfers are especially noticeable on mobile devices. Flickr had historically aimed for high quality at the expense of larger files, but in late 2014 we implemented a method to both maintain image quality and decrease the byte-size of the images we serve to users. As image appearance is very important to our users, we performed an extensive user test before rolling this change out. Here’s how we did it.

Background: JPEG Quality Settings

Fig 1. JPEG settings vs file size for a test image.

JPEG compression has several tuneable knobs. The q-value is the best known of these; it adjusts how much fine spatial detail is stored, and a higher q-value typically keeps more detail. However, as the q-value gets very close to 100, file size increases dramatically, usually without improving image appearance.

If file size and app performance aren’t an issue, dialing up the q-value is an easy way to get really nice-looking images; this is what Flickr has done in the past. And if appearance isn’t very important, dialing down the q-value is a viable option. But if you want both, you’re kind of stuck. Additionally, q-value isn’t one-size-fits-all: some images look great at q-value 80 while others don’t.

Another commonly adjusted setting is chroma subsampling, which alters the amount of color information stored in a JPEG file. With a setting of 4:4:4, the two chroma (color) channels in a JPEG have as much information as the luminance channel. With a setting of 4:2:0, each chroma channel has only a quarter as much information as in a 4:4:4 image.

q=96, chroma=4:4:4 (125KB)    q=70, chroma=4:4:4 (67KB)
q=96, chroma=4:2:0 (62KB)     q=70, chroma=4:2:0 (62KB)

Table 1: The same JPEG stored at different quality and chroma levels. The upper-left image is saved at a high quality and chroma level; notice the color and detail in the folds of the red flag. The lower-right image has the lowest quality; notice the artifacts along the right edges of the red flag.

Perceptual JPEG Compression

Ideally we’d have an algorithm which automatically tuned all JPEG parameters to make a file smaller, but which would limit perceptible changes to the image.  Technology exists that attempts to do this and can decrease image file size by 30-50%. This compression ratio is highly dependent on image content and dimensions.

compressed: 112KB non-compressed: 224KB

Fig 2. The compressed cropped JPEG is 50% smaller than the non-compressed cropped JPEG, above, with no obvious defects. The compression ratio is similar for a compressed 2048-pixel-wide JPEG (475KB) of the entire scene and its corresponding non-compressed JPEG (897KB).

We were pleased with perceptually compressed images in unstructured examinations. The compressed images were smaller and nearly indistinguishable from their sources. But we wanted to really quantify how well the technology worked before considering incorporating it into Flickr. The standard computational tools for evaluating compression, such as SSIM, are fairly simplistic and don’t do a great job of modeling how a user sees things. To really evaluate this technology, we had to use a better measure of perceptibility: human minds.

The Gamified Taste Test

To test whether our image compression would impact user perception of image quality, we put together a “taste test.” The taste test is constructed as a game with multiple rounds in which users look at both compressed and uncompressed images. Users accumulate points the longer they play, and get more points for doing well at the game. We maintained a leaderboard to encourage participation and used only internal testers. The game’s test images came from a diverse collection of 250 images contributed by Flickr staff. The images came from a variety of cameras and included a number of subjects from photographers with varying skill levels.

Fig 3. A sampling of images used in our taste test.

In each round, our test code randomly selects a test image and presents two variants of it side by side. 50% of the time we present the user with two identical images; the rest of the time we present one compressed image and one uncompressed image. We ask the tester whether the two images look the same or different, and we’d expect a user choosing randomly, or a user unable to distinguish the two, to answer correctly about half the time. We randomly swap the location of the compressed image to compensate for any user bias toward the left or the right. If testers choose correctly, they are presented with a second question: “Which image did you prefer, and why?”

Fig 4. Screenshot of the taste test: two kitten photos presented side by side.

Our test displays images simultaneously to prevent testers from noticing a longer load time for the larger, non-compressed image. The images are presented with either 320, 640, or 1600 pixels on their longest side. The 320px and 640px images are shown for 12 seconds before being dimmed out. The intent behind this detail is to represent how real users interact with our images. The 1600px images stay on screen for 20 seconds, as we expect larger images to be viewed for longer periods of time by real users. We award 100 points per round, regardless of whether a tester chose correctly, and award a bonus of 400 points when a tester correctly identifies whether the images were identical or different. We update the tester’s score every five tests so that the user perceives an increasing score without being rewarded immediately for any particular behavior.

Taste Test Outcome and Deployment

We ran our taste test for two weeks and analyzed our results. Although we let users play as long as they liked, we skipped the first result per user as a “warm-up” and considered only the subsequent ten results; this limited the potential for users to train themselves to spot compression artifacts. We disregarded users who had fewer than eleven results.

images                                total results    # labeled “identical” by tester    % labeled “identical” by tester
two identical images                            368                                253                              68.8%
one compressed, one non-compressed              352                                238                              67.6%

Table 2. Taste test results. Testers select “identical” at nearly the same rate, whether the input is identical or not.

When our testers were presented with two identical images, they thought the images were identical only 68.8% of the time(!), and when presented with a compressed image next to a non-compressed image, our testers thought the images were identical slightly less often: 67.6% of the time. This difference was small enough for us, and our statisticians told us it was statistically insignificant. Our image pairs were so similar that multiple testers thought all images were identical and reported that the test system was buggy. We inspected the images most often labeled different, and found no significant artifacts in the compressed versions.
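
As a rough back-of-the-envelope check (not the analysis our statisticians ran), a two-proportion z-test on the counts in Table 2 points the same way:

\[
\hat{p} = \frac{253 + 238}{368 + 352} \approx 0.682,
\qquad
z = \frac{0.688 - 0.676}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{368} + \frac{1}{352}\right)}} \approx 0.33,
\]

which corresponds to a two-sided p-value of roughly 0.74, far from any conventional significance threshold.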

So even in this side-by-side test, perceptual image compression is only barely noticeable. As the Flickr website never shows compressed and uncompressed versions of an image at the same time, and the use of compression had large benefits in storage footprint and site performance, we elected to go forward.

At the beginning of 2014 we silently rolled out perceptual compression on our image thumbnails (we don’t alter the “original” images uploaded by our users). The slight changes to image appearance went unnoticed by users, but user interactions with Flickr became much faster, especially for users with slow connections, while our storage footprint became much smaller. This was a best-case scenario for us.

Evaluating perceptual compression was a considerable task, but it gave us the confidence we needed to apply this compression in production to our users. This marked the first time Flickr had adjusted image settings in years, and it was fun.

Fig 5. Taste test high score list.

Epilogue

After eighteen months of perceptual compression at Flickr, we adjusted our settings slightly to shrink images an additional 15%. For our users on mobile devices, 15% fewer bytes per image makes for a much more responsive experience. We ran a taste test on this newer setting, and users were able to spot our compression slightly more often than with our original settings. When presented with a pair of identical images, our testers declared the images identical 65.2% of the time; when presented with different images, our testers declared the images identical 62% of the time. It wasn’t as imperceptible as our original approach, but we decided it was close enough to roll out.

Boy were we wrong! A few very vocal users spotted the compression and didn’t like it at all. The Flickr Help Forum had a very lively thread, which PetaPixel picked up. We beat our heads against the wall, considered our options, and came up with a middle path between our initial and follow-on approaches, giving us smaller, faster-to-load files while still maintaining the appearance our users expect.

Through our use of perceptual compression, combined with on-the-fly resizing and COS, we’ve been able to decrease our storage footprint dramatically while simultaneously improving the user experience. It’s a win all around, but we’re not done yet — we still have a few tricks up our sleeves.