You’ll still need permission if you want to crawl Facebook’s public content
Facebook is updating its policies to explicitly allow a handful of third-party search engines to crawl public content.
Previously, Facebook banned robots, spiders, scrapers, and harvesting bots from automatically collecting data across the social network's pages unless their creators had written permission. That blanket ban drew criticism that the social network was trying to have it both ways: it could juice its search engine optimization and stay discoverable on Google, while cracking down on emerging threats from smaller companies that might use the data in innovative ways.
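In practice, this kind of selective allowance is usually expressed in a site's robots.txt file, which names the crawlers that may fetch pages and disallows everyone else. Here is a minimal sketch, using Python's standard urllib.robotparser, of how such a whitelist is written and checked by a well-behaved bot; the robots.txt content and bot names below are hypothetical, not Facebook's actual file.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: allow one named crawler, disallow all others.
ROBOTS_TXT = """\
User-agent: FriendlySearchBot
Allow: /

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The whitelisted crawler may fetch public pages; an unknown bot may not.
print(parser.can_fetch("FriendlySearchBot", "/public/page"))  # True
print(parser.can_fetch("RandomScraperBot", "/public/page"))   # False
```

Note that robots.txt is honored voluntarily, which is why Facebook's terms of service also required written permission: the policy, not the file, is what gives the company legal recourse against scrapers that ignore it.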
The company’s chief technology officer, Bret Taylor, countered that criticism on Hacker News today, saying that Facebook’s policies were meant to protect users from “sleazy” crawlers that might grab their data and resell it.
