
You’ll still need permission if you want to crawl Facebook’s public content

Posted on July 1, 2010 by hitbsecnews

Facebook is updating its policies to explicitly allow a handful of third-party search engines to crawl public content.

Previously, Facebook banned robots, spiders, scrapers, and harvesting bots from automatically collecting data across the social network's pages unless their creators had written permission. Critics charged that the social network was trying to have it both ways: it could juice up its search engine optimization and be discovered on Google, while cracking down on smaller companies that might use the same public data in innovative ways.

The company's chief technology officer, Bret Taylor, countered that criticism on Hacker News today, saying that Facebook's policies were meant to protect users from "sleazy" crawlers that might grab their data and resell it.


Tags: Privacy
