Tuesday, 4 March 2014

Screen scraping: how to stop the internet's invisible data leeches

Data is your business's most valuable asset, so it's never a good idea to let it slip into the hands of competitors.

Sometimes, however, that can be difficult to prevent due to an automated technique known as 'screen scraping' that has for years provided a way of extracting data from website pages to be indexed over time.

This poses two main problems: first, that data could be used to gain a business advantage - from undercutting prices (in the case of a price comparison website, for example) to obtaining information on product availability.

Second, persistent scraping can grind down a website's performance. LinkedIn felt this recently when hackers used automated software to register thousands of fake accounts in a bid to extract and copy data from member profile pages.

Ashley Stephenson, CEO of Corero Network Security, explains the origins of the phenomenon, how it could be affecting your business right now, and how to defend against it.

TechRadar Pro: What is screen scraping? Can you talk us through some of the techniques, and why somebody would do it?

Ashley Stephenson: Screen scraping is a concept that was pioneered by early terminal emulation programs decades ago. It is a programmatic method to extract data from screens that are primarily designed to be viewed by humans.

Basically, the screen scraping program pretends to be a human and "reads" the screen, collecting the interesting data into lists that can be processed automatically. The most common format is name:value pairs. For example, information extracted from a travel site reservation screen might look like the following:

Origin: Boston, Destination:Atlanta, Date:10/12/13, Flight:DL4431, Price:$650
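To make this concrete, here is a minimal Python sketch of the "reads the screen" step, run against an invented fragment of a travel-site results page. The HTML structure, the class names and the simple regex-based parser are all assumptions for illustration; real scrapers face far messier markup.

```python
import re

# Invented fragment of a travel-site results page (an assumption for
# illustration; real markup is rarely this tidy).
SAMPLE_PAGE = """
<html><body>
  <div class="result">
    <span class="origin">Boston</span>
    <span class="destination">Atlanta</span>
    <span class="date">10/12/13</span>
    <span class="flight">DL4431</span>
    <span class="price">$650</span>
  </div>
</body></html>
"""

def scrape_fields(html):
    """Collect name:value pairs from simple <span class="...">...</span> tags."""
    pairs = {}
    for name, value in re.findall(
            r'<span class="([^"]+)">([^<]+)</span>', html):
        pairs[name] = value.strip()
    return pairs

print(scrape_fields(SAMPLE_PAGE))
# {'origin': 'Boston', 'destination': 'Atlanta', ...}
```

The output is exactly the kind of machine-readable record described above, ready to be stored or compared automatically.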

Screen scraping has evolved significantly over the years. A major historical milestone occurred when the screen scraping concept was applied to the Internet and the web crawler was invented.

Web crawlers originally "read" or screen scraped website pages and indexed the information for future reference (e.g. search). This gave rise to the search engine industry. Today web crawlers are much more sophisticated, and websites include information (tags) dedicated to the crawler and never intended to be read by a human.

Another milestone in the evolution of screen scraping was the development of e-retail screen scraping, perhaps the best-known example being the introduction of price comparison websites.

These sites employ screen scraping programs to periodically visit a list of known e-retail sites to obtain the latest price and availability information for a specific set of products or services. This information is then stored in a database and used to provide aggregated comparative views of the e-retail landscape to interested customers.
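The aggregation step can be sketched in a few lines of Python. The records and field names below are invented for illustration; a real comparison site would populate them from its scraping runs.

```python
# Invented records, standing in for data gathered by periodic scraping runs.
records = [
    {"site": "shopA", "product": "widget", "price": 9.99},
    {"site": "shopB", "product": "widget", "price": 8.49},
    {"site": "shopA", "product": "gadget", "price": 24.00},
]

def cheapest_by_product(rows):
    """Reduce scraped records to the best offer per product."""
    best = {}
    for row in rows:
        product = row["product"]
        if product not in best or row["price"] < best[product]["price"]:
            best[product] = row
    return best

for product, row in cheapest_by_product(records).items():
    print(product, row["site"], row["price"])
```

This is the "aggregated comparative view" in miniature: the scraped database in, a best-price-per-product table out.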

In general, the screen scraping techniques described above have been welcomed by website operators, who want their sites to be indexed by the leading search engines such as Google or Bing. Similarly, e-retailers typically want their products to be displayed on the leading comparison shopping sites.

TRP: Have there been any recent developments in competitive screen scraping?

AS: In contrast, the developments in competitive screen scraping over the past few years are not necessarily so welcome. For a site to be scraped by a search engine crawler is OK if the crawler visits are infrequent.

For a site to be the target of a price comparison site scraper is OK if the information obtained is used fairly. However, as the number of specialized search engines continues to increase and the frequency of price check visits skyrockets, these automated page views can rise to levels which impact the intended operation of the target site.

More specifically, if the target site is the victim of competitive scraping the information obtained can be used to undermine the business of the site owner. For example, undercutting prices, beating odds, aggressively acquiring event tickets, reserving inventory, etc.

In general, we believe there is a significant increase in the use of automated bots to gather website content to seed other services, fuel competitive intelligence, and aggregate product details like pricing, features and inventory. Increasingly this information is used to get a leg up on the competition, or to increase website hit rates.

For example, in the travel and tourism industry, price scraping is a real issue, as travel sites are constantly looking to beat the competition by offering the 'best price'. Additionally, inventory scraping is becoming more common: bots are used to buy up volumes of a high-value item for resale, or to drive up online pricing to deter potential buyers.

Add the motives we've just described to the wide availability of seemingly legal software bundles and services that facilitate screen scraping, and it's a pretty powerful combination.

TRP: How long has screen scraping been going on for and is it becoming more or less of a problem for companies?

AS: Screen scraping has been going on for years, but it is only more recently that victims negatively impacted by this type of behaviour have begun to react. Some claim copyright infringement and unfair business practices, while the organizations doing the scraping claim freedom of information.

Many website owners have written usage policies on their sites that prohibit aggressive scraping but have no ability to enforce them, so the problem doesn't seem to be going away anytime soon.
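The usual vehicle for such a written policy is a site's robots.txt file, which Python's standard library can parse. This sketch (with invented rules) illustrates the enforcement gap: a polite crawler checks can_fetch() before requesting a page, while an abusive scraper simply ignores the file.

```python
from urllib.robotparser import RobotFileParser

# Invented robots.txt rules: throttle everyone, block the /prices/ area,
# but allow Googlebot everywhere.
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 10
Disallow: /prices/

User-agent: Googlebot
Disallow:
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler would honour these answers; nothing in the
# protocol forces a scraper to ask in the first place.
print(rp.can_fetch("PriceScraperBot", "/prices/latest"))  # False
print(rp.can_fetch("Googlebot", "/prices/latest"))        # True
```

The file is advisory only, which is exactly the enforcement problem website owners face.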

TRP: How does screen scraping impact negatively on a business's IT systems?

AS: Competitive or abusive screen scraping is just another example of unwanted traffic. Recent studies show that 61% of internet traffic is generated by bots. Bad-bot scrapers consume valuable resources and bandwidth intended to serve genuine website users, which can result in increased latency for real customers due to large numbers of non-human visits to the site. The business impact manifests itself as additional IT investment needed to serve the same number of customers.

TRP: eBay introduced an API years ago to combat screen scraping. Is creating an API to provide access to data a recommended form of defense?

AS: Providing a dedicated API gives "good" scrapers programmatic access to your data while voluntarily observing resource utilization limits; however, it does not stop malicious information harvesting for competitive advantage.

Real defense can be obtained by taking advantage of technology that can identify and block unwanted non-human visitors to your website. This would allow real or 'good' users to access the site for their intended purposes, while blocking the bad crawlers and bots from causing damage.

TRP: How else can an organisation defend itself from screen scraping?

AS: By using techniques such as IP reputation intelligence, geolocation enforcement, spoofed IP source detection, real-time threat-level assessment, request-response behaviour analysis and bi-directional deep packet inspection.
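As a toy illustration of the simplest of these signals, request-rate behaviour analysis, the sketch below flags client IPs that exceed a request threshold inside a sliding time window. The threshold, window size and keying on source IP are illustrative assumptions, not a description of any vendor's product.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 20  # more than this per window looks non-human

class RateWatcher:
    """Flag clients whose request rate exceeds a human-plausible ceiling."""

    def __init__(self):
        self.hits = defaultdict(deque)  # client IP -> recent request times

    def is_suspect(self, ip, now):
        """Record a request at time `now`; return True once `ip` is over the limit."""
        q = self.hits[ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # drop requests that fell out of the window
        return len(q) > MAX_REQUESTS

watcher = RateWatcher()
# A human clicking a few pages stays under the ceiling; a scraper firing
# 50 requests inside one second trips it.
human = [watcher.is_suspect("203.0.113.5", t) for t in range(0, 10, 3)]
bot = [watcher.is_suspect("198.51.100.9", t / 50) for t in range(50)]
print(any(human), any(bot))  # False True
```

Real products combine many such signals; rate alone is easy for a patient scraper to evade, which is why the list above also includes reputation and deep-inspection techniques.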

Many organizations today rely on Corero's First Line of Defense technology to block unwanted website traffic, including excessive scraping. Corero helps distinguish human visitors from non-human bots (e.g. running scripts) and blocks the unwanted offenders in real time.

TRP: Are there any internet rules governing the use (or misuse) of screen scraping?

AS: Screen scraping has been the topic of some pretty high-profile lawsuits, for example Craigslist vs. PadMapper and, in the travel space, Ryanair vs. Budget Travel.

However, most court cases to date have not been fully resolved to the satisfaction of the victims. The courts often refuse to grant injunctions for such activity, most likely because they have no precedent to work with. This is primarily because there are few, if any, internet rules really governing this type of activity.

Source: http://www.techradar.com/news/internet/web/screen-scraping-how-to-stop-the-internet-s-invisible-data-leaches-1214404
