
April 23, 2014

Big Data Analytics for Web Security is Beyond Proactive Monitoring

 

Summary

In April this year, we sat down with some C-level executives of Fortune 100 companies in finance, insurance, logistics, legal services and IT to discuss trends, developments and solutions regarding DDoS attacks. The executives were primarily concerned about sophisticated site-specific attacks and 1-day attacks, which are difficult for traditional security measures to detect and mitigate. Our CTO, Reggie Yam, and I proposed a new approach: establish a baseline and proactively monitor network traffic to detect anomalies, which may indicate that an attack is about to take place. This is done by examining all logs from multiple sources, much like big data analytics. In fact, proactive monitoring shares some key characteristics with big data analytics: volume, velocity and variety.

 

Volume

Operators do not like the terms “housekeeping” or “purging”, since that means they have to discard a portion of their systems’ historical data. Nonetheless, it has always been a necessary evil to help systems maintain an optimal level of performance.

Storing data is inexpensive these days, and it is only going to get cheaper. At the World Internet Developer Summit 2014, Internet developers agreed that, contrary to conventional wisdom, it is actually more expensive to delete the data than to store it, since significant time and resources are required for deciding what data should be kept and what should be deleted. While having less data to search through and process allows systems to stay lean and fast, it is difficult to decide what data is discardable.

In my experience from years of frontline operations, the simplest solution is to process the raw data and store only the results. Storing just the processed data, such as overall network latency, application response codes, response lengths and a security hash of each response, will not put too much stress on servers if the data is only probed once every few minutes or a few times per hour. If needed, compressed raw data can be stored separately for safekeeping.
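As a rough illustration, the sketch below (Python, with a hypothetical endpoint and probe interval) stores only the derived metrics mentioned above for each probe instead of the full raw response:

```python
# Minimal sketch: probe an endpoint and keep only processed metrics.
# PROBE_URL and PROBE_INTERVAL_SECONDS are illustrative assumptions.
import hashlib
import time
import urllib.request

PROBE_URL = "https://www.example.com/"   # hypothetical endpoint
PROBE_INTERVAL_SECONDS = 300             # once every few minutes

def probe_once(url):
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
        status = resp.status
    latency_ms = (time.monotonic() - start) * 1000
    return {
        "timestamp": int(time.time()),
        "latency_ms": round(latency_ms, 1),
        "status_code": status,
        "response_length": len(body),
        "response_sha256": hashlib.sha256(body).hexdigest(),
    }

if __name__ == "__main__":
    while True:
        record = probe_once(PROBE_URL)
        print(record)   # in practice, append to a metrics store
        time.sleep(PROBE_INTERVAL_SECONDS)
```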

Quality controls can also be deployed based on the established profile. By analyzing historical pageviews for a given timeframe and location, together with projected pageviews, operators can set upper and lower limits that trigger security events in other systems. For example, since DDoS attacks sometimes cause a slight drop in connections while incoming traffic increases, properly implemented proactive monitoring ensures that this activity is detected and analyzed. The system compares the activity against the profile baseline and decides that there is a fair chance a certain type of attack is taking place. The judgement is based on the operations handbook, in which that type of attack is well documented; the system automatically pulls that information and translates raw data into actionable information.
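A minimal sketch of such a check, with baseline limits and counters that are purely illustrative, could look like this:

```python
# Hypothetical baseline check: raise events when per-interval counters
# fall outside configured limits, or when traffic rises while connections drop.

BASELINE = {
    # (hour_of_day, region): (lower_pageviews, upper_pageviews)
    (14, "HK"): (40_000, 90_000),
    (14, "US"): (25_000, 60_000),
}

def check_interval(hour, region, pageviews, connections, prev_connections):
    events = []
    lower, upper = BASELINE.get((hour, region), (0, float("inf")))
    if not lower <= pageviews <= upper:
        events.append(f"pageviews {pageviews} outside baseline [{lower}, {upper}]")
    # Incoming traffic up while established connections slightly down is one
    # pattern associated above with certain DDoS attacks.
    if pageviews > upper and connections < prev_connections:
        events.append("traffic up but connections down: possible DDoS pattern")
    return events

print(check_interval(14, "HK", 120_000, 3_100, 3_600))
```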

 

 

Velocity

Speed is key when it comes to the Internet. Here are some self-assessment questions you may ask your teams:

 

– How quickly do you expect to detect a system failure?

– How long does it actually take?

– How certain are you?

 

Customers today can be unforgiving when it comes to waiting for a response, feedback or update; in the Internet world, patience is no longer a celebrated virtue. For many customers, a 5-second delay is just as annoying as a 30-second delay. By refining the detection algorithm, errors and abnormalities can be detected faster without sacrificing much accuracy.

 

Velocity can obviously be improved through hardware upgrades, but software optimizations also help. For example, let’s say that traceroute is being used to confirm the routing path internally. If no timeout is specified, the server can take up to 5 minutes to complete a single traceroute; with the proper parameters, that time can be cut down to less than 10 seconds. Choosing a sensible timeout period also helps keep monitoring runs as smooth as possible.
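As an illustration, the sketch below bounds a traceroute run by tightening the per-probe wait, the probes per hop and the maximum hop count, and adds an overall timeout; the exact flag values are assumptions and should be checked against your platform’s traceroute manual:

```python
# Sketch: run traceroute with tighter limits plus an overall timeout.
import subprocess

def bounded_traceroute(host, overall_timeout=15):
    cmd = [
        "traceroute",
        "-w", "1",    # wait at most 1 second per probe instead of the default 5
        "-q", "1",    # send one probe per hop instead of three
        "-m", "20",   # stop after 20 hops instead of 30
        host,
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=overall_timeout)
        return result.stdout
    except subprocess.TimeoutExpired:
        return None   # record as a monitoring data point rather than hanging

print(bounded_traceroute("198.51.100.1"))   # hypothetical target
```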

 

Speed is also critical for attackers. DDoS proof-of-concept (PoC) tools are increasingly becoming open-source projects, which means that a global community effort can turn a PoC into a widely available, high-quality attack tool in less than a day. Compare that to IT’s traditional approach to vulnerability management, which typically takes about two weeks before a vulnerability is patched. During this window, attackers can exploit the disclosed vulnerability against unpatched systems and applications. While operators wait for vendors to release an official patch, they may obtain temporary fixes or workarounds from vendors and other sources, sometimes even before the attack tools appear.

 

 

Variety

A monitoring profile is built from historical data to establish baselines and detect anomalies. There is no single filter that can detect and mitigate all types of DDoS attacks, nor is there a one-size-fits-all monitoring profile. However, developers often structure their data poorly or adopt numerous data standards (which vary considerably from vendor to vendor). By translating information from multiple intelligence sources into a comprehensive view, proactive monitoring can be taken to a whole new level.
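As a rough sketch of that translation step, the example below (with invented field names for two source formats) maps differently structured records into one common shape before they feed the monitoring profile:

```python
# Hypothetical normalization: adapt per-source records to a shared schema.

def from_web_log(entry):
    return {"source": "web", "ts": entry["time"],
            "indicator": entry["client_ip"], "signal": entry["status"]}

def from_ids_alert(alert):
    return {"source": "ids", "ts": alert["detected_at"],
            "indicator": alert["src"], "signal": alert["signature"]}

def normalize(records, adapter):
    return [adapter(r) for r in records]

combined = (
    normalize([{"time": 1398211200, "client_ip": "203.0.113.7", "status": 503}],
              from_web_log)
    + normalize([{"detected_at": 1398211260, "src": "203.0.113.7",
                  "signature": "SYN flood"}], from_ids_alert)
)
print(combined)
```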

Some monitoring sources can even be provided by the attackers themselves; the approach to defense is always to think like an attacker. Operators can expand their monitoring matrix through less conventional sources, such as monitoring C&C servers and IRC channels that are commonly used to discuss attacks. You can even infect one of your honeynets with malware to observe attack behavior first-hand.

Finally, evaluate the information gathered from all of these sources and convert it, according to threat level, into universal, human-readable color codes that can be easily digested by anyone who needs the information.
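One possible sketch of that final step, with threat-score thresholds that are purely illustrative and would come from your own risk policy:

```python
# Illustrative mapping from an aggregated 0-100 threat score to a color code.

def color_code(score):
    if score >= 80:
        return "RED"      # active or imminent attack, act now
    if score >= 50:
        return "AMBER"    # anomalies across several sources, investigate
    if score >= 20:
        return "YELLOW"   # minor deviation from baseline, keep watching
    return "GREEN"        # within the normal profile

for s in (12, 35, 64, 91):
    print(s, color_code(s))
```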

 

 

Final word

DDoS attacks are clearly increasing in both frequency and complexity. It is becoming increasingly impractical to deploy a security strategy focused purely on defenses and filters. Although proactive monitoring takes more time to plan and deploy, it is much more effective than pure-defense, reactive mitigation systems; it also saves significant time, resources and headaches down the line.

Many organizations see the potential of big data, but how many are actually confident enough to deploy big data analytics on their own, let alone fully utilize it? The same goes for proactive monitoring: everyone agrees that it seems like a good idea, but in practice many over-deploy or do not know where to begin. As with almost any kind of project, it is easier and cheaper to plan for security during the design phase than to add it as an afterthought during production. Be responsible: plan ahead, proactively.
