When it comes to guarding their networks against intruders, many businesses focus on tough defences at their entry points to try to make it as difficult as possible for hackers to gain access.
But with the tactics used by criminals becoming ever more sophisticated, no solution these days can be treated as 100 per cent impenetrable. Therefore, it is not enough to just have strong walls - companies also need to be able to react quickly and effectively to threats that have already breached their perimeter.
However, this is an area in which many businesses are failing. Speaking to the BBC, Peter Woollacott, head of security firm Huntsman, stated that the time between an initial breach and the attack being detected can be as long as 200 days.
So why are enterprises finding it so difficult to spot intrusions into their network? Major reasons include the fact that company networks are becoming increasingly complicated, while businesses also struggle to find personnel with the right level of skills to counter the hackers.
Mr Woollacott said: "It takes so long because there is a shortage of competent security analysts and there's an enormous amount of technology that's providing you with threat information."
To tackle this, many businesses are now turning to automated tools that can monitor traffic within their networks and alert staff to any unusual activity, such as a user who rarely accesses a customer database suddenly downloading large amounts of data.
Mr Woollacott said: "You need to use machine power to do some of the information collection. Anomaly detection is great, it is very powerful but it needs to be used in conjunction with high-speed algorithms."
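The kind of volume-based anomaly check described above - a user who rarely touches a database suddenly pulling down far more data than usual - can be sketched with a simple per-user baseline comparison. This is an illustrative example only, not a description of any particular vendor's product; the function name, threshold and sample figures are all assumptions.

```python
from statistics import mean, stdev

def is_anomalous(history, todays_volume, threshold=3.0):
    """Flag a user's activity when today's download volume deviates
    far from their own historical baseline (a simple z-score check).

    history: past daily download volumes (e.g. in kilobytes) for one user.
    threshold: how many standard deviations above the mean counts as unusual
               (3.0 is an arbitrary illustrative choice).
    """
    if len(history) < 2:
        # Not enough data to establish a baseline; don't alert.
        return False
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Perfectly constant history: treat any change as unusual.
        return todays_volume != mu
    z_score = (todays_volume - mu) / sigma
    return z_score > threshold

# A user who rarely accesses the customer database (hypothetical figures):
quiet_user = [0, 120, 0, 0, 200, 0, 80]

print(is_anomalous(quiet_user, 50_000_000))  # sudden bulk download -> True
print(is_anomalous(quiet_user, 100))         # ordinary day -> False
```

Real threat-intelligence platforms apply far more sophisticated models across millions of events, but the principle is the same: the machine establishes the baseline and surfaces the outliers, leaving humans to judge whether an alert matters.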
This is because the complexity of enterprise networks is now often far beyond what humans can cope with. Today's threat intelligence systems monitor and report on every activity on an intranet, which can add up to millions, if not billions, of individual events every day that require analysis.
However, companies must not become too dependent on technological solutions alone. Not every anomaly flagged up will be a security risk, so a human element is essential to verify if it is a false alarm or something more serious.
The consequence of failing to deal with this can be huge. Analysts have suggested one of the reasons why US retailer Target failed to prevent its huge data breach in December 2013 - which ultimately compromised upwards of 70 million customer accounts - was because its threat detection systems overwhelmed its security staff with false alarms, allowing the genuine threat to slip through unnoticed.
Firms may also need to re-evaluate how they configure their security solutions. The BBC observed that many companies operate a 'castle and moat' system: while they may have very strong perimeter walls, the defences and monitoring tend to look outwards, leaving them vulnerable to attacks from within.
Many attacks in today's environment seek to exploit this by tunnelling into a network - for instance, by using social engineering techniques that trick people into opening compromised emails, or by logging in with stolen usernames and passwords, it noted.