History lessons 4: What to do when an anomaly is detected?

David Rogers continues his blog series — commissioned by on-chip monitoring experts UltraSoC (now part of Siemens) — examining security through the ages and highlighting lessons for emerging and future technologies.

There are many tales from history of detections that led to plots being uncovered. Sometimes this was driven by prior knowledge, sometimes the actors involved were already under suspicion in some way, and in other cases it was pure chance and luck.

Guy Fawkes
Source: Edgar Wilson “Bill” Nye (1850–1896) [Public domain]

The gunpowder plot to blow up England’s parliament in 1605 was ultimately discovered because of a message to a Catholic parliamentarian warning him to stay away from the opening of parliament on November 5th. It was dismissed as a hoax at the time, but the King’s suspicions were raised and he instigated searches of parliament, increasing security. On the night of November 4th, Guy Fawkes was discovered and caught as he was leaving the place where he had stored the gunpowder underneath parliament. It appears that this was genuinely an artefact of the increased vigilance, as a few days before, Guy Fawkes had reported to his co-conspirators that “he found his ‘private marks’ all undisturbed” at the site where the gunpowder was stored. This seems to indicate that Guy Fawkes had taken his own precautions against the discovery and potential sabotage of the plot.

Another interesting story of discovery and detection is the Babington plot against Queen Elizabeth I. Queen Elizabeth’s spymaster, Francis Walsingham, discovered that a group of Catholic plotters led by a man called Anthony Babington were communicating with Mary Queen of Scots in order to depose Elizabeth and put Mary on the English throne. Walsingham first used an agent to change and control the channel by which Mary was communicating, ensuring that messages to and from her were hidden in the corks of beer barrels. This allowed him to have them intercepted and deciphered. The plot was allowed to continue, while Walsingham waited and gathered further evidence through the letters.

In the technology space, detection and response mechanisms exist mainly on the network side. Network traffic analysis tools are now backed by AI and machine learning techniques. The techniques for handling large volumes of network traffic, and processing it at scale to discover anomalies, have come a long way, but they are yet to properly take into account what is going on at the endpoints — and certainly not their innards, down at chip level.
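As an illustration of the kind of baseline-and-deviation analysis such tools build on, here is a minimal sketch — function name, window size and threshold are my own illustrative choices, not taken from any particular product — that flags traffic samples deviating sharply from a rolling statistical baseline:

```python
from collections import deque
import math

def zscore_anomalies(samples, window=60, threshold=3.0):
    """Flag samples that deviate sharply from a rolling baseline.

    `samples` could be bytes-per-minute counts from a network tap;
    the window and threshold here are illustrative, not tuned values.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(samples):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((h - mean) ** 2 for h in history) / window
            std = math.sqrt(var)
            # Flag only once we have a full baseline with some variance.
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        history.append(x)
    return flagged

# Mildly noisy steady traffic, then one sudden burst (e.g. a bulk exfiltration)
traffic = [95, 100, 105, 100] * 15 + [105, 98, 2000, 102]
print(zscore_anomalies(traffic))  # → [62]: only the burst is flagged
```

Real products layer far more sophistication on top (seasonality, learned models, protocol awareness), but the principle — characterise normal, then score deviation — is the same.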

Attackers already have a variety of ways to evade detection, having fought a cat-and-mouse game for many years. Intrusion detection and anti-virus systems often whitelist domains, so if an attacker is exfiltrating data through a legitimate service – Amazon AWS or Google, for example – the compromise may never be detected. Equally, modern malware often protects its command-and-control channels with encryption – a logical thing to do, given that many enterprises and tools will be looking for maliciousness within traffic. Another factor is that the barriers to entry have been lowered significantly by free certificate-issuing services such as Let’s Encrypt. For a defender, deciding exactly what to look for is driven by external factors and intelligence feeding into the systems that look for anomalies.

If something is infiltrated onto a device, it may never exfiltrate its data over a corporate IP-connected network and may never need to connect to a command-and-control server that way. There is now a multitude of connection types available to devices, and many of these sit outside the control of the business. Bluetooth, low-power radio networks and mobile radio connections could all be used at the right time to move data off a compromised device.

Of course, the attacker may not want to take any data at all; they might just want to compromise as many devices as possible and lie in wait, turning on some form of destructive attack at a later date – a Distributed Denial of Service, ransomware or wiper-style deletion attack.

All of these types of compromise point to the need for additional intelligence from the devices themselves, rather than relying on network traffic alone – and there is no better place to gather it than the foundations of the device itself, inside the hardware.

No matter where anomaly and intrusion detection are taking place, false positives are always going to be a problem and a risk. They could cause a defender to become fatigued with the number of alerts they are getting or to misplace resources. For safety critical systems, taking the wrong action on a security anomaly could create an unsafe situation for a system’s users.
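The scale of the false-positive problem follows from base rates: when genuine intrusions are rare, even an accurate detector produces mostly false alerts. A quick calculation — the figures below are illustrative, not drawn from any real deployment — makes the point:

```python
def alert_precision(events_per_day, attack_rate, tpr, fpr):
    """Fraction of raised alerts that correspond to genuine attacks."""
    attacks = events_per_day * attack_rate
    benign = events_per_day - attacks
    true_alerts = attacks * tpr      # attacks correctly flagged
    false_alerts = benign * fpr      # benign events wrongly flagged
    return true_alerts / (true_alerts + false_alerts)

# 1M events/day, 1 in 100,000 malicious, 99% detection, 0.1% false-positive rate
p = alert_precision(1_000_000, 1e-5, 0.99, 0.001)
print(f"{p:.1%}")  # → 1.0% — roughly 99 in 100 alerts are false
```

A defender facing those numbers will tire quickly, which is exactly the fatigue the attacker in the next example exploits.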

What if the attacker deliberately behaves in a way that causes the system to do something?

Sophisticated attacks may seek to trigger false positives. Bruce Schneier’s book ‘Secrets and Lies’ describes Mujahedeen attacks on Soviet bases in 1980s Afghanistan, where fence sensors would be deliberately triggered by throwing a rabbit near them. Done repeatedly, this would eventually lead to the sensors being turned off – and the next thing, there would be a vehicle through the fence.

One could imagine this happening against monitoring at a low level in devices, and the trick to dealing with it is to resist the temptation to take immediate action. Events should be appropriately assessed, and systems designed so that they do not tip off the attacker that anything out of the ordinary has been noticed. In the long term, this also allows the defender to gather intelligence on the attacker for later attribution efforts or for forensic purposes. Deciding exactly when to take action relies on a measured view of whether damage or harm is about to be caused. This may be a human decision, but it may also be automated, so making sure the right decision is made is paramount.
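A minimal sketch of this idea — all class, sensor and threshold names below are hypothetical, invented for illustration — is to record every event silently for later forensics, and escalate only when independent sensors corroborate each other within a time window, rather than reacting to each event as it arrives:

```python
import time

class QuietTriage:
    """Silently log anomaly events; escalate only on corroboration.

    Nothing here is visible to the attacker: events are recorded for
    later forensics, and escalation is an internal decision rather
    than an immediate defensive reaction. Thresholds are illustrative.
    """
    def __init__(self, min_sources=2, window_seconds=300):
        self.min_sources = min_sources
        self.window = window_seconds
        self.log = []  # full record kept for attribution/forensics

    def record(self, device_id, source, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.log.append((ts, device_id, source))
        return self._should_escalate(device_id, ts)

    def _should_escalate(self, device_id, now):
        # Distinct sensor sources seen for this device within the window.
        recent = {src for ts, dev, src in self.log
                  if dev == device_id and now - ts <= self.window}
        # Escalate only when independent sensors agree, blunting the
        # effect of a single deliberately triggered false positive.
        return len(recent) >= self.min_sources

triage = QuietTriage()
print(triage.record("dev-1", "fence-sensor", timestamp=100))     # False
print(triage.record("dev-1", "fence-sensor", timestamp=160))     # False: same source again
print(triage.record("dev-1", "traffic-monitor", timestamp=200))  # True: corroborated
```

Note that repeated triggering of a single sensor – the rabbit against the fence – never crosses the corroboration threshold on its own.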

‘Babington with his Complices in St. Giles Fields’, 1586
(Public domain)

In the Babington plot, Walsingham even manipulated Mary’s communications, adding text to a letter from her requesting that the conspirators be named. This caused Babington to reveal their names, leading to the unravelling of the plot.

Manipulating attacker traffic in a system – sending back false data or leading the attacker into blind traps – is much more sophisticated and a potentially risky operation, but it could be possible, and would see the defender significantly regain the initiative.

In the case of Mary Queen of Scots, Walsingham waited until exactly the right moment to trap her, having taken control of the situation up to that point. The evidence in the end was so damning that it caused the linguist who deciphered her messages to draw a gallows on the letter before he passed it to Walsingham.


For more on how historical security measures and failures can help instruct the future of security design for hardware in connected devices, check out the webinar (co-hosted by UltraSoC CSO Aileen Ryan and Copper Horse founder and CEO David Rogers) accompanying this series of blog posts.

Next blog post in the series >> 5/5 The game of defence and attack

Previous blog post in the series << 3/5 Confusing the guards and what it means for future hardware chip design

About the author

David Rogers is Founder and CEO at Copper Horse.

History lessons 1: Doing nothing in a hostile environment is never going to work out well

A second chance to enjoy David Rogers’ popular blog series — originally commissioned by on-chip monitoring experts UltraSoC, now part of Siemens — examining security through the ages and highlighting lessons for emerging and future technologies.

In this blog series, I’m going to mention castles a bit (amongst other things) – so, before I get started, I need to justify that slightly. The castle analogy has often been used when it comes to cybersecurity. It’s attractive – an easily understood concept of walls and layered defences, which can be visualized by a reader. Often, though, ‘walls’ are really used as a metaphysical boundary that doesn’t, in reality, exist, and the analogy becomes unhelpful by promoting old-school notions of solely using ‘perimeter-based security’. The castle analogy can still be useful if not taken too literally; however, there can be no true, direct comparison of cybersecurity to the physical security world of what was a relatively short period in history. We can, however, learn much from the way attackers and defenders interacted and, crucially, what worked. These lessons can potentially be carried into future security.

One of the first in Britain and the longest continually inhabited castle in the world – Windsor Castle.
Image: David Iliff. License: CC BY 2.5

Castles developed in Britain from around the time of the Norman Conquest in the 11th century. Defences became more or less important depending on where they were, the particular period of history and the belligerents involved in any conflict. The evolution of different castle technologies is interesting to look at from the point of view of which were subverted by some extremely capable adversaries, as well as which were compromised primarily by guile. Castles were not impenetrable, and there are some very good examples of breaches which forced their security to be improved and developed.

Devices and castles

When it comes to the world today, particularly with the large proliferation of quite small, low-powered devices making up the Internet of Things (IoT), I tend to think that we have lots of little outposts of endpoints that should be more secure – perhaps even castle-like in themselves. In some cases, maybe they should be outposts within the sphere of protection of something greater, which can provide help if needed. Devices come in many different shapes and forms – IoT extends across all business sectors and includes critical things like medical devices, automotive and space applications. They all have differing levels of security requirements, and some of these are specific to the environment they are used in.

Dynamic response and the lack of it

Many castles and fortresses were specifically built because the environment they existed in was hostile. The site itself was extremely imposing – a symbol of authority. If attacked and put under siege, the occupants were not likely to be relieved in a short space of time, but they usually had a garrison of defenders who could repel and harry attackers.

In many ways, the connected devices of today face a similar environment. The moment that a consumer product is put onto the market it faces attack – either by physical tampering and hacker reconnaissance work on the device or through the network when it connects – but unfortunately the device usually doesn’t do anything about it.

It was the hope of forces under siege in a castle that reinforcements would arrive to relieve them. Until that point though, the defenders did not just sit there – they had the ability to respond in a variety of dynamic ways, from cavalry riding forth into the local area outside the castle, through to the ability to leave under cover of darkness via a sally port to raise the alarm or to forage. In some cases, defenders were very lucky – Richard the Lionheart was injured and subsequently died from a crossbow bolt fired from the castle walls he was besieging in Châlus, France.

A well-defended castle could also survive for a long time, with its own well for water and enough supplies to be largely self-sufficient. One of the key strategic advantages of Edward I’s ring of castles around Wales was that some of them could be re-supplied from the sea, rather than being completely surrounded like earlier castles. One such castle, at Harlech, held out for seven years during the Wars of the Roses.

Artist’s representation of Harlech Castle in the 1400s
Image source (used under fair use): http://carneycastle.com/Harlech/index.htm

Many of the devices of today come with very little protection at all. A device is fundamentally a printed circuit board with some hardware chips placed on it, running software. Many of these devices run the same common operating systems, often pre-configured to be open and unsecured, and work from hardware interface standards which in some cases go back to the 1970s – with no security designed in. Given this reality, a device which is openly available to buy and which is connected to the open internet is compromised from the start. It is akin to putting a cloth tent in an open field in enemy territory, with the door open and no guards – nowhere near a castle in terms of defence!

The same devices are also entirely static – if something were to happen, they’re not able to respond, even though the problems they face are well understood and likely. They can’t survive safety-related issues or outages because they’re simply not designed to deal with the real world. Having said that, there are some connected products out there that do security well: they follow best practices, are tested properly and follow a proper product security lifecycle. Even these devices, however, are very limited when it comes to responding to threats themselves.

If we’re to deal with the future world, devices need to be able to respond dynamically to emergent threats – to detect them and respond appropriately. Doing nothing is not an option. If devices are outposts or castles, they need to be garrisoned appropriately and able to respond until help arrives.

Next blog post in the series >> 2/5 Who has access?
