History lessons 4: What to do when an anomaly is detected?

David Rogers continues his blog series — commissioned by on-chip monitoring experts UltraSoC (now part of Siemens) — examining security through the ages and highlighting lessons for emerging and future technologies.

There are many tales from history of detections that led to plots being uncovered. Sometimes the discovery was driven by prior knowledge, sometimes the actors involved were already under suspicion, and in other cases it was pure chance and luck.

Guy Fawkes
Source: Edgar Wilson “Bill” Nye (1850–1896) [Public domain]

The Gunpowder Plot to blow up England’s parliament in 1605 was ultimately discovered because of a message to a Catholic parliamentarian warning him to stay away from the opening of parliament on November 5th. It was dismissed as a hoax at the time, but the King’s suspicions were raised and he instigated searches of parliament, increasing security. On the night of November 4th, Guy Fawkes was discovered and caught as he was leaving the place where he had stored the gunpowder underneath parliament. It appears that this was genuinely an artefact of the increased vigilance: a few days before, Guy Fawkes had reported to his co-conspirators that “he found his ‘private marks’ all undisturbed” at the site where the gunpowder was stored. This seems to indicate that Guy Fawkes had taken his own precautions against the discovery and potential sabotage of the plot.

Another interesting story of discovery and detection is the Babington plot against Queen Elizabeth I. Queen Elizabeth’s spymaster, Francis Walsingham, discovered that a group of Catholic plotters led by a man called Anthony Babington were communicating with Mary Queen of Scots in order to depose Elizabeth and put Mary on the English throne. Walsingham first used an agent to change and control the channel by which Mary was communicating, ensuring that messages to and from her were hidden in the corks of beer barrels. This allowed him to have them intercepted and deciphered. The plot was allowed to continue, while Walsingham waited and gathered further evidence through the letters.

In the technology space, detection and response mechanisms exist mainly on the network side. Network traffic analysis tools are now backed by AI and machine learning techniques. The techniques for handling and processing large volumes of network traffic at scale to discover anomalies have come a long way, but they do not yet properly take into account what is going on at the endpoints, and certainly not inside them, down to the chip level.

Attackers already have a variety of ways to evade detection, having fought a cat-and-mouse game for many years. Intrusion detection and anti-virus systems often whitelist domains, so if an attacker exfiltrates data through a legitimate service such as Amazon AWS or Google, the compromise may never be detected. Equally, modern malware often protects its command and control channels with encryption, a logical thing to do given that many enterprises and tools will be looking for maliciousness within traffic. Another factor is that the barriers to entry have been lowered significantly by free certificate issuing services such as Let’s Encrypt. For a defender, deciding exactly what to look for is driven by external factors and intelligence feeding into the systems that look for anomalies.

If something is infiltrated into a device, it may never exfiltrate its data over a corporate IP-connected network and may never need to connect to a command and control server that way. There are now a multitude of connection types available to devices, and many of these both leave the business’s network and sit outside its control. Bluetooth, low-power radio networks and mobile radio connections could all be used at the right time to move data from a compromised device.

Of course, the attacker may not want to take any data at all; they might just want to compromise as many devices as possible and lie in wait, turning on some form of destructive attack at a later date, such as a Distributed Denial of Service, ransomware or wiper-style deletion attack.

All of these types of compromise point to the need for additional intelligence from the devices themselves, rather than just relying on network traffic, and there is no better place to gather it than in the foundations of the device itself: inside the hardware.

No matter where anomaly and intrusion detection are taking place, false positives are always going to be a problem and a risk. They could cause a defender to become fatigued with the number of alerts they are getting or to misplace resources. For safety critical systems, taking the wrong action on a security anomaly could create an unsafe situation for a system’s users.

What if the attacker deliberately behaves in a way that causes the system to do something?

Sophisticated attacks may seek to trigger false positives. Bruce Schneier’s book ‘Secrets and Lies’ describes Mujahedeen attacks on Soviet bases in 1980s Afghanistan, where fence sensors would be deliberately triggered by throwing a rabbit near them. Done repeatedly, this would eventually get the sensors turned off – and then a vehicle would come through the fence.

One could imagine this happening against monitoring at a low level in devices, and the trick to dealing with it is to resist the temptation to take immediate action. Events should be appropriately assessed, and systems designed in such a way that they do not tip off the attacker that anything out of the ordinary has been noticed. In the long term, this also allows the defender to gather intelligence on the attacker for later attribution efforts or for forensic purposes. Deciding exactly when to take action relies on taking a measured approach to whether damage or harm is going to be caused. This may be a human decision, but it may also be automated, so making sure the right decision is made is paramount.
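A minimal Python sketch of this ‘assess before acting’ idea follows. The class name, the time window and the corroboration rule are all illustrative assumptions, not any real product’s design:

```python
from collections import deque
import time

class AnomalyAssessor:
    """Illustrative sketch: buffer anomaly events and wait for corroboration
    before escalating, rather than reacting (or disabling sensors) on every
    individual alert."""

    def __init__(self, window_seconds=300, corroboration_threshold=3):
        self.window = window_seconds
        self.threshold = corroboration_threshold
        self.events = deque()  # (timestamp, source) pairs

    def record(self, source, timestamp=None):
        """Log an anomaly without taking any visible action."""
        now = time.time() if timestamp is None else timestamp
        self.events.append((now, source))
        # Age out events that fall outside the assessment window
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def should_escalate(self):
        # Escalate only when independent sources corroborate within the
        # window; repeated triggers from a single sensor are merely logged.
        distinct_sources = {src for _, src in self.events}
        return len(distinct_sources) >= self.threshold
```

The point of the sketch is the shape of the logic: repeated triggers from one sensor never switch anything off, which is exactly the behaviour the rabbit trick exploits.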

‘Babington with his Complices in St. Giles Fields’, 1586
(Public domain)

In the Babington plot, Walsingham even manipulated Mary’s communications, adding text to a letter from her, requesting that the conspirators were named. This caused Babington to reveal their names, leading to the unravelling of the plot.

Manipulating attacker traffic in a system to send back false data, or to lead the attacker into blind traps, is much more sophisticated and potentially risky, but it could be possible, and it would allow the defender to significantly regain the initiative over an attacker.

In the case of Mary Queen of Scots, Walsingham waited until exactly the right moment to trap her, having controlled the situation up to that point. The evidence in the end was so damning that the linguist who deciphered her messages drew a gallows on the letter before passing it to Walsingham.

For more on how historical security measures and failures can help instruct the future of security design for hardware in connected devices, check out the webinar (co-hosted by UltraSoC CSO Aileen Ryan and Copper Horse founder and CEO David Rogers) accompanying this series of blog posts.

Next blog post in the series >> 5/5 The game of defence and attack

Previous blog post in the series << 3/5 Confusing the guards and what it means for future hardware chip design

About the author

David Rogers is Founder and CEO at Copper Horse.

History lessons 3: Confusing the guards and what it means for future hardware chip design


The city walls of York
Source: David Rogers

Previously, I talked about how expensive defences can be subverted by a determined and clever adversary. This time I continue the theme of access, but consider the problem of confusion.

In considering the story in the last blog, I was thinking about whether the carpenter’s entry into Conwy Castle should be classed as what is known in the technology world as a ‘confused deputy attack’ (it isn’t). This type of attack often happens in web applications through cross-site request forgery (CSRF), where the browser is confused into acting as the unwitting agent of the attacker, getting a website to do something it shouldn’t.
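As a rough illustration of how web applications defend against this particular confused-deputy case, a common countermeasure is a per-session anti-CSRF token that a forged request cannot know. A minimal Python sketch, where the function names and the dictionary-as-session are invented for illustration:

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Bind a fresh random token to the user's session when rendering a form
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_request_legitimate(session, submitted_token):
    # A forged cross-site request carries the victim's cookies (ambient
    # authority) but cannot read the per-session token, so it fails here.
    expected = session.get("csrf_token")
    if expected is None or submitted_token is None:
        return False
    # Constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(expected, submitted_token)
```

Because the attacker’s page can make the browser send cookies but cannot read the token bound to the victim’s session, the forged request fails the check.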

Keeping enemies out

Another example from history can better explain the concept of a confused deputy attack. Firstly, a bit of background. There are many stories in the UK of historic laws and bylaws that stem from medieval times that give an insight into how towns controlled access from people who they would consider to be “enemies”. Some of these are true and others are mere rumour. For example:

  • “Welsh people were allowed to enter the towns by day but kept out at night and forbidden to either trade or carry weapons”
  • “In the city of York, it is legal to murder a Scotsman within the ancient city walls, but only if he is carrying a bow and arrow”
  • “In Carlisle, any Scot found wandering around may be whipped or jailed”
  • “Welshmen are prohibited from entering Chester before the sun rises – and have to leave again before the sun goes down”
  • “It is still technically okay to shoot a Welshman on a Sunday inside the city walls – as long as it’s after midnight and with a crossbow”

As a note – the Law Commission looked into some of these stories and clarifies that:
“It is illegal to shoot a Welsh or Scottish (or any other) person regardless of the day, location or choice of weaponry. The idea that it may once have been allowed in Chester appears to arise from a reputed City Ordinance of 1403, passed in response to the Glyndŵr Rising, and imposing a curfew on Welshmen in the city. However, it is not even clear that this Ordinance ever existed. Sources for the other cities are unclear.”

In York, however (a northern English city which was walled to keep the Scots out), we do know that a door knocker was installed in 1501 at Bootham Bar, an entrance to the city. Scotsmen who wanted to enter had to knock first and ask for permission from the Lord Mayor.

Bootham Bar Roman gateway

The confused deputy

We have to assume that the Lord Mayor himself was not there all the time to give permission in person and delegated the authority for checking whether someone could come in to the guards. The guards still had to come to him for sign-off though.

This is where we can explain the concept of the confused deputy more clearly. Imagine that there is a Scottish attacker who wants to get into York to cause some damage. He knocks on the Bootham Bar door knocker and convinces the guards he is authorized by telling them he is there to do work (he succeeds in confusing them – they become the confused deputy, conferring trust on the Scotsman where there should be none). However, our attacker still has to gain authority – which comes from the Lord Mayor himself.

The guards carry the message to the Lord Mayor that the Scotsman is legitimate and should be allowed to enter. The Lord Mayor assumes trust and authorizes our Scotsman to enter the city to do work.

The attacker didn’t need to convince the Lord Mayor at all; all he had to do was convince the guards and use them to gain the authority he wanted. The Lord Mayor trusted his guards but wouldn’t have trusted the attacker – yet he will never see him. This is how some website and technology attacks work: escalating the privilege level of access via an unwitting, trusted agent. To avoid this, additional measures need to be in place for the Lord Mayor to independently validate that the Scotsman is not actually an attacker before granting him further authority.
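The difference between the vulnerable and the fixed arrangement can be sketched in a few lines of Python. All the names here are invented for the story; this illustrates the pattern, not any real system’s access control:

```python
# The "Lord Mayor" is the authority; the guards are the deputy. In the
# vulnerable version, authority is granted purely on the deputy's say-so.
# In the fixed version, the authority independently validates the requester.

KNOWN_WORKERS = {"city_carpenter"}  # a register the mayor controls (assumed)

def confused_mayor(guard_vouches: bool) -> bool:
    # Vulnerable: the deputy's word alone confers authority
    return guard_vouches

def careful_mayor(guard_vouches: bool, requester_id: str) -> bool:
    # Fixed: the deputy's vouching is necessary but not sufficient;
    # the requester must also appear in an independent register
    return guard_vouches and requester_id in KNOWN_WORKERS
```

In the vulnerable version the attacker never needs to face the authority at all, which is precisely the escalation the story describes.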

One concern about chip-level attacks is that the vast majority of the communications inside the chip are not integrity checked or validated in any way. An attacker can abuse existing authorities to gain trust in other parts of the system. Changing this is going to be a long-term task for the industry as attacks become more sophisticated. In the meantime, we need to put in measures to be on guard and look for unusual activity going on, rather than automatically assuming everything within the ‘city’ is trusted; perhaps the technological equivalent of using a bow and arrow after sundown.
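One widely used building block for adding such integrity checks is a message authentication code (MAC). The Python sketch below shows the general idea only; the shared key, the message format and the fixed tag length are assumptions for illustration, and nothing here describes a real on-chip protocol:

```python
import hmac
import hashlib

# Assumed shared secret; in practice it would be provisioned per device/block
KEY = b"per-device-secret"

TAG_LEN = 32  # SHA-256 digest length in bytes

def seal(message: bytes) -> bytes:
    """Append a MAC so the receiving block can detect tampering/spoofing."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message + tag

def open_sealed(packet: bytes) -> bytes:
    """Verify the MAC; raise if the message was modified in transit."""
    message, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return message
```

A hardware implementation would face constraints this sketch ignores entirely (key provisioning, latency, silicon area), which is part of why retrofitting integrity into on-chip communication is a long-term task.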



Previous blog post in the series << 2/5 Who has access?


History lessons 2: Who has access?


Conwy Castle, North Wales.
Image (edited) source: Adrian J Evans

Conwy Castle is an imposing castle. Built towards the end of the 13th century in North Wales, as part of Edward I’s Iron Ring around the country, its curtain walls are interspersed with eight round towers, complete with arrow slits and ramparts. Its two barbicans guarded the entrances to the castle. It still stands today within the walls of the town of Conwy itself, which add a further 21 towers. What is amazing is that it was built in only five years. It was designed by the best castle designer of the day, Master James of St George, and was state-of-the-art in defensive security. It withstood one siege, when the Welsh besieged King Edward in the castle in 1295. It was on Good Friday 1401, however, that the most interesting events happened at the castle, during Owain Glyndŵr’s uprising against the English.

Nearly all of the garrison of the castle were at church in the town attending Mass, with just two guards left behind on the gate. A carpenter from the castle approached the guards, saying that he needed to perform some work with two of his assistants. Once admitted, they immediately stabbed both guards, quickly let in the rest of their men and locked the gates behind them. When the garrison arrived back from church, they were unable to gain access to the castle.

Unfortunately, the cleverness of this takeover was undermined by the fact that there were few stores in the castle and the Welsh were not prepared for it. It also upset the King of England, Henry IV, who immediately besieged the castle. Within three months, with no edible stores, the Welsh were starved out.

Why is this story particularly interesting in a technology context? This kind of strategy has many parallels with the way in which hackers often use guile and skill to attack seemingly impenetrable defences. The attack was planned to happen when the castle would be least defended and a way of gaining access via an authorized method had been found. The guards authenticated that the carpenter was real and he was clearly authorized to be there. The defenders were not correctly using their layers of defence within the castle and showed complacency and over-familiarity.

The story also gives a lesson for attackers looking to compromise and remain in a system. When defences have been subverted, one thing that more advanced attackers do in the technology world is what’s called ‘living off the land’. In this case, the attackers were not able to sustain their takeover of the castle because they lacked the resources to hold out for a long time. Indeed, they’d misperceived the real situation. In the technology world, it is good practice to minimize in advance the things that an attacker can use once they’re “in the castle” or on a system, such as software libraries not used for the core operation of a system. In the story above, it was bad luck for the attackers that the garrison had so few usable supplies and food.

Containing access

We know that Conwy has two barbicans. The purpose of a barbican is to provide additional defence in front of an access point or gate. It functions as a mechanism for control over hostile entrants. Barbicans are typically narrow and often contain traps such as murder holes to throw things down on the enemy, as well as adjacent spaces on the same level and a floor above from which defenders can attack the enemy from the side or from height, whilst safely behind their own defences. The defenders have the advantage because low resources are needed to defend whilst the attacker is narrowly channelled into a place of the defender’s choosing.

Layout of Conwy castle showing the East and West Barbicans
Source: CADW

In technology terms, we see very little of this kind of defensive mechanism. Where there are inputs to a system, typically via an Application Programming Interface (API), inputs are often blindly accepted, in some cases from anyone who accesses the interface. Good practice dictates that input is validated – i.e. that a number is indeed a number and within the expected range. However, there is clearly an opportunity to go further than that. Where an interface or system is under attack, there is an opportunity to defend against it. Attacks range from fuzzing (throwing structured and unstructured data at an interface in the hope of breaching it in some way), through repeated brute-force attempts to get in, to denial of service (DoS) attacks hoping to overload and consume system resources. Abstractly, once a system identifies such an attack, it could provide some kind of pre-interface – a barbican before the data hits the real interface. This gives the opportunity to do something about an attack as it happens – for example, it could choose to drop the data sent during a DoS attack rather than consume system resources responding to it. More sophisticated versions could waste an attacker’s time and resources through other clever means. This is a form of ‘active defence’, without actually ever touching an attacker’s system. It is all performed locally on the system that is under attack.
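As a rough software rendering of the barbican idea, the Python sketch below combines basic input validation with cheap dropping of traffic once the request rate looks like an attack. The class name, the rate limit and the validation rule are all assumptions for illustration:

```python
import time

class Barbican:
    """Illustrative pre-interface: validate input and, when the request rate
    looks like a DoS, drop traffic cheaply before it reaches the real API."""

    def __init__(self, max_requests_per_second=100):
        self.limit = max_requests_per_second
        self.window_start = 0.0
        self.count = 0

    def admit(self, payload, now=None):
        now = time.time() if now is None else now
        # Reset the one-second counting window when it expires
        if now - self.window_start >= 1.0:
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > self.limit:
            return None  # under apparent attack: drop silently, spend nothing
        # Basic validation: accept only an integer in the expected range
        if isinstance(payload, int) and 0 <= payload <= 1000:
            return payload
        return None  # malformed input never reaches the real interface
```

Dropping silently costs the defender almost nothing, while the attacker keeps spending resources on requests that never reach the real interface – the narrow channel of the defender’s choosing.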

However, all of this depends on whether the system is always on guard. History shows that in the Conwy Castle case, the garrison were complacent – even though the Welsh had started to rebel the year before. The ‘trusted’ carpenter should have been let in on his own, without anyone else, and there should have been additional guards within the main castle so that the attackers were confined to the barbican itself, to be dealt with.

The castles of yore often included other mechanisms for access control, including the portcullis (or sometimes several of them), which could be dropped very quickly if needed to block access or to trap attackers at entry points. Similarly, entrances were often guarded by drawbridges which could be raised, or turning bridges which could easily be destroyed by defenders. Castle buildings often had entrances on the 1st floor and above – well above head-height. This meant that wooden stairs could be destroyed and burnt in a hurry if necessary, causing an attacker further trouble if the castle was under attack. All of these were primarily designed for defending against sieges. As we’ve seen in this blog, however, sometimes costly defences can be undermined by guile, intelligence, defender complacency and choosing the right timing.


Previous blog post in the series << 1/5 Doing nothing in a hostile environment is never going to work out well

Next blog post in the series >> 3/5 Confusing the guards and what it means for future hardware chip design


History lessons 1: Doing nothing in a hostile environment is never going to work out well

A second chance to enjoy David Rogers’ popular blog series — originally commissioned by on-chip monitoring experts UltraSoC, now part of Siemens — examining security through the ages and highlighting lessons for emerging and future technologies.

In this blog series, I’m going to mention castles a bit (amongst other things) – so, before I get started, I need to justify that slightly. The castle analogy has often been used when it comes to cybersecurity. It’s attractive – an easily understood concept of walls and layered defences, which can be visualized by a reader. Often, though, ‘walls’ really denotes a metaphysical boundary that doesn’t exist in reality, and the analogy becomes unhelpful by promoting old-school notions of relying solely on perimeter-based security. The castle analogy can still be useful if not taken too literally; however, there can be no true, direct comparison between cybersecurity and the physical security world of what was a relatively short period in history. We can, however, learn much from the way attackers and defenders interacted and, crucially, from what worked. These lessons can potentially be carried into future security.

One of the first in Britain and the longest continually inhabited castle in the world – Windsor Castle.
Image: David Iliff. License: CC BY 2.5

Castles developed from around the time of the Norman Conquest of Britain in the 11th century. Defences became more or less important depending on where they were, the particular period of history and the belligerents involved in any conflict. The evolution of different castle technologies is interesting to look at from the point of view of which were subverted by extremely capable adversaries, and which were compromised primarily by guile. Castles were not impenetrable, and there are some very good examples of compromises which forced their security to be improved and to develop.

Devices and castles

When it comes to the world today, particularly with the large proliferation of quite small, low-powered devices making up the Internet of Things (IoT), I tend to think that we have lots of little outposts of endpoints that should be more secure, perhaps even castle-like in themselves. In some cases, maybe they should be outposts within the sphere of protection of something greater, which can provide help if needed. Devices come in many different shapes and forms – IoT extends across all business sectors and includes critical things like medical devices, automotive and space applications. They all have differing levels of security requirements, and some of these are specific to the environment they are used in.

Dynamic response and the lack of it

Many castles and fortresses were specifically built because the environment they existed in was hostile. The site itself was extremely imposing; a symbol of authority. If attacked and put under siege, the occupants were not likely to be relieved in a short space of time, but they usually had a garrison of defenders who could repel and harry attackers.

In many ways, the connected devices of today face a similar environment. The moment that a consumer product is put onto the market it faces attack – either by physical tampering and hacker reconnaissance work on the device or through the network when it connects – but unfortunately the device usually doesn’t do anything about it.

It was the hope of forces under siege in a castle that reinforcements would arrive to relieve them. Until that point, though, the defenders did not just sit there – they had the ability to respond in a variety of dynamic ways, from cavalry riding forth into the local area outside the castle, through to leaving under cover of darkness via a sally port to raise the alarm or to forage. In some cases, defenders were very lucky – Richard the Lionheart was injured and subsequently died from a crossbow bolt fired from the walls of the castle he was besieging at Châlus, France.

A well-defended castle could also survive for a long time, with its own well for water and enough supplies to be largely self-sufficient. One of the key strategic advantages of Edward I’s ring of castles around Wales was that some of them could be resupplied from the sea, rather than being completely surrounded like earlier castles. One such castle, at Harlech, held out for seven years during the Wars of the Roses.

Artist’s representation of Harlech Castle in the 1400s
Image source (used under fair use): http://carneycastle.com/Harlech/index.htm

Many of the devices of today come with very little protection at all. A device is fundamentally a printed circuit board with some hardware chips on it, running software. Many of these devices run the same common operating systems, often pre-configured to be open and unsecured, and are built on hardware interface standards which in some cases go back to the 1970s – with no security designed in. With this reality, a device that is openly available to buy and connected to the open internet is effectively compromised from the start. It is akin to putting a cloth tent in an open field in enemy territory, with the door open and no guards – nowhere near a castle in terms of defence!

The same devices are also entirely static – if something were to happen, they are not able to respond, even though the problems they face are well understood and likely. They can’t survive safety-related issues or outages because they’re simply not designed to deal with the real world. Having said that, there are some connected products out there that do security well: they follow best practices, are tested properly and follow a proper product security lifecycle. Even these devices, however, are very limited when it comes to responding to threats themselves.

If we’re to deal with the future world, devices need to be able to respond dynamically to emergent threats: to detect them and respond appropriately. Doing nothing is not an option. If devices are outposts or castles, they need to be garrisoned appropriately and able to respond until help arrives.

Next blog post in the series >> 2/5 Who has access?
